CN114223182A - Method and system for isolating leaf switches in a network - Google Patents

Method and system for isolating leaf switches in a network

Info

Publication number
CN114223182A
Authority
CN
China
Prior art keywords
server
leaf switch
switch
notification
leaf
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201980099305.3A
Other languages
Chinese (zh)
Other versions
CN114223182B (en)
Inventor
郑海洋
喻湘宁
王永灿
刘永锋
王国辉
王海勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Publication of CN114223182A
Application granted
Publication of CN114223182B
Status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/12Discovery or management of network topologies

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present disclosure provides methods and systems for isolating a leaf switch in a network, the leaf switch being connected to a server in the network. The method may include: sending, by the leaf switch, a notification to the server in response to receiving a request to isolate the leaf switch in the network, wherein the notification instructs the server to stop sending egress traffic to the leaf switch; determining whether an acknowledgement of the notification is received from the server; and stopping ingress traffic toward the server in response to determining that the acknowledgement has been received.

Description

Method and system for isolating leaf switches in a network
Background
A server center (e.g., a large-scale scalable data center) may include a plurality of network servers and switches that provide online services, including remote storage services, cloud processing services, mass data distribution, and the like. Because these services must remain available, High Availability (HA) is critical in large-scale scalable data centers (MSDCs).
To provide high bandwidth, low latency, and non-blocking connections, MSDCs widely employ a Clos network topology. A Clos network can be based on a spine-leaf topology, including multiple spine switches and multiple leaf switches. In a spine-leaf topology, a leaf switch may be connected to all spine switches to improve resiliency and scalability, and may also be connected to multiple servers. A server may likewise be connected to multiple leaf switches. Due to a hardware failure or a software upgrade, a leaf switch may have to be isolated from the Clos network for maintenance or upgrade. However, isolating a leaf switch may degrade real-time traffic, resulting in undesirable service outages.
Disclosure of Invention
An embodiment of the present invention provides a method for isolating a first leaf switch in a network, the first leaf switch connected to a server in the network. The method may include: in response to receiving a request to isolate the first leaf switch in the network, sending a notification to the server via the first leaf switch, the notification instructing the server to stop sending egress traffic to the first leaf switch; determining whether an acknowledgement of the notification is received from the server; and in response to determining that the acknowledgement has been received, stopping ingress traffic toward the server.
The disclosed embodiments further provide a first leaf switch connected to a server in a network. The first leaf switch may include: a memory storing a set of instructions; and at least one processor coupled to the memory and configured to execute the set of instructions to cause the first leaf switch to perform: in response to receiving a request to isolate the first leaf switch in the network, sending a notification to the server, the notification instructing the server to stop sending egress traffic to the first leaf switch; determining whether an acknowledgement of the notification is received from the server; and stopping ingress traffic toward the server in response to a determination that the acknowledgement has been received.
Embodiments of the present invention also provide a non-transitory computer-readable medium that stores a set of instructions executable by at least one processor of a leaf switch to cause the leaf switch to perform a method of isolating the leaf switch in a network. The leaf switch may be connected to a server in the network. The method may include: in response to receiving a request to isolate the leaf switch in the network, sending a notification to the server through the leaf switch, the notification instructing the server to stop sending egress traffic to the leaf switch; determining whether an acknowledgement of the notification is received from the server; and in response to a determination that the acknowledgement has been received, stopping ingress traffic toward the server.
Drawings
Embodiments and aspects of the disclosure are illustrated in the following detailed description and drawings. The various features shown in the drawings are not drawn to scale.
Fig. 1 shows a schematic diagram of a Clos network.
Fig. 2 illustrates a schematic diagram of an example network, in accordance with certain embodiments of the present disclosure.
Fig. 3 illustrates a schematic diagram of the isolated network, in accordance with certain embodiments of the present disclosure.
Fig. 4 is a flow diagram of a method of isolating a first leaf switch in a network, in accordance with certain embodiments of the present disclosure.
Fig. 5 illustrates a block diagram of an exemplary leaf switch, in accordance with some embodiments of the present disclosure.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts.
As described above, with conventional systems, isolating a leaf switch from a Clos network can disrupt the network, e.g., resulting in real-time traffic loss. The techniques described in this disclosure can minimize these types of disruptions.
As used herein, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, composition, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, composition, article, or apparatus. The term "exemplary" is used in the sense of "serving as an example" rather than "ideal".
Fig. 1 shows a schematic diagram of a network 100. Although the network 100 is contemplated as a Clos network, it should be noted that any network having at least a three-tier architecture may be used.
As shown in Fig. 1, the Clos network 100 has a three-tier architecture including a spine layer 110, a leaf layer 120, and a server layer 130. The spine layer 110, which may include multiple spine switches (e.g., spine switches 112, 114, 116, and 118), is the backbone of the Clos network 100 and is responsible for interconnecting all leaf switches in the leaf layer 120. The leaf layer 120 may provide access to devices such as servers and may include a plurality of leaf switches (e.g., leaf switches 122, 124, 126, and 128). The server layer 130 may include a plurality of servers (e.g., servers 132, 134, 136, and 138).
In such a three-tier architecture, the leaf switches are connected to the spine switches in a full-mesh topology. In other words, each leaf switch (e.g., leaf switch 122) is connected to each spine switch (e.g., spine switches 112, 114, 116, and 118) in the spine layer 110, creating multiple links. The link used between a leaf switch (e.g., leaf switch 122) and the spine layer may be randomly selected among the plurality of links, so that the traffic load between the leaf layer 120 and the spine layer 110 is evenly distributed. The connections between the leaf switches and the spine switches may also be referred to as L3 connections.
Each leaf switch (e.g., leaf switch 122) may also be connected to at least one server (e.g., server 132 and server 134) in the server layer 130. Conversely, each server (e.g., server 132) may be connected to at least two leaf switches (e.g., leaf switches 122 and 124) to ensure connectivity. For example, server 132 may establish a first link with leaf switch 122 and a second link with leaf switch 124. The first and second links between a server and the leaf switches may be referred to as L2 links.
Under the three-tier architecture of the Clos network 100, expanding the capacity of the Clos network 100 is simple if the network becomes overloaded. For example, an additional spine switch may be added and connected to each leaf switch, providing additional inter-layer bandwidth between the spine layer 110 and the leaf layer 120 to relieve the overload.
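As a brief, hedged sketch in Python (the device names are hypothetical and not part of this disclosure), the full-mesh wiring described above makes capacity expansion a matter of adding links from the new spine to each existing leaf:

```python
def build_fabric(spines, leaves):
    """Return the full-mesh link set: every spine connects to every leaf."""
    return {(spine, leaf) for spine in spines for leaf in leaves}

links = build_fabric(["spine1", "spine2"], ["leaf1", "leaf2", "leaf3"])

# Capacity expansion: one added spine only needs a link to each existing leaf.
links |= build_fabric(["spine3"], ["leaf1", "leaf2", "leaf3"])
print(len(links))  # 9 links in total
```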
Similarly, a new leaf switch may be added by simply connecting it to each spine switch. However, when an existing leaf switch is isolated from a Clos network, the isolation may cause unnecessary service disruption. For example, in the prior art, the L2 link between a leaf switch and a server may be shut down from the leaf switch's side without the server being aware of the shutdown. The server may therefore continue to send traffic to the leaf switch until it detects the shutdown and switches the traffic to another L2 link. Traffic sent to the leaf switch before the switchover will never be processed and has to be dropped, resulting in an undesirable traffic disruption.
Embodiments of the present disclosure further provide methods and systems for isolating leaf switches in a network that minimize traffic disruption.
Fig. 2 illustrates a schematic diagram of an example network 200, according to some embodiments of the present disclosure. As shown in Fig. 2, network 200 may include spine switches 212 and 214, leaf switches 222 and 224, and a server 232. Each of leaf switches 222 and 224 is connected to both spine switches 212 and 214. Server 232 is connected to leaf switches 222 and 224. The connection between the server 232 and the leaf switch 222 may be referred to as a first L2 link, and the connection between the server 232 and the leaf switch 224 may be referred to as a second L2 link.
In some embodiments, the leaf switch 222 may receive a request to isolate the leaf switch 222 from the network 200. For example, an administrator of network 200 may make the request for maintenance, a software upgrade, and the like. It is noted that the request may also be made by the network 200 itself. For example, when the network 200 detects a failure of the leaf switch 222, the network 200 may automatically request isolation of the leaf switch 222 so as not to cause further service disruption.
In certain embodiments, when the leaf switch 222 receives the isolation request, the leaf switch 222 can determine whether the second L2 link associated with the leaf switch 224 has sufficient bandwidth to handle the traffic associated with the leaf switch 222.
If the second L2 link associated with the leaf switch 224 cannot handle the additional traffic associated with the leaf switch 222, the leaf switch 222 may indicate that isolation cannot be performed. For example, the leaf switch 222 may generate a message informing an administrator of the network 200 that isolation cannot be performed for the time being. It should be noted that bandwidth on the second L2 link may free up as its traffic decreases. For example, the second L2 link may not have sufficient bandwidth to handle traffic associated with the leaf switch 222 at a first time, but may have sufficient bandwidth at a second time. Thus, in some embodiments, the message generated by the leaf switch 222 may further suggest another time at which to perform the isolation.
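The bandwidth check above reduces to a simple headroom comparison. The following is an illustrative sketch only; the gigabit figures and the function name are assumptions, not values from this disclosure:

```python
def can_isolate(backup_capacity_gbps, backup_load_gbps, leaf_traffic_gbps):
    """True if the surviving L2 link has enough headroom to absorb the leaf's traffic."""
    return backup_capacity_gbps - backup_load_gbps >= leaf_traffic_gbps

# At a busy first time the check fails; once the backup link's load drops, it passes.
print(can_isolate(10.0, 8.0, 4.0))  # False: only 2 Gb/s of headroom
print(can_isolate(10.0, 3.0, 4.0))  # True: 7 Gb/s of headroom
```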
If the second L2 link associated with the leaf switch 224 can handle the additional traffic associated with the leaf switch 222, the leaf switch 222 may proceed with the isolation request. For example, in response to a request to isolate the leaf switch 222, the leaf switch 222 can send a notification 202 to the server 232. The notification 202 may be used to inform the server 232 that egress traffic to the leaf switch 222 should stop. In some embodiments, the notification 202 may include an identification (e.g., a Media Access Control (MAC) address) of the leaf switch 222. Upon receiving the notification 202, the server 232 may stop sending egress traffic to the leaf switch 222. Egress traffic that would have been sent to the leaf switch 222 can now be sent to another leaf switch (e.g., leaf switch 224) and thus still reach the spine layer.
Notably, while egress traffic to the leaf switch 222 stops, ingress traffic toward the server 232 may continue to be sent by the leaf switch 222.
The server 232 may then send an acknowledgement 204 to the leaf switch 222 for the notification 202. The acknowledgement 204 can inform the leaf switch 222 that the server 232 is aware of the request. In some embodiments, the acknowledgement 204 may further inform the leaf switch 222 that egress traffic has been sent to another leaf switch (e.g., leaf switch 224).
The Link Aggregation Control Protocol (LACP) may be used to manage the L2 links and the communication between a leaf switch and a server. LACP allows a network device (e.g., leaf switch 222) to negotiate automatic bundling of links by sending LACP data units (LACPDUs) to a peer device (e.g., server 232) that also runs LACP. In some embodiments, the notification 202 and the acknowledgement 204 may be transmitted between the leaf switch 222 and the server 232 using LACPDUs.
Thus, the leaf switch 222 can determine whether the acknowledgement 204 was received from the server 232. In some embodiments, the leaf switch 222 may further determine whether the acknowledgement 204 is received from the server 232 within a given time (e.g., 3 seconds). The leaf switch 222 may fail to receive the acknowledgement 204 within the given time for various reasons, including at least one of: the leaf switch 222 failing to send the notification 202, the server 232 failing to receive the notification 202, the server 232 failing to send the acknowledgement 204, or the leaf switch 222 failing to receive the acknowledgement 204.
In response to a determination that the acknowledgement 204 was not received, the leaf switch 222 can retransmit the notification 202 to the server 232 to again notify the server 232 of the isolation request.
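The notify, wait, and retransmit behavior can be sketched as follows. This is a hedged illustration, not code from this disclosure: the 3-second timeout follows the example above, while the retry limit of five and the injected callables `send_notification` and `poll_for_ack` are assumptions standing in for the switch's LACPDU I/O.

```python
import time

ACK_TIMEOUT_S = 3.0      # example timeout from the description above
MAX_NOTIFY_ATTEMPTS = 5  # hypothetical limit; the disclosure only says "a given number of times"

def notify_and_wait(send_notification, poll_for_ack):
    """Send the stop-egress notification; retransmit until acknowledged or out of attempts."""
    for _ in range(MAX_NOTIFY_ATTEMPTS):
        send_notification()                    # e.g., an LACPDU carrying the notification
        deadline = time.monotonic() + ACK_TIMEOUT_S
        while time.monotonic() < deadline:
            if poll_for_ack():                 # e.g., an LACPDU carrying the acknowledgement
                return True                    # safe to stop ingress traffic next
            time.sleep(0.05)
    return False                               # isolation failed; report an error code

# Demo with stub callables: the acknowledgement arrives on the second poll.
acks = iter([False, True])
print(notify_and_wait(lambda: None, lambda: next(acks, True)))  # True
```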
In response to determining that the acknowledgement 204 has been received, the leaf switch 222 may stop ingress traffic toward the server 232. Thus, the server 232 stops its egress traffic to the leaf switch 222 before the acknowledgement 204 is determined to have been received, and the leaf switch 222 stops ingress traffic to the server 232 after that determination. In other words, once the acknowledgement 204 is received, traffic between the leaf switch 222 and the server 232 (including the server 232's egress traffic and the server 232's ingress traffic) can be completely stopped.
It is noted that some ingress traffic toward the server 232 may already have been sent, and still be in flight as real-time traffic, before the acknowledgement 204 is received. Thus, the leaf switch 222 preferably does not disconnect from the server 232 until the real-time traffic toward the server 232 has been processed. If the leaf switch 222 disconnected from the server 232 immediately after receiving the acknowledgement 204, the real-time traffic might not be completely processed.
Since processing the real-time traffic may take only a short time, the leaf switch 222 may disconnect from the server 232 a period of time after ingress traffic toward the server stops at the leaf switch 222. The time period is configurable and can be set to a few milliseconds. It is noted that although the leaf switch 222 is then isolated from the server 232, traffic between the spine switches in the spine layer and the server 232 can be communicated through the leaf switch 224 after the leaf switch 222 is disconnected.
In some embodiments, leaf switch 222 may also use the last packet of traffic to server 232 as an acknowledgement. In response to receiving the acknowledgement, the server 232 may further confirm that all traffic has been processed and that it is safe to disconnect the leaf switch 222.
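A short sketch of this drain-then-disconnect step, under the same caveats (the 5 ms window and the callable names are assumptions; the disclosure only says the period is configurable and can be a few milliseconds):

```python
import time

DRAIN_WINDOW_S = 0.005  # "a few milliseconds"; the exact value is configurable

def stop_ingress_and_disconnect(stop_ingress, disconnect):
    """After the acknowledgement: stop ingress, let in-flight traffic drain, then detach."""
    stop_ingress()              # no new packets toward the server
    time.sleep(DRAIN_WINDOW_S)  # give already-sent real-time traffic time to be processed
    disconnect()                # the L2 link can now be torn down safely
```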
It will be appreciated that when the leaf switch 222 comes back online, then the L2 link may be reestablished between the leaf switch 222 and the server 232.
As can be seen from the above, the notification 202 and the acknowledgement 204 can be used to terminate the egress and ingress traffic of the server 232 in sequence during the isolation of the leaf switch 222, so that real-time traffic between the leaf switch 222 and the server 232 can be processed before the isolation, avoiding traffic disruption.
As described above, LACP may be used to manage links and communications between leaf switches and servers. In some embodiments, the LACP may further be used to perform sequential termination of egress and ingress traffic of the server.
In some embodiments, the LACP port status field of the LACPDU may be used as a synchronization field to coordinate the leaf switch and the server during isolation. The LACP port status field contains at least three bits, each bit being a flag indicating a particular status of the transmitting port. Table 1 below shows exemplary meanings of the three bits of the LACP port status field, namely "sync", "collect", and "distribute".
Bit          Value "0"                                            Value "1"
sync         The sending and receiving ends are not synchronized  The sending and receiving ends are synchronized
collect      The port is not receiving (collecting) traffic       The port is receiving (collecting) traffic
distribute   The port is not sending (distributing) traffic       The port is sending (distributing) traffic

TABLE 1
The "sync" bit may be used to indicate whether the transmitting device and the receiving device are in sync. As shown in Table 1 above, if the "sync" bit is "0", the receiving end and the transmitting end are not synchronized, and the receiving device may resynchronize the physical ports of the receiving and transmitting ends. Resynchronization may also be referred to as "flapping". After the physical ports are synchronized, they can be aggregated into a high-bandwidth data path, providing better connectivity. The aggregated physical ports may also be referred to as a Link Aggregation Group (LAG).
If the "sync" bit is "1", the receiving end and the transmitting end are synchronized, and collecting and/or distributing can be performed. As shown in Table 1, when the three bits of the port status field are "101", the sending device is sending traffic to the receiving end and expects the receiving end to stop sending traffic. More particularly, in embodiments of the present disclosure, when the three port status bits of a leaf switch are "101", the leaf switch is still sending ingress traffic to the server connected to it, and the server is expected to stop sending egress traffic to the leaf switch. For example, when the leaf switch 222 sends the notification 202 to the server 232 using an LACPDU, the leaf switch 222 may set the LACP port status field to "101". Thus, upon receiving the notification 202, the server 232 may read the leaf switch's port status, stop egress traffic to the leaf switch, and send an acknowledgement (e.g., acknowledgement 204 in Fig. 2).
In response to the notification 202, when the server 232 sends the acknowledgement 204, the three bits of the port status field of the server 232 may be set to "110", indicating that the server 232 is still receiving traffic from the leaf switch 222 and expects the leaf switch 222 to confirm when no traffic remains in transmission. Because the server 232 continues to process the real-time traffic on the link between the leaf switch 222 and the server 232, that traffic can be drained from the link, thereby avoiding or minimizing traffic disruption when the leaf switch 222 is isolated.
After the leaf switch 222 receives the acknowledgement 204, the leaf switch 222 can further confirm that no traffic remains in transmission on the link between the leaf switch 222 and the server 232. For example, the last packet of real-time traffic sent by the leaf switch 222 may serve as this confirmation.
In this way, the leaf switch 222 is ready to disconnect from the server. Fig. 3 illustrates a schematic diagram of the isolated network 300, according to some embodiments of the present disclosure. As shown in Fig. 3, after the leaf switch 222 receives the acknowledgement, the leaf switch 222 disconnects from the server 232. It will be appreciated that traffic sent by server 232 may still reach spine switches 212 and 214 through leaf switch 224. Thus, a user of network 300 does not perceive the isolation of leaf switch 222.
Referring back to Fig. 2 and Table 1, when the three bits of the synchronization field of both the leaf switch 222 and the server 232 are "111", the leaf switch 222 and the server 232 are in bidirectional communication.
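The three Table 1 flags and the port states discussed above can be modeled as follows. This is a hedged sketch: the class and constant names are invented here, and the layout covers only the three bits of Table 1, not the full IEEE 802.1AX actor-state byte.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PortState:
    """The three Table 1 flags: sync, collect (receiving), distribute (sending)."""
    sync: bool
    collect: bool
    distribute: bool

    def bits(self):
        return f"{self.sync:d}{self.collect:d}{self.distribute:d}"

# Port states discussed above:
ISOLATING_LEAF = PortState(sync=True, collect=False, distribute=True)   # "101"
ACKING_SERVER  = PortState(sync=True, collect=True,  distribute=False)  # "110"
NORMAL_LINK    = PortState(sync=True, collect=True,  distribute=True)   # "111"

assert ISOLATING_LEAF.bits() == "101"  # leaf still sends ingress, wants egress stopped
assert ACKING_SERVER.bits() == "110"   # server still receives, has stopped sending
assert NORMAL_LINK.bits() == "111"     # normal bidirectional communication
```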
It will be appreciated that the above multiplexing of the LACP port status field of LACPDUs may be activated when a leaf switch receives a request to isolate the leaf switch from the network. It will also be appreciated that the synchronization field may be transmitted between the leaf switch and the server using the three bits of the LACP port status field of an LACPDU. In some embodiments, the synchronization field may be transmitted using data units other than LACPDUs.
Fig. 4 is a flow diagram of a method 400 of isolating a first leaf switch in a network according to some embodiments of the present disclosure. The network may include a second leaf switch and a spine switch in addition to the first leaf switch. Both the first and second leaf switches may be connected to a server (e.g., server 134 of server layer 130), while the spine switch is connected to both the first and second leaf switches. The method 400 may be performed by an electronic device. The electronic device may include a memory storing a set of instructions and at least one processor executing the set of instructions to cause the electronic device to perform method 400. For example, the electronic device may be a leaf switch (e.g., leaf switch 222 in Figs. 2-3) of a leaf layer (e.g., leaf layer 120). Referring to Fig. 4, the method 400 may include the following steps.
At step 402, in response to receiving a request to isolate a first leaf switch in a network, the first leaf switch may send a notification to a server. The notification may instruct the server to stop sending egress traffic to the first leaf switch. In some embodiments, the notification may also include a first port state of the first leaf switch. For example, the notification may be carried by a first LACPDU (Link Aggregation Control Protocol Data Unit), and the first port state may be represented by the LACP port status field of the first LACPDU. In some embodiments, the LACP port status field may include 3 bits to indicate the port state of the initiator (e.g., the first leaf switch). At step 402, the LACP port status field of the first LACPDU may be "101", indicating that the port of the first leaf switch is still distributing traffic but not receiving traffic, and that the first leaf switch expects the server to stop egress traffic toward the first leaf switch. Thus, after sending the notification, the first leaf switch may continue to send ingress traffic to the server for processing.
At step 404, the first leaf switch may determine whether an acknowledgement of the notification is received from the server. When the server receives the notification, the server may stop sending egress traffic to the first leaf switch and send an acknowledgement back to the first leaf switch. The acknowledgement may be carried by a second LACPDU. Similarly, a second port state of the server may be indicated by the LACP port status field of the second LACPDU. At step 404, the LACP port status field of the second LACPDU may be "110", indicating that the port of the server is still receiving traffic from the first leaf switch and that the server expects the first leaf switch to confirm when no traffic remains in transmission.
As previously mentioned, the server is also connected to the second leaf switch. In some embodiments, the notification may further cause egress traffic from the server to be sent to the spine switch through the second leaf switch.
In response to a determination that the acknowledgement is received, the first leaf switch may stop ingress traffic towards the server at step 406.
In some embodiments, in response to a determination at step 404 that the acknowledgement was not received within a first time period, the first leaf switch may send another notification to the server at step 402. It is noted that if the notification has been sent a given number of times without acknowledgement, the first leaf switch may generate an error code indicating that the isolation has failed.
At step 408, the first leaf switch may disconnect itself from the server. After the first leaf switch is disconnected, traffic between the spine switch and the server is communicated through the second leaf switch. In some embodiments, the first leaf switch may disconnect itself from the server a second time period after ingress traffic toward the server stops at the first leaf switch. In some embodiments, when the server receives the last packet of the ingress traffic, the server may further confirm that the last packet has been processed. In response to that confirmation, the first leaf switch may disconnect itself from the server.
Fig. 5 illustrates a block diagram of an example leaf switch 500, in accordance with some embodiments of the present disclosure. The leaf switch 500 may be connected to a server in a network and configured to perform the method 400. The network may also include a spine switch.
The leaf switch 500 may include a plurality of ports 502a-502n, a memory 504, and a processor 506 coupled to the plurality of ports 502a-502n and the memory 504.
The ports 502a-502n may be used to send traffic to and receive traffic from the spine switches and the servers. The memory 504 may store a set of instructions for performing the method 400. In addition, the memory 504 may also store an address lookup table that maps addresses of devices in the network to the corresponding ports. The processor 506 may execute the set of instructions to cause the leaf switch 500 to perform the method 400.
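A minimal sketch of such an address lookup table (the MAC addresses and port names below are hypothetical examples, not values from this disclosure):

```python
# Address lookup table as kept in memory 504: maps the MAC address of a device
# in the network to the port through which the device is reached.
forwarding_table = {
    "aa:bb:cc:00:00:01": "502a",  # e.g., a connected server
    "aa:bb:cc:00:00:02": "502b",  # e.g., an uplink toward a spine switch
}

def lookup_port(dst_mac):
    """Return the egress port for a destination MAC, or None if unknown."""
    return forwarding_table.get(dst_mac)

print(lookup_port("aa:bb:cc:00:00:01"))  # -> "502a"
```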
Embodiments of the present disclosure also provide computer program products. The computer program product may include a non-transitory computer readable storage medium having computer readable program instructions for causing a processor to perform the above-described method.
A computer-readable storage medium may be a tangible device that may store instructions for use by an instruction execution apparatus. For example, a computer readable storage medium may be, but is not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. A non-exhaustive list of computer-readable storage media includes the following more specific examples: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a Static Random Access Memory (SRAM), a flash memory, a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device such as a raised structure in a punch card or slot having instructions recorded thereon, and any suitable combination of the foregoing.
The computer-readable program instructions for performing the methods described above may be assembly instructions, Instruction Set Architecture (ISA) instructions, machine-dependent instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object oriented programming language and a conventional procedural programming language. The computer readable program instructions may execute entirely on the computer system as a stand-alone software package or may execute partially on a first computer and partially on a second computer remote from the first computer. In the latter scenario, the second remote computer may be connected to the first computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN).
The computer-readable program instructions may be provided to one or more processors of a computer or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the one or more processors of the computer or other programmable data processing apparatus, create a method that implements the methods described above.
The flowcharts and schematic diagrams in the figures illustrate exemplary architectures, functions, and operations of possible implementations of apparatus, methods, and computer program products according to various embodiments of the present specification. In this regard, the blocks in the flowchart or schematic may represent software programs, segments, or portions of code, which include one or more executable instructions for implementing the specified functions. It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the diagrams and/or flowchart illustration, and combinations of blocks in the diagrams and flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
It is appreciated that certain features of the specification, which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of the specification that are, for brevity, described in the context of a single embodiment, may also be provided separately, in any suitable subcombination, or in any other described embodiment of the specification. Certain features described in the context of various embodiments should not be considered essential features of those embodiments unless the embodiment is inoperative without those elements.
While the description has been described in conjunction with specific embodiments, it is evident that many alternatives, modifications, and variations will be apparent to those skilled in the art. Accordingly, the present disclosure includes all such alternatives, modifications, and variations as fall within the spirit and broad scope of the appended claims.

Claims (20)

1. A method for isolating a first leaf switch in a network, the first leaf switch connected to a server in the network, comprising:
in response to receiving a request to isolate the first leaf switch in the network, sending a notification to the server through the first leaf switch, wherein the notification instructs the server to stop sending egress traffic to the first leaf switch;
determining whether an acknowledgement of the notification is received from the server; and
in response to a determination that the acknowledgement has been received, stopping ingress traffic towards the server.
2. The method of claim 1, further comprising:
disconnecting the first leaf switch from the server.
3. The method of claim 2, wherein the network further comprises a second leaf switch connected to the server and a spine switch connected to the first leaf switch and the second leaf switch.
4. The method of claim 3, wherein the notification further causes egress traffic from the server to be sent to the spine switch through the second leaf switch.
5. The method of any of claims 1-4, wherein
after the notification is sent, the first leaf switch sends ingress traffic to the server for processing.
6. The method of any of claims 1-5, further comprising:
in response to a determination that the acknowledgement was not received within a first time period, sending another notification to the server.
7. The method of claim 3, wherein disconnecting the first leaf switch from the server causes traffic between the spine switch and the server to communicate through the second leaf switch.
8. The method of any of claims 2-7, wherein disconnecting the first leaf switch from the server further comprises:
disconnecting the first leaf switch from the server a second period of time after ingress traffic toward the server stops at the first leaf switch.
9. The method according to any of claims 1-8, wherein the notification is carried by a first link aggregation control protocol data unit, LACPDU, and the acknowledgement is carried by a second LACPDU.
10. The method according to any of claims 1-9, wherein the notification comprises a first port status of the first leaf switch and the acknowledgement comprises a second port status of the server.
11. A first leaf switch connected to a server in a network, comprising:
a memory storing a set of instructions; and
at least one processor coupled to the memory and configured to execute a set of instructions to cause the first leaf switch to:
in response to receiving a request to isolate the first leaf switch in the network, sending a notification to the server, the notification instructing the server to stop sending egress traffic to the first leaf switch;
determining whether an acknowledgement of the notification is received from the server; and
in response to a determination that the acknowledgement has been received, stopping ingress traffic toward the server.
12. The first leaf switch of claim 11, wherein the at least one processor is further configured to execute the set of instructions to cause the first leaf switch to further perform:
disconnecting the first leaf switch from the server.
13. The first leaf switch of claim 12, wherein the network further comprises a second leaf switch connected to the server and a spine switch connected to the first leaf switch and the second leaf switch.
14. The first leaf switch of claim 13, wherein
The notification further causes egress traffic from the server to be sent to the spine switch through the second leaf switch.
15. The first leaf switch of any of claims 11-14, wherein the at least one processor is further configured to execute the set of instructions to cause the first leaf switch to further perform:
and after the notification is sent, sending the inlet flow to a server for processing.
16. The first leaf switch of any of claims 11-15, wherein the at least one processor is further configured to execute the set of instructions to cause the first leaf switch to further perform:
in response to a determination that the acknowledgement was not received within a first time period, sending another notification to the server.
17. The first leaf switch of claim 13, wherein after the first leaf switch is disconnected from the server, traffic between the spine switch and the server is communicated through the second leaf switch.
18. The first leaf switch of any of claims 12-17, wherein disconnecting the first leaf switch from the server further comprises:
disconnecting the first leaf switch from the server a second period of time after ingress traffic toward the server stops at the first leaf switch.
19. The first leaf switch of any of claims 11-18, wherein the notification is carried by a first link aggregation control protocol data unit, LACPDU, and the acknowledgement is carried by a second LACPDU.
20. A non-transitory computer-readable medium storing a set of instructions executable by at least one processor of a leaf switch to cause the leaf switch to perform a method for isolating the leaf switch in a network, the leaf switch connected to a server in the network, the method comprising:
in response to receiving a request to isolate the leaf switch in the network, sending a notification to the server via the leaf switch, wherein the notification instructs the server to stop sending egress traffic to the leaf switch;
determining whether an acknowledgement of the notification is received from the server; and
in response to a determination that the acknowledgement is received, stopping ingress traffic toward the server.
CN201980099305.3A 2019-08-19 2019-08-19 Method and system for isolating leaf switches in a network Active CN114223182B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/101379 WO2021031070A1 (en) 2019-08-19 2019-08-19 Method and system for isolating a leaf switch in a network

Publications (2)

Publication Number Publication Date
CN114223182A (en) 2022-03-22
CN114223182B CN114223182B (en) 2024-01-05

Family

ID: 74660161

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980099305.3A Active CN114223182B (en) 2019-08-19 2019-08-19 Method and system for isolating leaf switches in a network

Country Status (2)

Country Link
CN (1) CN114223182B (en)
WO (1) WO2021031070A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100023658A1 (en) * 2008-07-25 2010-01-28 Broadcom Corporation System and method for enabling legacy medium access control to do energy efficent ethernet
CN102710486A (en) * 2012-05-17 2012-10-03 杭州华三通信技术有限公司 S-channel status notification method and equipment
CN103067291A (en) * 2012-12-24 2013-04-24 杭州华三通信技术有限公司 Method and device of up-down link correlation
US20160191374A1 (en) * 2014-12-31 2016-06-30 Juniper Networks, Inc. Fast convergence on link failure in multi-homed ethernet virtual private networks

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7606167B1 (en) * 2002-04-05 2009-10-20 Cisco Technology, Inc. Apparatus and method for defining a static fibre channel fabric


Also Published As

Publication number Publication date
CN114223182B (en) 2024-01-05
WO2021031070A1 (en) 2021-02-25


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant