CN112311685A - Method and related device for processing network congestion


Info

Publication number
CN112311685A
Authority
CN
China
Prior art keywords
port
target
network device
network
information
Prior art date
Legal status
Pending
Application number
CN201910913827.4A
Other languages
Chinese (zh)
Inventor
颜清华
郑合文
尹超
刘和洋
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to EP20843498.5A (published as EP3972209A4)
Priority to PCT/CN2020/099204 (published as WO2021012902A1)
Publication of CN112311685A
Priority to US17/563,167 (published as US20220124036A1)

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 - Traffic control in data switching networks
    • H04L 47/10 - Flow control; Congestion control
    • H04L 47/12 - Avoiding congestion; Recovering from congestion
    • H04L 47/29 - Flow control; Congestion control using a combination of thresholds
    • H04L 47/50 - Queue scheduling
    • H04L 47/62 - Queue scheduling characterised by scheduling criteria
    • H04L 47/6245 - Modifications to standard FIFO or LIFO

Abstract

A method of handling network congestion is provided. A first network device determines a target port, which is an egress port that enters a pre-congestion state or a congestion state. The first network device sends a first notification to at least one second network device. The at least one second network device includes one or more network devices capable of sending data flows to the hosts under the target port via at least two forwarding paths. The first notification includes information of the network device where the target port is located and information of the target port. The first notification can cause the at least one second network device to perform operations to avoid network congestion. With the method and apparatus, network congestion can be effectively avoided and network bandwidth utilization is improved.

Description

Method and related device for processing network congestion
Technical Field
The present application relates to network communication technologies, and in particular, to a method and a related apparatus for handling network congestion.
Background
Network congestion can occur when the amount of data carried by a network node or link in a network exceeds the amount of data that the network node or link can handle. The effects of network congestion include transmission delays, packet loss or the inability to establish new connections. Severe network congestion can lead to congestion collapse.
Various congestion control techniques are used to avoid congestion collapse. For example, when network congestion occurs, received packets are discarded or reordered, congestion control is implemented by using a TCP congestion avoidance algorithm, or the transmission rate of the sender is adjusted by using an Explicit Congestion Notification (ECN) mechanism.
How to provide a more efficient congestion control technology in a network scenario with explosive growth of traffic is an urgent problem to be solved in the field.
Disclosure of Invention
The application provides a method and a related device for processing network congestion, which can effectively avoid network congestion and improve the utilization rate of network bandwidth.
A first aspect of the present application provides a method of handling network congestion. A first network device determines a target port; the target port is an egress port that enters a pre-congestion state or a congestion state. The first network device sends a first notification to at least one second network device. The at least one second network device includes one or more network devices capable of sending data flows to the hosts under the target port via at least two forwarding paths. The first notification includes information of the network device where the target port is located and information of the target port. The at least one second network device is determined according to the role of the first network device, the attribute of the target port, and the role of the network device where the target port is located.
In the method of the present application, when an egress port that has entered a pre-congestion state or a congestion state exists in the network, the first network device notifies the second network devices in the network of this egress port. A second network device can thus learn the information of the egress port and, when subsequently forwarding packets, avoid sending them over a forwarding path that includes the egress port, so as to avoid network congestion.
Optionally, when the network device where the target port is located is the first network device, the first network device monitors the egress ports of the first network device; when the buffer usage of an egress port of the first network device exceeds a port buffer threshold, the first network device determines that the egress port is the target port.
Optionally, when the network device where the target port is located is the first network device, the first network device monitors the egress port queues of the first network device; when the length of an egress port queue exceeds a queue buffer threshold, the first network device determines that the egress port where the egress port queue is located is the target port.
In this application, whether an egress port enters the congestion state or the pre-congestion state can be determined according to the buffer usage of the egress port, or according to the length of an egress port queue of the egress port, so that notification and handling of network congestion can be implemented flexibly.
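For illustration only, the following minimal Python sketch shows these two monitoring options (per-port buffer usage and per-queue length). All names and threshold values (EgressPort, PORT_BUFFER_THRESHOLD, and so on) are hypothetical and are not part of the claimed method.

```python
# Minimal sketch of the two monitoring options (hypothetical names and values).
from dataclasses import dataclass, field
from typing import List, Optional

PORT_BUFFER_THRESHOLD = 0.8   # port buffer threshold (fraction of buffer in use)
QUEUE_BUFFER_THRESHOLD = 0.7  # queue buffer threshold (fraction of queue buffer in use)

@dataclass
class EgressQueue:
    queue_id: int
    max_bytes: int
    used_bytes: int = 0

@dataclass
class EgressPort:
    port_id: int
    buffer_bytes: int
    used_bytes: int = 0
    queues: List[EgressQueue] = field(default_factory=list)

def find_target_port(ports: List[EgressPort]) -> Optional[EgressPort]:
    """Return an egress port that entered the pre-congestion or congestion state."""
    for port in ports:
        # Option 1: monitor the buffer usage of the whole egress port.
        if port.used_bytes > PORT_BUFFER_THRESHOLD * port.buffer_bytes:
            return port
        # Option 2: monitor each egress port queue of the port.
        for q in port.queues:
            if q.used_bytes > QUEUE_BUFFER_THRESHOLD * q.max_bytes:
                return port
    return None
```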
Optionally, the network device where the target port is located is a third network device, and the first network device receives a second notification sent by the third network device, where the second notification includes information of the third network device and information of the target port. The first network device determines the target port based on the second notification.
In this application, the first network device can also receive notifications sent by other network devices to obtain information about ports, discovered by those devices, that have entered a pre-congestion state or a congestion state, so that congestion handling across the whole network can be realized.
Optionally, the information of the network device where the target port is located includes an identifier of that network device, and the information of the target port includes an identifier of the target port or an identifier of a forwarding path where the target port is located. Alternatively, the information of the network device where the target port is located further includes a role of that network device, where the role indicates the location of the network device; the information of the target port further includes an attribute of the target port, the attribute indicating the direction in which the target port transmits data flows.
The notification in this application can carry various types of information to adapt to different network architectures, which improves the applicability of the technical solution.
Optionally, before the first network device sends the first notification to the at least one second network device, the first network device further determines that there is no idle egress port on the first network device that can forward the target data flow corresponding to the target port. The target data flow is a data flow corresponding to a target address range; the target address range is the address range corresponding to the hosts under the target port, and the target address range is determined according to the information of the network device where the target port is located and the information of the target port.
In this application, the first network device preferentially forwards the target data flow through an idle egress port on the first network device, which reduces how often the target data flow is switched to a new path and reduces the impact on other network devices of switching the forwarding path of the target data flow.
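As an illustrative sketch of this preference, the logic below first looks for a local idle egress port and sends the first notification only when none exists. The helper methods (find_idle_egress_port, reroute_target_flows, build_first_notification, send) are hypothetical placeholders, not an actual device API.

```python
# Sketch of the first network device's decision flow (assumed helper methods).
def handle_local_congestion(device, target_port, second_devices):
    # Prefer a local idle egress port that can still reach the hosts under
    # the target port; rerouting locally avoids advertising the congestion.
    idle_port = device.find_idle_egress_port(exclude=target_port)
    if idle_port is not None:
        device.reroute_target_flows(to_port=idle_port)
        return
    # Otherwise send the first notification so that devices with at least two
    # forwarding paths to the affected hosts can steer traffic away.
    notification = device.build_first_notification(target_port)
    for peer in second_devices:
        device.send(peer, notification)
```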
Optionally, the information of the target port may further include an identifier of a target egress port queue, where the target egress port queue is an egress port queue of the target port that enters the pre-congestion state or the congestion state; in this case the target data flow is a data flow whose destination address corresponds to the target address range and whose priority corresponds to the identifier of the target egress port queue.
In this way, congestion avoidance processing can be applied only to the data flows corresponding to the egress port queue that has entered the pre-congestion state or the congestion state, which reduces the impact on other data flows while avoiding network congestion.
Optionally, the first network device stores information of the network device where the target port is located and information of the target port. Further, the first network device may also store the status of the target port.
Further, the first network device also sets an aging time for the stored information. In this way, when the first network device receives a subsequent data flow, the first network device may process the received data flow according to the stored information, and avoid sending the data flow to a forwarding path where the target port is located, so as to alleviate network congestion.
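A minimal sketch of storing the congestion information with an aging time is shown below; the dictionary layout and the 3-second aging value are illustrative assumptions only.

```python
# Hypothetical in-memory congestion table with aging (illustrative only).
import time

AGING_SECONDS = 3.0  # assumed aging time

congestion_table = {}  # (device_id, port_id) -> {"state": ..., "expires": ...}

def record_congestion(device_id, port_id, state):
    congestion_table[(device_id, port_id)] = {
        "state": state,  # "pre-congestion" or "congestion"
        "expires": time.monotonic() + AGING_SECONDS,
    }

def lookup_congestion(device_id, port_id):
    entry = congestion_table.get((device_id, port_id))
    if entry is None:
        return None
    if time.monotonic() >= entry["expires"]:  # aging time reached: delete entry
        del congestion_table[(device_id, port_id)]
        return None
    return entry
```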
A second aspect of the present application provides a method of handling network congestion. A second network device receives a first notification from a first network device, where the first notification includes information of the network device where a target port is located and information of the target port, and the target port is a port that enters a pre-congestion state or a congestion state; the second network device is a network device capable of sending data flows to the hosts under the target port through at least two forwarding paths. The second network device determines a target data flow, where a first forwarding path of the target data flow includes the target port. The second network device determines whether an idle egress port capable of forwarding the target data flow exists on the second network device, and obtains a determination result. The second network device processes the target data flow according to the determination result.
In the application, the second network device processes the target data stream according to the received first notification including the information of the target port entering the pre-congestion state or the congestion state, so that the target data stream can be prevented from being sent to a forwarding path where the target port is located, and network congestion is further avoided.
Optionally, when an idle egress port capable of forwarding the target data flow exists on the second network device, the second network device sends the target data flow through the idle egress port, and a second forwarding path where the idle egress port is located does not include the target port.
The second network device forwards the target data flow through the idle egress port on the second network device, so that the information of the target port does not have to be flooded to other network devices, which prevents network oscillation.
Optionally, when there is no idle egress port capable of forwarding the target data flow on the second network device, the second network device forwards the target data flow through the first forwarding path. Further, the second network device also generates a second notification, where the second notification includes information of the network device where the target port is located and information of the target port. The second network device sends the second notification to at least one third network device; the at least one third network device includes one or more devices capable of sending data flows to the hosts under the target port via at least two forwarding paths.
In the application, when the second network device does not have an idle output port capable of forwarding the target data stream, the second network device forwards the target data stream through the first forwarding path, so that loss of the received data stream can be avoided. Further, the second network device also floods the information of the target port to the third network device through the second notification. After receiving the second notification, the third network device may perform a process for avoiding network congestion, so as to avoid network congestion.
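The following sketch summarizes the second network device's handling described above (reroute when an idle egress port exists, otherwise keep the first forwarding path and flood a second notification). The helper calls are assumptions standing in for real forwarding-table operations.

```python
# Sketch of the second network device's handling of the first notification.
def handle_first_notification(device, notification, third_devices):
    device.store_congestion_info(notification)  # optional bookkeeping with aging
    idle_port = device.find_idle_egress_port(
        avoiding=(notification["device_id"], notification["port_id"]))
    if idle_port is not None:
        # Second forwarding path: does not include the target port.
        device.reroute_target_flows(to_port=idle_port)
        return
    # No idle egress port: keep the first forwarding path so packets are not
    # dropped, and flood a second notification so that other devices with at
    # least two paths to the affected hosts can react.
    for peer in third_devices:
        device.send(peer, dict(notification))
```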
Optionally, when the second network device is directly connected to the source host of the target data flow, the second network device further sends a backpressure message to the source host of the target data flow, where the backpressure message is used to enable the source host to perform an operation of handling network congestion.
The second network device sends a backpressure message to the source host of the target data flow, so that excessive traffic can be prevented from entering the network at the source, which further avoids network congestion.
Optionally, the second network device determines a target address range according to the information of the network device where the target port is located and the information of the target port, where the target address range is the address range corresponding to the hosts under the target port; the second network device determines a data flow whose destination address belongs to the target address range as the target data flow.
Optionally, the first notification further includes an identifier of a target egress port queue, where the target egress port queue is an egress port queue entering a pre-congestion state or a congestion state in the target port; and the second network device determines the data flow of which the destination address belongs to the target address range and the priority corresponds to the identifier of the egress port queue as the target data flow.
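As an illustration of this classification, the sketch below assumes the target address range is held as a set of host addresses and that priorities map one-to-one onto egress queue identifiers; both assumptions are for the example only.

```python
# Sketch of classifying a packet as part of the target data flow.
def is_target_flow(packet, target_addresses, target_queue_id=None,
                   priority_to_queue=lambda prio: prio):
    if packet["dst"] not in target_addresses:
        return False
    if target_queue_id is None:      # notification carried no queue identifier
        return True
    return priority_to_queue(packet["priority"]) == target_queue_id

# Example: a packet destined to a host under the target port, priority 3,
# with the notification naming egress port queue 3.
print(is_target_flow({"dst": "H7", "priority": 3}, {"H7", "H8"}, 3))  # True
```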
Optionally, the second network device stores the information of the network device where the target port is located and the information of the target port. Further, the second network device may also store the status of the target port.
A third aspect of the present application provides a network device that handles network congestion. The network device comprises a plurality of functional modules for performing the method of handling network congestion as provided by the first aspect or any possible design of the first aspect; the present application does not limit the division of the plurality of functional modules, and the plurality of functional modules may be correspondingly divided according to the flow steps of the method for processing network congestion in the first aspect, or may be divided according to specific implementation needs. The functional modules may be hardware modules or software modules, and the functional modules may be deployed on the same physical device or may be deployed on different physical devices.
A fourth aspect of the present application provides a network device that handles network congestion. The network device comprises a plurality of functional modules for performing the method of handling network congestion as provided by the second aspect or any possible design of the second aspect; the present application does not limit the division of the plurality of functional modules, and the plurality of functional modules may be correspondingly divided according to the flow steps of the method for processing network congestion in the second aspect, or may be divided according to specific implementation needs. The functional modules may be hardware modules or software modules, and the functional modules may be deployed on the same physical device or may be deployed on different physical devices.
A fifth aspect of the present application provides yet another network device that handles network congestion. The apparatus comprises a memory for storing program code and a processor for invoking the program code to implement the method of handling network congestion in the first aspect of the present application and any possible design thereof and to implement the method of handling network congestion in the second aspect of the present application and any possible design thereof.
A sixth aspect of the present application provides a chip, which when running, is capable of implementing the method for handling network congestion in the first aspect of the present application and any possible design thereof, and of implementing the method for handling network congestion in the second aspect of the present application and any possible design thereof.
A seventh aspect of the present application provides a storage medium having stored therein program code that, when executed, enables a device (switch, router, server, etc.) running the program code to implement the method for handling network congestion in the first aspect of the present application and any possible design thereof, and to implement the method for handling network congestion in the second aspect of the present application and any possible design thereof.
An eighth aspect of the present application provides a data center network, where the data center network includes a first network device and a second network device, the first network device is configured to implement the method for handling network congestion in the first aspect of the present application and any possible design thereof, and the second network device is configured to implement the method for handling network congestion in the second aspect of the present application and any possible design thereof.
For the beneficial effects of the third aspect to the eighth aspect of the present application, reference may be made to the description of the beneficial effects of the first aspect and the second aspect and their respective possible designs, which are not described herein again.
Drawings
Fig. 1 is a schematic structural diagram of a network system according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of another network system provided in an embodiment of the present application;
fig. 3 is a flowchart illustrating a method for handling network congestion according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a processing procedure when a target port is a downstream port of a core device in a multi-plane Clos architecture;
FIG. 5 is a schematic diagram illustrating a numbering scheme for switches and ports thereof according to the present application;
FIG. 6 is a schematic diagram of a processing procedure when a target port is a downstream port of an aggregation device in a multi-plane Clos architecture;
FIG. 7 is a schematic diagram of a processing procedure when a target port is a downstream port of an access device in a multi-plane Clos architecture;
FIG. 8 is a schematic diagram of a processing procedure when a target port is an upstream port of an aggregation device in a multi-plane Clos architecture;
FIG. 9 is a schematic diagram of a processing procedure when a target port is a downstream port of a core device in a single-plane Clos architecture;
FIG. 10 is a schematic diagram of a processing procedure when a target port is a downstream port of an aggregation device in a single-plane Clos architecture;
FIG. 11 is a diagram illustrating a process performed by the architecture of FIG. 2 when a target port is an intra-group port;
FIG. 12 is a diagram illustrating a process of handling an inter-group port as a target port in the architecture of FIG. 2;
fig. 13 is a schematic structural diagram of a network device according to an embodiment of the present application;
fig. 14 is a schematic structural diagram of another network device according to an embodiment of the present application;
fig. 15 is a schematic structural diagram of another network device according to an embodiment of the present application.
Detailed Description
The embodiment of the application provides a method and a related device for processing network congestion, which are applied to a system comprising a plurality of network devices. The embodiments of the present application are described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic structural diagram of a network system according to an embodiment of the present application, where the network system employs a Clos architecture. The network system includes an access layer 11, an aggregation layer 12, and a core layer 13. The access layer 11 comprises a plurality of access devices T1-T8, the aggregation layer 12 comprises a plurality of aggregation devices A1-A8, and the core layer 13 comprises a plurality of core devices C1-C4. Each access-layer device is connected to one or more hosts Hx. The Clos architecture in fig. 1 is a multi-plane architecture, where multi-plane means that there are multiple core device groups and each aggregation device is connected only to the core devices in one core device group. For example, fig. 1 contains the core device group (C1, C2), composed of core devices C1 and C2, and the core device group (C3, C4), composed of core devices C3 and C4. Each core device group and the aggregation devices connected to it form a forwarding plane. For example, the core device group (C1, C2) and the aggregation devices A1, A3, A5 and A7 form one forwarding plane, and the core device group (C3, C4) and the aggregation devices A2, A4, A6 and A8 form another forwarding plane. Optionally, in fig. 1, the access devices and the aggregation devices may also be organized into different points of delivery (pods), where each pod includes a certain number of access devices and aggregation devices, and each access device in a pod is connected to all aggregation devices in that pod. For example, pod 1 includes aggregation devices A1 and A2 and access devices T1 and T2; access device T1 in pod 1 is connected to aggregation devices A1 and A2, and access device T2 is likewise connected to aggregation devices A1 and A2. Each core device of the core layer is connected to all of the pods. Fig. 1 shows a plurality of pods to illustrate the connection relationship between the devices; in the drawings of the subsequent Clos networks, the pods are not drawn again for brevity. Further, the multi-plane Clos architecture of fig. 1 can be replaced with a single-plane Clos architecture, i.e., each core device is connected to all aggregation devices. The access devices in this application may be switches, and the aggregation devices and the core devices may be switches or routers.
Fig. 2 is a schematic structural diagram of another network system according to an embodiment of the present application. As shown in fig. 2, the network architecture includes a plurality of switch groups (4 are shown in fig. 2). Each switch group may be referred to as a pod, and each switch group (pod) includes N switches. The number (identifier) of each switch is in an xy format, where x indicates the pod to which the switch belongs and y indicates the number of the switch within that pod. For example, in fig. 2, pod 1 includes switches 11, 12, 13 … 1N, pod 2 includes switches 21, 22, 23 … 2N, pod 3 includes switches 31, 32, 33 … 3N, and pod 4 includes switches 41, 42, 43 … 4N. The N switches in each switch group are directly connected to one another in pairs. Each switch is also directly connected to the corresponding switch in each other pod, forming N inter-group planes. A corresponding switch refers to the switch with the same number (identifier) in a different switch group. For example, switches 11, 21, 31, and 41 are corresponding switches of one another. Switches 11, 21, 31 and 41 are interconnected to form the left inter-group plane in fig. 2, and switches 1N, 2N, 3N and 4N are interconnected to form the right inter-group plane in fig. 2. Direct connection means that there is no other network device, such as a switch or a router, between the two switches, although there may be a device for providing a connection or for enhancing a signal. Ports connecting switches in different switch groups are called inter-group ports, and ports connecting switches in the same switch group are called intra-group ports. The switches in a pod have the same configuration or specification. Each pod forms an intra-group plane. Further, each switch shown in fig. 2 is also connected to one or more hosts; only hosts H1 and H2 under switch 11 are shown in fig. 2.
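For illustration, the small sketch below reproduces the numbering scheme of fig. 2: switch "xy" sits in pod x with index y, its intra-group peers are the other switches of the same pod, and its inter-group peers are the switches with the same index in the other pods. The pod and switch counts are arbitrary example values.

```python
# Illustrative sketch of the "xy" numbering scheme of fig. 2.
NUM_PODS = 4
SWITCHES_PER_POD = 3  # the "N" in the description (example value)

def intra_group_peers(pod, idx):
    """Other switches in the same pod (connected pairwise, intra-group plane)."""
    return [f"{pod}{y}" for y in range(1, SWITCHES_PER_POD + 1) if y != idx]

def inter_group_peers(pod, idx):
    """Corresponding switches (same index) in the other pods (inter-group plane)."""
    return [f"{x}{idx}" for x in range(1, NUM_PODS + 1) if x != pod]

print(intra_group_peers(1, 1))  # ['12', '13']
print(inter_group_peers(1, 1))  # ['21', '31', '41']
```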
Based on the network system shown in fig. 1 or fig. 2, as shown in fig. 3, the present application provides a method for handling network congestion. The method is realized by the cooperation of a first network device and a second network device. The first network device may be any of the devices in fig. 1 or fig. 2, and the second network device may be determined by the first network device or may be pre-configured.
The method is described below in connection with fig. 3.
In step 301, the first network device determines a target port.
The destination port is an egress port that enters a congested state or a pre-congested state. Here, the pre-congestion state refers to a state in which congestion is about to occur but congestion has not yet occurred.
In one implementation, the destination port is an egress port of the first network device, and step 301 may include 301-1 and 301-2.
In step 301-1, a first network device monitors the egress ports of the first network device. In the present application, the first network device may be any network device. When the first network device forwards a packet, the packet to be sent enters an egress port queue of an egress port, where each egress port has a plurality of (e.g., 8) egress port queues. Monitoring the egress ports of the first network device may mean monitoring each egress port of the first network device, or monitoring each egress port queue of the first network device. For example, the first network device monitors whether the buffer usage of each egress port exceeds a first threshold, or monitors whether the length of each egress port queue exceeds a second threshold. The first threshold indicates an occupied proportion or a number of bytes of the buffer of one egress port, and may also be referred to as a port buffer threshold; the second threshold indicates an occupied proportion or a number of bytes of the buffer of one egress port queue, and may also be referred to as a queue buffer threshold.
In step 301-2, the first network device determines a target port according to the monitoring result.
Optionally, when the buffer usage of an egress port exceeds a first threshold, the first network device determines the egress port as the target port. The first threshold may be a pre-congestion threshold or a congestion threshold. When the buffer usage of an egress port exceeds the pre-congestion threshold, the egress port enters the pre-congestion state. When the buffer usage of an egress port exceeds the congestion threshold, the egress port enters the congestion state.
Optionally, when the length of an egress port queue exceeds a second threshold, the first network device determines that the egress port where the egress port queue is located is the target port. This egress port queue may be referred to as the target egress port queue. The first network device allocates a buffer area for each egress port queue, and the maximum length of an egress port queue refers to the size of the buffer area allocated to it. When packets enter the buffer corresponding to an egress port queue, the amount of data stored in the buffer is the length of the egress port queue. The second threshold may be a length (a number of bytes) or a ratio. For example, if the maximum length of egress port queue A is 2 MB and the second threshold is 70%, then when the amount of data stored in the buffer of egress port queue A reaches or exceeds 1.4 MB, it is determined that port queue A enters the pre-congestion state or the congestion state (depending on how the threshold is configured). The first network device then determines that the egress port where port queue A is located is the target port.

In another implementation, the first network device is not the network device where the target port is located, and step 301 includes the first network device receiving a notification A sent by a third network device, where the third network device is the network device where the target port is located. The notification A includes information of the third network device and information of the target port. The first network device determines the target port according to the information of the target port in the notification. Further, the notification A may also include an identifier of an egress port queue of the target port that entered the pre-congestion state or the congestion state.
Optionally, after determining the target port, the first network device further stores congestion information, where the congestion information includes information of the target port and information of a network device where the target port is located. The congestion information may also include the status of the target port so that when a data flow is subsequently received, the data flow is processed according to the congestion information. Further, the first network device sets an aging time for the congestion information, and deletes the congestion information when the aging time is reached.
In step 302, the first network device sends notification B to at least one second network device. The notification B includes information of the network device where the target port is located and information of the target port.
Optionally, the notification B may further include a type of the notification B, where the type indicates that the target port carried in the notification B is a port that has entered the pre-congestion state or the congestion state. Optionally, the information of the target port in the notification B includes the status of the target port, where the status is the pre-congestion state or the congestion state. Optionally, the notification B further includes an identifier of the egress port queue of the target port that has entered the pre-congestion state or the congestion state. In this application, the information of the network device where the target port is located and the information of the target port that are included in the notification B are collectively referred to as congestion information.
The first network device may send the notification B to the at least one second network device in a multicast manner, or may send the notification B to each second network device in the at least one second network device in a unicast manner.
In one embodiment, the information of the network device where the target port is located includes an identifier of the network device, and the information of the target port includes an identifier of the target port or an identifier of a path where the target port is located. The identifier of the path where the target port is located may be an identifier of a network device on a forwarding path where the target port is located. In another embodiment, the information of the network device where the target port is located includes an identifier of the network device and a role of the network device, and the information of the target port includes an identifier of the target port and an attribute of the target port.
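The two layouts can be pictured as one record with optional fields, as in the hypothetical sketch below; the field names are illustrative and do not define a wire format.

```python
# Hypothetical representation of the notification contents (not a wire format).
from dataclasses import dataclass
from typing import Optional

@dataclass
class CongestionNotification:
    device_id: str                        # network device where the target port is located
    port_id: Optional[str] = None         # identifier of the target port, or ...
    path_id: Optional[str] = None         # ... identifier of the path where it is located
    device_role: Optional[str] = None     # e.g. "core", "aggregation", "access"
    port_attribute: Optional[str] = None  # e.g. "upstream" or "downstream"
    queue_id: Optional[int] = None        # target egress port queue, if carried
    state: Optional[str] = None           # "pre-congestion" or "congestion"

# First embodiment: identifier of the device plus the port (or path).
n1 = CongestionNotification(device_id="C2", port_id="P4")
# Second embodiment: also carries the device role and the port attribute.
n2 = CongestionNotification(device_id="C2", port_id="P4", device_role="core",
                            port_attribute="downstream", queue_id=3,
                            state="pre-congestion")
```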
The at least one second network device may be pre-configured or determined by the first network device according to a preset rule. The at least one second network device includes one or more network devices capable of sending data flows to the hosts under the target port via at least two forwarding paths. Alternatively, the at least one second network device includes one or more network devices that are capable of sending data flows to the hosts under the target port through at least two forwarding paths and that are the fewest hops away from the network device where the target port is located. The hosts under the target port are the near-end hosts that can receive data flows through the target port. The at least one second network device is determined based on the role of the network device where the target port is located, the attribute of the target port, and the role of the first network device. The attribute of the target port indicates the forwarding direction of data flows on the target port, and the role of a network device indicates the location of the network device in the network system.
In the network system shown in fig. 1, a network device may be an access device, an aggregation device or a core device. The attributes of a port include an upstream port or a downstream port. The port of an access device connected to an aggregation device and the port of an aggregation device connected to a core device are uplink (upstream) ports; the port of a core device connected to an aggregation device and the port of an aggregation device connected to an access device are downlink (downstream) ports. In the network system shown in fig. 1, a near-end host refers to a host that can be reached without crossing a core device. For example, in fig. 4, the near-end hosts under port 4 of the core device C2 are the hosts connected to the access devices T7 and T8; in fig. 6, the near-end hosts under port 3 of aggregation device A7 are the hosts connected to the access devices T7 and T8; in fig. 7, the near-end hosts under port 3 of the access device T7 are the hosts to which the access device T7 is connected; in fig. 8, the near-end hosts under port 1 of aggregation device A1 are the hosts connected to the access devices T1 and T2; in fig. 9, the near-end hosts under port 7 of the core device C1 are the hosts to which the access devices T7 and T8 are connected; in fig. 10, the near-end hosts under port 1 of the access device T7 are the hosts to which the access device T7 is connected.
In the network system shown in fig. 2, the attributes of a port include an intra-group port or an inter-group port. Ports connecting switches within the same switch group are referred to as intra-group ports, e.g., the ports connecting switch 11 and switch 12. Ports connecting switches in different switch groups are called inter-group ports, e.g., the ports connecting switch 1N and switch 2N. The role of a network device may be an intra-group switch or an inter-group switch: switches belonging to the same switch group are intra-group switches of one another, and switches belonging to different switch groups are inter-group switches of one another. For example, the switches 11, 12 … 1N in pod 1 are intra-group switches of one another, and the switch 1N in pod 1 is an inter-group switch with respect to the switch 2N in pod 2. In the network system shown in fig. 2, a near-end host refers to a host under the switch to which a target port is directly connected. For example, in fig. 11, the near-end hosts under port 3 of switch 3N are all hosts 34 connected to switch 33; in fig. 12, the near-end hosts under port 2 of switch 1N are all the hosts connected to switch 2N.

Before step 302, the first network device may further determine whether there is an idle egress port capable of forwarding the target data flow on the first network device. When there is no idle egress port, step 302 is executed; when there is an idle egress port, the first network device forwards the target data flow through the idle egress port.
The target data stream is a data stream corresponding to a target address range; the target address range is an address range corresponding to a host under the target port, and the target address range is determined according to the information of the network device where the target port is located and the information of the target port. When the first network device determines only the target port that entered the pre-congestion state or the congestion state, the target data flow includes data flows destined for hosts below the target port. When the first network device further determines that the output port queue enters the pre-congestion state or the congestion state, the target data flow includes a data flow which is sent to the host under the target port and has a priority corresponding to the identifier of the output port queue entering the congestion state or the pre-congestion state. Optionally, the target data flow may also be an elephant flow in a data flow addressed to the host under the target port, or an elephant flow in a data flow addressed to the host under the target port and having a priority corresponding to an identifier of an egress port queue entering a congested state or a pre-congested state. The elephant flow is a data flow in which the flow rate (total number of bytes) per unit time exceeds a set threshold.
The packets in a data flow carry a priority. When a network device forwards data flows, data flows with the same priority are scheduled to the same egress port queue, so that packets with different priorities enter different egress port queues of the egress port; the priority of a packet therefore has a correspondence with the identifier of an egress port queue. When all network devices in the network system forward data flows using the same scheduling rule, one network device can know, from the priority of a data flow it receives, the identifier of the egress port queue that the data flow occupies on another network device.
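A toy sketch of such a network-wide scheduling rule is given below; the 8-queue, priority-equals-queue mapping is an assumption used only to illustrate why the priority of a received flow reveals its queue on another device.

```python
# Assumed common scheduling rule: priority p is always placed in egress queue p.
NUM_EGRESS_QUEUES = 8

def egress_queue_for_priority(priority: int) -> int:
    return priority % NUM_EGRESS_QUEUES

# A flow received with priority 3 is known to occupy egress port queue 3 (Q3)
# on the device that reported the congestion.
assert egress_queue_for_priority(3) == 3
```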
When the target port is a downstream port in the Clos architecture shown in fig. 1, the target data flow corresponding to the target address range is a data flow whose destination address belongs to the target address range. When the target port is an upstream port in the Clos architecture shown in fig. 1, the target data flow corresponding to the target address range is a data flow whose destination address does not belong to the target address range. When the target port is an intra-group port or an inter-group port in the architecture shown in fig. 2, the target data flow corresponding to the target address range is a data flow whose destination address belongs to the target address range.
In step 303, the second network device receives the notification B.
The second network device is any one of the at least one second network device. Optionally, after receiving the notification B, the second network device stores the information of the target port and the information of the network device where the target port is located, which are carried in the notification B. The second network device may also store the status of the target port. For example, the second network device sets a first table for storing information of ports entering the pre-congestion state or the congestion state, and each entry of the first table includes information of one target port and information of the network device where the target port is located. For another example, the second network device sets a second table, and each entry of the second table includes information of a target port, information of the network device where the target port is located, and a status of the target port. Further, the second network device may set an aging time for the information of each destination port, and delete the information of the destination port after the aging time is reached.
In step 304, the second network device determines a target data stream.
Since the second network device receives notification B, the second network device is not the network device where the target port is located.
In one implementation, the second network device determines a target address range according to the information of the target port and the information of the network device where the target port is located in the notification B, stores the target address range, and determines a subsequently received data flow whose destination address belongs to the target address range as a target data flow. For example, the second network device obtains the destination address of a received data flow; if the destination address belongs to the target address range, or if the destination address belongs to the target address range and the priority of the data flow corresponds to the identifier of the target egress port queue, the second network device determines the data flow as the target data flow. The target address range is the address range corresponding to the hosts under the target port, and the first forwarding path of the target data flow (i.e., the initial forwarding path before the notification B is received) includes the target port.
In step 305, the second network device determines whether there is an idle egress port capable of forwarding the target data stream on the second network device, obtains a determination result, and processes the target data stream according to the determination result.
The idle egress port is another egress port on the second network device that has not entered the congestion state or the pre-congestion state and that is different from the current egress port of the target data flow. That is, the buffer usage of the idle egress port does not exceed the first threshold, or no egress port queue of the idle egress port has a length exceeding the second threshold.
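The idle-port test of step 305 can be sketched as follows, reusing the hypothetical EgressPort/EgressQueue shapes and thresholds from the monitoring sketch above.

```python
# Sketch of the "idle egress port" test of step 305 (hypothetical shapes).
def is_idle(port, current_port_id, first_threshold=0.8, second_threshold=0.7):
    if port.port_id == current_port_id:   # must differ from the current egress port
        return False
    if port.used_bytes > first_threshold * port.buffer_bytes:
        return False                      # port buffer usage above the first threshold
    # No egress port queue of the port may exceed the second threshold.
    return all(q.used_bytes <= second_threshold * q.max_bytes for q in port.queues)

def find_idle_egress_port(candidate_ports, current_port_id):
    return next((p for p in candidate_ports if is_idle(p, current_port_id)), None)
```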
For example, in the Clos architecture shown in fig. 4, when the first network device is the core device C2, the target port is the downstream port 4, and the second network device is the aggregation device A1, the target address range determined by aggregation device A1 is the address range corresponding to the hosts connected to the access devices T7 and T8. When aggregation device A1 receives a data flow whose destination address belongs to the target address range, aggregation device A1 determines whether there is an idle egress port among the upstream ports of aggregation device A1, where the forwarding path of the idle egress port does not include the downstream port 4 of the core device C2.
The processing of the target data stream by the second network device according to the determination result includes steps 306 and 307.
In step 306, an idle egress port exists on the second network device, and the second network device transmits the target data stream through the idle egress port.
In this application, a forwarding path where an idle egress port determined by the second network device for the target data flow is located is referred to as a second forwarding path of the target data flow, where the second forwarding path does not include the target port.
In step 307, no idle egress port exists on the second network device, and the second network device forwards the target data flow through the initial forwarding path (i.e., the first forwarding path) of the target data flow, i.e., without changing the egress port of the target data flow on the second network device.
Further, since there is no idle egress port on the second network device, the second network device also notifies the pre-congestion state or the congestion state of the target port to at least one third network device capable of sending data flows to the hosts under the target port through at least two forwarding paths. Optionally, the second network device generates a notification C according to the information of the network device where the target port is located and the information of the target port, and sends the notification C to the third network device. The at least one third network device may be configured in advance on the second network device, or may be determined by the second network device according to the information of the network device where the target port is located and the information of the target port.
With the method shown in fig. 3, when an egress port or an egress port queue of any network device in the network system shown in fig. 1 or fig. 2 enters a pre-congestion state or a congestion state, that network device may send a notification, so that the network devices receiving the notification perform processing for handling network congestion. The processing for handling network congestion includes re-selecting a forwarding path for the target data flow, avoiding sending the target data flow to the egress port, and sending a notification to other network devices to flood the information of the target port. By the method shown in fig. 3, network congestion can be avoided. In addition, the method can also achieve load balancing across the whole network and improve the utilization of network resources.
Different implementations of the various steps in the method shown in fig. 3 are described below in conjunction with fig. 4 through 12.
Fig. 4 is a schematic diagram of a processing procedure when a target port of the multi-plane Clos architecture shown in fig. 1 is a downstream port of a core device. As shown in fig. 4, the thin solid line indicates the link where the target port is located, and the thick solid line indicates the forwarding path of the notification. A data flow (denoted as data flow 1) from host H2 to host H7 reaches core device C2 through access device T2 and aggregation device A1, and core device C2 forwards data flow 1 to aggregation device A7 through egress port queue 3 (Q3) of port 4 (P4). In the process of forwarding data flow 1, core device C2 monitors that the length of egress port queue 3 exceeds the second threshold, determines that port queue 3 enters the pre-congestion state, and further determines that port 4 is the target port (step 301).
Core device C2 first checks whether there is any other idle egress port on core device C2 that can reach host H7. When there is no other idle egress port on core device C2 that can reach host H7, core device C2 sends a multicast notification to a plurality of aggregation devices other than the aggregation device A7 connected to port 4 (step 302). In the multi-plane scenario, these aggregation devices belong to the same forwarding plane as core device C2. In fig. 4, if core device C2 sends the notification in a multicast manner, core device C2 determines a target multicast group corresponding to port 4, where the multicast source of the target multicast group is core device C2 and the multicast egress ports are the ports connected to the aggregation devices A1, A3, and A5, assumed to be port 1, port 2, and port 3. Then, core device C2 sends the multicast notification through port 1, port 2 and port 3, where the multicast notification includes one or more of the identifier of core device C2 (C2) and the identifier of port 4 (P4), and optionally the role of core device C2, the port attribute of port 4 (downstream port) and the identifier of egress port queue 3 (Q3). In addition, core device C2 may also store the congestion information for port 4. The multicast notification arrives at aggregation devices A1, A3, and A5. The processing procedure of an aggregation device is described below with aggregation device A1 as an example.
Aggregation device A1 receives the multicast notification sent by core device C2 (step 303). Optionally, aggregation device A1 obtains the congestion information ("C2P4" or "C2P4Q3" or "C2P4Q3 downstream") in the multicast notification, stores the congestion information, and sets the aging time. Aggregation device A1 determines a target data flow (step 304). When determining the target data flow, aggregation device A1 first determines the address range (target address range) of the hosts under port P4 of core device C2, and then determines a data flow whose destination address belongs to the target address range as the target data flow, or determines a data flow whose destination address belongs to the target address range and whose priority corresponds to Q3 as the target data flow.
When determining the address range of the hosts corresponding to port P4 of core device C2, in one optional manner, since the target port P4 is a downstream port, aggregation device A1 determines the address ranges of all hosts connected under the aggregation device A7 that is connected to P4.
In one embodiment, the network device and the host may be assigned addresses according to a network architecture. For example, each network device in fig. 1 is assigned a number, which is the identifier of the network device. As shown in fig. 5, the number in each block representing a network device is one specific implementation of the identity of the switch. For example, 10 may be a value of C2. The combination of each network device identifier and the downlink port identifier can uniquely identify a lower layer device. For example, a core device 10 and port 00 combination may identify aggregation device 000, and a combination of port 1111 and an identification (00) of the pod in which aggregation device 000 is located may identify access device 1111. The address of the host includes the port of the access device connected to the host and the identification of the access device on the aggregation device. The address of host H2 may be xx.xx.001111.1110 according to the addressing rules described above.
Based on the addressing rule shown in fig. 5, if the identifier of the network device included in the multicast notification received by aggregation device A1 is 10 and the port identifier is 11, the host address range determined by aggregation device A1 according to the multicast notification is 110000 bits to 10 bits low or 111111 bits low, and the determined priority is the priority corresponding to Q3, for example, 3. Aggregation device A1 determines that a received data flow whose destination address falls within that host address range and whose priority is 3 is the target data flow.
In another optional manner, when determining the address ranges of all hosts connected under port P4 of core device C2, aggregation device A1 determines them by using a table lookup. For example, each network device stores three tables: the first table stores the correspondence among a core device, the ports of the core device, and the aggregation devices; the second table stores the connection relationship among an aggregation device, the ports of the aggregation device, and the access devices; the third table stores the connection relationship between an access device and the host addresses. After receiving the multicast notification, aggregation device A1 determines from the identifier of the network device in it (C2) that the role of that device is a core device, looks up the first table according to C2 and P4 to obtain aggregation device A7, looks up the second table according to aggregation device A7 to obtain the access devices T7 and T8, and finally looks up the third table to obtain the addresses of the hosts connected to the access devices T7 and T8, generating a host address list corresponding to the congestion information. Optionally, the three tables may also be integrated into one table, which then stores the correspondence among the core devices, the aggregation devices, the access devices, and the host addresses.
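For illustration, the table-lookup chain can be sketched with three small dictionaries; the entries below are made-up values matching the fig. 4 example, not data from the application.

```python
# Hypothetical three-table lookup: core port -> aggregation -> access -> hosts.
CORE_PORT_TO_AGG = {("C2", "P4"): "A7"}
AGG_TO_ACCESS = {"A7": ["T7", "T8"]}
ACCESS_TO_HOSTS = {"T7": ["H7"], "T8": ["H8"]}

def hosts_under(core_id, port_id):
    agg = CORE_PORT_TO_AGG[(core_id, port_id)]
    hosts = []
    for access in AGG_TO_ACCESS[agg]:
        hosts.extend(ACCESS_TO_HOSTS[access])
    return hosts

print(hosts_under("C2", "P4"))  # ['H7', 'H8']
```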
After aggregation device A1 determines a target data flow (assumed to be data flow 1), it determines whether there is an idle uplink egress port on aggregation device A1 (since the target port P4 is a downlink port of the core device, and a downlink port of a core device corresponds to an uplink port of an aggregation device, aggregation device A1 needs to determine whether there is an idle uplink port) (step 305). When there is an idle uplink egress port, aggregation device A1 uses the idle uplink egress port as the egress port of the target data flow and forwards the target data flow through it (step 306). When there is no idle uplink egress port, aggregation device A1 continues to forward the target data flow through the initial forwarding path corresponding to the target data flow (step 307).
Before the congestion information ages, aggregation device A1 may, upon receiving any data flow, process the data flow according to the method described above.
In addition, after aggregation device A1 performs step 307, it also spreads the congestion information to the access devices. That is, aggregation device A1 generates another notification and sends it to the access devices T1 and T2 (step 302). The other notification includes the congestion information. After receiving the other notification, the access devices T1 and T2 perform corresponding processing. The processing of an access device is described below with access device T2 as an example.
When access device T2 receives the other notification (step 303), access device T2 obtains the congestion information in it, stores the congestion information, and sets the aging time, similarly to aggregation device A1. Access device T2 determines a target address range according to the congestion information, determines a target data flow according to the target address range (step 304), and determines whether there is an idle egress port capable of forwarding the target data flow on access device T2 (step 305). If there is an idle egress port, access device T2 forwards the target data flow through the idle egress port (step 306); if there is no idle egress port, access device T2 forwards the target data flow through its initial forwarding path (step 307). In addition, access device T2 determines the source host of the target data flow and sends a backpressure message to the source host, where the backpressure message is used to notify the source host to perform an operation for avoiding network congestion. The operation for avoiding network congestion may be reducing the rate of data transmission to access device T2 or reducing the rate at which the target data flow is sent to access device T2. Access device T2 determines the target data flow and processes the target data flow in a manner similar to that of aggregation device A1, and for processes not described in detail here, reference may be made to the description of the processing of aggregation device A1.
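The backpressure step can be pictured with the short sketch below; the message fields and the "reduce-rate" reaction are illustrative assumptions, and the helper methods are placeholders rather than a defined interface.

```python
# Sketch of backpressure toward a directly connected source host (assumed API).
def maybe_backpressure(access_device, flow):
    src = flow["src"]
    if access_device.is_directly_connected(src):
        access_device.send(src, {"type": "backpressure",
                                 "flow_id": flow["id"],
                                 "action": "reduce-rate"})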
Through the above process, a core device in the Clos system can send the congestion information to the aggregation devices after an egress port enters the pre-congestion state or the congestion state, and an aggregation device can send the congestion information to the access devices. Each network device that receives the congestion information performs operations for handling network congestion, so network congestion can be avoided and the bandwidth utilization of the whole Clos system is improved.
Fig. 6 is a schematic diagram of a processing procedure when a target port in the multi-plane Clos architecture shown in fig. 1 is a downstream port of an aggregation device. The thin solid line represents the link where the target port is located, and the thick solid line represents the notified forwarding path. As shown in fig. 6, assuming that host H2 sends data flow 1 to host H7, data flow 1 enters queue 3 of egress port 3 on the aggregation device a7. The aggregation device a7 detects that the length of queue 3 of egress port 3 exceeds the second threshold and determines that queue 3 enters the pre-congestion state, so egress port 3 is the target port. There is no idle downstream egress port on the aggregation device a7. The aggregation device a7 therefore sends a notification to a plurality of second network devices (step 302). The plurality of second network devices are determined according to the attribute of egress port 3 (downstream port) and the role of the aggregation device a7 (aggregation device), and include all access devices except the access device T7 connected to egress port 3. The notification includes the identification of the aggregation device a7 (a7) and the identification of egress port 3 (P3). Optionally, the notification may also include one or more of the role of the aggregation device a7 (aggregation device), the attribute of egress port 3 (downstream port), and the identification of queue 3 (Q3). The notification may be sent in a unicast or multicast manner.
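For illustration, the contents of such a notification can be sketched as a simple key-value structure. The field names and the dict encoding below are assumptions; the patent only requires that the notification carry the information of the device where the target port is located and the information of the target port, plus the optional role, port attribute, and queue identifier.

# A minimal sketch of how the congestion notification described above might be encoded.
def build_notification(device_id, port_id, *, role=None, port_attr=None, queue_id=None):
    notification = {
        "device_id": device_id,   # e.g. "A7", identification of the aggregation device
        "port_id": port_id,       # e.g. "P3", identification of the target egress port
    }
    # Optional fields: role of the device, attribute (direction) of the port,
    # and the identifier of the congested egress-port queue.
    if role is not None:
        notification["role"] = role            # e.g. "aggregation"
    if port_attr is not None:
        notification["port_attr"] = port_attr  # e.g. "downstream"
    if queue_id is not None:
        notification["queue_id"] = queue_id    # e.g. "Q3"
    return notification

# Example corresponding to the scenario in fig. 6:
notification = build_notification("A7", "P3", role="aggregation",
                                  port_attr="downstream", queue_id="Q3")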
The notification sent by the aggregation device a7 to the access device T8 may reach the access device T8 directly, while the notification sent to the access devices T1-T6 first reaches the core devices C1 and C2 that belong to the same forwarding plane as the aggregation device a7.
Since the core devices C1 and C2 cannot send data flows to the hosts under egress port 3 of the aggregation device a7 through at least two forwarding paths, the core devices C1 and C2 are not destinations of the notification. After receiving the notification, the core devices C1 and C2 forward it through the ports other than the port on which the notification was received (fig. 6 shows only the forwarding path of the core device C2).
After being forwarded by the core device C1 or C2, the notification reaches the aggregation devices a1, A3, and a5 that belong to the same forwarding plane as the aggregation device a7. Since the aggregation devices a1, A3, and a5 cannot send data flows to the hosts under egress port 3 of the aggregation device a7 through at least two forwarding paths, the aggregation devices a1, A3, and a5 are also not destinations of the notification and therefore continue to forward the received notification. Taking the aggregation device a1 as an example, after receiving the notification, the aggregation device a1 copies the notification and forwards it through its downstream ports, that is, sends the notification to the connected access devices T1 and T2.
In the scenario shown in fig. 6, since the destinations of the notification sent by the aggregation device a7 are the access devices other than the access device T7, the core devices and the aggregation devices only forward the notification after receiving it. After any one of the access devices T1-T6 and T8 receives the notification, steps 304-307 are performed in the manner described in the above embodiments.
Through the above process, after an egress port enters the pre-congestion state or the congestion state, an aggregation device in the Clos system can notify all access devices, except the access device connected to that egress port, of the congestion information. Each access device receiving the congestion information performs the operation of handling network congestion. Therefore, this process can avoid network congestion and improve the bandwidth utilization of the entire Clos system.
Fig. 7 is a schematic diagram of a processing procedure when a target port in the multi-plane Clos architecture is a downstream port of an access device. As shown in fig. 7, the thin solid line indicates the link where the target port is located, and the thick solid line indicates the notified forwarding path. It is assumed that the host H2 sends data flow 1 to the host H7, data flow 1 enters queue 3 of egress port 3 on the access device T7, and the access device T7 detects that the length of queue 3 of egress port 3 exceeds the second threshold, determines that queue 3 enters the pre-congestion state, and further determines that egress port 3 is the target port. Furthermore, there is no other downstream port on the access device T7 that can reach the host H7. The access device T7 generates a notification including the identification of the access device T7 (T7) and the identification of egress port 3 (P3). Further, the notification may also include one or more of the role of the access device T7 (access device), the attribute of egress port 3 (downstream port), and the identification of queue 3 (Q3). The access device T7 sends the notification to a plurality of second network devices, where the plurality of second network devices include all access devices except the access device T7. Further, since the access device T7 is directly connected to the host H7, the access device T7 knows the address of the host H7, and therefore the notification may also include the address of the host H7. In this way, other access devices receiving the notification can determine the target data flow directly from the address of the host H7. The notification may be sent in a unicast or multicast manner.
Similar to the process described for fig. 6, after an aggregation device or a core device receives the notification, it forwards the notification according to the destination address of the notification. After each access device receives the notification, it performs an operation similar to that of the access device T2 in fig. 4.
In the scenarios shown in fig. 4, fig. 6, and fig. 7, the target ports are all downstream ports. In other embodiments, the target port may also be an upstream port.
Fig. 8 is a schematic diagram of a processing procedure when a target port in the multi-plane Clos architecture is an upstream port of an aggregation device. The thin solid line represents the link where the target port is located, and the thick solid line represents the notified forwarding path. Still taking data flow 1 sent from the host H2 to the host H7 as an example, in the process of forwarding data flow 1, the aggregation device a1 monitors that the length of egress port queue 3 (Q3) of egress port 1 (P1), through which data flow 1 is forwarded, exceeds the second threshold, and determines that port queue 3 enters the pre-congestion state, so egress port 1 is the target port. The aggregation device a1 determines whether there is another idle egress port (upstream port) on the aggregation device a1 that can reach the host H7. If there is such an idle egress port, the aggregation device a1 switches data flow 1 to the idle egress port and sends data flow 1 through the idle egress port. When there is no other idle egress port that can reach the host H7, the aggregation device a1 sends a notification to the plurality of access devices connected to the aggregation device a1 in a multicast or unicast manner (step 302), where the notification includes the identification of the aggregation device a1 (a1) and the identification of egress port 1 (P1). Optionally, the notification may also include the role of the aggregation device a1, the attribute of egress port 1 (upstream port), and the identification of egress port queue 3 (Q3). In fig. 8, although the aggregation devices A3, a5, and a7 can all send data flows to the hosts under the target port through at least two forwarding paths, they are not the devices with the smallest number of hops from the aggregation device a1. Therefore, the aggregation device a1 sends the notification only to the access devices T1 and T2, and does not send the notification to the aggregation devices A3, a5, and a7 or to other access devices. The notification arrives at the access devices T1 and T2. The following describes the processing flow of an access device by using the access device T2 as an example.
After receiving the notification (step 303), the access device T2 obtains the congestion information in the notification, stores the congestion information, and sets an aging time. The access device T2 determines the target address range corresponding to the aggregation device a1, that is, the addresses of the hosts corresponding to all access devices connected to the aggregation device a1, and determines a data flow whose destination address does not belong to the target address range, or whose destination address does not belong to the target address range and whose priority corresponds to Q3, as the target data flow (step 304). In this embodiment, because the target port is an upstream port of the aggregation device a1 and the data flows exchanged between the hosts under the aggregation device a1 do not pass through that upstream port, the access device T2 determines the data flows sent to hosts outside the management range of the aggregation device a1 as target data flows. After determining the target data flow, the access device T2 determines whether there is an idle egress port (upstream port) corresponding to the congestion information on the access device T2 (step 305). If there is an idle egress port, the access device T2 forwards the target data flow through the idle egress port (step 306); if there is no idle egress port, the access device T2 sends the target data flow through its initial forwarding path (step 307). Further, the access device T2 determines the source host of the target data flow and sends a backpressure message to the source host, where the backpressure message is used to notify the source host to perform an operation of handling network congestion. The operation of handling network congestion may be reducing the rate at which data is sent to the access device T2 or reducing the rate at which the target data flow is sent to the access device T2.
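The flow-selection rule for this uplink-port case is the inverse of the downlink-port case, which a minimal sketch makes explicit; the host names below are hypothetical, and the target address range is given directly rather than derived from the notification.

# Illustrative only: selecting target data flows at access device T2 when the
# congested port is an uplink port of aggregation device A1.
def is_target_flow_uplink_case(dst, priority, local_range, queue_id=None):
    # Flows staying inside A1's management range never use A1's uplink ports,
    # so only flows whose destination lies outside that range are re-routed.
    outside = dst not in local_range
    if queue_id is None:
        return outside
    return outside and priority == queue_id

local_range = {"H1", "H2", "H3", "H4"}          # hosts under aggregation device A1 (assumed)
print(is_target_flow_uplink_case("H7", "Q3", local_range, queue_id="Q3"))  # True
print(is_target_flow_uplink_case("H3", "Q3", local_range, queue_id="Q3"))  # False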
In another scenario, the target port is an uplink port of an access device. In this case, the access device determines that the data flows sent to the uplink port are target data flows and determines whether there is an idle egress port (uplink port) on the access device that can forward the target data flows. If there is an idle egress port, the access device sends the target data flows through the idle egress port. If there is no idle egress port, the access device determines the source hosts of the target data flows and sends backpressure messages to the source hosts, where a backpressure message is used to notify a source host to perform an operation of handling network congestion. It can be seen that when the target port is an uplink port of the access device, the access device does not need to send a notification.
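A hedged sketch of this uplink-port handling at an access device follows; the message fields and helper structure are assumptions, since the patent only states that the backpressure message asks the source host to reduce its sending rate (overall or for the specific target data flow).

# Sketch of the backpressure step at an access device whose uplink target port
# has no idle alternative.
def build_backpressure_message(flow_id, source_host):
    return {
        "type": "backpressure",
        "to": source_host,
        "flow": flow_id,
        "action": "reduce_rate",
    }

def handle_congested_uplink(target_flows, idle_uplink_ports):
    messages = []
    for flow_id, source_host in target_flows:
        if idle_uplink_ports:
            # An idle uplink port exists: the flow would be switched there instead (not modeled).
            continue
        messages.append(build_backpressure_message(flow_id, source_host))
    return messages

print(handle_congested_uplink([("flow-1", "H2")], idle_uplink_ports=[]))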
The method shown in fig. 3 of the present application can also be applied to a single plane Clos architecture. In the single plane Clos architecture, each core device is connected to all the aggregation devices.
Fig. 9 is a schematic diagram of a processing procedure when a target port in the single-plane Clos architecture is a downstream port of a core device. As shown in fig. 9, the thin solid line indicates the link where the target port is located, and the thick solid line indicates the notified forwarding path. A data flow (denoted as data flow 1) from the host H2 to the host H7 reaches the core device C1 through the access device T2 and the aggregation device a1, and the core device C1 forwards data flow 1 to the aggregation device a7 through egress port queue 3 (Q3) of port 7 (P7). In the process of forwarding data flow 1, the core device C1 monitors that the length of egress port queue 3 exceeds the second threshold, determines that port queue 3 enters the pre-congestion state, and further determines that port 7 is the target port (step 301). Since there is no idle egress port with the same attribute as port 7 (that is, no idle downstream egress port) on the core device C1, the core device C1 sends a notification to all aggregation devices except the aggregation device a7, where the notification includes the congestion information described with reference to fig. 4. After receiving the notification, an aggregation device (for example, a1) determines a target address range (that is, the addresses of the hosts connected to the access devices T7 and T8) according to the notification and, after receiving a data flow, determines the target data flow according to the target address range. The aggregation device then determines whether there is an idle egress port (upstream port) capable of forwarding the target data flow. When there is an idle egress port, the aggregation device switches the target data flow to the idle egress port; when there is no idle egress port, the aggregation device forwards the data flow through the current egress port of the target data flow, regenerates a notification according to the congestion information, and sends the notification to all access devices connected to the aggregation device.
After receiving the notification, an access device (for example, T2) determines the target data flow according to the congestion information. When an idle egress port (uplink port) capable of forwarding the target data flow exists, the access device switches the target data flow to the idle egress port; when no such idle egress port exists, the access device sends a backpressure message to the source host of the target data flow, where the backpressure message is used to notify the source host to perform an operation of handling network congestion.
Fig. 10 is a schematic diagram of a processing procedure when a target port in the single-plane Clos architecture is a downstream port of an aggregation device. As shown in fig. 10, the thin solid line indicates the link where the target port is located, and the thick solid line indicates the notified forwarding path. A data flow (denoted as data flow 1) from the host H2 to the host H7 reaches the aggregation device a7 through the access device T2, the aggregation device a1, and the core device C1, and the aggregation device a7 forwards data flow 1 to the access device T7 through egress port queue 3 (Q3) of port 1 (P1). In the process of forwarding data flow 1, when monitoring that the length of egress port queue 3 exceeds the second threshold, the aggregation device a7 determines that port queue 3 enters the pre-congestion state, and further determines that port 1 is the target port (step 301). Since there is no idle egress port with the same attribute as port 1 (that is, no idle downstream egress port) on the aggregation device a7, the aggregation device a7 sends a notification to all core devices and to the other access devices (for example, the access device T8) connected to the aggregation device a7. The notification includes the congestion information (for the congestion information, refer to the above embodiments). In this case, the core devices C1 and C2 can also send data flows to the hosts under port 1 of the aggregation device a7 through at least two forwarding paths, and the core devices C1 and C2 are only one hop away from the aggregation device a7; therefore, the aggregation device a7 sends the notification to the core devices C1 and C2 and to the access device T8.
After receiving the notification, a core device (for example, C1) determines the target data flow according to the congestion information, and if an idle downlink egress port capable of forwarding the target data flow exists on the core device, the core device sends the target data flow through the idle downlink egress port. If there is no idle downlink egress port on the core device that can forward the target data flow, the core device sends a notification to the aggregation devices other than the aggregation device a7, where the notification includes the congestion information.
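The diffusion step performed here, regenerating the notification from the stored congestion information and passing it one level further, can be sketched as follows; the neighbor identifiers are assumptions used only for illustration.

# Illustrative sketch of congestion-information diffusion: a device that has no
# idle egress port for the target data flow regenerates the notification and
# passes it to the aggregation devices other than the one where the target port is.
def diffuse_notification(congestion_info, aggregation_neighbors):
    regenerated = dict(congestion_info)           # same device/port/queue information
    return [(neighbor, regenerated)
            for neighbor in aggregation_neighbors
            if neighbor != congestion_info["device_id"]]

congestion_info = {"device_id": "A7", "port_id": "P1", "queue_id": "Q3"}
print(diffuse_notification(congestion_info, ["A1", "A3", "A5", "A7"]))
# A7 itself, the device where the target port is located, is skipped.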
After receiving the notification sent by the core device, any aggregation device performs the same operation as the aggregation device a1 in fig. 9.
After receiving the notification, any access device in fig. 10 performs the same operation as the access device T2 in fig. 9.
The processing procedure when the target port under the single-plane Clos architecture is the downlink port of the access device is similar to the processing procedure when the target port under the multi-plane architecture is the downlink port of the access device. The processing method when the target port under the single-plane Clos architecture is the uplink port is similar to the processing method when the target port under the multi-plane Clos architecture is the uplink port.
The method shown in fig. 3 of the present application can also be applied to the network architecture shown in fig. 2. In the network architecture shown in fig. 2, the identifier of each switch may be the number of the switch, for example, the number of the switch is xy, x represents the number of the pod where the switch is located, and y represents the number of the switch within the pod where the switch is located. For example, switch 11 represents a switch numbered 1 within Pod 1. Thus, the first switch can know the role of the second switch according to the number of the second switch, and can also know the attribute of the port of the second switch.
Fig. 11 is a schematic diagram illustrating a processing procedure when a target port in the architecture shown in fig. 2 is an intra-group port. Assume that, in the process of sending data flow 1 to the switch 33, the switch 3N monitors that the length of egress port queue 3 of port 3 exceeds the second threshold and determines that port queue 3 enters the pre-congestion state, and thus determines that port 3 is the target port (step 301). The switch 3N sends a notification including the identification of the switch 3N and the identification of port 3 to a plurality of second network devices (step 302). Alternatively, the identifier of the switch 3N may be obtained by parsing the identifier of port 3; accordingly, the identifier of the switch 3N and the identifier of port 3 may share a single field. The notification may also include the identification of egress port queue 3. When the identification of the switches in the network architecture shown in fig. 2 takes other forms, the notification may also include the attribute of port 3 and the role of the switch 3N (inter-group switch). The plurality of second network devices include the inter-group switches connected to the switch 3N, that is, the switches 1N, 2N, and 4N. Each of these inter-group switches is only one hop away from the switch 3N. The switch 3N sends the notification to the switches 1N, 2N, and 4N in a multicast or unicast manner. The procedure in which the switches 1N, 2N, and 4N process the notification is described below by using the switch 1N as an example.
After the switch 1N receives the notification (step 303), the switch 1N obtains the congestion information in the notification, stores the congestion information, and sets an aging time. The switch 1N determines a target data flow according to the congestion information (step 304), where the target data flow is a data flow addressed to a host connected to the switch 33, or a data flow that is addressed to a host connected to the switch 33 and whose priority corresponds to egress port queue 3. The switch 1N determines whether there is an idle egress port, that is, an idle inter-group port, on the switch 1N (step 305). If there is an idle egress port, the switch 1N forwards the target data flow through the idle egress port (step 306); if there is no idle egress port, the switch 1N sends the target data flow through its initial forwarding path (step 307). In addition, the switch 1N sends a notification to the other switches in the same switch group based on the congestion information. The switches 11, 12, and 13 receive the notification and perform processing similar to that of an access device in the Clos architecture.
Under the architecture shown in fig. 2, the hosts may be assigned addresses according to the network architecture. That is, the address of each host may be determined according to the number of the switch to which the host is connected; for example, the address of a host connected under the switch 1N is 1N.xxx.xxx. According to the above addressing rule, when the intra-group port between the switch 3N and the switch 33 is the target port, the target data flow is a data flow whose destination address is 33.xxx.xxx and whose priority corresponds to Q3.
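This addressing rule makes target-flow matching a simple prefix check, as the short sketch below illustrates; the dotted string format of the addresses is an assumption used only for the example.

# Sketch of the addressing rule: switch "xy" is switch y in pod x, and hosts
# attached to a switch take that switch's number as their address prefix.
def is_target_flow(dst_address, congested_switch_id, priority=None, queue_id=None):
    if not dst_address.startswith(congested_switch_id + "."):
        return False
    return queue_id is None or priority == queue_id

# The intra-group port between switch 3N and switch 33 is the target port, so
# flows addressed to 33.xxx.xxx with the priority of Q3 are re-routed.
print(is_target_flow("33.0.0.7", "33", priority="Q3", queue_id="Q3"))   # True
print(is_target_flow("31.0.0.7", "33", priority="Q3", queue_id="Q3"))   # False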
In the network architecture shown in fig. 2, when the target port is a port of a switch connected to a host, the processing procedure of the switch is similar to that when the target port is an intra-group port.
Fig. 12 is a schematic diagram illustrating a processing procedure when a target port in the architecture shown in fig. 2 is an inter-group port. Assume that, in the process of sending data flow 1 to the switch 2N, the switch 1N monitors that the length of egress port queue 3 of port 2 of the switch 1N exceeds the second threshold, that is, egress port queue 3 enters the pre-congestion state; the switch 1N then determines that port 2 is the target port (step 301). The hosts under the target port are the hosts connected to the switch 2N. The switch 1N sends a notification to a plurality of second network devices (step 302). The plurality of second network devices include the intra-group switches connected to the switch 1N, that is, the switches 11, 12, and 13, and the like. The notification includes the identification of the switch 1N (1N) and the identification of port 2 (P2). The notification may also include the identification of egress port queue 3 (Q3). When the identification of the switches in the system shown in fig. 2 takes other forms, the notification may also include the attribute of port 2 and the role of the switch 1N (inter-group switch). The switch 1N sends the notification to the switches 11, 12, and 13, and the like, in a multicast or unicast manner. The switches 11, 12, and 13, and the like, receive the notification and perform processing similar to that of an access device in the Clos architecture.
As can be seen from the description of the foregoing embodiments, in the method provided in fig. 3 of the present application, after detecting that an egress port or an egress port queue enters the congestion state or the pre-congestion state, a network device can send a notification to other network devices in the network. A network device that receives the notification selects an idle egress port for the target data flow or continues to diffuse the state of the egress port or the egress port queue in the network, so that the network devices in the entire network can all perform the operation of handling network congestion, and network congestion can be avoided under various network architectures. Moreover, a network device in the present application can forward the target data flow through an idle egress port after receiving the notification, thereby achieving end-to-end load balancing in the entire network and improving the utilization of network resources. In addition, when the target data flow is determined according to the egress port queue, the present application can adjust only the forwarding path of the data flow causing the congestion without affecting normal data flows, further improving the forwarding efficiency of data flows.
Further, an embodiment of the present application also provides a network device 1300, where the network device 1300 may be any network device in fig. 1 or fig. 2. As shown in fig. 13, the network device 1300 includes a determining unit 1310 and a sending unit 1320; optionally, the network device 1300 further includes a receiving unit 1330 and a storage unit 1340. The network device 1300 is configured to implement the functions of the first network device in fig. 3.
A determining unit 1310 configured to determine a target port, where the target port is an output port entering a pre-congestion state or a congestion state. A sending unit 1320, configured to send a first notification to at least one second network device; the at least one second network device comprises one or more network devices capable of sending data streams to the host under the target port via at least two forwarding paths; the first notification includes information of the network device where the target port is located and information of the target port.
Optionally, the network device where the target port is located is the first network device, and the determining unit is configured to: monitoring an egress port of the first network device; when the cache usage amount of an egress port of the first network device exceeds a port cache threshold, determining that the egress port is the target port.
Optionally, the network device where the target port is located is the first network device, and the determining unit is configured to: monitoring an egress port queue of the first network device; and when the length of one egress port queue exceeds a queue buffer threshold value, determining the egress port where the egress port queue is located as the target port.
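The two monitoring options of the determining unit can be illustrated with a small sketch; the data layout and threshold values are assumptions, not part of the claimed method.

# Illustrative sketch: per-port buffer usage against a port cache threshold, or
# per-queue length against a queue buffer threshold, determines the target port.
def find_target_port(ports, port_cache_threshold, queue_buffer_threshold):
    """ports: list of dicts like {"id": "P3", "cache_usage": 20, "queues": {"Q3": 150}}."""
    for port in ports:
        # Option 1: the egress port's overall cache usage exceeds the threshold.
        if port["cache_usage"] > port_cache_threshold:
            return port["id"], None
        # Option 2: a single egress-port queue exceeds the queue threshold;
        # the port where that queue is located becomes the target port.
        for queue_id, length in port["queues"].items():
            if length > queue_buffer_threshold:
                return port["id"], queue_id
    return None, None

ports = [{"id": "P1", "cache_usage": 10, "queues": {"Q3": 5}},
         {"id": "P3", "cache_usage": 20, "queues": {"Q3": 150}}]
print(find_target_port(ports, port_cache_threshold=100, queue_buffer_threshold=100))
# ('P3', 'Q3')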
Optionally, the network device where the target port is located is a third network device, and the receiving unit 1330 is configured to receive a second notification sent by the third network device, where the second notification includes information of the third network device and information of the target port; the determination unit determines the target port based on the second notification.
Optionally, the information of the network device where the target port is located includes an identifier of the network device where the target port is located, and the information of the target port includes an identifier of the target port or an identifier of a forwarding path where the target port is located.
Optionally, the information of the network device where the target port is located further includes a role of the network device where the target port is located, where the role indicates a location of the network device where the target port is located; the information of the target port further includes an attribute of the target port, the attribute indicating a direction in which the target port transmits a data stream.
Optionally, the determining unit is further configured to determine that no idle egress port capable of forwarding the target data flow corresponding to the target port exists on the network device. The target data flow is a data flow corresponding to a target address range; the target address range is an address range corresponding to the host under the target port, and the target address range is determined according to the information of the network device where the target port is located and the information of the target port.
Optionally, the information of the target port may further include an identifier of a target egress port queue, where the target egress port queue is an egress port queue entering a congestion state or a pre-congestion state in the target port, and the target data flow is a data flow that corresponds to the target address range and whose priority corresponds to the identifier of the egress port queue.
Optionally, the storage unit 1340 is configured to store information of a network device where the target port is located and information of the target port. The storage unit 1340 is also used to store the status of the target port.
Further, an embodiment of the present application further provides a network device 1400, where the network device 1400 may be any network device in fig. 1 or fig. 2. As shown in fig. 14, the network device 1400 includes a receiving unit 1410, a first determining unit 1420, a second determining unit 1430, and a processing unit 1440. Optionally, the network device 1400 further includes a storage unit 1450. The network device 1400 is used to implement the functions of the second network device in fig. 3.
A receiving unit 1410, configured to receive a first notification from a first network device, where the first notification includes information of a network device where a target port is located and information of the target port, and the target port is a port entering a pre-congestion state or a congestion state; the second network device is a network device capable of sending data stream to the host under the target port through at least two forwarding paths. A first determining unit 1420, configured to determine a target data flow, where a first forwarding path of the target data flow includes the target port. A second determining unit 1430, configured to determine whether there is an idle egress port capable of forwarding the target data flow on the second network device, and obtain a determination result. A processing unit 1440, configured to process the target data stream according to the determination result.
Optionally, when an idle egress port capable of forwarding the target data flow exists on the network device, the processing unit 1440 sends the target data flow through the idle egress port, and the second forwarding path where the idle egress port is located does not include the target port.
Optionally, when there is no idle egress port capable of forwarding the target data flow on the network device, the processing unit 1440 forwards the target data flow through the first forwarding path.
Optionally, the processing unit 1440 is further configured to: generate a second notification, where the second notification includes the information of the network device where the target port is located and the information of the target port; and send the second notification to at least one third network device, where the at least one third network device includes a device capable of sending data streams to the host under the target port through at least two forwarding paths.
Optionally, the processing unit 1440 is further configured to send, to a source host of the target data flow, a backpressure message, where the backpressure message is used to enable the source host to perform an operation of handling network congestion.
Optionally, the first determining unit 1420 is configured to: determine a target address range according to the information of the network device where the target port is located and the information of the target port, where the target address range is an address range corresponding to the host under the target port; and determine, as the target data flow, a data flow whose destination address belongs to the target address range.
Optionally, the first notification further includes an identifier of a target egress port queue, where the target egress port queue is an egress port queue entering a pre-congestion state or a congestion state in the target port; the first determining unit 1420 is configured to determine, as the target data flow, a data flow whose destination address belongs to the target address range and whose priority corresponds to the identifier of the egress port queue.
Optionally, the storage unit 1450 is configured to store information of a network device where the target port is located and information of the target port. The storage unit 1450 is also used to store the state of the target port.
The network devices of fig. 13 and 14 cooperate with each other to implement the method shown in fig. 3, so as to avoid network congestion and achieve load balancing of the whole network.
Further, the network devices of fig. 13 and 14 may be embodied by a network device 1500 as shown in fig. 15. The network device 1500 may include a processor 1510, a memory 1520, and a bus system 1530. The processor 1510 and the memory 1520 are coupled via the bus system 1530, the memory 1520 is configured to store program code, and the processor 1510 is configured to execute the program code stored in the memory 1520. For example, the processor 1510 may invoke the program code stored in the memory 1520 to perform the methods of handling network congestion in the various embodiments of the present application.

In this embodiment, the processor 1510 may be a central processing unit (CPU), and the processor 1510 may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The processor 1510 may include one or more processing cores.

The memory 1520 may include a read-only memory (ROM) device or a random-access memory (RAM) device. Any other suitable type of storage device may also be used as the memory 1520. The memory 1520 may include data 1522 that is accessed by the processor 1510 via the bus system 1530. The memory 1520 may further include an operating system 1523 to support operation of the network device 1500.

The bus system 1530 may include a power bus, a control bus, a status signal bus, and the like, in addition to a data bus. For clarity of illustration, however, the various buses are designated in the figure as the bus system 1530. Optionally, the network device 1500 may also include one or more output devices, such as a communication interface 1540. The network device 1500 may communicate with other devices via the communication interface 1540. The communication interface 1540 may be coupled to the processor 1510 via the bus system 1530.

Through the above description of the embodiments, it is clear to those skilled in the art that the present application can be implemented by hardware, or by software plus a necessary general hardware platform. Based on this understanding, the technical solutions of the present application may be embodied in the form of a hardware product or a software product. The hardware product may be a dedicated chip. The software product may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, or the like), and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present application.
The foregoing is only a preferred embodiment of the present application. It should be noted that those skilled in the art can make several improvements and modifications without departing from the principle of the present application, and these improvements and modifications should also be regarded as falling within the protection scope of the present application.

Claims (36)

1. A method of handling network congestion, comprising:
the first network device determines a target port; the target port is an output port entering a pre-congestion state or a congestion state;
the first network device sends a first notification to at least one second network device; the at least one second network device comprises one or more network devices capable of sending data streams to the host under the target port through at least two forwarding paths; the first notification includes information of a network device where the target port is located and information of the target port.
2. The method of claim 1, wherein the network device where the target port is located is the first network device, and wherein the determining, by the first network device, the target port comprises:
the first network device monitors an output port of the first network device;
when the cache usage amount of one egress port of the first network device exceeds a port cache threshold, the first network device determines that the egress port is the target port.
3. The method of claim 1, wherein the network device where the target port is located is the first network device, and wherein the determining, by the first network device, the target port comprises:
the first network equipment monitors an output port queue of the first network equipment;
when the length of an egress port queue exceeds a queue buffer threshold, the first network device determines that an egress port where the egress port queue is located is the target port.
4. The method according to claim 1, wherein the network device where the target port is located is a third network device, and wherein the determining, by the first network device, the target port comprises:
the first network device receives a second notification sent by the third network device, where the second notification includes information of the third network device and information of the target port;
and the first network equipment determines the target port according to the second notification.
5. The method according to any one of claims 1 to 4, wherein the information of the network device where the target port is located includes an identifier of the network device where the target port is located, and the information of the target port includes an identifier of the target port or an identifier of a forwarding path where the target port is located.
6. The method of claim 5, wherein:
the information of the network device where the target port is located further includes a role of the network device where the target port is located, and the role indicates a location of the network device where the target port is located;
the information of the target port further comprises an attribute of the target port, wherein the attribute indicates the direction of the target port for sending data stream.
7. The method according to any of claims 1-6, wherein before the first network device sends the first notification to the at least one second network device, the method further comprises:
determining that no idle output port capable of forwarding the target data stream corresponding to the target port exists on the first network device;
the target data stream is a data stream corresponding to a target address range; the target address range is an address range corresponding to a host under the target port, and the target address range is determined according to information of the network device where the target port is located and information of the target port.
8. The method of claim 7, wherein the information of the target port further includes an identifier of a target egress port queue, the target egress port queue is an egress port queue entering a congestion state or a pre-congestion state in the target port, the target data flow is a data flow corresponding to a target address range and having a priority corresponding to the identifier of the egress port queue.
9. The method of claim 7 or 8, further comprising:
and the first network equipment stores the information of the network equipment where the target port is located and the information of the target port.
10. A method of handling network congestion, comprising:
a second network device receives a first notification from a first network device, wherein the first notification comprises information of a network device where a target port is located and information of the target port, and the target port is a port entering a pre-congestion state or a congestion state; the second network device is a network device capable of sending data streams to the host under the target port through at least two forwarding paths;
the second network device determining a target data flow, a first forwarding path of the target data flow comprising the target port;
the second network device determines whether an idle output port capable of forwarding the target data stream exists on the second network device, and obtains a determination result;
and the second network equipment processes the target data stream according to the determination result.
11. The method of claim 10, wherein the processing the target data stream according to the determination comprises:
when an idle output port capable of forwarding the target data flow exists on the second network device, the second network device sends the target data flow through the idle output port, and a second forwarding path where the idle output port is located does not include the target port.
12. The method of claim 10, wherein the processing the target data stream according to the determination comprises:
and when the second network equipment does not have an idle output port capable of forwarding the target data flow, the second network equipment forwards the target data flow through the first forwarding path.
13. The method of claim 12, further comprising:
the second network device generates a second notification, wherein the second notification comprises the information of the network device where the target port is located and the information of the target port;
the second network device sends the second notification to at least one third network device; the at least one third network device includes a device capable of sending data streams to the host under the target port through at least two forwarding paths.
14. The method of claim 12, further comprising:
and the second network equipment sends a backpressure message to a source host of the target data stream, wherein the backpressure message is used for enabling the source host to execute the operation of processing network congestion.
15. The method of any of claims 10-14, wherein the second network device determining the target data flow comprises: the second network device determines a target address range according to the information of the network device where the target port is located and the information of the target port, wherein the target address range is an address range corresponding to a host under the target port;
and the second network device determines a data flow whose destination address belongs to the target address range as the target data flow.
16. The method of claim 15, wherein the first notification further includes an identification of a target egress port queue, the target egress port queue being an egress port queue of the target ports entering a pre-congestion state or a congestion state;
the determining, by the second network device, a data flow whose destination address belongs to the destination address range as the destination data flow includes:
and the second network equipment determines the data flow of which the destination address belongs to the target address range and the priority corresponds to the identifier of the output port queue as the target data flow.
17. The method of any of claims 10-16, wherein prior to the second network device determining the target data flow, the method further comprises:
and the second network equipment stores the information of the network equipment where the target port is positioned and the information of the target port.
18. A network device for handling network congestion, the network device being a first network device, comprising:
a determining unit, configured to determine a target port, where the target port is an output port entering a pre-congestion state or a congestion state;
a sending unit, configured to send a first notification to at least one second network device; the at least one second network device comprises one or more network devices capable of sending data streams to the host under the target port through at least two forwarding paths; the first notification includes information of a network device where the target port is located and information of the target port.
19. The network device of claim 18, wherein the network device where the target port is located is the first network device, and wherein the determining unit is configured to:
monitoring an egress port of the first network device;
when the cache usage amount of one egress port of the first network device exceeds a port cache threshold, determining that the egress port is the target port.
20. The network device of claim 18, wherein the network device where the target port is located is the first network device, and wherein the determining unit is configured to:
monitoring an egress port queue of the first network device;
and when the length of one egress port queue exceeds a queue buffer threshold, determining that the egress port where the egress port queue is located is the target port.
21. The network device of claim 18, wherein the network device where the target port is located is a third network device,
the network device further includes a receiving unit, configured to receive a second notification sent by the third network device, where the second notification includes information of the third network device and information of the target port;
the determination unit determines the target port according to the second notification.
22. The network device according to any of claims 18 to 21, wherein the information of the network device where the target port is located comprises an identifier of the network device where the target port is located, and the information of the target port comprises an identifier of the target port or an identifier of a forwarding path where the target port is located.
23. The network device of claim 22, wherein:
the information of the network device where the target port is located further includes a role of the network device where the target port is located, and the role indicates a location of the network device where the target port is located;
the information of the target port further comprises an attribute of the target port, wherein the attribute indicates the direction of the target port for sending data stream.
24. The network device according to any of claims 18-23, wherein the determining unit is further configured to:
determining that no idle output port capable of forwarding the target data stream corresponding to the target port exists on the network device;
the target data stream is a data stream corresponding to a target address range; the target address range is an address range corresponding to a host under the target port, and the target address range is determined according to information of the network device where the target port is located and information of the target port.
25. The network device of claim 24, wherein the information of the target port further includes an identifier of a target egress port queue, the target egress port queue is an egress port queue of the target port entering a congestion state or a pre-congestion state, the target data flow is a data flow corresponding to a target address range and having a priority corresponding to the identifier of the egress port queue.
26. The network device of claim 24 or 25, further comprising:
and the storage unit is used for storing the information of the network equipment where the target port is located and the information of the target port.
27. A network device that handles network congestion, wherein the network device is a second network device, comprising:
a receiving unit, configured to receive a first notification from a first network device, where the first notification includes information of a network device where a target port is located and information of the target port, and the target port is a port entering a pre-congestion state or a congestion state; the second network device is a network device capable of sending data stream to the host under the target port through at least two forwarding paths;
a first determining unit, configured to determine a target data flow, where a first forwarding path of the target data flow includes the target port;
a second determining unit, configured to determine whether an idle output port capable of forwarding the target data stream exists on the second network device, so as to obtain a determination result;
and the processing unit is used for processing the target data stream according to the determination result.
28. The network device of claim 27, wherein when an idle egress port capable of forwarding the target data flow exists on the network device, the processing unit sends the target data flow through the idle egress port, and a second forwarding path of the idle egress port does not include the target port.
29. The network device of claim 27, wherein the processing unit forwards the target data flow through the first forwarding path when there is no idle egress port on the network device capable of forwarding the target data flow.
30. The network device of claim 29, wherein the processing unit is further configured to:
generating a second notification, wherein the second notification comprises information of the network equipment where the target port is located and information of the target port;
sending the second notification to at least one third network device; the at least one third network device includes a device capable of sending data streams to the host under the target port through at least two forwarding paths.
31. The network device of claim 29, wherein the processing unit is further configured to send a backpressure message to a source host of the target data flow, and wherein the backpressure message is configured to cause the source host to perform an operation of handling network congestion.
32. The network device according to any of claims 27-31, wherein the first determining unit is configured to:
determining a target address range according to the information of the network equipment where the target port is located and the information of the target port, wherein the target address range is an address range corresponding to a host under the target port;
and determining a data flow whose destination address belongs to the target address range as the target data flow.
33. The network device of claim 32, wherein the first notification further includes an identification of a target egress port queue, the target egress port queue being an egress port queue of the target ports that enters a pre-congestion state or a congestion state;
the first determining unit is configured to determine, as the target data flow, a data flow whose destination address belongs to the target address range and whose priority corresponds to the identifier of the egress port queue.
34. The network device of any one of claims 27-33, wherein the network device further comprises:
and the storage unit is used for storing the information of the network equipment where the target port is located and the information of the target port.
35. A network device comprising a memory and a processor,
the memory is used for storing program codes;
the processor is configured to execute the program code to implement the method of any one of claims 1-17.
36. A storage medium for storing program code; the program code, when executed, may implement the method of any of claims 1-17.
CN201910913827.4A 2019-07-24 2019-09-25 Method and related device for processing network congestion Pending CN112311685A (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP20843498.5A EP3972209A4 (en) 2019-07-24 2020-06-30 Method for processing network congestion, and related apparatus
PCT/CN2020/099204 WO2021012902A1 (en) 2019-07-24 2020-06-30 Method for processing network congestion, and related apparatus
US17/563,167 US20220124036A1 (en) 2019-07-24 2021-12-28 Network congestion handling method and related apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2019106737067 2019-07-24
CN201910673706 2019-07-24

Publications (1)

Publication Number Publication Date
CN112311685A true CN112311685A (en) 2021-02-02

Family

ID=74485636

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910913827.4A Pending CN112311685A (en) 2019-07-24 2019-09-25 Method and related device for processing network congestion

Country Status (1)

Country Link
CN (1) CN112311685A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7675857B1 (en) * 2006-05-03 2010-03-09 Google Inc. Method and apparatus to avoid network congestion
US20170048144A1 (en) * 2015-08-13 2017-02-16 Futurewei Technologies, Inc. Congestion Avoidance Traffic Steering (CATS) in Datacenter Networks
US20170324664A1 (en) * 2016-05-05 2017-11-09 City University Of Hong Kong System and method for load balancing in a data network
CN109391560A (en) * 2017-08-11 2019-02-26 华为技术有限公司 Notifying method, agent node and the computer equipment of network congestion
CN109981471A (en) * 2017-12-27 2019-07-05 华为技术有限公司 A kind of method, apparatus and system for alleviating congestion

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113206794A (en) * 2021-03-31 2021-08-03 新华三信息安全技术有限公司 Forwarding speed limiting method and device
CN113206794B (en) * 2021-03-31 2022-05-27 新华三信息安全技术有限公司 Forwarding speed limiting method and device
CN115297067A (en) * 2022-04-29 2022-11-04 华为技术有限公司 Shared cache management method and device
CN115297067B (en) * 2022-04-29 2024-04-26 华为技术有限公司 Shared cache management method and device
CN115051953A (en) * 2022-06-16 2022-09-13 广州大学 Programmable data plane distributed load balancing method based on switch queue behavior
CN115051953B (en) * 2022-06-16 2023-07-28 广州大学 Programmable data plane distributed load balancing method based on switch queue behavior
CN116192777A (en) * 2022-12-30 2023-05-30 中国联合网络通信集团有限公司 Path learning method, device and storage medium

Similar Documents

Publication Publication Date Title
US10735323B2 (en) Service traffic allocation method and apparatus
US20230388239A1 (en) Packet sending method, network node, and system
US8064344B2 (en) Flow-based queuing of network traffic
JP2986085B2 (en) ATM network hop-by-hop flow control
Fang et al. A loss-free multipathing solution for data center network using software-defined networking approach
EP3208977A1 (en) Data forwarding method, device and system in software-defined networking
CN108809847B (en) Method, device and network system for realizing load balance
WO2021000752A1 (en) Method and related device for forwarding packets in data center network
US8971317B2 (en) Method for controlling data stream switch and relevant equipment
CN112311685A (en) Method and related device for processing network congestion
CN102263699A (en) Load balancing implementation method and device applied to MPLS TP (multiprotocol label switch transport profile)
KR20190062525A (en) Method and software defined networking (SDN) controller for providing multicast service
US8121138B2 (en) Communication apparatus in label switching network
CN113079090A (en) Traffic transmission method, node and system
US9172653B2 (en) Sending request messages to nodes indicated as unresolved
US10305787B2 (en) Dropping cells of a same packet sent among multiple paths within a packet switching device
US11646978B2 (en) Data communication method and apparatus
US20220124036A1 (en) Network congestion handling method and related apparatus
US11909546B2 (en) Method and network node for sending and obtaining assert packet
Yang et al. Crsp: Network congestion control through credit reservation
CN111490941B (en) Multi-protocol label switching MPLS label processing method and network equipment
CN113014498A (en) Method and device for receiving and transmitting data
CN112311678B (en) Method and device for realizing message distribution
CN117714378A (en) Data transmission method, device, node and system
CN114501544A (en) Data transmission method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination