CN109981471B - Method, equipment and system for relieving congestion - Google Patents


Info

Publication number
CN109981471B
Authority
CN
China
Prior art keywords
data stream
message
destination
source
priority
Prior art date
Legal status
Active
Application number
CN201711449438.8A
Other languages
Chinese (zh)
Other versions
CN109981471A (en)
Inventor
晏思宇
刘世兴
夏寅贲
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201711449438.8A
Publication of CN109981471A
Application granted
Publication of CN109981471B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H04L47/12 Avoiding congestion; Recovering from congestion
    • H04L47/20 Traffic policing
    • H04L47/2458 Modification of priorities while in transit
    • H04L47/25 Flow control; Congestion control with rate being modified by the source upon detecting a change of network conditions

Abstract

The application provides a method for relieving congestion, which comprises the following steps: a first device receives a first message forwarded by a second device, where the first message is generated by the second device in response to congestion occurring at an output port through which the second device forwards a first data stream. The first device generates a second message according to the first message and forwards the second message to the source device, where the second message instructs the source device to reduce the rate at which it sends the first data stream to the first device. With this method, a network device at which congestion occurs can send a back-pressure message to the source device that sent the traffic, reducing the source device's sending rate and quickly relieving network congestion.

Description

Method, equipment and system for relieving congestion
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a method, a device, and a system for relieving congestion.
Background
Remote Direct Memory Access (RDMA) enables memory on one computer to be accessed directly from another computer. RDMA technology has been applied to data center networks. In a data center network, a port of a switch may receive data streams sent by multiple network devices at the same time, so traffic congestion may occur.
To solve the above congestion problem, a congestion notification may be sent by the receiving end to the transmitting end. For example, a TCP receiver sends a congestion notification message to the TCP sender so that the sender reduces the rate at which it sends messages. With this method, however, the sending end may not receive the congestion notification in time, which aggravates the network congestion.
Disclosure of Invention
The embodiment of the application provides a method, equipment and a system for relieving congestion, so that in the process of forwarding a message, a network device with congestion can send a back-pressure message to a source device sending the message, the sending rate of the source device is reduced, and the network congestion is relieved quickly.
In a first aspect, the present application provides a method of relieving congestion. The method comprises the following steps: the first device receives a first message sent by the second device, wherein the first message is a message generated by the second device in response to congestion occurring at an output port where the second device forwards the first data stream. The destination Internet Protocol (IP) address of the first packet is equal to the IP address of the source device that sent the first data stream. The first data stream sent by the source device reaches the destination device through the first device and the second device, wherein the first data stream reaches the first device before reaching the second device. And the first equipment generates a second message according to the first message. And the first equipment sends a second message to the source equipment, wherein the second message is used for indicating the source equipment to reduce the rate of sending the first data stream to the first equipment.
In the above technical solution, when the output port through which the second device sends the first data flow is congested, the second device sends the first message to the first device. The first device generates the second message according to the first message and, by sending the second message to the source device, reduces the rate at which the source device sends the first data flow, thereby relieving congestion at the output port of the second device. In other words, congestion at the egress port of the second device directly triggers the source device to reduce the rate at which it transmits the first data stream. Compared with a technical solution in which the destination device triggers the source device to reduce its sending rate, the congestion feedback path of this solution is shorter. Therefore, this technical solution is beneficial to quickly relieving congestion.
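The first-aspect flow above can be sketched in pseudocode. This is a minimal, purely illustrative model (the field names and dictionary representation are assumptions, not the on-wire format): the first device turns the first message it receives from the second device into a second (back-pressure) message addressed to the source device.

```python
# Hypothetical sketch of the first-aspect flow: the first (upstream) device
# converts the congestion notification from the second (downstream) device
# into a back-pressure message for the source device. Field names are
# illustrative only.

def handle_congestion_notification(first_packet: dict) -> dict:
    """Build the second packet from the first packet received at the first device."""
    # The first packet's destination IP already equals the source device's IP,
    # so the first device knows which upstream sender to back-pressure.
    return {
        "dst_ip": first_packet["dst_ip"],         # the source device's IP
        "type": "rate-reduce",                    # instructs the source to slow down
        "priority": first_packet.get("priority"), # forwarding priority of the flow
    }

msg = handle_congestion_notification(
    {"src_ip": "10.0.2.1", "dst_ip": "10.0.1.7", "priority": 3})
assert msg["dst_ip"] == "10.0.1.7"
```

Because the first packet already carries the source device's IP as its destination address, no per-flow state is needed at the first device to direct the back-pressure.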
In one possible design, the generating, by the first device, the second packet according to the first packet includes: the method comprises the steps of obtaining information which is carried in a first message and used for indicating a first priority, and generating a second message based on the information which is carried in the first message and used for indicating the first priority. The second message carries information for indicating the first priority, wherein the forwarding priority of the first data stream is the first priority, and the information for indicating the first priority is carried in the second message and used for indicating the source device to reduce the data stream with the forwarding priority of the first priority sent to the first device by the source device.
In this way, the source device sending the first data flow reduces only data flows with the same priority as the first data flow, so that network congestion can be relieved quickly while transmission is still controlled per priority, ensuring the transmission quality of the data flows.
In one possible design, the first data stream includes a destination Queue Pair (QP) of the first data stream. And the second data stream sent by the destination device reaches the source device through the second device and the first device, wherein the second data stream reaches the second device before reaching the first device. The source IP address of the second data stream is equal to the IP address of the destination device and the destination IP address of the second data stream is equal to the IP address of the source device. The second data stream contains a destination QP for the second data stream, and the second packet does not contain the destination QP for the second data stream.
With this scheme, the source device sending the first data stream can be back-pressured without maintaining the correspondence between the QPs of the source device and the destination device, so a QP connection tracking table does not need to be established for each data stream and no table lookup is needed before sending the message. This enables fast back-pressure on the source device and reduces network overhead.
In a second aspect, the present application provides a method of relieving congestion. The method comprises the following steps: and the second equipment receives the first data stream forwarded by the first equipment, wherein the first data stream is sent by the source equipment and is forwarded to the destination equipment through the first equipment and the second equipment. The second device determines that congestion occurs at an egress port of the second device that forwards the first data flow. And responding to the congestion of an output port of the second equipment for forwarding the first data flow, and generating a first message by the second equipment, wherein the destination IP address of the first message is equal to the IP address of the source equipment for sending the first data flow. The second device sends the first message to the first device to instruct the first device to generate a message for instructing the source device to reduce the rate of sending the first data stream to the first device.
With this technical solution, when congestion occurs at the output port through which the second network device sends the first data flow, a message can be sent to the source device instructing it to reduce its sending rate, without the message having to pass through the destination device first. This shortens the path over which the notification reaches the source device, so network congestion can be relieved quickly.
In a third aspect, the present application provides a first device for performing the method of the first aspect or any one of the possible implementation manners of the first aspect. In particular, the first device comprises means for performing the first aspect or the method in any one of its possible implementations.
In a fourth aspect, the present application provides a second device that performs the method of the second aspect or any one of the possible implementations of the second aspect. In particular, the second device comprises means for performing the second aspect or the method in any one of its possible implementations.
In a fifth aspect, the present application provides a system for relieving congestion, the system comprising a first device and a second device. The first device is configured to receive the first data stream sent by the source device and the first packet sent by the second device, generate a second packet according to the first packet, and send the second packet to the source device. The first message is generated by the second device in response to congestion occurring at the egress port of the second device through which the first data flow is sent. The destination IP address of the first packet is equal to the IP address of the source device that sent the first data stream. The first data stream sent by the source device is forwarded by the first device and the second device to the destination device. The second message instructs the source device to reduce the rate at which it sends the first data stream to the first device.
The second device is configured to receive the first data flow sent by the first device, and determine that congestion occurs at an output port of the second device that sends the first data flow. The second device responds to congestion of an output port of the second device for sending the first data flow, generates a first message and sends the first message to the first device.
In a sixth aspect, the present application provides a computer-readable storage medium having stored therein instructions, which, when run on a computer, cause the computer to perform the method of the first aspect and each of the possible implementations.
In a seventh aspect, the present application provides another computer-readable storage medium having stored therein instructions, which, when executed on a computer, cause the computer to perform the method of the second aspect and each of the possible implementations.
In an eighth aspect, the present application provides a network device comprising a network interface, a processor, a memory, and a bus connecting the network interface, the processor, and the memory. The memory is configured to store a program, instructions or code, and the processor is configured to execute the program, instructions or code in the memory to perform the method of the first aspect and each possible implementation manner.
In a ninth aspect, the present application provides a network device comprising a network interface, a processor, a memory, and a bus connecting the network interface, the processor, and the memory. The memory is used for storing programs, instructions or codes, and the processor is used for executing the programs, instructions or codes in the memory to realize the method of the second aspect and each possible implementation mode.
Drawings
Fig. 1 is a schematic view of an application scenario of a network system according to an embodiment of the present application.
Fig. 2 is a flowchart illustrating a method for relieving congestion according to an embodiment of the present application.
Fig. 3 is a flowchart of another congestion relieving method according to an embodiment of the present application.
Fig. 4 is a schematic diagram of a format of a first packet according to an embodiment of the present application.
Fig. 5 is a schematic format diagram of a second packet according to an embodiment of the present application.
Fig. 6 is a schematic structural framework diagram of a first device according to an embodiment of the present disclosure.
Fig. 7 is a schematic structural framework diagram of a second apparatus according to an embodiment of the present disclosure.
Fig. 8 is a schematic diagram of a hardware structural framework of a first device according to an embodiment of the present disclosure.
Fig. 9 is a schematic diagram of a hardware structural framework of a second device according to an embodiment of the present application.
Fig. 10 is a schematic structural diagram of a system for relieving congestion according to an embodiment of the present application.
Detailed Description
The terms "first," "second," and the like in the description and in the claims of the present application and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Fig. 1 provides a schematic diagram of a system architecture for controlling network congestion. The network system 100 in fig. 1 may be a CLOS network system. The network system 100 includes: aggregation nodes 101 and 102, rack switches 103, 104, 105, and 106, and servers 107 to 115. The servers 107 to 115 may be storage servers or computing servers of a data center and are used for sending and receiving messages. The rack switches 103 to 106 forward messages sent by the servers 107 to 115. The aggregation nodes 101 and 102 may be switches or routers capable of forwarding packets and are configured to forward messages sent by the servers. As shown in fig. 1, the aggregation node 101 is connected to rack switches 103 to 106, and the aggregation node 102 is likewise connected to rack switches 103 to 106. Rack switch 103 is connected to servers 107, 108, and 109. Rack switch 104 is connected to servers 110 and 111. Rack switch 105 is connected to servers 112 and 113. Rack switch 106 is connected to servers 114 and 115. The following describes the forwarding process of a message in the data center, taking server 107 as the source device and server 115 as the destination device. Server 107, as the source device, sends the message to the rack switch 103 connected to it. Rack switch 103 receives the message and forwards it to the aggregation node 101. The aggregation node 101 receives the message and forwards it to rack switch 106. Rack switch 106 receives the message and forwards it to the server 115 connected to it. Server 115, as the destination device, receives the message from server 107 forwarded by rack switch 106.
In order to reduce the delay with which the server side sends and receives messages, the data center network may adopt RDMA technology. In a data center network applying an RDMA protocol, the message sent by server 107 as the sending end may be forwarded in the CLOS architecture shown in fig. 1 using RDMA over Converged Ethernet version 2 (RoCEv2). For example, server 107 sends a RoCEv2 message destined for the receiving-end server 115; rack switch 103 receives the RoCEv2 message from the sending-end server 107 connected to it and forwards the message to the aggregation node 101. If congestion occurs at the egress interface through which the aggregation node 101 sends the RoCEv2 packet, for example at the interface connecting the aggregation node 101 to rack switch 106, the aggregation node 101 marks the Explicit Congestion Notification (ECN) bits in the RoCEv2 packet as congested and continues to send the packet toward the receiving-end server 115. The receiving-end server 115 receives the ECN-marked RoCEv2 packet, constructs a Congestion Notification Packet (CNP) according to the packet carrying the congestion flag, and sends the CNP to the sending-end server 107. The sending-end server 107 receives the CNP and slows down according to a congestion control algorithm to relieve network congestion. In this method, however, the congestion feedback path is long, so the sending end may not receive the congestion notification in time; network congestion is then aggravated and messages may be discarded.
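The conventional feedback loop described above can be sketched as follows. This is an illustrative model only: the ECN codepoints follow the standard two-bit encoding in the IP header, but the packet representation and field names are assumptions.

```python
# Sketch of the conventional ECN/CNP loop: the congested aggregation node
# marks the ECN bits as Congestion Experienced (CE, 0b11), and the receiving
# server answers the marked packet with a CNP addressed back to the sender.

ECT0, CE = 0b10, 0b11  # ECN codepoints in the two low-order bits of the IP TOS byte

def mark_ce(tos: int) -> int:
    """Congested switch: mark an ECN-capable packet as Congestion Experienced."""
    if tos & 0b11 == ECT0:
        tos |= CE
    return tos

def receiver_build_cnp(pkt: dict):
    """Receiving server: answer a CE-marked RoCEv2 packet with a CNP."""
    if pkt["tos"] & 0b11 == CE:
        # Swap addresses so the CNP travels back to the sending end.
        return {"src_ip": pkt["dst_ip"], "dst_ip": pkt["src_ip"], "type": "CNP"}
    return None

pkt = {"src_ip": "10.0.1.7", "dst_ip": "10.0.4.15", "tos": mark_ce(0b10)}
cnp = receiver_build_cnp(pkt)
assert cnp == {"src_ip": "10.0.4.15", "dst_ip": "10.0.1.7", "type": "CNP"}
```

The point the background section makes is visible here: the CNP is only created at the receiving end, so the notification must traverse the whole path twice before the sender slows down.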
Another method for controlling network congestion first establishes a QP from a send queue and a receive queue, and the sending-end server adds a destination QP to the data stream to be sent, where the destination QP indicates a queue pair number at the destination device. The QP identifies a virtual port number, and when the source device sends a message to the destination device, the destination QP is usually carried in the message. Consequently, a device that needs to send a congestion notification message back to the sending-end device must also add a destination QP to that message, which requires a QP connection tracking table, for example a table mapping queue pair numbers of the sending-end server 107 to queue pair numbers of the receiving-end server 115. This is because, in an RDMA network, both RoCEv2 packets and CNPs are delivered according to the destination QP they carry. In this second congestion control method, the congested network device, such as the aggregation node 101, generates a first CNP according to the received RoCEv2 packet and sends the first CNP toward the sending-end server 107 through rack switch 103.
Since the first CNP is generated from the RoCEv2 message, the destination QP it carries belongs to the receiving end. Before the sending-end server 107 can accept the CNP, this destination QP must be replaced with the corresponding QP of the source device, producing a second CNP; the sending-end server 107 then processes the message according to the source-device QP carried in the second CNP and reduces its sending rate, thereby relieving congestion. In this process, an agent must not only establish a QP connection tracking table for each data stream but also look up the QP correspondence between the sending-end and receiving-end servers. The size of the QP connection tracking table is limited by the storage resources of the network device, and when the table grows large, lookups take longer, which hinders fast congestion relief and degrades performance.
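The per-flow state that this second method requires, and that the present application aims to avoid, can be illustrated with a small sketch. The table structure, names, and QP values below are hypothetical; the point is that one entry per data stream must be stored and consulted on every CNP.

```python
# Illustrative sketch of the QP connection-tracking table used by the second
# method: the agent maps each (destination IP, destination QP) back to the
# sender's source QP before a CNP can be delivered. All values are hypothetical.

qp_tracking_table: dict = {}

def track_flow(src_ip: str, src_qp: int, dst_ip: str, dst_qp: int) -> None:
    # One entry per data stream; the table grows with the number of flows,
    # bounded by device memory, and each lookup adds per-CNP latency.
    qp_tracking_table[(dst_ip, dst_qp)] = src_qp

def rewrite_cnp_qp(cnp: dict) -> dict:
    # Replace the destination-side QP copied from the RoCEv2 packet with the
    # corresponding source QP so the sending-end server accepts the CNP.
    cnp["qp"] = qp_tracking_table[(cnp["src_ip"], cnp["qp"])]
    return cnp

track_flow("10.0.1.7", 17, "10.0.4.15", 42)
cnp = rewrite_cnp_qp({"src_ip": "10.0.4.15", "dst_ip": "10.0.1.7", "qp": 42})
assert cnp["qp"] == 17
```

The embodiments below remove both the table and the lookup by back-pressuring the source with a message that does not carry a QP at all.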
To avoid an agent having to establish a QP connection tracking table for the data streams and to look up the QP correspondence between the sending-end and receiving-end network devices, fig. 2 provides a flow diagram of a method for relieving congestion according to an embodiment of the present application, where the network includes a source device, a destination device, a first device, and a second device. The network may be applied to the scenario shown in fig. 1: the source device and the destination device may each be any one of servers 107 to 115 shown in fig. 1, the first device may be any one of rack switches 103 to 106, and the second device may be any network device other than servers 107 to 115; for example, the second device may be the aggregation node 101 in fig. 1, or rack switch 105. The method specifically comprises the following steps:
S210, the second device receives the first data stream forwarded by the first device.
The first data stream sent by the source device reaches the destination device through the first device and the second device.
In one possible implementation, the source device and the destination device may be network devices such as a storage server and a computing server of a data center.
In a possible implementation manner, the source device sends the first data stream to the first device, the first device sends the received first data stream to the second device, and the second device sends the received first data stream to the destination device, where there may be other network devices between the first device and the second device and between the second device and the destination device, for example, a third device is further included between the first device and the second device, that is, the first device sends the first data stream to the second device through the third device.
In a possible implementation manner, the first data flow adopts the RoCEv2 protocol, and the Explicit Congestion Notification (ECN) bits of the IP header of the first data flow are marked as "10", indicating that the packet supports explicit congestion notification. The source IP address of the first data stream is the IP address of the source device, and the destination IP address of the first data stream is the IP address of the destination device.
S220, the second device determines that congestion occurs at an output port of the second device for forwarding the first data stream.
S230, in response to the congestion occurring at the output port of the second device for forwarding the first data stream, the second device generates a first packet.
In one possible implementation, if the second device determines that the port sending the first data flow is congested, the second device generates a CNP. Fig. 4 is a schematic diagram of the format of the first packet, that is, of a CNP, provided in an embodiment of the present application.
In one possible implementation, the second device stores priority information of the first data stream in the first message.
For example, as shown in fig. 1, the second device is the aggregation node 102 and the first device is the rack switch 103. The aggregation node 102 receives the first data flow from rack switch 103 and determines the egress port for sending the first data flow according to the route of the first data flow. When the aggregation node 102 determines that congestion occurs at the egress port sending the first data flow, it generates a CNP whose destination IP address is the same as the source IP address of the first data flow, and a reserved field in the InfiniBand Base Transport Header (BTH) of the CNP holds the priority information of the first data flow.
In a possible implementation manner, the destination IP address of the first packet is equal to the IP address of the source device that sends the first data stream, that is, the second device sets the destination IP address of the first packet to the source IP address of the first data stream.
In a possible implementation manner, when an output port of the second device that sends the first data stream is congested, the second device may be triggered to generate the first packet.
In a possible implementation manner, the second device uses the destination IP address of the first data stream as the source IP address of the first packet, uses the source IP address of the first data stream as the destination IP address of the first packet, and sends the first packet according to the destination IP address of the first packet.
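The address swap described in this step can be sketched briefly. This is an illustrative model of step S230 only, not the on-wire CNP layout; the field names are assumptions.

```python
# Hedged sketch of step S230: the second device derives the first packet
# (a CNP) from the congested data stream by swapping its IP addresses, so
# the CNP travels back along the flow's path toward the source device.

def build_first_packet(data_pkt: dict, priority: int) -> dict:
    return {
        "src_ip": data_pkt["dst_ip"],  # destination IP of the first data stream
        "dst_ip": data_pkt["src_ip"],  # source IP becomes the CNP's destination
        "type": "CNP",
        "priority": priority,          # held in a reserved field of the BTH
    }

first_pkt = build_first_packet(
    {"src_ip": "10.0.1.7", "dst_ip": "10.0.4.15"}, priority=3)
assert first_pkt["dst_ip"] == "10.0.1.7" and first_pkt["priority"] == 3
```

Because the destination address is taken directly from the data stream, the second device needs no per-flow state to route the notification toward the source.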
In a possible implementation manner, the first packet generated by the second device according to the first data stream is a CNP.
S240, the second device sends the first message to the first device.
The first message is used for instructing the first device to generate a message for instructing the source device to reduce the rate of sending the first data stream to the first device.
As shown in fig. 3, a schematic flowchart of a method for relieving congestion is provided for the embodiment of the present application, where the network includes a source device, a destination device, a first device, and a second device. The network may be applied to the scenario shown in fig. 1, where the source device and the destination device may be any one of servers 107 to 115 shown in fig. 1, the first device may be any one of rack switches 103 to 106, the second device may be any one of network devices other than servers 107 to 115, for example, the second device may be aggregation node 101 in fig. 1, and may also be rack switch 105. The method specifically comprises the following steps:
S310, the first device receives a first message sent by the second device.
The first message is a message generated by the second device in response to congestion occurring at an egress port through which the second device forwards the first data stream. The destination IP address of the first packet is equal to the IP address of the source device that sent the first data stream. The first data stream sent by the source device reaches the destination device through the first device and the second device, and the first data stream reaches the first device before reaching the second device.
In a possible implementation manner, the first data stream is sent by the source device to the destination device through the first device and the second device, that is, the source device sends the first data stream to the first device, the first device sends the received first data stream to the second device, and the second device sends the received first data stream to the destination device.
In a possible implementation manner, the source device sends the first data stream to the destination device through the first path, where the first path may include at least one network device in addition to the source device, the destination device, the first device, and the second device, for example, a third device is also included between the first device and the second device, where the first device sends the first data stream to the third device, the third device sends the received first data stream to the second device, and the second device sends the first data stream to the destination device.
S320, the first equipment generates a second message according to the first message.
In a possible implementation manner, the first message may be a CNP, where the CNP has a congestion notification function, and the first device triggers generation of the second message when receiving the CNP.
Optionally, the generating, by the first device, the second message according to the first message includes: and acquiring the information which is carried in the first message and used for indicating the first priority, and generating a second message based on the information which is carried in the first message and used for indicating the first priority. The second packet carries information indicating the first priority. The forwarding priority of the first data stream is the first priority, and the information for indicating the first priority is carried in the second message and used for indicating the source device to reduce the data stream with the forwarding priority as the first priority sent by the source device to the first device.
In a possible implementation manner, the first device obtains the priority information of the first data stream from the first packet and sets the priority of the second packet to the obtained priority of the first data stream, that is, the forwarding priority of the first packet is the same as the forwarding priority of the second packet.
In a possible implementation manner, the first device generates the second message according to the number of received first messages: the first device sets a threshold for the received first messages, and when the number of first messages received by the first device reaches or exceeds the threshold, the first device is triggered to generate the second message.
For example, as shown in fig. 1, the first device is the rack switch 103, the second device is the aggregation node 101, the source device is the server 107, and the destination device is the server 115. The first device receives a CNP sent by the second device and generates a Priority-based Flow Control (PFC) message, where the priority value of the PFC message is obtained by the first device from a reserved field of the CNP in which the priority information of the first data stream is stored, that is, the priority value of the PFC message is the same as the priority value of the first message.
For example, as shown in fig. 1, the first device is the rack switch 103. Fig. 5 provides a schematic diagram of the message format of the second message, i.e., a PFC message format diagram. Fig. 4 provides a message format diagram of the first message, i.e., a CNP format diagram. The rack switch 103 obtains the stored priority information of the first data stream from the reserved field of the BTH in the CNP in fig. 4, and stores the obtained priority value in the priority-enable-vector field of the PFC message shown in fig. 5, that is, sets the priority of the PFC message to the same value as the priority obtained from the CNP in fig. 4.
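As a concrete illustration of the second message's format, the following sketch builds an IEEE 802.1Qbb PFC frame whose priority-enable vector is set from a priority value such as the one taken from the CNP. The default pause-time value is an assumption; as the text notes, the device chooses the back-pressure duration.

```python
import struct

PFC_DST_MAC = bytes.fromhex("0180c2000001")   # reserved MAC-control multicast address
MAC_CONTROL_ETHERTYPE = 0x8808
PFC_OPCODE = 0x0101

def build_pfc_frame(src_mac: bytes, priority: int, pause_quanta: int = 0xFFFF) -> bytes:
    """Build a PFC frame that pauses a single priority.

    The priority-enable vector has bit n set when time[n] is valid; each of
    the eight 16-bit time fields gives a pause duration in 512-bit-time quanta.
    """
    enable_vector = 1 << priority
    times = [pause_quanta if p == priority else 0 for p in range(8)]
    payload = struct.pack("!HH8H", PFC_OPCODE, enable_vector, *times)
    frame = PFC_DST_MAC + src_mac + struct.pack("!H", MAC_CONTROL_ETHERTYPE) + payload
    return frame.ljust(60, b"\x00")   # pad to the minimum Ethernet frame size
```

Setting the priority argument to the value obtained from the CNP reproduces the behaviour described above: the PFC frame back-pressures exactly the priority of the congested first data stream.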
In a possible implementation manner, the second message may also carry a duration for back-pressuring the source device, for example, a back-pressure time value is added to the time-vector field in fig. 5.
Optionally, the first data stream includes a destination queue pair QP for the first data stream. The second data stream sent by the destination device reaches the source device via the second device and the first device. The second data stream arrives at the second device before arriving at the first device. The source IP address of the second data stream is equal to the IP address of the destination device. The destination IP address of the second data stream is equal to the IP address of the source device. The second data stream includes a destination QP for the second data stream, and the second packet does not include the destination QP for the second data stream.
In one possible implementation, a QP consists of a send queue and a receive queue. The destination QP of the first data stream indicates the queue pair number located at the destination device, and the destination QP of the second data stream indicates the queue pair number located at the source device. The queue pair number at the destination device and the queue pair number at the source device correspond to the same queue pair. As a result, no QP connection tracking table needs to be established, and the destination QP of the CNP does not need to be obtained by searching such a table when the CNP is sent toward the source device; this saves the network overhead and time of building the table, and avoiding the table lookup relieves the network congestion more quickly.
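The contrast drawn above — a proxy-based scheme must track QP connections to rewrite the CNP's destination QP, while this scheme's second message carries no QP at all — can be sketched as follows. Function and field names are illustrative, not from the patent.

```python
# Proxy-based relief (for contrast): the relay must look up the source-side
# QP number for the reverse flow before it can forward the CNP upstream.
def proxy_forward_cnp(cnp: dict, qp_tracking_table: dict) -> dict:
    key = (cnp["src_ip"], cnp["dst_ip"])
    cnp["dest_qp"] = qp_tracking_table[key]   # per-connection state + table lookup
    return cnp

# This scheme: the first device reacts with a priority-only L2 PFC frame, so
# no QP connection tracking table is built and no lookup sits on the
# congestion-relief path.
def relay_as_pfc(cnp: dict, build_pfc) -> bytes:
    return build_pfc(cnp["priority"])   # QP fields of the CNP are never consulted
```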
S330, the first device sends the second message to the source device.
The second message is used for instructing the source device to reduce the rate of sending the first data stream to the first device.
In a possible implementation manner, the first device determines, according to the destination IP address carried in the first message, that the source device is the next hop of the first device; the first device writes the destination MAC address of the first message into the destination MAC address of the second message and sends the second message to the next hop.
In a possible implementation manner, the first device determines the port for sending the second message according to the destination MAC address of the second message, and sends the second message to the source device through the determined port. The second message is used to back-pressure the sending-end network device so that it reduces the traffic whose priority is the same as that of the second message.
For example, as shown in fig. 1, the first device is the rack switch 103, the second device is the aggregation node 101, the sending-end network device is the server 107, the receiving-end network device is the server 115, and the second message is a PFC message. The rack switch 103 determines the port for sending the PFC message according to the destination MAC address of the PFC message, and sends the PFC message to the server 107 through that port.
In a possible implementation manner, the sending, by the first device, of the second message to the source device includes: when the first device determines that the number of first messages it has received reaches the threshold, the first device is triggered to send the second message to the source device.
In a possible implementation manner, the first device sets a threshold for the number of received first messages, and when the number of first messages received by the first device is greater than or equal to the threshold, the first device sends the second message to the source device. The threshold may be set flexibly.
For example, as shown in fig. 1, the first device is the rack switch 103, the second device is the aggregation node 101, the source device is the server 107, the destination device is the server 115, the first message is a CNP, and the second message is a PFC message. When the rack switch 103 determines that the number of CNPs received from the aggregation node 101 is greater than or equal to the set threshold, the rack switch 103 sends a PFC message to the server 107.
Optionally, the second message further carries a value of the back pressure duration, and the value of the back pressure duration is determined by the first device.
For example, as shown in fig. 1, the first device is the rack switch 103, and the rack switch may determine the length of time for which the source device is back-pressured from sending the data stream.
By the above method, when the second device in the forwarding path of the first data stream is congested, the congested second device sends a congestion notification message to the first device. The first device receives the congestion notification message and generates a priority-based flow control message to back-pressure the source device, thereby relieving the congestion. With this method, the QP between the source device and the destination device does not need to be established through control messages, no agent needs to build a QP connection tracking table, and the first message can be sent toward the source device without replacing its destination QP through a QP table; therefore, the network overhead is greatly reduced, table-lookup time is saved, and the congestion can be relieved quickly.
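Putting the pieces of the method together — counting CNPs against a threshold, copying the priority, and emitting a PFC frame with a device-chosen pause duration — the first device's behaviour can be sketched as below. The threshold and pause-quanta defaults are assumptions; the text leaves both to the device.

```python
import struct

class CongestionRelay:
    """Per-priority CNP counting on the first device: when the count for a
    priority reaches the threshold, emit a PFC MAC-control payload pausing
    that priority toward the source, and reset the counter."""

    def __init__(self, send_frame, threshold: int = 8, pause_quanta: int = 0xFFFF):
        self.send_frame = send_frame      # callback toward the source-facing port
        self.threshold = threshold
        self.pause_quanta = pause_quanta
        self.cnp_counts = [0] * 8         # one counter per 802.1p priority

    def on_cnp(self, priority: int) -> bool:
        """Handle one received CNP; return True if a PFC frame was sent."""
        self.cnp_counts[priority] += 1
        if self.cnp_counts[priority] < self.threshold:
            return False
        self.cnp_counts[priority] = 0
        # PFC MAC-control payload: opcode 0x0101, enable vector, 8 pause times
        times = [self.pause_quanta if p == priority else 0 for p in range(8)]
        self.send_frame(struct.pack("!HH8H", 0x0101, 1 << priority, *times))
        return True
```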
As shown in fig. 6, a first apparatus 600 for congestion relief is provided for the present application. The first device 600 may be any one of the rack switches 103 to 106 in fig. 1, or may be the first device in the method flowcharts of fig. 2 and fig. 3, and may implement the function of the first device. The first device 600 comprises a receiving unit 601, a processing unit 602 and a transmitting unit 603.
The receiving unit 601 is configured to receive a first packet sent by a second device, where the first packet is a packet generated by the second device in response to congestion occurring at an output port of the second device, where the output port forwards a first data flow. The destination IP address of the first packet is equal to the IP address of the source device that sent the first data stream. The first data stream sent by the source device reaches the destination device via the first device 600 and the second device, and the first data stream reaches the first device 600 before reaching the second device.
The processing unit 602 is configured to generate a second packet according to the first packet received by the receiving unit 601.
The sending unit 603 is configured to send, to the source device, the second packet generated by the processing unit 602, where the second packet is used to instruct the source device to reduce a rate at which the source device sends the first data stream to the first device 600.
Optionally, the processing unit 602 is specifically configured to obtain information that is carried in the first packet and used for indicating a first priority, and generate the second packet based on the information that is carried in the first packet and used for indicating the first priority. The second packet carries information for indicating the first priority. The forwarding priority of the first data flow is the first priority. The information used for indicating the first priority is carried in the second packet and used for indicating the source device to reduce the rate at which the source device sends the data stream with the forwarding priority of the first priority to the first device 600.
In this specific embodiment, for specific implementation of the receiving unit 601, the processing unit 602, and the sending unit 603, reference may be made to the functions and implementation steps of the first device described in fig. 2 and fig. 3, and details are not described again for brevity.
As shown in fig. 7, a second apparatus 700 for relieving congestion is provided for the present application. The second device 700 may be any one of the rack switches 103 to 106 or any one of the aggregation nodes 101 to 102 in fig. 1, or may be the second device in the method flowcharts of fig. 2 and fig. 3, and may implement the function of the second device. The second device 700 comprises a receiving unit 701, a determining unit 702, a processing unit 703 and a sending unit 704.
The receiving unit 701 is configured to receive a first data stream forwarded by a first device, where the first data stream is sent by a source device and reaches a destination device via the first device and the second device 700.
The determining unit 702 is configured to determine that congestion occurs at the egress port through which the second device 700 forwards the first data flow.
The processing unit 703 is configured to generate a first packet in response to congestion occurring at the egress port through which the second device 700 forwards the first data flow. The destination IP address of the first packet is equal to the IP address of the source device that sent the first data stream.
The sending unit 704 is configured to send the first packet to the first device, so as to instruct the first device to generate a packet for instructing the source device to reduce a rate at which the source device sends the first data stream to the first device.
In this specific embodiment, for specific implementation of the receiving unit 701, the determining unit 702, the processing unit 703 and the sending unit 704, reference may be made to the functions and implementation steps of the second device described in fig. 2 and fig. 3, and for brevity, no further description is given.
As shown in fig. 8, another first apparatus 800 for congestion mitigation is provided for the present application. The first device 800 may be any one of the rack switches 103 to 106 in fig. 1, or may be the first device in the method flowcharts of fig. 2 and fig. 3, and may implement the function of the first device. The first device 800 comprises a network interface 801 and a processor 802, and may further comprise a memory 803.
The processor 802 may be a Central Processing Unit (CPU), a Network Processor (NP), an application-specific integrated circuit (ASIC), or a Programmable Logic Device (PLD). The PLD may be a Complex Programmable Logic Device (CPLD), a field-programmable gate array (FPGA), general Array Logic (GAL), or any combination thereof. The processor 802 is responsible for managing the bus 804 and general processing, and may provide various functions including timing, peripheral interfaces, voltage regulation, power management, and other control functions. The memory 803 may be used to store data used by the processor 802 in performing operations.
The network interface 801 may be a wired interface, such as a fiber distributed data interface (FDDI) or an Ethernet interface. The network interface 801 may also be a wireless interface, such as a wireless local area network interface.
The memory 803 may be a content-addressable memory (CAM), such as a ternary CAM (TCAM), or a random-access memory (RAM).
The memory 803 may also be integrated into the processor 802. If the memory 803 and the processor 802 are separate devices, the memory 803 may be coupled to the processor 802, for example, the memory 803 and the processor 802 may communicate via a bus. The network interface 801 and the processor 802 may communicate via a bus, and the network interface 801 may be directly connected to the processor 802.
The bus 804 may include any number of interconnected buses and bridges that link together various circuits, including one or more processors, represented by the processor 802, and memory, represented by the memory 803. The bus 804 may also link various other circuits such as peripherals, voltage regulators, and power management circuits, which are well known in the art and therefore are not described further herein.
The network interface 801 is configured to receive a first packet sent by the second device and to send a second packet generated by the processor 802 to the source device. The first message is a message generated by the second device in response to congestion occurring at the egress port through which the second device transmits the first data flow. The destination internet protocol IP address of the first packet is equal to the IP address of the source device that sent the first data stream. The first data stream transmitted by the source device reaches the destination device via the first device 800 and the second device, and reaches the first device 800 before reaching the second device.
The processor 802 is configured to generate the second message according to the first message received by the network interface 801.
In this specific embodiment, the specific implementation of the processor 802 and the network interface 801 may refer to the functions and implementation steps of the first device described in fig. 2 and fig. 3, and for brevity, no further description is given.
As shown in fig. 9, another second apparatus 900 for relieving congestion is provided for the present application. The second device 900 may be any one of the rack switches 103 to 106 or any one of the aggregation nodes 101 to 102 in fig. 1, or may be the second device in the method flowcharts of fig. 2 and fig. 3, and may implement the function of the second device. The second device 900 comprises a network interface 901 and a processor 902, and may further comprise a memory 903.
The processor 902 may include, but is not limited to, one or more of a Central Processing Unit (CPU), a Network Processor (NP), an application-specific integrated circuit (ASIC), or a Programmable Logic Device (PLD). The PLD may be a Complex Programmable Logic Device (CPLD), a field-programmable gate array (FPGA), a General Array Logic (GAL), or any combination thereof. The processor 902 is responsible for managing the bus 904 and general processing, and may also provide various functions including timing, peripheral interfaces, voltage regulation, power management, and other control functions. The memory 903 may be used to store data used by the processor 902 in performing operations.
The network interface 901 may be a wired interface, such as a fiber distributed data interface (FDDI) or an Ethernet interface. The network interface 901 may also be a wireless interface, such as a wireless local area network interface.
The memory 903 may be, but is not limited to, a content-addressable memory (CAM), such as a ternary CAM (TCAM), or a random-access memory (RAM).
The memory 903 may also be integrated within the processor 902. If the memory 903 and the processor 902 are separate devices, the memory 903 and the processor 902 may be connected, for example, the memory 903 and the processor 902 may communicate via a bus. The network interface 901 and the processor 902 may communicate via a bus, and the network interface 901 may also be directly connected to the processor 902.
The bus 904 may include any number of interconnected buses and bridges that link together various circuits, including one or more processors, represented by the processor 902, and memory, represented by the memory 903. The bus 904 may also link various other circuits such as peripherals, voltage regulators, and power management circuits, which are well known in the art and therefore are not described further herein.
The network interface 901 is configured to receive a first data stream forwarded by a first device and to send the first packet to the first device. The first data stream is sent by the source device and reaches the destination device via the first device and the second device 900.
The processor 902 is configured to determine that congestion occurs at the egress port through which the second device 900 sends the first data flow and, in response to that congestion, to generate the first packet. The destination IP address of the first packet is equal to the IP address of the source device that sent the first data stream.
In this specific embodiment, for specific implementation of the processor 902 and the network interface 901, reference may be made to the functions and implementation steps of the second device described in fig. 2 and fig. 3, and details are not described again for brevity.
As shown in fig. 10, a system 1000 for congestion relief is provided for the present application. The system 1000 includes a first device 1001 and a second device 1002. The first device 1001 may be any one of the rack switches 103 to 106 in fig. 1, or may be the first device in the method flowcharts of fig. 2 and fig. 3, and may implement the function of the first device. The second device 1002 may be any one of the rack switches 103 to 106 or any one of the aggregation nodes 101 to 102 in fig. 1, or may be a second device in the method flowcharts of fig. 2 and 3, and may implement the function of the second device.
The first device 1001 is configured to receive a first data stream sent by a source device and a first packet sent by the second device 1002, generate a second packet according to the first packet, and send the second packet to the source device. The first packet is a packet generated by the second device 1002 in response to congestion occurring at the egress port through which the second device 1002 sends the first data stream. The destination IP address of the first packet is equal to the IP address of the source device that sent the first data stream. The first data stream sent by the source device is forwarded by the first device 1001 and the second device 1002 to the destination device, and reaches the first device 1001 before reaching the second device 1002. The second message is used to instruct the source device to reduce the rate at which the source device sends the first data stream to the first device 1001.
The second device 1002 is configured to receive the first data flow sent by the first device 1001, and to determine that congestion occurs at the egress port through which the second device 1002 sends the first data flow. In response to the congestion at that egress port, the second device 1002 generates a first packet and sends the first packet to the first device 1001.
In this embodiment, for specific implementation of the first device 1001, reference may be made to the functions and implementation steps of the first device described in fig. 2 and fig. 3. For specific implementation of the second device 1002, reference may be made to the functions and implementation steps of the second device described in fig. 2 and fig. 3, and details are not repeated for brevity.
It should be understood that, in the embodiments of the present application, the magnitude of the serial number of each method described above does not mean the execution sequence, and the execution sequence of each method should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
In the several embodiments provided in this application, it should be understood that the disclosed methods and apparatus may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical division, and in actual implementation, there may be other divisions, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional modules in the embodiments of the present application may be integrated into one processing unit, or each module may exist alone physically, or two or more modules are integrated into one unit. The integrated module can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The integrated unit, if implemented in hardware in combination with software and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, some technical features of the technical solutions of the present application, which contribute to the prior art, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, or a network device) to perform some or all of the steps of the methods described in the embodiments of the present application. The storage medium may be a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.

Claims (11)

1. A method of alleviating congestion, the method comprising:
a first device receives a first packet sent by a second device, where the first packet is a packet generated by the second device in response to congestion occurring at an output port of the second device that forwards a first data stream, a destination Internet Protocol (IP) address of the first packet is equal to an IP address of a source device that sends the first data stream, the first data stream sent by the source device reaches a destination device through the first device and the second device, and the first data stream reaches the first device before reaching the second device;
the first device generates a second message when the number of received first messages meets a preset threshold, where the second message carries information used for indicating a first priority, the forwarding priority of the first data stream is the first priority, and the information used for indicating the first priority is carried in the second message to instruct the source device to reduce the rate of sending the data stream with the forwarding priority of the first priority to the first device; the second message also carries a back-pressure duration value used for instructing the source device to reduce the duration of sending the data stream; and the back-pressure duration value is determined by the first device;
the first data stream comprises a destination queue pair QP of the first data stream, a second data stream sent by the destination device reaches the source device through the second device and the first device, the second data stream reaches the second device before reaching the first device, the source IP address of the second data stream is equal to the IP address of the destination device, the destination IP address of the second data stream is equal to the IP address of the source device, the second data stream comprises the destination QP of the second data stream, and the second message does not comprise the destination QP of the second data stream;
and the first equipment sends the second message to the source equipment, wherein the second message is used for indicating the source equipment to reduce the rate of sending the first data stream to the first equipment by the source equipment.
2. The method of claim 1, wherein generating, by the first device, a second packet from the first packet comprises:
acquiring information carried in the first message and used for indicating a first priority; and
generating the second message based on the information that is carried in the first message and used for indicating the first priority.
3. A method of alleviating congestion, the method comprising:
the method comprises the steps that a second device receives a first data stream forwarded by a first device, wherein the first data stream is sent by a source device and reaches a destination device through the first device and the second device;
the second device determines that congestion occurs at an output port of the second device for forwarding the first data flow;
in response to congestion occurring at the output port through which the second device forwards the first data flow, the second device generates a first message, wherein a destination IP address of the first message is equal to an IP address of the source device that sends the first data flow;
the second device sends the first message to the first device to instruct the first device to generate a second message for instructing the source device to reduce a rate of sending the first data stream to the first device according to the fact that the number of the received first messages meets a preset threshold, wherein the second message carries information for indicating a first priority, the forwarding priority of the first data stream is the first priority, and the information for indicating the first priority is carried in the second message and is used for instructing the source device to reduce the rate of sending the data stream with the forwarding priority of the first priority to the first device; the second message also carries a back-pressure duration value, which is used for indicating the source equipment to reduce the duration of sending the data stream; the backpressure duration value is determined by the first device;
the first data stream includes a destination queue pair QP of the first data stream, a second data stream sent by the destination device reaches the source device via the second device and the first device, the second data stream reaches the second device before reaching the first device, a source IP address of the second data stream is equal to an IP address of the destination device, a destination IP address of the second data stream is equal to an IP address of the source device, the second data stream includes the destination QP of the second data stream, and the second packet does not include the destination QP of the second data stream.
4. A first device, comprising:
a receiving unit, configured to receive a first packet sent by a second device, where the first packet is a packet generated by the second device in response to a congestion occurring at an output port of the second device, where the output port forwards a first data stream, and a destination internet protocol IP address of the first packet is equal to an IP address of a source device that sends the first data stream, where the first data stream sent by the source device reaches a destination device via a first device and the second device, and the first data stream reaches the first device before reaching the second device;
a processing unit, configured to generate a second packet according to that the number of the first packets received by the receiving unit meets a preset threshold, where the second packet carries information used to indicate a first priority, a forwarding priority of the first data stream is the first priority, and the second packet carries information used to indicate the first priority and is used to indicate the source device to reduce a rate of sending a data stream with the forwarding priority of the first priority to the first device; the second message also carries a back-pressure duration value, which is used for indicating the source equipment to reduce the duration of sending the data stream; the backpressure duration value is determined by the first device;
the first data stream comprises a destination queue pair QP of the first data stream, a second data stream sent by the destination device reaches the source device through the second device and the first device, the second data stream reaches the second device before reaching the first device, the source IP address of the second data stream is equal to the IP address of the destination device, the destination IP address of the second data stream is equal to the IP address of the source device, the second data stream comprises the destination QP of the second data stream, and the second message does not comprise the destination QP of the second data stream;
a sending unit, configured to send the second packet generated by the processing unit to the source device, where the second packet is used to instruct the source device to reduce a rate at which the source device sends the first data stream to the first device.
5. The first device of claim 4, wherein the processing unit is configured to:
acquiring information carried in the first message and used for indicating a first priority; and
generating the second message based on the information that is carried in the first message and used for indicating the first priority.
6. A second apparatus, comprising:
a receiving unit, configured to receive a first data stream forwarded by a first device, where the first data stream is sent by a source device and reaches a destination device through the second device and the first device;
a determining unit, configured to determine that congestion occurs at an output port at which the second device forwards the first data flow;
a processing unit, configured to generate a first packet in response to congestion occurring at an output port through which the second device forwards the first data flow, where a destination IP address of the first packet is equal to an IP address of the source device that sends the first data flow;
a sending unit, configured to send the first packet to the first device, so as to instruct the first device to generate a second packet, where the second packet carries information used to indicate a first priority, a forwarding priority of the first data stream is the first priority, and the second packet carries information used to indicate the first priority and is used to instruct the source device to reduce a rate of sending a data stream with the forwarding priority of the first priority to the first device; the second message also carries a back-pressure duration value, which is used for indicating the source equipment to reduce the duration of sending the data stream; the backpressure duration value is determined by the first device;
the first data stream includes a destination queue pair (QP) of the first data stream; a second data stream sent by the destination device reaches the source device via the second device and the first device, the second data stream reaches the second device before reaching the first device, a source IP address of the second data stream is equal to the IP address of the destination device, a destination IP address of the second data stream is equal to the IP address of the source device, the second data stream includes a destination QP of the second data stream, and the second packet does not include the destination QP of the second data stream.
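The behavior of the second device in claim 6 (detect egress congestion, then build a notification packet addressed to the source device's IP) can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation: the threshold constant, class names, and field names (`QUEUE_THRESHOLD`, `CongestionNotification`, `on_forward`) are all invented for illustration.

```python
# Hypothetical sketch of the second-device behavior in claim 6: when the
# egress queue for a forwarded data stream exceeds a threshold, build a
# congestion notification whose destination IP equals the IP of the
# source device that sent the stream. All names and values are assumed.
from dataclasses import dataclass

QUEUE_THRESHOLD = 64  # assumed egress-queue depth limit, in packets


@dataclass
class Packet:
    src_ip: str
    dst_ip: str
    priority: int


@dataclass
class CongestionNotification:
    dst_ip: str    # equal to the source device's IP (per claim 6)
    priority: int  # forwarding priority of the congested data stream


def on_forward(packet: Packet, egress_queue_depth: int):
    """Return a notification to send upstream if the egress port is congested."""
    if egress_queue_depth > QUEUE_THRESHOLD:
        # Destination IP of the notification equals the IP of the source
        # device, so upstream nodes can route it back toward the source.
        return CongestionNotification(dst_ip=packet.src_ip,
                                      priority=packet.priority)
    return None  # no congestion: forward normally, emit nothing


pkt = Packet(src_ip="10.0.0.1", dst_ip="10.0.0.9", priority=3)
assert on_forward(pkt, egress_queue_depth=10) is None
note = on_forward(pkt, egress_queue_depth=100)
assert note is not None and note.dst_ip == "10.0.0.1" and note.priority == 3
```

Note that the notification carries no queue-pair information; per the claim, the second packet later generated by the first device likewise omits the destination QP of the reverse-direction data stream.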
7. A system for alleviating congestion, the system comprising:
the first device is configured to receive a first data stream sent by a source device and a first packet sent by a second device, generate a second packet when the number of received first packets reaches a preset threshold, and send the second packet to the source device, where a destination IP address of the first packet is equal to an IP address of the source device that sends the first data stream, the first data stream sent by the source device reaches a destination device via the first device and the second device, the first data stream reaches the first device before reaching the second device, and the second packet is used to instruct the source device to reduce a rate at which the source device sends the first data stream to the first device; the second packet carries information indicating a first priority, a forwarding priority of the first data stream is the first priority, and the information indicating the first priority is used to instruct the source device to reduce a rate of sending data streams whose forwarding priority is the first priority to the first device; the second packet further carries a backpressure duration value indicating a duration for which the source device reduces the rate of sending the data streams, the backpressure duration value being determined by the first device;
the second device is configured to receive the first data stream forwarded by the first device, determine that congestion occurs at an output port through which the second device forwards the first data stream, generate the first packet in response to the congestion occurring at the output port, and send the first packet to the first device;
the first data stream includes a destination queue pair (QP) of the first data stream; a second data stream sent by the destination device reaches the source device via the second device and the first device, the second data stream reaches the second device before reaching the first device, a source IP address of the second data stream is equal to the IP address of the destination device, a destination IP address of the second data stream is equal to the IP address of the source device, the second data stream includes a destination QP of the second data stream, and the second packet does not include the destination QP of the second data stream.
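The first-device behavior in claim 7 (count congestion notifications and, once a preset threshold is reached, emit a priority-scoped backpressure packet with a device-chosen duration) can be sketched as follows. This is an illustrative sketch only: the threshold of 3, the 50-microsecond duration, and all identifiers (`FirstDevice`, `on_notification`) are assumptions, not values from the patent.

```python
# Hypothetical sketch of the first-device behavior in claim 7: count
# congestion notifications per (source IP, priority) and, once a preset
# threshold is reached, emit a backpressure packet carrying the priority
# and a duration value determined by this device. All names and values
# are illustrative assumptions.
from collections import Counter

PRESET_THRESHOLD = 3           # assumed notification count before reacting
BACKPRESSURE_DURATION_US = 50  # assumed duration chosen by the first device


class FirstDevice:
    def __init__(self):
        self.counts = Counter()

    def on_notification(self, src_ip: str, priority: int):
        """Return a backpressure packet once enough notifications arrive."""
        key = (src_ip, priority)
        self.counts[key] += 1
        if self.counts[key] >= PRESET_THRESHOLD:
            self.counts[key] = 0  # reset the counter after reacting
            # The packet instructs the source to slow all data streams of
            # this forwarding priority for the carried duration.
            return {"dst_ip": src_ip,
                    "priority": priority,
                    "duration_us": BACKPRESSURE_DURATION_US}
        return None  # below threshold: absorb the notification


dev = FirstDevice()
assert dev.on_notification("10.0.0.1", 3) is None
assert dev.on_notification("10.0.0.1", 3) is None
bp = dev.on_notification("10.0.0.1", 3)
assert bp == {"dst_ip": "10.0.0.1", "priority": 3, "duration_us": 50}
```

Gating on a count of notifications rather than reacting to each one means a single transient congestion event at the second device does not immediately throttle every stream of that priority at the source.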
8. A first device, comprising: a processor, and a memory coupled to the processor, wherein,
the memory is configured to store program instructions; and
the processor is configured to perform the method of any one of claims 1 to 2 by executing the program instructions in the memory.
9. A second device, comprising: a processor, and a memory coupled to the processor, wherein,
the memory is configured to store program instructions; and
the processor is configured to perform the method of claim 3 by executing the program instructions in the memory.
10. A computer-readable medium comprising instructions which, when executed by a computer, cause the computer to perform the method of any one of claims 1 to 2.
11. A computer-readable medium comprising instructions that, when executed by a computer, cause the computer to perform the method of claim 3.
CN201711449438.8A 2017-12-27 2017-12-27 Method, equipment and system for relieving congestion Active CN109981471B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711449438.8A CN109981471B (en) 2017-12-27 2017-12-27 Method, equipment and system for relieving congestion

Publications (2)

Publication Number Publication Date
CN109981471A CN109981471A (en) 2019-07-05
CN109981471B true CN109981471B (en) 2023-04-18

Family

ID=67071294

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711449438.8A Active CN109981471B (en) 2017-12-27 2017-12-27 Method, equipment and system for relieving congestion

Country Status (1)

Country Link
CN (1) CN109981471B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3993330A4 (en) * 2019-07-18 2022-08-10 Huawei Technologies Co., Ltd. Flow rate control method and apparatus
CN112311685A (en) * 2019-07-24 2021-02-02 华为技术有限公司 Method and related device for processing network congestion
EP3972209A4 (en) * 2019-07-24 2022-07-06 Huawei Technologies Co., Ltd. Method for processing network congestion, and related apparatus
CN110647071B (en) * 2019-09-05 2021-08-27 华为技术有限公司 Method, device and storage medium for controlling data transmission
CN112714072A (en) * 2019-10-25 2021-04-27 华为技术有限公司 Method and device for adjusting sending rate
CN112751765A (en) * 2019-10-30 2021-05-04 华为技术有限公司 Method and device for adjusting transmission rate
CN111404826B (en) * 2020-03-23 2022-04-22 苏州盛科通信股份有限公司 Flow planning method and device based on output port feedback
CN111614471B (en) * 2020-04-29 2022-06-07 网络通信与安全紫金山实验室 DCQCN data transmission system and transmission method based on SDN
CN113746744A (en) * 2020-05-30 2021-12-03 华为技术有限公司 Method, device, equipment, system and storage medium for controlling network congestion
CN114095448A (en) * 2020-08-05 2022-02-25 华为技术有限公司 Method and equipment for processing congestion flow
CN113162864B (en) * 2021-04-25 2022-11-08 中国工商银行股份有限公司 RoCE network flow control method, device, equipment and storage medium
WO2023122995A1 (en) * 2021-12-28 2023-07-06 华为技术有限公司 Packet transmission method and device
CN116032852B (en) * 2023-03-28 2023-06-02 新华三工业互联网有限公司 Flow control method, device, system, equipment and storage medium based on session

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101227495B (en) * 2008-02-20 2013-01-16 中兴通讯股份有限公司 Common telecommunication grouping data network system and congestion control method thereof
CN102025617B (en) * 2010-11-26 2015-04-01 中兴通讯股份有限公司 Method and device for controlling congestion of Ethernet
CN104303465A (en) * 2013-03-29 2015-01-21 华为技术有限公司 Network congestion processing method, network node, and network system
CN106330742B (en) * 2015-06-23 2019-12-06 华为技术有限公司 Flow control method and network controller
CN107493238A (en) * 2016-06-13 2017-12-19 华为技术有限公司 A kind of method for controlling network congestion, equipment and system
CN106657365B (en) * 2016-12-30 2019-12-17 清华大学 RDMA (remote direct memory Access) -based high-concurrency data transmission method


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant