WO2019029220A1 - Network device - Google Patents

Network device

Info

Publication number
WO2019029220A1
WO2019029220A1 (PCT/CN2018/087286)
Authority
WO
WIPO (PCT)
Prior art keywords
network device
counter
data streams
control module
queues
Prior art date
Application number
PCT/CN2018/087286
Other languages
English (en)
French (fr)
Inventor
吕晖
林云
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to EP18843217.3A (EP3661139B1)
Publication of WO2019029220A1
Priority to US16/785,990 (US11165710B2)

Links

Images

Classifications

    • H ELECTRICITY
      • H04 ELECTRIC COMMUNICATION TECHNIQUE
        • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
          • H04L 12/00 Data switching networks
            • H04L 12/28 Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
              • H04L 12/40 Bus networks
                • H04L 12/407 Bus networks with decentralised control
                  • H04L 12/413 Bus networks with decentralised control with random access, e.g. carrier-sense multiple-access with collision detection [CSMA-CD]
          • H04L 47/00 Traffic control in data switching networks
            • H04L 47/10 Flow control; Congestion control
              • H04L 47/29 Flow control; Congestion control using a combination of thresholds
              • H04L 47/30 Flow control; Congestion control in combination with information about buffer occupancy at either end or at transit nodes
              • H04L 47/32 Flow control; Congestion control by discarding or delaying data units, e.g. packets or frames
            • H04L 47/50 Queue scheduling
          • H04L 49/00 Packet switching elements
            • H04L 49/90 Buffering arrangements
              • H04L 49/9084 Reactions to storage capacity overflow

Definitions

  • the present application relates to the field of communications technologies, and in particular, to a network device.
  • the Internet provides a data transmission pipeline between network devices.
  • the sequence of data transmitted between network devices can be referred to as a data stream.
  • the network device can be a switch, a router, or a switch chip.
  • network devices allocate different queues to different data streams, for example, different queues for data streams destined for other network devices, or different queues for data streams of different services.
  • Each of the queues occupies a certain amount of free space. When the space of one queue is exhausted, other queues still have enough space available.
  • the number of data streams in network devices also rises rapidly. In order to isolate different data streams so that they do not affect each other, the number of queues required is also rising sharply.
  • queues are generally in the form of linked lists. The larger the number of queues, the higher the logic complexity of the linked list, which will lead to excessive resource consumption of network devices. On the other hand, the sharp increase in the number of queues poses a greater challenge to the caching of network devices.
  • the application provides a network device.
  • the resource consumption of the network device can be reduced.
  • it can alleviate the cache pressure of network devices.
  • the application provides a network device, including: a cache module, a counting module, a control module, and a sending module.
  • the cache module includes N queues for buffering M data streams, where N is less than M. The counting module includes M counters in one-to-one correspondence with the M data streams, and the M counters are used to count the number of buffers of the M data streams in the N queues. The control module is configured to: when the count value of the first counter exceeds the corresponding threshold, discard the to-be-enqueued data packet of the data stream corresponding to the first counter, or control the sending module to send pause indication information to the upper control module, where the pause indication information is used to instruct the upper control module to pause sending data packets; the first counter is any one of the M counters.
  • that is, once the count value of the first counter exceeds the corresponding threshold, the to-be-enqueued data packet is discarded, or the sending module is controlled to send the pause indication information to the upper control module.
  • the resource consumption of the network device can be reduced.
  • it can alleviate the cache pressure of network devices.
  • control module is further configured to: when the count value of the first counter is less than the corresponding threshold, insert the data packet to be queued into the corresponding queue, and control the first counter to update the count value.
  • the first counter is specifically configured to calculate a weighted average value of the count value and the length of the data packet to be queued, and obtain a count value after the first counter is updated.
  • the network device can accurately update the count value of each counter, so that it can accurately determine whether the count value of each counter is less than the corresponding threshold, to decide whether to discard the to-be-enqueued data packet or send the pause indication information to the upper control module.
  • the control module is further configured to schedule a data packet in any queue, and control the second counter corresponding to the scheduled data packet to update its count value.
  • the second counter is specifically configured to calculate a difference between the count value of the second counter and the length of the scheduled data packet, to obtain a count value after the second counter is updated.
  • the N queues are N input queues and the M data streams are M input data streams; correspondingly, the control module is further configured to set up the N input queues according to the number of input ports of the network device and the maximum number of queues corresponding to each input port.
  • the control module is further configured to: determine the maximum number M of input data streams according to the number of users corresponding to each input port and the maximum number of data streams corresponding to each user; and divide each input data packet into the M input data streams.
  • the network device as the lower-level network device can accurately set the input queue and the input data stream M.
  • the N queues are N output queues; the M data streams are M output data streams; and correspondingly, the control module is further configured to set N output queues according to the number of the lower level network devices.
  • the control module is further configured to: determine the maximum number M of output data streams according to the number of lower-level network devices, the maximum number of input ports of each lower-level network device, and the maximum number of data streams corresponding to each input port of the lower-level network devices; and divide each output data packet into the M output data streams.
  • the network device can accurately set the output queue and the output data stream M as the upper-level network device.
  • the application provides a network device, including: a cache module, a counting module, a control module, and a sending module.
  • the cache module includes N queues for buffering M data streams, where N is less than M.
  • the counting module includes M counters, M counters are in one-to-one correspondence with M data streams, and M counters are used to count the number of buffers of M data streams in N queues.
  • a control module configured to: when the count value of the first counter exceeds the corresponding threshold, discard the to-be-enqueued data packet of the data stream corresponding to the first counter; or control the sending module to send the pause indication information to the upper control module.
  • because a huge number of simultaneous data streams is not a common phenomenon, setting fewer queues than data streams, as this application does, is reasonable.
  • since the number of queues included in the present application is small, the resource consumption of network devices can be reduced and the cache pressure of network devices can be alleviated. For short-term bursts, that is, when the number of data streams at the same time is huge, once the count value of a counter exceeds the corresponding threshold, the control module discards the to-be-enqueued data packet of the data stream corresponding to the counter, or controls the sending module to send the pause indication information to the upper control module. Based on this, the network device can cache M data streams through N queues, thereby reducing the resource consumption of the network device and alleviating its buffer pressure.
  • FIG. 1 is a schematic diagram of an application scenario provided by an embodiment of the present application
  • FIG. 2 is a schematic structural diagram of a network device according to an embodiment of the present disclosure.
  • FIG. 3 is a schematic structural diagram of a network device according to an embodiment of the present disclosure.
  • the application provides a network device.
  • the network device may be a switch, a router, or a switch chip.
  • FIG. 1 is an application scenario diagram provided by an embodiment of the present application.
  • a switch chip may have at least one upper-level switch chip and at least one lower-level switch chip.
  • the switch chip can be used as an upper switch chip or as a lower switch chip.
  • the upper switching chip transmits the data packet to its lower switching chip through the output port.
  • the lower-level switching chip receives a data packet sent by its upper-level switching chip through an input port.
  • a queue for storing the data stream to be sent needs to be set.
  • the queue for storing the data stream to be sent is called an output queue.
  • a queue for storing the received data stream needs to be set.
  • the queue for storing the received data stream is called an input queue.
  • a switch can act as both a superior switch and a lower-level switch.
  • as the upper-level switch, the switch sends data packets to its lower-level switches through output ports.
  • the subordinate switch receives the data packet sent by its superior switch through the input port.
  • as the upper-level switch, output queues need to be set; as the lower-level switch, input queues need to be set.
  • a router can act as both a superior router and a lower-level router.
  • the superior router sends the data packet to the lower router through the output port.
  • the subordinate router receives the data packet sent by its superior router through the input port. Among them, as the superior router, you need to set the output queue. As a subordinate router, you need to set the input queue.
  • the queue is generally in the form of a linked list.
  • the sharp increase in the number of queues poses a greater challenge to the caching of network devices.
  • the queue can be the input queue or output queue described above.
  • the application provides a network device.
  • FIG. 2 is a schematic structural diagram of a network device according to an embodiment of the present application.
  • the network device includes: a cache module 21, a counting module 22, a control module 23, and a sending module 24.
  • the cache module 21 includes N queues for buffering M data streams, where N is less than M.
  • the counting module 22 includes M counters, and the M counters are in one-to-one correspondence with the M data streams, and the M counters are used to count the number of buffers of the M data streams in the N queues.
  • the control module 23 is configured to discard the to-be-enqueued data packet of the data stream corresponding to the first counter when the count value of the first counter exceeds the corresponding threshold; or control the sending module 24 to send pause indication information to the upper control module 25.
  • the pause indication information is used to instruct the upper control module 25 to suspend transmission of data packets; the first counter is any one of the M counters.
  • the above N queues may be N input queues.
  • the M data streams are M input data streams.
  • the network device can be understood as a lower-level network device.
  • the above N queues may also be N output queues.
  • the M data streams are M output data streams.
  • the network device can be understood as a superior network device.
  • M counters are not fixedly allocated to the data stream, but are dynamically allocated to the respective data streams.
  • the specific dynamic allocation algorithm may adopt any dynamic allocation algorithm of the prior art, as long as the M counters are matched with the M data streams one by one, which is not limited in this application.
  • Each of the M counters is used to count the number of buffers of the corresponding data stream in the queue corresponding to the data stream.
  • for example, counter A counts the number of buffers of data stream B in queue Q as the length of data stream B.
  • when a data packet C is added to data stream B, counter A may count the number of buffers of data stream B in queue Q as the sum of the length of data stream B and the length of data packet C.
  • alternatively, counter A may count the number of buffers of data stream B in queue Q as a weighted average of the length of data stream B and the length of data packet C.
  • alternatively, counter A may count the number of buffers of data stream B in queue Q as the number of packets included in data stream B.
  • Each counter has a corresponding threshold. The corresponding thresholds between the M counters may be the same or different.
  • the above upper control module 25 may belong to the same network device as the control module 23 or belong to different network devices.
  • the upper control module 25 and the control module 23 may be two control modules belonging to the same switch chip, or the upper control module 25 and the control module 23 may belong to different switch chips. This application does not limit this.
  • the upper-level control module 25 and the control module 23 in FIG. 1 belong to different network devices as an example.
  • the cache module 21 may be a cache buffer, which may be a cache in a random access memory (RAM), or a memory stick, or an integrated circuit. This embodiment of the present application does not limit this.
  • the control module 23 can be an integrated circuit in the switch chip, or a processor, a controller in the switch, or a processor, controller, or the like in the router.
  • the control module 23 can be connected to the cache module 21, the counting module 22, and the transmitting module 24 via a CAN bus.
  • the control module 23 can be connected to the upper control module and the lower control module through a public interface or a private interface.
  • the public interface may be an Ethernet interface
  • the private interface may be a cell-based interface.
  • the sending module 24 may be an integrated circuit in the switch chip, or a transmitter in the switch, or a transmitter in the router.
  • FIG. 3 is a schematic structural diagram of a network device according to an embodiment of the present application.
  • the switch chip at the current level has a plurality of upper-level switch chips, and each switch chip includes: an integrated circuit 31 for buffering queues, a counting module 32 including M counters, an integrated circuit 34 for transmitting the pause indication information to the upper-level integrated circuit 33 (corresponding to the above-mentioned upper control module), and an integrated circuit 35 for discarding the to-be-enqueued data packet or for controlling the integrated circuit 34 to transmit the pause indication information to the upper-level integrated circuit 33.
  • the integrated circuit 31 includes N queues for buffering M data streams, where N is less than M.
  • the counting module 32 includes M counters 321 , and the M counters 321 are in one-to-one correspondence with the M data streams, and the M counters 321 are used to count the number of buffers of the M data streams in the N queues.
  • Integrated circuit 35 can be coupled to integrated circuit 31, counting module 32, and integrated circuit 34 via a CAN bus.
  • the integrated circuit 35 is connected to the upper integrated circuit 33 through a cell-based interface.
  • the number of data streams in the network device also rises rapidly.
  • the large number of data streams at the same time is not a common phenomenon.
  • especially in data centers, switches are lightly loaded most of the time and only experience short bursts. It is therefore reasonable that the number of queues set in this application is less than the number of data streams, and since the number of queues is small, the resource consumption of the network device can be reduced and the buffer pressure of the network device can be alleviated.
  • the control module of the network device may determine, according to whether the count value of a counter exceeds the corresponding threshold, whether to send the pause indication information to the upper control module or to discard the to-be-enqueued data packet.
  • once the count value of a counter exceeds the corresponding threshold, the to-be-enqueued data packet of the corresponding data stream is discarded, or the sending module is controlled to send the pause indication information to the upper control module.
  • the sending of the suspension indication information to the superior control module embodies the management mechanism of the hierarchical queue.
  • the control module typically has multiple upper-level control modules, and through this hierarchical queue management mechanism the multiple upper-level control modules can share the buffer pressure of the control module at the current level.
  • the present application is applicable to the case where the number of data streams is normal at the same time, and is also applicable to the case where the number of data streams is large at the same time.
  • the network device provided by the present application can reduce the resource consumption of the network device. On the other hand, it can alleviate the cache pressure of network devices.
  • the application provides a network device, including: a cache module, a counting module, a control module, and a sending module.
  • the cache module includes N queues for buffering M data streams, where N is less than M.
  • the counting module includes M counters, M counters are in one-to-one correspondence with M data streams, and M counters are used to count the number of buffers of M data streams in N queues.
  • the control module is configured to discard the to-be-entered data packet of the data flow corresponding to the first counter when the count value of the first counter exceeds the corresponding threshold; or, the control sending module sends the suspension indication information to the upper control module.
  • the resource consumption of the network device can be reduced.
  • it can alleviate the cache pressure of network devices.
  • the sending module sending the pause indication information to the upper control module can be understood as a hierarchical implementation manner.
  • when multiple upper-level network devices output data streams to the network device at the current level and the network device at the current level is congested, this hierarchical implementation manner enables the upper-level control modules to suspend the transmission of data packets, that is, multiple upper-level network devices can share the congestion pressure of the current level.
  • control module 23 is further configured to: when the count value of the first counter is less than the corresponding threshold, insert the data packet to be queued into the corresponding queue, and control the first counter to update the count value.
  • the first counter is specifically configured to calculate a weighted average value of the count value and the length of the data packet to be queued, to obtain a count value after the first counter is updated.
  • for example, the first counter C1 can update its count value by the following formula:
  • C1_new = (1 - α) × C1_old + α × packet1_length
  • where α is a smoothing factor with 0 ≤ α ≤ 1, C1_old denotes the count value of the first counter C1 before the update, C1_new denotes the count value of the first counter C1 after the update, and packet1_length denotes the length of the to-be-enqueued data packet.
  • the first counter can also directly calculate the sum of the count value and the length of the data packet to be queued, and obtain the count value after the first counter is updated.
  • the present application does not limit the method of calculating the updated count value.
  • the network device can accurately update the count value of each counter, so that it can accurately determine whether the count value of each counter is less than a corresponding threshold, to determine whether to discard the data packet to be queued or send the pause indication information to the upper control module.
  • the control module 23 is further configured to schedule a data packet in any queue, and control the second counter corresponding to the scheduled data packet to update its count value.
  • the second counter is specifically configured to calculate a difference between the count value of the second counter and the length of the scheduled data packet, to obtain a count value after the second counter is updated.
  • the second counter may not be any one of the M counters. That is, the second counter and the above M counters are independently set. In this case, the count values of the second counter and the above M counters do not affect each other.
  • the second counter may also be any of the above M counters.
  • for example, the second counter may be the first counter described above.
  • in this case, the count value of the first counter also changes as data packets are scheduled.
  • for example, the current count value of the first counter C1 is the above-mentioned C1_new.
  • when the scheduled data packet corresponds to the first counter C1, then C1_new' = C1_new - packet2_length, where C1_new' denotes the count value of the first counter C1 updated according to the scheduled data packet.
  • packet2_length denotes the length of the scheduled data packet.
  • when the N queues are N input queues and the M data streams are M input data streams, the control module 23 is further configured to set up the N input queues according to the number of input ports of the network device and the maximum number of queues corresponding to each input port.
  • the control module 23 is further configured to: determine the maximum number M of input data streams according to the number of users corresponding to each input port and the maximum number of data streams corresponding to each user; and divide each input data packet into the M input data streams.
  • specifically, in this case the network device acts as a lower-level network device, and the control module 23 may set up the N input queues according to the number of input ports of the network device and the maximum number of queues corresponding to each input port.
  • the maximum number of queues corresponding to each input port can be the same or different. Assume that the number of input ports of the network device is 5. The maximum number of queues per input port is the same, both are 10. Then, the product of the number of input ports of the network device and the maximum number of queues of each input port is 50. Based on this, the control module 23 can be configured to set up 50 input queues.
  • the number of users corresponding to each input port may be the same or different.
  • the maximum number of data streams corresponding to each user can also be the same or different. Assume that the number of users corresponding to each input port is 100, and the maximum number of data streams corresponding to each user is 5. In this case, calculating the product of the number of users corresponding to each input port and the maximum number of data streams corresponding to each user determines that the maximum number M of input data streams is 500.
  • when the N queues are N output queues and the M data streams are M output data streams, the control module 23 is further configured to set up the N output queues according to the number of lower-level network devices.
  • the control module 23 is further configured to: determine the maximum number M of output data streams according to the number of lower-level network devices, the maximum number of input ports of each lower-level network device, and the maximum number of data streams corresponding to each input port of each lower-level network device; and divide each output data packet into the M output data streams.
  • specifically, in this case the network device acts as an upper-level network device, and the control module 23 may set up the N output queues according to the number of lower-level network devices.
  • the number of subordinate network devices of the control module is 10. Based on this, the control module 23 can set up 10 output queues. That is, the number of output queues is the same as the number of subordinate network devices.
  • the control module 23 can also set N output queues according to the number of subordinate network devices and the adjustment factor.
  • the adjustment factor is a positive integer greater than one. For example, the number of subordinate network devices of the control module is 10.
  • the adjustment factor is 3, and the control module 23 can calculate the product of the number 10 of the lower-level network devices and the adjustment factor 3, and obtain N as 30. Therefore, the control module 23 can set up 30 output queues.
  • the maximum number of input ports of each lower-level network device may be the same or different, and the maximum number of data streams corresponding to each input port of the lower-level network device may be the same or different.
  • the number of subordinate network devices of the network device is 10.
  • the maximum number of input ports for each subordinate network device is 3.
  • the maximum number of streams corresponding to each input port is 10.
  • the product of the number of lower-level network devices, the maximum number of input ports of each lower-level network device, and the maximum number of data streams corresponding to each input port is calculated, and the maximum number M of output data streams is 300.
  • the network device as the upper-level network device and the lower-level network device may have different standards for dividing the data stream.
  • as the upper-level network device, data stream division can be performed according to the lower-level network devices and the input ports of the lower-level network devices.
  • the upper-level network device can perform data flow division according to the number of lower-level network devices and the input port of the lower-level network device.
  • as the lower-level network device, data stream division can be performed according to a quintuple.
  • the quintuple includes: a source Internet Protocol (IP) address, a source port, a destination IP address, a destination port, and a transport layer protocol.
  • alternatively, as the upper-level network device, the data stream can be divided according to the quintuple.
  • data flow division can be performed according to the number of upper-level network devices and the output port of the upper-level network device.
  • the network device can accurately set the input queue and the input data stream M as the lower-level network device.
  • the network device can accurately set the output queue and the output data stream M as the upper-level network device.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

This application provides a network device that includes a cache module, a counting module, a control module, and a sending module. The cache module includes N queues for buffering M data streams, where N is less than M. The counting module includes M counters in one-to-one correspondence with the M data streams, and the M counters count the buffered amount of the M data streams in the N queues. The control module is configured to, when the count value of a first counter exceeds its corresponding threshold, discard the to-be-enqueued data packet of the data stream corresponding to the first counter, or control the sending module to send pause indication information to an upper-level control module. Because the to-be-enqueued packet is discarded, or the pause indication information is sent to the upper-level control module, as soon as the count value of the first counter exceeds its threshold, the resource consumption of the network device can be reduced on the one hand, and the buffer pressure of the network device can be relieved on the other hand.

Description

Network device
This application claims priority to Chinese Patent Application No. 201710680143.5, entitled "Network Device" and filed with the Chinese Patent Office on August 10, 2017, which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the field of communications technologies, and in particular, to a network device.
Background
The Internet provides a data transmission pipe between network devices. A sequence of data transmitted between network devices may be referred to as a data stream. A network device may be a switch, a router, a switch chip, or the like.
In the prior art, a network device allocates different queues to different data streams, for example, different queues for data streams destined for different network devices, or different queues for data streams of different services. Each queue occupies a certain amount of available space; when the space of one queue is exhausted, the other queues still have enough space available. As network speeds increase rapidly, the number of data streams in a network device also rises rapidly, and to isolate different data streams from one another, the number of required queues rises sharply as well. However, on the one hand, queues are generally implemented as linked lists, and the more queues there are, the higher the logical complexity of the linked lists, which leads to excessive resource consumption in the network device. On the other hand, the sharp increase in the number of queues poses a greater challenge to the buffer of the network device.
Summary
This application provides a network device that, on the one hand, can reduce the resource consumption of the network device and, on the other hand, can relieve the buffer pressure of the network device.
This application provides a network device including a cache module, a counting module, a control module, and a sending module. The cache module includes N queues for buffering M data streams, where N is less than M. The counting module includes M counters in one-to-one correspondence with the M data streams, and the M counters are configured to count the buffered amount of the M data streams in the N queues. The control module is configured to, when the count value of a first counter exceeds its corresponding threshold, discard the to-be-enqueued data packet of the data stream corresponding to the first counter, or control the sending module to send pause indication information to an upper-level control module, where the pause indication information instructs the upper-level control module to pause sending data packets. The first counter is any one of the M counters.
That is, as soon as the count value of the first counter exceeds its corresponding threshold, the to-be-enqueued data packet of the corresponding data stream is discarded, or the sending module is controlled to send the pause indication information to the upper-level control module. On the one hand, this reduces the resource consumption of the network device; on the other hand, it relieves the buffer pressure of the network device.
Optionally, the control module is further configured to, when the count value of the first counter is less than the corresponding threshold, insert the to-be-enqueued data packet into the corresponding queue and control the first counter to update its count value.
Optionally, the first counter is specifically configured to calculate a weighted average of its count value and the length of the to-be-enqueued data packet to obtain the updated count value of the first counter.
In this way, the network device can accurately update the count value of each counter, and can therefore accurately determine whether each count value is less than its corresponding threshold, so as to decide whether to discard the to-be-enqueued data packet or to send the pause indication information to the upper-level control module.
Optionally, the control module is further configured to schedule a data packet in any queue and control a second counter corresponding to the scheduled data packet to update its count value.
Optionally, the second counter is specifically configured to calculate the difference between its count value and the length of the scheduled data packet to obtain the updated count value of the second counter.
Optionally, the N queues are N input queues and the M data streams are M input data streams. Correspondingly, the control module is further configured to set up the N input queues according to the number of input ports of the network device and the maximum number of queues corresponding to each input port.
Optionally, the control module is further configured to determine the maximum number M of input data streams according to the number of users corresponding to each input port and the maximum number of data streams corresponding to each user, and to divide the incoming data packets into the M input data streams.
In this application, the network device acting as a lower-level network device can thus accurately set up the input queues and the M input data streams.
Optionally, the N queues are N output queues and the M data streams are M output data streams. Correspondingly, the control module is further configured to set up the N output queues according to the number of lower-level network devices.
Optionally, the control module is further configured to determine the maximum number M of output data streams according to the number of lower-level network devices, the maximum number of input ports of each lower-level network device, and the maximum number of data streams corresponding to each input port of a lower-level network device, and to divide the outgoing data packets into the M output data streams.
In this application, the network device acting as an upper-level network device can thus accurately set up the output queues and the M output data streams.
In summary, this application provides a network device including a cache module, a counting module, a control module, and a sending module. The cache module includes N queues for buffering M data streams, where N is less than M. The counting module includes M counters in one-to-one correspondence with the M data streams, and the M counters count the buffered amount of the M data streams in the N queues. The control module is configured to, when the count value of a first counter exceeds its corresponding threshold, discard the to-be-enqueued data packet of the corresponding data stream, or control the sending module to send pause indication information to an upper-level control module. Because a huge number of simultaneous data streams is not a common situation, setting fewer queues than data streams is reasonable, and because the network device includes fewer queues, its resource consumption is reduced and its buffer pressure is relieved. For short bursts, that is, when the number of simultaneous data streams is huge, once the count value of a counter exceeds its threshold, the control module discards the to-be-enqueued data packet of the corresponding data stream, or controls the sending module to send the pause indication information to the upper-level control module. On this basis, the network device can buffer the M data streams with only N queues, thereby reducing the resource consumption of the network device and relieving its buffer pressure.
Brief Description of the Drawings
FIG. 1 is a diagram of an application scenario according to an embodiment of this application;
FIG. 2 is a schematic structural diagram of a network device according to an embodiment of this application;
FIG. 3 is a schematic structural diagram of a network device according to an embodiment of this application.
Detailed Description
This application provides a network device. The network device may be a switch, a router, a switch chip, or the like.
Taking a switch chip as an example, FIG. 1 is a diagram of an application scenario according to an embodiment of this application. As shown in FIG. 1, a switch chip may have at least one upper-level switch chip and at least one lower-level switch chip, and the switch chip may act either as an upper-level switch chip or as a lower-level switch chip. As an upper-level switch chip, it sends data packets to its lower-level switch chips through output ports; as a lower-level switch chip, it receives data packets sent by its upper-level switch chips through input ports. An upper-level switch chip needs to set up queues for storing the data streams to be sent, which are called output queues; a lower-level switch chip needs to set up queues for storing the received data streams, which are called input queues.
Likewise, a switch may act either as an upper-level switch or as a lower-level switch. As an upper-level switch, it sends data packets to lower-level switches through output ports; as a lower-level switch, it receives data packets sent by its upper-level switches through input ports. An upper-level switch needs to set up output queues, and a lower-level switch needs to set up input queues.
Likewise, a router may act either as an upper-level router or as a lower-level router. As an upper-level router, it sends data packets to lower-level routers through output ports; as a lower-level router, it receives data packets sent by its upper-level routers through input ports. An upper-level router needs to set up output queues, and a lower-level router needs to set up input queues.
To solve the following technical problems in the prior art, namely that queues are generally implemented as linked lists whose logical complexity, and therefore the resource consumption of the network device, grows with the number of queues, and that the sharp increase in the number of queues poses a greater challenge to the buffer of the network device, where the queues may be the input queues or output queues described above, this application provides a network device.
Specifically, FIG. 2 is a schematic structural diagram of a network device according to an embodiment of this application. As shown in FIG. 2, the network device includes a cache module 21, a counting module 22, a control module 23, and a sending module 24. The cache module 21 includes N queues for buffering M data streams, where N is less than M. The counting module 22 includes M counters in one-to-one correspondence with the M data streams, and the M counters count the buffered amount of the M data streams in the N queues. The control module 23 is configured to, when the count value of a first counter exceeds its corresponding threshold, discard the to-be-enqueued data packet of the data stream corresponding to the first counter, or control the sending module 24 to send pause indication information to an upper-level control module 25, where the pause indication information instructs the upper-level control module 25 to pause sending data packets. The first counter is any one of the M counters.
The N queues may be N input queues, in which case the M data streams are M input data streams and the network device can be understood as a lower-level network device. The N queues may also be N output queues, in which case the M data streams are M output data streams and the network device can be understood as an upper-level network device.
Optionally, the M counters are not statically assigned to data streams but are dynamically allocated to the individual data streams. Any existing dynamic allocation algorithm may be used, as long as the M counters remain in one-to-one correspondence with the M data streams; this application places no limitation on this.
Each of the M counters counts the buffered amount of its corresponding data stream in the queue corresponding to that data stream. For example, counter A counts the buffered amount of data stream B in queue Q as the length of data stream B. When a data packet C is added to data stream B, counter A may count the buffered amount of data stream B in queue Q as the sum of the length of data stream B and the length of data packet C, or as a weighted average of the length of data stream B and the length of data packet C. Alternatively, counter A may count the buffered amount of data stream B in queue Q as the number of packets included in data stream B. Each counter has a corresponding threshold, and the thresholds of the M counters may be the same or different.
The upper-level control module 25 may belong to the same network device as the control module 23 or to a different network device. For example, the upper-level control module 25 and the control module 23 may be two control modules of the same switch chip, or they may belong to different switch chips; this application places no limitation on this. FIG. 1 takes the case where the upper-level control module 25 and the control module 23 belong to different network devices as an example.
Optionally, the cache module 21 may be a buffer, which may be a buffer in a random access memory (RAM), a memory module, or an integrated circuit, among others; this embodiment of this application places no limitation on this.
The control module 23 may be an integrated circuit in a switch chip, a processor or controller in a switch, or a processor or controller in a router. The control module 23 may be connected to the cache module 21, the counting module 22, and the sending module 24 through a CAN bus, and may be connected to the upper-level control module and the lower-level control module through a public interface or a private interface to communicate with them. The public interface may be, for example, an Ethernet interface, and the private interface may be, for example, a cell-based interface.
The sending module 24 may be an integrated circuit in a switch chip, a transmitter in a switch, or a transmitter in a router.
For example, FIG. 3 is a schematic structural diagram of a network device according to an embodiment of this application. As shown in FIG. 3, assume that the network device is a switch chip and that the switch chip at the current level has multiple upper-level switch chips. Each switch chip includes an integrated circuit 31 for buffering queues, a counting module 32 that includes M counters, an integrated circuit 34 for sending pause indication information to an upper-level integrated circuit 33 (corresponding to the upper-level control module described above), and an integrated circuit 35 for discarding to-be-enqueued data packets or for controlling the integrated circuit 34 to send the pause indication information to the upper-level integrated circuit 33.
The integrated circuit 31 includes N queues for buffering M data streams, where N is less than M. The counting module 32 includes M counters 321 in one-to-one correspondence with the M data streams, and the M counters 321 count the buffered amount of the M data streams in the N queues.
The integrated circuit 35 may be connected to the integrated circuit 31, the counting module 32, and the integrated circuit 34 through a CAN bus, and is connected to the upper-level integrated circuit 33 through a cell-based interface.
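To make the FIG. 3 arrangement easier to picture, the following is a structural sketch in Python with the block numbers from the figure kept as comments; the class names and the string representation of the CAN bus and cell-based connections are illustrative assumptions rather than anything stated in the text.

```python
from dataclasses import dataclass

@dataclass
class QueueBuffer:        # integrated circuit 31: N queues buffering M data streams
    n_queues: int

@dataclass
class CountingModule:     # counting module 32: M counters 321, one per data stream
    m_counters: int

@dataclass
class PauseSender:        # integrated circuit 34: sends pause indication to upper-level circuit 33
    upstream_interface: str = "cell-based"

@dataclass
class Controller:         # integrated circuit 35: drops packets or triggers the pause sender
    internal_bus: str = "CAN"    # connects circuits 31, 32 and 34 inside the chip

@dataclass
class SwitchChip:
    buffer: QueueBuffer
    counting: CountingModule
    sender: PauseSender
    controller: Controller

chip = SwitchChip(QueueBuffer(n_queues=50), CountingModule(m_counters=500),
                  PauseSender(), Controller())
```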
Further, as network speeds increase rapidly, the number of data streams in a network device also rises rapidly. However, a huge number of simultaneous data streams is not a common situation; in data centers in particular, switches are lightly loaded most of the time and only experience short bursts. It is therefore reasonable for this application to set fewer queues than data streams, and because there are fewer queues, the resource consumption of the network device is reduced and its buffer pressure is relieved.
For short bursts, that is, when the number of simultaneous data streams is huge, the control module of the network device can decide, according to whether a counter's count value exceeds its corresponding threshold, whether to send pause indication information to the upper-level control module or to discard the to-be-enqueued data packet. Once the count value of a counter exceeds its threshold, the to-be-enqueued data packet of the corresponding data stream is discarded, or the sending module is controlled to send the pause indication information to the upper-level control module. Sending the pause indication information to the upper-level control module embodies a hierarchical queue management mechanism: a control module usually has multiple upper-level control modules, and through this hierarchical mechanism the multiple upper-level control modules can share the buffer pressure of the control module at the current level. In summary, this application applies both when the number of simultaneous data streams is normal and when it is huge, and the network device provided by this application can, on the one hand, reduce resource consumption and, on the other hand, relieve buffer pressure.
In summary, this application provides a network device including a cache module, a counting module, a control module, and a sending module. The cache module includes N queues for buffering M data streams, where N is less than M. The counting module includes M counters in one-to-one correspondence with the M data streams, and the M counters count the buffered amount of the M data streams in the N queues. The control module is configured to, when the count value of a first counter exceeds its corresponding threshold, discard the to-be-enqueued data packet of the corresponding data stream, or control the sending module to send pause indication information to an upper-level control module. On the one hand, this reduces the resource consumption of the network device; on the other hand, it relieves its buffer pressure. Further, sending the pause indication information to the upper-level control module can be understood as a hierarchical implementation: when multiple upper-level network devices output data streams to the network device at the current level and that device becomes congested, the hierarchical implementation makes the upper-level control modules pause sending data packets, so the multiple upper-level network devices can share the congestion pressure of the current level.
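As an illustration of the enqueue-side behaviour described above, the following minimal Python sketch shows the threshold check and the drop-or-pause choice. It is one possible reading of the text rather than the patent's implementation: the class and method names (SharedQueueDevice, on_packet_arrival, send_pause), the modulo mapping of data streams onto the shared queues, and the simple-sum counter update are assumptions introduced here.

```python
from collections import deque

class SharedQueueDevice:
    """Sketch of a device buffering M data streams in N shared queues (N < M)."""

    def __init__(self, n_queues, m_streams, thresholds, pause_instead_of_drop=False):
        assert n_queues < m_streams
        self.queues = [deque() for _ in range(n_queues)]   # cache module: N queues
        self.counters = [0] * m_streams                    # counting module: M per-stream counters
        self.thresholds = thresholds                       # one threshold per counter
        self.pause_instead_of_drop = pause_instead_of_drop

    def queue_for_stream(self, stream_id):
        # assumption: data streams are mapped onto the N shared queues by modulo
        return self.queues[stream_id % len(self.queues)]

    def send_pause(self, stream_id):
        # stands in for the sending module passing pause indication information upstream
        print(f"pause indication for data stream {stream_id}")

    def on_packet_arrival(self, stream_id, packet):
        """Control-module logic: threshold check, then drop or pause, otherwise enqueue."""
        if self.counters[stream_id] > self.thresholds[stream_id]:
            if self.pause_instead_of_drop:
                self.send_pause(stream_id)      # ask the upper-level device to stop sending
            return False                        # the to-be-enqueued packet is not buffered here
        self.queue_for_stream(stream_id).append(packet)
        self.counters[stream_id] += len(packet)  # simple-sum update; the weighted-average variant appears below
        return True

# usage: 50 queues for 500 streams, every threshold at 64 KiB, drop when exceeded
device = SharedQueueDevice(50, 500, thresholds=[64 * 1024] * 500)
device.on_packet_arrival(stream_id=7, packet=b"\x00" * 1500)
```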
Optionally, the control module 23 is further configured to, when the count value of the first counter is less than the corresponding threshold, insert the to-be-enqueued data packet into the corresponding queue and control the first counter to update its count value.
Optionally, the first counter is specifically configured to calculate a weighted average of its count value and the length of the to-be-enqueued data packet to obtain its updated count value.
For example, the first counter C1 may update its count value using the following formula:
C1_new = (1 - α) × C1_old + α × packet1_length
where α is a smoothing factor with 0 ≤ α ≤ 1, C1_old is the count value of the first counter C1 before the update, C1_new is the count value of the first counter C1 after the update, and packet1_length is the length of the to-be-enqueued data packet.
The first counter may also simply calculate the sum of its count value and the length of the to-be-enqueued data packet to obtain its updated count value. This application places no limitation on the method used to calculate the updated count value.
In this way, the network device can accurately update the count value of each counter and accurately determine whether each count value is less than its corresponding threshold, so as to decide whether to discard the to-be-enqueued data packet or to send the pause indication information to the upper-level control module.
Optionally, the control module 23 is further configured to schedule a data packet in any queue and control the second counter corresponding to the scheduled data packet to update its count value.
Optionally, the second counter is specifically configured to calculate the difference between its count value and the length of the scheduled data packet to obtain its updated count value.
The second counter may be a counter other than the M counters described above, that is, the second counter and the M counters may be set up independently, in which case their count values do not affect one another. The second counter may also be any one of the M counters. For example, if the second counter is the first counter described above, the count value of the first counter also changes as data packets are scheduled. For instance, if the current count value of the first counter C1 is C1_new as above and the scheduled data packet corresponds to the first counter C1, then C1_new' = C1_new - packet2_length, where C1_new' is the count value of the first counter C1 updated according to the scheduled data packet, and packet2_length is the length of the scheduled data packet.
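The two update rules above can be written as small helper functions; a minimal Python sketch follows, in which the function names are assumptions and the simple-sum alternative mentioned in the text is included for comparison.

```python
def ewma_enqueue_update(c1_old: float, packet1_length: int, alpha: float) -> float:
    """C1_new = (1 - alpha) x C1_old + alpha x packet1_length, with 0 <= alpha <= 1."""
    assert 0.0 <= alpha <= 1.0
    return (1.0 - alpha) * c1_old + alpha * packet1_length

def sum_enqueue_update(c1_old: float, packet1_length: int) -> float:
    """Alternative from the text: simply add the length of the to-be-enqueued packet."""
    return c1_old + packet1_length

def dequeue_update(c1_new: float, packet2_length: int) -> float:
    """C1_new' = C1_new - packet2_length when the scheduled packet maps to this counter."""
    return c1_new - packet2_length

# example with alpha = 0.5: previous count 1000, enqueue a 500-byte packet, then schedule it out
c = ewma_enqueue_update(1000.0, 500, 0.5)   # 750.0
c = dequeue_update(c, 500)                  # 250.0
```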
Optionally, when the N queues are N input queues and the M data streams are M input data streams, the control module 23 is further configured to set up the N input queues according to the number of input ports of the network device and the maximum number of queues corresponding to each input port.
Optionally, when the N queues are N input queues and the M data streams are M input data streams, the control module 23 is further configured to determine the maximum number M of input data streams according to the number of users corresponding to each input port and the maximum number of data streams corresponding to each user, and to divide the incoming data packets into the M input data streams.
Specifically, when the N queues are N input queues and the M data streams are M input data streams, the network device is currently acting as a lower-level network device. In this case, the control module 23 may set up the N input queues according to the number of input ports of the network device and the maximum number of queues corresponding to each input port; the maximum number of queues may be the same or different for different input ports. Suppose the network device has 5 input ports and the maximum number of queues per input port is the same, namely 10. The product of the number of input ports and the maximum number of queues per input port is then 50, so the control module 23 may set up 50 input queues.
Further, the number of users corresponding to each input port may be the same or different, and the maximum number of data streams corresponding to each user may also be the same or different. Suppose each input port corresponds to 100 users and each user corresponds to at most 5 data streams. The product of the number of users per input port and the maximum number of data streams per user then gives a maximum number M of input data streams of 500.
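The input-side example above reduces to a short calculation; a sketch follows, with variable names chosen here for illustration and the per-port maxima assumed identical, as in the example.

```python
# lower-level (input) side, using the numbers from the example above
num_input_ports = 5
max_queues_per_port = 10
users_per_port = 100
max_streams_per_user = 5

n_input_queues = num_input_ports * max_queues_per_port    # N = 50
m_input_streams = users_per_port * max_streams_per_user   # M = 500

assert n_input_queues < m_input_streams                   # N < M, as the scheme requires
print(n_input_queues, m_input_streams)                    # 50 500
```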
Optionally, when the N queues are N output queues and the M data streams are M output data streams, the control module 23 is further configured to set up the N output queues according to the number of lower-level network devices.
Optionally, when the N queues are N output queues and the M data streams are M output data streams, the control module 23 is further configured to determine the maximum number M of output data streams according to the number of lower-level network devices, the maximum number of input ports of each lower-level network device, and the maximum number of data streams corresponding to each input port of each lower-level network device, and to divide the outgoing data packets into the M output data streams.
Specifically, when the N queues are N output queues and the M data streams are M output data streams, the network device is currently acting as an upper-level network device. In this case, the control module 23 may set up the N output queues according to the number of lower-level network devices. For example, if the control module has 10 lower-level network devices, the control module 23 may set up 10 output queues, so that the number of output queues equals the number of lower-level network devices. The control module 23 may also set up the N output queues according to the number of lower-level network devices and an adjustment factor, where the adjustment factor is a positive integer greater than 1. For example, if there are 10 lower-level network devices and the adjustment factor is 3, the control module 23 may compute the product of 10 and 3 to obtain N = 30, and therefore set up 30 output queues.
Further, the maximum number of input ports may be the same or different for different lower-level network devices, and the maximum number of data streams corresponding to each input port of a lower-level network device may also be the same or different. Suppose the network device has 10 lower-level network devices, each lower-level network device has at most 3 input ports, and each input port corresponds to at most 10 data streams. The product of the number of lower-level network devices, the maximum number of input ports per lower-level network device, and the maximum number of data streams per input port then gives a maximum number M of output data streams of 300.
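The output-side example reduces to a similar calculation; the optional adjustment factor from the preceding paragraph is included, and the variable names are again illustrative.

```python
# upper-level (output) side, using the numbers from the example above
num_lower_devices = 10
adjustment_factor = 3                 # optional positive integer greater than 1
max_ports_per_lower_device = 3
max_streams_per_lower_port = 10

n_output_queues = num_lower_devices                                  # N = 10
n_output_queues_adjusted = num_lower_devices * adjustment_factor     # N = 30 with the factor
m_output_streams = (num_lower_devices * max_ports_per_lower_device
                    * max_streams_per_lower_port)                    # M = 300

print(n_output_queues, n_output_queues_adjusted, m_output_streams)   # 10 30 300
```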
It should be noted that, in this application, the network device may use different criteria for dividing data streams when it acts as an upper-level network device and when it acts as a lower-level network device. For example, as an upper-level network device it may divide data streams according to the lower-level network devices and their input ports, for instance according to the number of lower-level network devices and the input ports of the lower-level network devices, while as a lower-level network device it may divide data streams according to a 5-tuple, which includes the source Internet Protocol (IP) address, source port, destination IP address, destination port, and transport-layer protocol. Alternatively, as an upper-level network device it may divide data streams according to the 5-tuple, and as a lower-level network device it may divide data streams according to the number of upper-level network devices and the output ports of the upper-level network devices.
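The flow-division rules above can be read as a classification function that maps each packet to one of the M stream indices. The sketch below is one possible reading: the patent only states which attributes are used, so the CRC32 hashing and the field names are assumptions made for illustration.

```python
import zlib

def lower_level_stream_id(src_ip, src_port, dst_ip, dst_port, proto, m_streams):
    """Lower-level device: divide by the 5-tuple (source IP, source port, destination IP, destination port, protocol)."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{proto}".encode()
    return zlib.crc32(key) % m_streams     # hashing to an index is an assumption, not stated in the text

def upper_level_stream_id(lower_device_id, lower_input_port, ports_per_device):
    """Upper-level device: divide by lower-level device and that device's input port."""
    return lower_device_id * ports_per_device + lower_input_port

print(lower_level_stream_id("10.0.0.1", 1234, "10.0.0.2", 80, "tcp", 500))
print(upper_level_stream_id(lower_device_id=4, lower_input_port=2, ports_per_device=3))  # 14
```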
In summary, in this application, the network device acting as a lower-level network device can accurately set up the input queues and the M input data streams, and the network device acting as an upper-level network device can accurately set up the output queues and the M output data streams.

Claims (9)

  1. A network device, wherein the network device comprises: a cache module, a counting module, a control module, and a sending module;
    the cache module comprises N queues for buffering M data streams, wherein N is less than M;
    the counting module comprises M counters, the M counters are in one-to-one correspondence with the M data streams, and the M counters are configured to count a buffered amount of the M data streams in the N queues;
    the control module is configured to: when a count value of a first counter exceeds a corresponding threshold, discard a to-be-enqueued data packet of the data stream corresponding to the first counter; or control the sending module to send pause indication information to an upper-level control module, wherein the pause indication information is used to instruct the upper-level control module to pause sending data packets;
    wherein the first counter is any one of the M counters.
  2. The network device according to claim 1, wherein
    the control module is further configured to: when the count value of the first counter is less than the corresponding threshold, insert the to-be-enqueued data packet into the corresponding queue, and control the first counter to update the count value.
  3. The network device according to claim 2, wherein
    the first counter is specifically configured to calculate a weighted average of the count value and a length of the to-be-enqueued data packet, to obtain an updated count value of the first counter.
  4. The network device according to any one of claims 1 to 3, wherein
    the control module is further configured to schedule a data packet in any queue, and control a second counter corresponding to the scheduled data packet to update a count value.
  5. The network device according to claim 4, wherein
    the second counter is specifically configured to calculate a difference between the count value of the second counter and a length of the scheduled data packet, to obtain an updated count value of the second counter.
  6. The network device according to any one of claims 1 to 5, wherein
    the N queues are N input queues, and the M data streams are M input data streams; and
    correspondingly, the control module is further configured to set up the N input queues according to a number of input ports of the network device and a maximum number of queues corresponding to each input port.
  7. The network device according to claim 6, wherein the control module is further configured to:
    determine a maximum number M of input data streams according to a number of users corresponding to each input port and a maximum number of data streams corresponding to each user; and
    divide input data packets into the M input data streams.
  8. The network device according to any one of claims 1 to 7, wherein
    the N queues are N output queues, and the M data streams are M output data streams; and
    correspondingly, the control module is further configured to set up the N output queues according to a number of lower-level network devices.
  9. The network device according to claim 8, wherein the control module is further configured to:
    determine a maximum number M of output data streams according to the number of lower-level network devices, a maximum number of input ports of each lower-level network device, and a maximum number of data streams corresponding to each input port of the lower-level network devices; and
    divide output data packets into the M output data streams.
PCT/CN2018/087286 2017-08-10 2018-05-17 Network device WO2019029220A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP18843217.3A EP3661139B1 (en) 2017-08-10 2018-05-17 Network device
US16/785,990 US11165710B2 (en) 2017-08-10 2020-02-10 Network device with less buffer pressure

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710680143.5A CN109391559B (zh) 2017-08-10 2017-08-10 网络设备
CN201710680143.5 2017-08-10

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/785,990 Continuation US11165710B2 (en) 2017-08-10 2020-02-10 Network device with less buffer pressure

Publications (1)

Publication Number Publication Date
WO2019029220A1 true WO2019029220A1 (zh) 2019-02-14

Family

ID=65271055

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/087286 WO2019029220A1 (zh) 2017-08-10 2018-05-17 网络设备

Country Status (4)

Country Link
US (1) US11165710B2 (zh)
EP (1) EP3661139B1 (zh)
CN (1) CN109391559B (zh)
WO (1) WO2019029220A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112994943B (zh) * 2021-02-28 2022-05-27 新华三信息安全技术有限公司 一种报文统计方法及装置
CN113676423A (zh) * 2021-08-13 2021-11-19 北京东土军悦科技有限公司 一种端口流量控制方法、装置、交换芯片和存储介质
CN114465924B (zh) * 2021-12-24 2023-12-22 阿里巴巴(中国)有限公司 网络设备测试方法、数据包发生方法和交换芯片

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1905531A (zh) * 2006-08-11 2007-01-31 白杰 待发送数据的处理方法以及数据发送方法、装置
US20110035487A1 (en) * 2007-08-29 2011-02-10 China Mobile Communications Corporation Communication network system and service processing method in communication network
CN103888377A (zh) * 2014-03-28 2014-06-25 华为技术有限公司 报文缓存方法及装置
CN105812285A (zh) * 2016-04-29 2016-07-27 华为技术有限公司 一种端口拥塞管理方法及装置
CN106941460A (zh) * 2016-01-05 2017-07-11 中兴通讯股份有限公司 报文发送方法和装置

Family Cites Families (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6134218A (en) * 1994-04-28 2000-10-17 Pmc-Sierra (Maryland), Inc. Many dimensional congestion detection system and method
AUPN526595A0 (en) * 1995-09-07 1995-09-28 Ericsson Australia Pty Ltd Controlling traffic congestion in intelligent electronic networks
TW477133B (en) * 2000-04-01 2002-02-21 Via Tech Inc Method for solving network congestion and Ethernet switch controller using the same
US7042891B2 (en) * 2001-01-04 2006-05-09 Nishan Systems, Inc. Dynamic selection of lowest latency path in a network switch
US20020124102A1 (en) * 2001-03-01 2002-09-05 International Business Machines Corporation Non-zero credit management to avoid message loss
US20020141427A1 (en) * 2001-03-29 2002-10-03 Mcalpine Gary L. Method and apparatus for a traffic optimizing multi-stage switch fabric network
EP1382165A2 (en) * 2001-04-13 2004-01-21 MOTOROLA INC., A Corporation of the state of Delaware Manipulating data streams in data stream processors
US7190667B2 (en) * 2001-04-26 2007-03-13 Intel Corporation Link level packet flow control mechanism
US6687781B2 (en) * 2001-05-01 2004-02-03 Zettacom, Inc. Fair weighted queuing bandwidth allocation system for network switch port
US7058070B2 (en) * 2001-05-01 2006-06-06 Integrated Device Technology, Inc. Back pressure control system for network switch port
JP4080911B2 (ja) * 2003-02-21 2008-04-23 株式会社日立製作所 帯域監視装置
US7561590B1 (en) * 2003-05-05 2009-07-14 Marvell International Ltd. Network switch having virtual input queues for flow control
JP2005277804A (ja) * 2004-03-25 2005-10-06 Hitachi Ltd 情報中継装置
JP2006121667A (ja) * 2004-09-27 2006-05-11 Matsushita Electric Ind Co Ltd パケット受信制御装置及びパケット受信制御方法
JP2006319914A (ja) * 2005-05-16 2006-11-24 Fujitsu Ltd 呼処理制御装置、呼処理制御装置の制御方法
US8521955B2 (en) * 2005-09-13 2013-08-27 Lsi Corporation Aligned data storage for network attached media streaming systems
JP5104508B2 (ja) * 2008-04-16 2012-12-19 富士通株式会社 中継装置およびパケット中継方法
EP2540038B1 (en) * 2010-02-25 2013-09-11 Telefonaktiebolaget LM Ericsson (publ) Control of token holding in multi-token optical network
WO2011154060A1 (en) * 2010-06-11 2011-12-15 Telefonaktiebolaget L M Ericsson (Publ) Control of buffering in multi-token optical network for different traffic classes
CN101917492B (zh) * 2010-08-06 2013-06-05 北京乾唐视联网络科技有限公司 一种新型网的通信方法及系统
US8520522B1 (en) * 2010-10-15 2013-08-27 Juniper Networks, Inc. Transmit-buffer management for priority-based flow control
CN102088412B (zh) * 2011-03-02 2014-09-03 华为技术有限公司 交换单元芯片、路由器及信元信息的发送方法
JP5737039B2 (ja) * 2011-07-25 2015-06-17 富士通株式会社 パケット伝送装置、メモリ制御回路及びパケット伝送方法
CN102413063B (zh) * 2012-01-12 2014-05-28 盛科网络(苏州)有限公司 动态调整出口资源分配阈值的方法及系统
CN103379038B (zh) * 2012-04-12 2018-08-03 南京中兴新软件有限责任公司 一种流量调度的装置及方法
US8995263B2 (en) * 2012-05-22 2015-03-31 Marvell World Trade Ltd. Method and apparatus for internal/external memory packet and byte counting
CN103023806B (zh) * 2012-12-18 2015-09-16 武汉烽火网络有限责任公司 共享缓存式以太网交换机的缓存资源控制方法及装置
CN103929372B (zh) * 2013-01-11 2017-10-10 华为技术有限公司 主动队列管理方法和设备
US9794141B2 (en) * 2013-03-14 2017-10-17 Arista Networks, Inc. System and method for determining a cause of network congestion
US10230665B2 (en) * 2013-12-20 2019-03-12 Intel Corporation Hierarchical/lossless packet preemption to reduce latency jitter in flow-controlled packet-based networks
US9722810B2 (en) * 2014-02-03 2017-08-01 International Business Machines Corporation Computer-based flow synchronization for efficient multicast forwarding for products and services
CN103763217A (zh) * 2014-02-07 2014-04-30 清华大学 多路径tcp的分组调度方法和装置
CN105337896A (zh) * 2014-07-25 2016-02-17 华为技术有限公司 报文处理方法和装置
CN106850714B (zh) * 2015-12-04 2021-03-09 中国电信股份有限公司 缓存共享方法和装置
US9977745B2 (en) * 2016-01-05 2018-05-22 Knuedge, Inc. Flow control through packet router
WO2017199208A1 (en) * 2016-05-18 2017-11-23 Marvell Israel (M.I.S.L) Ltd. Congestion avoidance in a network device
US10333848B2 (en) * 2016-07-01 2019-06-25 Intel Corporation Technologies for adaptive routing using throughput estimation

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1905531A (zh) * 2006-08-11 2007-01-31 白杰 待发送数据的处理方法以及数据发送方法、装置
US20110035487A1 (en) * 2007-08-29 2011-02-10 China Mobile Communications Corporation Communication network system and service processing method in communication network
CN103888377A (zh) * 2014-03-28 2014-06-25 华为技术有限公司 报文缓存方法及装置
CN106941460A (zh) * 2016-01-05 2017-07-11 中兴通讯股份有限公司 报文发送方法和装置
CN105812285A (zh) * 2016-04-29 2016-07-27 华为技术有限公司 一种端口拥塞管理方法及装置

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3661139A4

Also Published As

Publication number Publication date
US20200177514A1 (en) 2020-06-04
EP3661139A1 (en) 2020-06-03
CN109391559A (zh) 2019-02-26
US11165710B2 (en) 2021-11-02
CN109391559B (zh) 2022-10-18
EP3661139B1 (en) 2021-11-24
EP3661139A4 (en) 2020-08-26

Similar Documents

Publication Publication Date Title
US10772081B2 (en) Airtime-based packet scheduling for wireless networks
US8665892B2 (en) Method and system for adaptive queue and buffer control based on monitoring in a packet network switch
CN109120544B (zh) 一种数据中心网络中基于主机端流量调度的传输控制方法
US8125904B2 (en) Method and system for adaptive queue and buffer control based on monitoring and active congestion avoidance in a packet network switch
JP4260631B2 (ja) ネットワーク輻輳制御の方法および装置
EP3942758A1 (en) System and method for facilitating global fairness in a network
EP3763094B1 (en) Flow management in networks
US9699095B2 (en) Adaptive allocation of headroom in network devices
US9007901B2 (en) Method and apparatus providing flow control using on-off signals in high delay networks
US10050896B2 (en) Management of an over-subscribed shared buffer
US11165710B2 (en) Network device with less buffer pressure
US11695702B2 (en) Packet forwarding apparatus, method and program
WO2016008399A1 (en) Flow control
Cascone et al. Towards approximate fair bandwidth sharing via dynamic priority queuing
Halepoto et al. Management of buffer space for the concurrent multipath transfer over dissimilar paths
CN112751776A (zh) 拥塞控制方法和相关装置
CN114629847B (zh) 基于可用带宽分配的耦合多流tcp拥塞控制方法
US11805071B2 (en) Congestion control processing method, packet forwarding apparatus, and packet receiving apparatus
US20230254264A1 (en) Software-defined guaranteed-latency networking
WO2024036476A1 (zh) 一种报文转发方法及装置
US12028265B2 (en) Software-defined guaranteed-latency networking
Hayasaka et al. Dynamic pause time calculation method in MAC layer flow control
Khan et al. Receiver-driven flow scheduling for commodity datacenters
Chen et al. On meeting deadlines in datacenter networks
Yang et al. Cross-Layer Assisted Early Congestion Control for Cloud VR Applications in 5G Edge Networks

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18843217

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2018843217

Country of ref document: EP

Effective date: 20200310