WO2012119414A1 - Method and device for controlling traffic of switching network - Google Patents

Method and device for controlling traffic of switching network

Info

Publication number
WO2012119414A1
WO2012119414A1 (application PCT/CN2011/078853)
Authority
WO
WIPO (PCT)
Prior art keywords
rate
data packet
destination end
data
destination
Prior art date
Application number
PCT/CN2011/078853
Other languages
French (fr)
Chinese (zh)
Inventor
雷春
项能武
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 filed Critical 华为技术有限公司
Priority to CN2011800018546A priority Critical patent/CN102356609A/en
Priority to PCT/CN2011/078853 priority patent/WO2012119414A1/en
Publication of WO2012119414A1 publication Critical patent/WO2012119414A1/en


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • H04L47/18End to end

Definitions

  • the present invention relates to the field of communications technologies, and in particular, to a flow control method and apparatus for a switching network.
  • BACKGROUND In a typical switching network system composed of a plurality of line cards and a switching network board, data is exchanged between line cards.
  • the Ethernet packets 21 in the first line card 11 whose destination end is the second line card 12 are switched by the switching network board 3 and reach the second line card 12.
  • the Ethernet packet 21 whose destination end is the third line card 13 is exchanged by the switching network board 3 and reaches the third line card 13.
  • the first line card 11 has a rate of 10 Gbps
  • the second line card 12 has a rate of 1 Gbps
  • the third line card 13 has a rate of 10 Gbps.
  • when the Ethernet packets 21 in the first line card 11 are sent to the second line card 12 at a rate of 10 Gbps, if the egress buffer 31 where the switching network board 3 connects to the second line card 12 is not large enough to store all the Ethernet packets 21 sent from the first line card 11 to the second line card 12, packet loss occurs at that egress buffer 31.
  • the prior art sets the egress buffer 31 of the switching network board 3 connected to the second line card 12 to a non-drop mode.
  • when the input traffic of the egress buffer 31 exceeds its output traffic, back pressure builds up at the egress buffer 31 after a period of time and is transmitted to the ingress buffer 33 where the switching network board 3 connects to the first line card 11; this manifests itself as follows: when the first line card 11 receives a pause frame from the switching network board 3, the first line card 11 stops transmitting Ethernet packets 21, that is, the Ethernet packets 21 of the first line card 11 are no longer sent to the egress buffer 31.
  • the technical problem to be solved by the embodiments of the present invention is to provide a method and an apparatus for controlling traffic of a switching network, which can reduce packet loss rate and improve data transmission efficiency.
  • an embodiment of the present invention provides a method for controlling a traffic of a switching network, including: obtaining a rate at which at least one destination receives a data packet;
  • the embodiment of the present invention further provides a traffic control device for a switching network, including: a processor, configured to obtain a rate at which at least one destination receives a data packet;
  • a transmitter configured to send a data packet corresponding to the destination end to the destination end, where a rate at which the data packet is sent is less than or equal to a rate at which the destination end receives the data packet.
  • the flow control method and apparatus for a switching network provided by the embodiments of the present invention reduce the burst of data packets received by the same destination end by controlling the data packet sending rate to be less than or equal to the data receiving rate of the destination end; at the same time, the sending end never stops transmitting data packets, thereby reducing the packet loss rate and improving data transmission efficiency.
  • FIG. 1 is a schematic diagram of a typical switching network system in the prior art
  • FIG. 2 is a flowchart of a method for controlling traffic of a switching network according to an embodiment of the present invention
  • FIG. 3 is a schematic diagram of a switching network system according to an embodiment of the present disclosure
  • FIG. 4 is a schematic diagram of a switching network system applied to a base station controller according to an embodiment of the present invention
  • FIG. 5 is a schematic diagram of an apparatus for controlling a traffic of a switching network according to an embodiment of the present invention
  • FIG. 6 is a schematic diagram of the transmitter of Figure 5.
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS The technical solutions in the embodiments of the present invention will be clearly and completely described in the following with reference to the accompanying drawings.
  • the embodiment of the present invention provides a traffic control method for a switching network. As shown in FIG. 2, the method is a flow control performed by a data packet sending end of a switching network.
  • the method specifically includes:
  • Step 101 Obtain a rate at which at least one destination end receives data.
  • This step specifically includes:
  • Step 102 Send a data packet corresponding to the destination end to the destination end, where a rate at which the data packet is sent is less than or equal to a rate at which the destination end receives the data packet.
  • the data packet is buffered in the data stream corresponding to the destination end according to the rate at which the destination end of the data packet receives the data, and the data packet in the data stream is sent at a rate less than or equal to the rate at which the destination end corresponding to the data packet receives the data.
  • optionally, when buffering the data packets into the data stream corresponding to the destination end, a preset priority may be applied, so that the data stream can send its data packets in priority order.
  • the total rate at which data packets are sent is kept less than or equal to the sending rate of the sending end; by controlling the sending end to send data packets at a rate no greater than the rate at which the destination end receives data, packet loss at the destination end caused by insufficient buffering is avoided, and because the sending end never stops transmitting data packets, data transmission efficiency is also improved.
  • the technical solution of the present invention will be described below by taking a switching system composed of a plurality of line cards and a switching network board 3 as an example.
  • the data packets transmitted in this embodiment are Ethernet packets 21. It is assumed that the first line card 11 is the sending end and sends Ethernet packets 21 at a rate of 10 Gbps; the second line card 12 and the third line card 13 are destination ends, the receiving rate of the second line card 12 is 1 Gbps, and the receiving rate of the third line card 13 is 10 Gbps. The first line card 11 sends Ethernet packets 21 to the second line card 12 and the third line card 13, respectively.
  • the first line card 11 first obtains the rates at which the second line card 12 and the third line card 13 receive Ethernet packets, and then identifies the destination end information contained in the destination address of each Ethernet packet 21 that the first line card 11 needs to transmit.
  • Ethernet packets 21 in the first line card 11 whose destination end is the second line card 12 can be buffered, optionally in a preset priority order, in the first buffer queue 14, which serves as one data stream; likewise, Ethernet packets 21 whose destination end is the third line card 13 can be buffered, optionally in a preset priority order, in the second buffer queue 15, which serves as another data stream.
  • because the buffer of the switching network board is small, the first buffer queue 14 and the second buffer queue 15 described above can be implemented on the first line card 11.
  • the buffer queues are then scheduled: the Ethernet packets 21 in the first buffer queue 14 are sent to the second line card 12 in priority order at a rate less than or equal to the rate at which the second line card 12 receives Ethernet packets 21, for example 1 Gbps, and the Ethernet packets 21 in the second buffer queue 15 are sent to the third line card 13 in priority order at a rate less than or equal to the rate at which the third line card 13 receives Ethernet packets 21, for example 10 Gbps.
  • the total rate at which the first buffer queue 14 and the second buffer queue 15 send Ethernet packets 21 is set to the 10 Gbps sending rate of the first line card 11, so that packet loss caused by the sending rate of Ethernet packets 21 exceeding the sending rate of the first line card 11 is avoided.
  • the scheduling of the buffer queues may be performed in a polling manner; for example, the Ethernet packets 21 in the first buffer queue 14 are sent in a first time period, the Ethernet packets 21 in the second buffer queue 15 are sent in a second time period, and the Ethernet packets 21 in the first buffer queue 14 are sent again in a third time period, alternating in this way.
  • compared with the prior art, when the first line card 11 has a continuous stream of Ethernet packets 21 destined for the second line card 12 to send, the rate at which the first buffer queue 14 sends Ethernet packets 21 is controlled, so the first line card 11 sends Ethernet packets 21 to the second line card 12 at 1 Gbps and no congestion loss occurs even though the egress buffer 31 where the switching network board 3 connects to the second line card 12 is not large.
  • because two buffer queues send Ethernet packets 21, the second line card 12 and the third line card 13 receive the Ethernet packets 21 sent by the first line card 11 relatively evenly; the third line card 13 is not left idle for long periods while Ethernet packets 21 in the first line card 11 destined for it cannot be sent, thereby improving the efficiency of transmitting Ethernet packets.
  • a switching network system applied to a base station controller includes two interface boards 16, one switching network board 3, and ten processing boards 17; the processing capacity of each interface board 16 is 10 Gbps, the processing capacity of each processing board 17 is 1 Gbps, and the switching capacity of the switching network board 3 is greater than 40 Gbps.
  • the processing boards 17 are used as a resource pool with a total processing capacity of 10 Gbps.
  • the data transmitted between the interface boards 16 and the processing boards 17 are messages; each interface board 16, as a sending end, sends messages as data packets to the processing boards 17, which act as destination ends.
  • the specific data interaction process is:
  • first, the rate at which a processing board 17 receives messages, that is, the processing capability of the processing board 17, is obtained; it is 1 Gbps.
  • then the destination end information contained in the destination address of each message that the interface boards 16 need to send is identified; according to this destination end information, the messages in the two interface boards 16 are buffered by destination end in double data rate synchronous dynamic random access memory (DDR), forming, according to preset priorities, buffer queues 18 corresponding to the ten destination processing boards 17.
  • the one or more buffer queues 18 corresponding to each processing board 17 constitute one data stream; each data stream sends messages in priority order, and according to the processing capability of the processing board 17 the sending rate of each data stream is made less than or equal to that processing capability, preferably 0.5 Gbps.
  • a field-programmable gate array (FPGA) then schedules the buffer queues 18 in each interface board 16 and sends the messages to the processing boards 17 in priority order at a rate of 0.5 Gbps.
  • the total rate of the messages sent by each interface board 16 is limited according to the processing capability of the interface board 16; for example, the sending rate is limited to 10 Gbps, so that packet loss caused by the message sending rate exceeding the sending rate of the interface board 16 is avoided.
  • the scheduling of the buffer queue 18 in each data stream or each data stream may be performed in a polling manner, and each data stream or each buffer queue 18 is sequentially transmitted.
  • compared with the prior art, when the sending rate of each data stream is 0.5 Gbps and the two interface boards 16 briefly send messages to the same processing board 17 at the same time, and the data streams are scheduled in a polling manner, the processing board 17 is sufficient to process the messages sent by the two interface boards 16 simultaneously; no traffic burst occurs in which the small egress buffer 31 at the interface between the switching network board 3 and the processing board 17 cannot hold all the messages arriving within the burst and congestion loss results.
  • it should be noted that the sending rate of each data stream can be set according to the situation; for example, when one of the two interface boards 16 fails and cannot send messages, the sending rate of each data stream is set to 1 Gbps to improve the efficiency of the switching network.
  • when the sending rate of each data stream is set to 1 Gbps and the two interface boards 16 briefly send messages to the same processing board 17 at the same time, the sending rate of the data stream corresponding to each processing board 17 is still limited, which reduces the burst of messages received by the same processing board 17 and thus reduces the packet loss rate.
  • in addition, because each interface board 16 is provided with several buffer queues 18 for sending messages, each processing board 17 can receive the messages sent by the interface boards 16 relatively evenly, and the interface boards 16 do not stop sending messages, thereby improving the efficiency of data transmission.
  • the embodiment of the present invention further provides an apparatus for using the flow control method of the above switching network.
  • the apparatus includes: a processor 51 and a transmitter 52.
  • the processor 51 is configured to obtain a rate at which at least one destination end receives data packets; the transmitter 52 is configured to send the data packets corresponding to the destination end to the destination end, where the rate at which the data packets are sent is less than or equal to the rate at which the destination end receives the data packets.
  • the transmitter 52 includes: a cache module 521 and a sending module 522.
  • the buffering module 521 is configured to buffer the data packet in a data stream corresponding to the destination end according to a rate at which the destination end of the data packet is received
  • the sending module 522 is configured to send the data packet in the data stream, where And sending a data packet in the data stream at a rate less than or equal to a rate at which the destination end receives the data packet.
  • further, when there are multiple destination ends, the transmitter 52 is further configured to make the total rate of sending the data packets less than or equal to the sending rate of the sending end.
  • further, when there are multiple destination ends, the sending module 522 is specifically configured to send the data in the data stream corresponding to each destination end in turn, where the rate of sending the data packets in each data stream is less than or equal to the rate at which that destination end receives data packets.
  • the cache module 521 is specifically configured to cache the data packet in a data stream corresponding to the destination end according to a preset priority according to the rate at which the destination end of the data packet receives the data packet.
  • based on the destination-end data receiving rates obtained by the processor 51, the transmitter 52 sends data packets at a rate less than or equal to the data receiving rate of the destination end; the transmitter 52 schedules the data streams of the corresponding destination ends in the cache module 521 so that the data packets in them are sent at no more than the sending rate of the sending end, and it may also send the data streams in turn, each at its own sending rate, thereby reducing the burst of data packets received by the same destination end while the sending end never stops transmitting, which reduces the packet loss rate and improves data transmission efficiency.
  • the sending end may be a line card or an interface board, etc., for sending a data packet;
  • the destination end may be a port, a core in a CPU, a line card or a processing board, etc., for receiving a data packet.

Abstract

Provided are a method and device for controlling the traffic of a switching network, relating to the technical field of communications, reducing the packet loss ratio, and improving the efficiency of the switching network. The method includes: obtaining the rate at which at least one destination end receives a data packet; and sending the data packet corresponding to the destination end to the destination end, wherein the rate at which the data packet is sent is lower than or equal to the rate at which the destination end receives the data packet. The device includes: a processor for obtaining the rate at which at least one destination end receives a data packet; a sender for sending the data packet corresponding to the destination end to the destination end, wherein the rate at which the data packet is sent is lower than or equal to the rate at which the destination end receives the data packet.

Description

Traffic control method and device for a switching network
TECHNICAL FIELD The present invention relates to the field of communications technologies, and in particular, to a traffic control method and device for a switching network. BACKGROUND In a typical switching network system composed of a plurality of line cards and a switching network board, data is exchanged between line cards. For example, as shown in FIG. 1, in the first line card 11, Ethernet packets 21 whose destination end is the second line card 12 are switched by the switching network board 3 and reach the second line card 12, and Ethernet packets 21 whose destination end is the third line card 13 are switched by the switching network board 3 and reach the third line card 13. The rate of the first line card 11 is 10 Gbps, the rate of the second line card 12 is 1 Gbps, and the rate of the third line card 13 is 10 Gbps. When the Ethernet packets 21 in the first line card 11 are sent to the second line card 12 at a rate of 10 Gbps, if the egress buffer 31 where the switching network board 3 connects to the second line card 12 is not large enough to store all the Ethernet packets 21 sent from the first line card 11 to the second line card 12, packet loss occurs at that egress buffer 31.
In order to solve the above problem, the prior art sets the egress buffer 31 where the switching network board 3 connects to the second line card 12 to a no-drop mode. When the input traffic of the egress buffer 31 exceeds its output traffic, back pressure builds up at the egress buffer 31 after a period of time and is transmitted to the ingress buffer 33 where the switching network board 3 connects to the first line card 11; this manifests itself as follows: when the first line card 11 receives a pause frame from the switching network board 3, the first line card 11 stops transmitting Ethernet packets 21, that is, the Ethernet packets 21 of the first line card 11 are no longer sent to the egress buffer 31. After the first line card 11 receives the pause frame, because the buffer queue of the first line card 11 is a single queue, all of the Ethernet packets 21 in it are blocked. Even if the egress buffer 32 where the switching network board 3 connects to the third line card 13 is idle, the Ethernet packets 21 in the ingress buffer 33 destined for the second line card 12 block the Ethernet packets 21 behind them that are destined for the third line card 13, so the latter cannot be scheduled, which reduces data transmission efficiency. SUMMARY The technical problem to be solved by the embodiments of the present invention is to provide a traffic control method and device for a switching network that reduce the packet loss rate and improve data transmission efficiency.
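The head-of-line blocking described above can be made concrete with a minimal Python sketch; the function and queue names are hypothetical and only illustrate the single-queue behaviour: once the packet at the head of a shared send queue is for a paused destination, packets behind it for an idle destination cannot be forwarded either.
```python
from collections import deque

def drain_single_queue(queue, paused_destinations):
    """Forward packets from one shared FIFO, stopping at the first paused destination."""
    delivered = []
    while queue:
        destination, packet = queue[0]
        if destination in paused_destinations:
            break          # head-of-line packet is blocked, so everything behind it waits
        delivered.append(queue.popleft())
    return delivered

# the first line card holds packets for card 12 (backpressured) and card 13 (idle) in one queue
pending = deque([("card12", "pkt-a"), ("card13", "pkt-b"), ("card13", "pkt-c")])
print(drain_single_queue(pending, paused_destinations={"card12"}))   # [] -- card13 starves
```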
In one aspect, an embodiment of the present invention provides a traffic control method for a switching network, including: obtaining a rate at which at least one destination end receives data packets; and sending the data packets corresponding to the destination end to the destination end, where the rate at which the data packets are sent is less than or equal to the rate at which the destination end receives the data packets.
In another aspect, an embodiment of the present invention further provides a traffic control device for a switching network, including: a processor, configured to obtain a rate at which at least one destination end receives data packets; and a transmitter, configured to send the data packets corresponding to the destination end to the destination end, where the rate at which the data packets are sent is less than or equal to the rate at which the destination end receives the data packets.
With the above technical solutions, the traffic control method and device for a switching network provided by the embodiments of the present invention reduce the burst of data packets received by the same destination end by controlling the data packet sending rate to be less than or equal to the data receiving rate of the destination end; at the same time, the sending end never stops transmitting data packets, thereby reducing the packet loss rate and improving data transmission efficiency. BRIEF DESCRIPTION OF THE DRAWINGS To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Apparently, the accompanying drawings in the following description show merely some embodiments of the present invention, and persons of ordinary skill in the art may still derive other drawings from these accompanying drawings without creative efforts.
FIG. 1 is a schematic diagram of a typical switching network system in the prior art;
FIG. 2 is a flowchart of a traffic control method for a switching network according to an embodiment of the present invention; FIG. 3 is a schematic diagram of a switching network system according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a switching network system applied to a base station controller according to an embodiment of the present invention; FIG. 5 is a schematic diagram of a device using the traffic control method for a switching network according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of the transmitter in FIG. 5. DETAILED DESCRIPTION The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention.
It should be understood that the described embodiments are merely a part rather than all of the embodiments of the present invention. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of the present invention without creative efforts shall fall within the protection scope of the present invention.
Embodiment 1
An embodiment of the present invention provides a traffic control method for a switching network. As shown in FIG. 2, the method is flow control performed by the data packet sending end of a switching network, and specifically includes:
Step 101: Obtain a rate at which at least one destination end receives data.
Step 102: Send the data packets corresponding to the destination end to the destination end, where the rate at which the data packets are sent is less than or equal to the rate at which the destination end receives the data packets.
This step specifically includes: buffering the data packets, according to the rate at which the destination end of the data packets receives data, in the data stream corresponding to that destination end, where the rate at which the data packets in the data stream are sent is less than or equal to the rate at which the destination end corresponding to the data packets receives data. Optionally, when the data packets are buffered in the data stream corresponding to the destination end, a preset priority may be applied, so that the data stream can send the data packets in priority order.
The total rate at which data packets are sent is kept less than or equal to the sending rate of the sending end, so that packet loss caused by the sending rate exceeding the sending capability of the sending end is avoided. By controlling the sending end to send data packets at a rate less than or equal to the rate at which the destination end receives data, packet loss at the destination end caused by insufficient buffering is reduced, and because the sending end never stops transmitting data packets, data transmission efficiency is also improved.
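A minimal Python sketch of steps 101 and 102 follows; the function name, queue names, and Gbps figures are illustrative assumptions rather than part of the original text. It classifies packets into one stream per destination end and, within one scheduling interval, sends each stream no faster than that destination's receive rate while never exceeding the sending end's own total rate.
```python
from collections import defaultdict, deque

def flow_control_send(packets, dest_rx_gbps, sender_tx_gbps, interval_s=1.0):
    """Buffer packets per destination, then send each stream within its destination's
    receive rate and within the sender's own total rate for one scheduling interval."""
    # Step 101: dest_rx_gbps is the already-obtained receive rate of each destination.
    streams = defaultdict(deque)
    for destination, packet_bits in packets:          # classify by destination address
        streams[destination].append(packet_bits)

    # Step 102: per-destination budget = rx rate * interval; total budget = tx rate * interval.
    total_budget = sender_tx_gbps * 1e9 * interval_s
    sent = []
    for destination, queue in streams.items():
        budget = dest_rx_gbps[destination] * 1e9 * interval_s
        while queue and queue[0] <= budget and queue[0] <= total_budget:
            bits = queue.popleft()
            budget -= bits
            total_budget -= bits
            sent.append((destination, bits))
    return sent, streams          # packets sent this interval, and what remains buffered

sent, remaining = flow_control_send(
    packets=[("card12", 8000)] * 5 + [("card13", 8000)] * 5,
    dest_rx_gbps={"card12": 1, "card13": 10},
    sender_tx_gbps=10,
)
```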
Embodiment 2
As shown in FIG. 3, the technical solution of the present invention is described below by taking a switching system composed of a plurality of line cards and a switching network board 3 as an example.
The data packets sent in this embodiment are Ethernet packets 21. It is assumed that the first line card 11 is the sending end and sends Ethernet packets 21 at a rate of 10 Gbps; the second line card 12 and the third line card 13 are destination ends, the receiving rate of the second line card 12 is 1 Gbps, and the receiving rate of the third line card 13 is 10 Gbps. The first line card 11 sends Ethernet packets 21 to the second line card 12 and the third line card 13, respectively.
The specific data interaction is as follows:
The first line card 11 first obtains the rates at which the second line card 12 and the third line card 13 receive Ethernet packets. It then identifies the destination end information contained in the destination address of each Ethernet packet 21 that the first line card 11 needs to send. Ethernet packets 21 in the first line card 11 whose destination end is the second line card 12 can be buffered in the first buffer queue 14, which serves as one data stream; optionally, when these Ethernet packets 21 are buffered in the first buffer queue 14, they may be buffered in a preset priority order. Similarly, Ethernet packets 21 in the first line card 11 whose destination end is the third line card 13 are buffered according to a preset priority in the second buffer queue 15, which serves as another data stream; optionally, when these Ethernet packets 21 are buffered in the second buffer queue 15, they may be buffered in a preset priority order. Considering that the buffer of the switching network board is small, the first buffer queue 14 and the second buffer queue 15 described above can be implemented on the first line card 11.
The buffer queues are then scheduled. The Ethernet packets 21 in the first buffer queue 14 are sent to the second line card 12 in priority order, and the rate at which the first buffer queue 14 sends Ethernet packets 21 is made less than or equal to the rate at which the second line card 12 receives Ethernet packets 21; for example, the sending rate of the first buffer queue 14 is set to 1 Gbps. The Ethernet packets 21 in the second buffer queue 15 are sent to the third line card 13 in priority order, and the rate at which the second buffer queue 15 sends Ethernet packets 21 is made less than or equal to the rate at which the third line card 13 receives Ethernet packets 21; for example, the sending rate of the second buffer queue 15 is set to 10 Gbps. The total rate at which the first buffer queue 14 and the second buffer queue 15 send Ethernet packets 21 is set to the 10 Gbps sending rate of the first line card 11, so that packet loss caused by the sending rate of Ethernet packets 21 exceeding the sending rate of the first line card 11 is avoided.
The buffer queues may be scheduled in a polling manner; for example, the Ethernet packets 21 in the first buffer queue 14 are sent in a first time period, the Ethernet packets 21 in the second buffer queue 15 are sent in a second time period, and the Ethernet packets 21 in the first buffer queue 14 are sent again in a third time period, alternating in this way.
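The polling just described can be sketched as a round-robin scheduler in which each queue gets a time slot bounded by its destination's receive rate; the queue names, packet sizes, and slot length below are illustrative assumptions, not values fixed by the text.
```python
from collections import deque
from itertools import cycle

def round_robin_send(queues, rate_caps_gbps, slot_s=1e-5):
    """Alternate between buffer queues, each slot limited by that destination's receive rate.
    Assumes every packet fits within one slot's budget, as in the 1 Gbps / 10 Gbps example."""
    order = cycle(list(queues))
    while any(queues.values()):
        destination = next(order)
        budget_bits = rate_caps_gbps[destination] * 1e9 * slot_s   # bits allowed this slot
        queue = queues[destination]
        while queue and queue[0] <= budget_bits:
            budget_bits -= queue[0]
            yield destination, queue.popleft()     # hand one packet to the switching fabric

first_queue = deque([8000] * 4)     # Ethernet packets (in bits) destined for line card 12
second_queue = deque([8000] * 4)    # Ethernet packets destined for line card 13
schedule = round_robin_send({"card12": first_queue, "card13": second_queue},
                            rate_caps_gbps={"card12": 1, "card13": 10})
print([dest for dest, _ in schedule])   # card12 drains slowly, card13 drains in one slot
```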
Compared with the prior art, with the traffic control method for a switching network provided by this embodiment of the present invention, when the first line card 11 has a continuous stream of Ethernet packets 21 destined for the second line card 12 to send, the rate at which the first buffer queue 14 sends Ethernet packets 21 is controlled, so the first line card 11 sends Ethernet packets 21 to the second line card 12 at a rate of 1 Gbps and no congestion loss occurs even though the egress buffer 31 where the switching network board 3 connects to the second line card 12 is not large. Moreover, because two buffer queues send Ethernet packets 21, the second line card 12 and the third line card 13 can receive the Ethernet packets 21 sent by the first line card 11 relatively evenly, and the third line card 13 is not left idle for long periods while Ethernet packets 21 in the first line card 11 destined for the third line card 13 cannot be sent, thereby improving the efficiency of transmitting Ethernet packets.
Embodiment 3
As shown in FIG. 4, a switching network system applied to a base station controller includes two interface boards 16, one switching network board 3, and ten processing boards 17. The processing capacity of each interface board 16 is 10 Gbps, the processing capacity of each processing board 17 is 1 Gbps, and the switching capacity of the switching network board 3 is greater than 40 Gbps. The processing boards 17 are used as a resource pool with a total processing capacity of 10 Gbps. The data transmitted between the interface boards 16 and the processing boards 17 are messages; each interface board 16, as a sending end, sends messages as data packets to the processing boards 17, which act as destination ends.
The specific data interaction process is as follows:
First, the rate at which a processing board 17 receives messages, that is, the processing capability of the processing board 17, is obtained; it is 1 Gbps.
Then the destination end information contained in the destination address of each message that the interface boards 16 need to send is identified. According to this destination end information, the messages in the two interface boards 16 are buffered by destination end in double data rate synchronous dynamic random access memory (DDR), forming, according to preset priorities, buffer queues 18 corresponding to the ten destination processing boards 17. Each processing board 17 corresponds to one or more buffer queues 18 formed according to the preset priorities, and the one or more buffer queues 18 corresponding to each processing board 17 constitute one data stream. Each data stream sends messages in priority order, and according to the processing capability of the processing board 17 the sending rate of each data stream is made less than or equal to that processing capability, preferably 0.5 Gbps.
After that, a field-programmable gate array (FPGA) schedules the buffer queues 18 in each interface board 16 and sends the messages to the processing boards 17 in priority order at a rate of 0.5 Gbps. The total rate of the messages sent by each interface board 16 is limited according to the processing capability of the interface board 16; for example, the sending rate is limited to 10 Gbps, so that packet loss caused by the message sending rate exceeding the sending rate of the interface board 16 is avoided.
The data streams, or the buffer queues 18 within each data stream, may be scheduled in a polling manner, sending each data stream or each buffer queue 18 in turn.
Compared with the prior art, with the traffic control method for a switching network provided by this embodiment of the present invention, when the sending rate of each data stream is 0.5 Gbps and the two interface boards 16 briefly send messages to the same processing board 17 at the same time, if the data streams are scheduled in a polling manner, the processing board 17 is sufficient to process the messages sent by the two interface boards 16 simultaneously; no traffic burst occurs in which the small egress buffer 31 at the interface between the switching network board 3 and the processing board 17 cannot hold all the messages arriving within the burst and congestion loss results.
It should be noted that the sending rate of each data stream can be set according to the situation; for example, when one of the two interface boards 16 fails and cannot send messages, the sending rate of each data stream is set to 1 Gbps to improve the efficiency of the switching network.
When the sending rate of each data stream is set to 1 Gbps and the two interface boards 16 briefly send messages to the same processing board 17 at the same time, the sending rate of the data stream corresponding to each processing board 17 is still limited, which reduces the burst of messages received by the same processing board 17 and thus reduces the packet loss rate. In addition, because each interface board 16 is provided with several buffer queues 18 for sending messages, each processing board 17 can receive the messages sent by the interface boards 16 relatively evenly, and the interface boards 16 do not stop sending messages, thereby improving the efficiency of data transmission.
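One way to read the rate setting above is that each 1 Gbps processing board's capacity is divided among the interface boards currently feeding it. The sketch below captures that rule under the assumption of an even split; the function name is hypothetical, and the two checks correspond to the 0.5 Gbps and 1 Gbps cases in the text.
```python
def per_stream_rate_gbps(processing_capacity_gbps, active_interface_boards):
    """Split one processing board's capacity evenly among the interface boards feeding it."""
    if active_interface_boards < 1:
        raise ValueError("at least one interface board must be active")
    return processing_capacity_gbps / active_interface_boards

assert per_stream_rate_gbps(1.0, active_interface_boards=2) == 0.5   # both boards sending
assert per_stream_rate_gbps(1.0, active_interface_boards=1) == 1.0   # one board has failed
```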
Embodiment 4
An embodiment of the present invention further provides a device that uses the traffic control method for a switching network described above. As shown in FIG. 5, the device includes a processor 51 and a transmitter 52.
The processor 51 is configured to obtain a rate at which at least one destination end receives data packets. The transmitter 52 is configured to send the data packets corresponding to the destination end to the destination end, where the rate at which the data packets are sent is less than or equal to the rate at which the destination end receives the data packets.
Further, as shown in FIG. 6, the transmitter 52 includes a cache module 521 and a sending module 522. The cache module 521 is configured to buffer the data packets, according to the rate at which the destination end of the data packets receives data packets, in the data stream corresponding to that destination end. The sending module 522 is configured to send the data packets in the data stream, where the rate at which the data packets in the data stream are sent is less than or equal to the rate at which the destination end receives data packets.
Further, when there are multiple destination ends, the transmitter 52 is further configured to make the total rate of sending the data packets less than or equal to the sending rate of the sending end.
Further, when there are multiple destination ends, the sending module 522 is specifically configured to send the data in the data stream corresponding to each destination end in turn, where the rate of sending the data packets in each data stream is less than or equal to the rate at which that destination end receives data packets.
Further, the cache module 521 is specifically configured to buffer the data packets, according to the rate at which the destination end of the data packets receives data packets, in the data stream corresponding to that destination end according to a preset priority.
Based on the destination-end data receiving rates obtained by the processor 51, the transmitter 52 sends data packets at a rate less than or equal to the data receiving rate of the destination end, and the transmitter 52 schedules the data streams of the corresponding destination ends in the cache module 521 so that the data packets in them are sent at no more than the sending rate of the sending end; it may also send the data streams in turn, each sending its buffered data packets at that data stream's sending rate. This reduces the burst of data packets received by the same destination end while the sending end never stops transmitting data packets, which reduces the packet loss rate and improves data transmission efficiency.
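To make the structure of FIG. 5 and FIG. 6 concrete, here is a structural Python sketch of the device: a processor that obtains destination receive rates, and a transmitter composed of a cache module and a sending module. The class names, board names, and fixed receive rates are hypothetical stand-ins for whatever mechanism the device actually uses.
```python
from collections import defaultdict, deque

class Processor:
    """Obtains the rate at which each destination end receives data packets (fixed here)."""
    def get_receive_rates_gbps(self):
        return {"board_a": 1.0, "board_b": 1.0}

class CacheModule:
    """Buffers data packets into one data stream (queue) per destination end."""
    def __init__(self):
        self.streams = defaultdict(deque)
    def buffer(self, destination, packet_bits):
        self.streams[destination].append(packet_bits)

class SendingModule:
    """Sends each data stream no faster than its destination's receive rate."""
    def send(self, streams, rates_gbps, slot_s=0.001):
        sent = []
        for destination, queue in streams.items():        # poll each destination in turn
            budget_bits = rates_gbps[destination] * 1e9 * slot_s
            while queue and queue[0] <= budget_bits:
                budget_bits -= queue[0]
                sent.append((destination, queue.popleft()))
        return sent

class Transmitter:
    def __init__(self):
        self.cache_module = CacheModule()
        self.sending_module = SendingModule()

class FlowControlDevice:
    def __init__(self):
        self.processor = Processor()
        self.transmitter = Transmitter()

device = FlowControlDevice()
device.transmitter.cache_module.buffer("board_a", 8000)
rates = device.processor.get_receive_rates_gbps()
device.transmitter.sending_module.send(device.transmitter.cache_module.streams, rates)
```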
In all of the above embodiments, the sending end may be a line card or an interface board, etc., for sending data packets; the destination end may be a port, a core in a CPU, a line card, a processing board, etc., for receiving data packets.
Persons of ordinary skill in the art may understand that all or part of the processes of the foregoing method embodiments may be implemented by a computer program instructing relevant hardware. The program may be stored in a computer-readable storage medium, and when the program is executed, the steps of the foregoing method embodiments are performed. The foregoing storage medium includes any medium that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.
The foregoing descriptions are merely specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any variation or replacement that can be readily figured out by persons skilled in the art within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims

CLAIMS
1. A traffic control method for a switching network, comprising:
obtaining a rate at which at least one destination end receives data packets; and
sending data packets corresponding to the destination end to the destination end, wherein a rate at which the data packets are sent is less than or equal to the rate at which the destination end receives the data packets.
2. The method according to claim 1, wherein before the sending of the data packets corresponding to the destination end to the destination end, wherein the rate at which the data packets are sent is less than or equal to the rate at which the destination end receives data packets, the method further comprises:
buffering the data packets, according to the rate at which the destination end receives data packets, in a data stream corresponding to the destination end, wherein a rate at which the data packets in the data stream are sent is less than or equal to the rate at which the destination end receives data packets.
3. The method according to claim 1 or 2, wherein when there are multiple destination ends, a total rate at which the data packets are sent is less than or equal to a rate at which a sending end sends data packets.
4. The method according to claim 2, wherein there are multiple destination ends, and the sending of the data packets corresponding to the destination end to the destination end comprises:
sending the data in the data stream corresponding to each destination end in turn, wherein the rate at which the data packets in the data stream are sent is less than or equal to the rate at which each destination end receives data packets.
5. The method according to claim 2 or 4, wherein the buffering of the data packets, according to the rate at which the destination end receives data packets, in the data stream corresponding to the destination end comprises:
buffering the data packets, according to the rate at which the destination end receives data packets, in the data stream corresponding to the destination end according to a preset priority.
6. A traffic control device for a switching network, comprising:
a processor, configured to obtain a rate at which at least one destination end receives data packets; and
a transmitter, configured to send data packets corresponding to the destination end to the destination end, wherein a rate at which the data packets are sent is less than or equal to the rate at which the destination end receives the data packets.
7. The device according to claim 6, wherein the transmitter comprises:
a cache module, configured to buffer the data packets, according to the rate at which the destination end receives data packets, in a data stream corresponding to the destination end; and
a sending module, configured to send the data packets in the data stream, wherein a rate at which the data packets in the data stream are sent is less than or equal to the rate at which the destination end receives data packets.
8. The device according to claim 6 or 7, wherein there are multiple destination ends, and the transmitter is further configured to make a total rate at which the data packets are sent less than or equal to a rate at which a sending end sends data packets.
9. The device according to claim 7, wherein there are multiple destination ends, and the sending module is specifically configured to send the data in the data stream corresponding to each destination end in turn, wherein the rate at which the data packets in the data stream are sent is less than or equal to the rate at which each destination end receives data packets.
10. The device according to claim 7 or 9, wherein the cache module is specifically configured to buffer the data packets, according to the rate at which the destination end receives data packets, in the data stream corresponding to the destination end according to a preset priority.
PCT/CN2011/078853 2011-08-24 2011-08-24 Method and device for controlling traffic of switching network WO2012119414A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN2011800018546A CN102356609A (en) 2011-08-24 2011-08-24 Flow control method of switched network and device
PCT/CN2011/078853 WO2012119414A1 (en) 2011-08-24 2011-08-24 Method and device for controlling traffic of switching network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2011/078853 WO2012119414A1 (en) 2011-08-24 2011-08-24 Method and device for controlling traffic of switching network

Publications (1)

Publication Number Publication Date
WO2012119414A1 true WO2012119414A1 (en) 2012-09-13

Family

ID=45579289

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2011/078853 WO2012119414A1 (en) 2011-08-24 2011-08-24 Method and device for controlling traffic of switching network

Country Status (2)

Country Link
CN (1) CN102356609A (en)
WO (1) WO2012119414A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103024827B (en) * 2012-12-03 2016-08-03 中国联合网络通信集团有限公司 Method of rate control, base station and the communication system of base station direct connection communication
CN105337888B (en) * 2015-11-18 2018-12-07 华为技术有限公司 Load-balancing method, device and virtual switch based on multicore forwarding
CN108737997B (en) * 2017-04-21 2020-11-27 展讯通信(上海)有限公司 Method, equipment and system for adjusting data packet transmission rate
CN110401603B (en) * 2019-07-25 2023-07-28 北京百度网讯科技有限公司 Method and device for processing information
CN114500385A (en) * 2021-12-23 2022-05-13 武汉微创光电股份有限公司 Method and system for realizing gigabit Ethernet data traffic shaping through FPGA
CN115277589B (en) * 2022-06-30 2023-08-29 北京比特大陆科技有限公司 Control data sending method, device, equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1866927A (en) * 2006-05-08 2006-11-22 国家数字交换系统工程技术研究中心 Information switching realizing system and method and scheduling algorithm
US20080112523A1 (en) * 2006-11-10 2008-05-15 Tenor Electronics Corporation Data synchronization apparatus
CN101227296A (en) * 2007-12-27 2008-07-23 杭州华三通信技术有限公司 Method, system for transmitting PCIE data and plate card thereof
CN101959245A (en) * 2009-07-13 2011-01-26 中兴通讯股份有限公司 Method, device and system for controlling flow in WiMAX (Worldwide Interoperability for Microwave Access) system

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101299721B (en) * 2008-06-19 2012-04-18 杭州华三通信技术有限公司 Method for switching message of switching network, and switching device
CN101572673B (en) * 2009-06-19 2013-03-20 杭州华三通信技术有限公司 Distributed packet switching system and distributed packet switching method of expanded switching bandwidth
CN101800757B (en) * 2010-02-03 2012-06-27 国家保密科学技术研究所 No-feedback one-way data transmission method based on single fiber structure

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1866927A (en) * 2006-05-08 2006-11-22 国家数字交换系统工程技术研究中心 Information switching realizing system and method and scheduling algorithm
US20080112523A1 (en) * 2006-11-10 2008-05-15 Tenor Electronics Corporation Data synchronization apparatus
CN101227296A (en) * 2007-12-27 2008-07-23 杭州华三通信技术有限公司 Method, system for transmitting PCIE data and plate card thereof
CN101959245A (en) * 2009-07-13 2011-01-26 中兴通讯股份有限公司 Method, device and system for controlling flow in WiMAX (Worldwide Interoperability for Microwave Access) system

Also Published As

Publication number Publication date
CN102356609A (en) 2012-02-15

Similar Documents

Publication Publication Date Title
US10313768B2 (en) Data scheduling and switching method, apparatus, system
US9007909B2 (en) Link layer reservation of switch queue capacity
WO2012119414A1 (en) Method and device for controlling traffic of switching network
US7643420B2 (en) Method and system for transmission control protocol (TCP) traffic smoothing
US9559960B2 (en) Network congestion management
TWI227080B (en) Network switch providing congestion control and method thereof
JP7231749B2 (en) Packet scheduling method, scheduler, network device and network system
WO2022001175A1 (en) Data packet sending method and apparatus
CN109714267A (en) Manage the transfer control method and system of reversed queue
WO2016008399A1 (en) Flow control
WO2013016971A1 (en) Method and device for sending and receiving data packet in packet switched network
WO2013078799A1 (en) Method and network device for controlling transmission rate of communication interface
CN112104562A (en) Congestion control method and device, communication network and computer storage medium
US9160665B2 (en) Method and system of transmission management in a network
US11165705B2 (en) Data transmission method, device, and computer storage medium
CN110391992A (en) Jamming control method and device based on interchanger active queue management
CN102223311A (en) Queue scheduling method and device
EP2477366A1 (en) Data transmission method, apparatus and system
US9590909B2 (en) Reducing TCP timeouts due to Incast collapse at a network switch
JP4382830B2 (en) Packet transfer device
TWI609579B (en) Shaping data packet traffic
WO2008148345A1 (en) A method and system for accessing a shared media, and a single traffic device
CN116346720A (en) Information transmission device and method
CN106330834B (en) Virtual channel connection establishing method and device
JP4630231B2 (en) Packet processing system, packet processing method, and program

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 201180001854.6

Country of ref document: CN

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11860615

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11860615

Country of ref document: EP

Kind code of ref document: A1