WO2011100878A1 - 交换网流控实现方法、交换设备及系统 - Google Patents

交换网流控实现方法、交换设备及系统

Info

Publication number
WO2011100878A1
WO2011100878A1 · PCT/CN2010/076746 · CN2010076746W
Authority
WO
WIPO (PCT)
Prior art keywords
port
destination
queue
output port
cell
Prior art date
Application number
PCT/CN2010/076746
Other languages
English (en)
French (fr)
Inventor
孙团会
李德丰
苏皓
曹爱娟
宋健
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司
Priority to EP20100846000 priority Critical patent/EP2528286B1/en
Publication of WO2011100878A1 publication Critical patent/WO2011100878A1/zh
Priority to US13/589,890 priority patent/US8797860B2/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • H04L47/12Avoiding congestion; Recovering from congestion
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • H04L47/26Flow control; Congestion control using explicit feedback to the source, e.g. choke packets
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/30Peripheral units, e.g. input or output ports
    • H04L49/3027Output queuing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/50Overload detection or protection within a single switching element
    • H04L49/505Corrective measures
    • H04L49/506Backpressure
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/55Prevention, detection or correction of errors
    • H04L49/552Prevention, detection or correction of errors by ensuring the integrity of packets received through redundant connections
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • H04L47/26Flow control; Congestion control using explicit feedback to the source, e.g. choke packets
    • H04L47/267Flow control; Congestion control using explicit feedback to the source, e.g. choke packets sent by the destination endpoint

Definitions

  • This application claims priority to Chinese Patent Application No. 201010113687.1, filed with the Chinese Patent Office on February 20, 2010 and entitled "Method for Implementing Flow Control in a Switching Network, Switching Device, and System", the entire contents of which are incorporated herein by reference.
  • the present invention relates to the field of communications technologies, and in particular, to a switching network flow control implementation method, a switching device, and a system.
  • BACKGROUND OF THE INVENTION In a switch fabric (Switch Fabric) of the Combined Input-Output Queued (CIOQ) architecture, a variable-length packet (Packet) received by a line card (Line Card) is segmented into fixed-length cells (Cell) that are buffered at the input side to form queues; N (N is a positive integer) unicast virtual output queues (Virtual Output Queue, VOQ for short) and k (k is a positive integer, 1 < k < 2^N) multicast virtual output queues are set up at the input side.
  • Queue scheduling time is divided into fixed-length time slots. Within one time slot, an input port can transmit at most one cell and an output port can receive at most one cell. If multiple input ports need to send data to the same output port in the same time slot, a port conflict occurs.
  • When multicast packets are enqueued into the multicast virtual output queues according to their data flows, the number of possible multicast data flows, 2^N, is far larger than the number k of multicast virtual output queues, so multiple multicast data flows are inevitably enqueued into the same multicast virtual output queue, and cells belonging to different packets are interleaved in that queue. That is, for one multicast virtual output queue, after several cells belonging to one multicast packet are enqueued consecutively, several cells belonging to another multicast packet follow them into the queue. This inevitably causes severe head-of-line blocking in multicast scheduling. To avoid, as far as possible, the head-of-line blocking in which all data behind the head of the queue cannot be scheduled because the data at the head of the queue cannot be scheduled, multicast scheduling generally adopts fan-out splitting.
  • The switching network flow control mechanism in the prior art matches input ports with output ports through multiple iterations. If an output queue of the switching network becomes congested, the output port, being unable to receive more data, sends flow control information to the input ports; the switching network scheduling algorithm matches input ports with output ports, and an input port first filters out the congested output ports before sending data to the output ports. In the prior art, the flow control information sent from the output ports of the switching network to the input ports occupies switching network bandwidth and therefore increases the burden on the switching network.
  • An object of the present invention is to provide a method, a switching device, and a system for implementing a flow control of a switching network, which improve data processing efficiency of the switching network.
  • the embodiment of the invention provides a method for implementing flow control of a switching network, including:
  • Each input port sends request information to a destination output port where packet congestion does not occur
  • the destination output port receiving the request information determines whether to return the grant information to the input ports according to the respective back pressure information to establish a matching relationship between the input ports and the destination output port that returns the grant information;
  • each input port dispatches a cell to a destination output port that matches the input port.
  • the embodiment of the invention further provides a switching device, including: an input port processing module, an output port processing module, an arbitration module, and a crossbar switch module.
  • the input port processing module is configured to send request information from each input port to the arbitration module;
  • the output port processing module is configured to send back pressure information from each output port to the arbitration module;
  • the arbitration module is configured to establish, according to the request information and the back pressure information, a matching relationship between each input port and a destination output port that returns the grant information to each input port;
  • the crossbar module is configured to schedule data cells of each input port according to the matching relationship to a destination output port that matches each input port.
  • The embodiment of the present invention further provides a switching system, including an uplink management queue device and a downlink management queue device for scheduling data cells, and further including at least one switching device as described above; the uplink management queue device is connected to the input port processing module, and the downlink management queue device is connected to the output port processing module.
  • In the method, switching device, and system for implementing switching network flow control provided by the embodiments of the present invention, because an output port refers to its back pressure information when returning grant information to the input ports that send request information, an output port where packet congestion occurs does not need to return flow control information to the input ports that send the request information. This reduces the amount of information transmitted between the input ports and the output ports and improves the data processing efficiency of the switching network.
  • FIG. 1 is a schematic flowchart of an embodiment of a method for implementing flow control of a switching network according to the present invention
  • FIG. 2 is a schematic flowchart of still another embodiment of a method for implementing flow control of a switching network according to the present invention
  • FIG. 3 is a schematic flowchart of establishing a matching relationship according to an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of information about an input port and an output port of the embodiment shown in FIG. 3;
  • FIG. 5 is a schematic structural diagram of an embodiment of a switching device according to the present invention.
  • FIG. 6 is a schematic structural diagram of an embodiment of a switching system according to the present invention.
  • FIG. 7 is a schematic structural diagram of still another embodiment of the switching system according to the present invention;
  • FIG. 8 is a schematic structural diagram of a system applicable to an embodiment of the present invention.
  • The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only a part, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
  • When a network device with a switching network function forwards a packet, it needs to divide the packet into multiple cells of fixed length. Different cells of the same packet are scheduled at the same input port, and one cell is scheduled in one time slot. The same input port may maintain cells of packets destined for multiple output ports. The cells belonging to the same packet need to be switched to the same destination output port, while the destination output ports to which cells belonging to different packets need to be switched may be the same or different. Before cells are switched, the queue state information of each input port and the back pressure information of the destination output ports corresponding to each input port need to be obtained.
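  • As a minimal illustration of this per-input-port setup (not part of the application; the Cell and Voq names, their fields, and the make_input_port helper are assumptions made only for this sketch), the unicast and multicast virtual output queues could be represented roughly as follows:

```python
from dataclasses import dataclass, field
from collections import deque

@dataclass
class Cell:
    """One fixed-length cell cut from a variable-length packet."""
    packet_id: int
    is_last: bool        # True for the end-of-packet cell
    dest_ports: set      # destination output ports still to be served (a single port for unicast)

@dataclass
class Voq:
    """One virtual output queue maintained at an input port."""
    cells: deque = field(default_factory=deque)
    filter_table: set = field(default_factory=set)   # congested destination ports to skip in requests

    def head(self):
        """Head-of-line cell of this queue, or None if the queue is empty."""
        return self.cells[0] if self.cells else None

def make_input_port(num_outputs: int, k_multicast: int):
    """An input port holds N unicast VOQs plus k multicast VOQs (k < 2**N)."""
    return {"unicast": [Voq() for _ in range(num_outputs)],
            "multicast": [Voq() for _ in range(k_multicast)]}

port = make_input_port(num_outputs=4, k_multicast=2)
port["multicast"][0].cells.append(Cell(packet_id=7, is_last=False, dest_ports={0, 2, 3}))
print(port["multicast"][0].head().dest_ports)   # {0, 2, 3}
```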
  • FIG. 1 is a schematic flowchart of an embodiment of a flow control method for a switching network according to the present invention. As shown in FIG. 1, the embodiment of the present invention includes the following steps:
  • Step 101: Each input port sends request information to the destination output ports where no packet congestion occurs;
  • Step 102: The destination output ports that receive the request information determine, according to their respective back pressure information, whether to return grant information to the input ports, so as to establish a matching relationship between each input port and the destination output port that returns the grant information;
  • Step 103: According to the matching relationship, each input port dispatches cells to the destination output port that matches it.
  • In the method for implementing switching network flow control provided by this embodiment, each input port sends request information only to the destination output ports where no packet congestion occurs, and an output port refers to its back pressure information to determine whether to return grant information to the input ports that send the request information. Therefore, an output port where packet congestion occurs does not need to return flow control information to the input ports that send the request information, which reduces the amount of information transmitted between the input ports and the output ports, simplifies the design of the switching network, and improves the data processing efficiency of the switching network.
  • FIG. 2 is a schematic flowchart of still another embodiment of a method for implementing flow control of a switching network according to the present invention. As shown in FIG. 2, the embodiment includes the following steps:
  • Step 201 Each input port sends a request message to a destination output port where no packet congestion occurs.
  • When sending the request information, each input port filters out the congested destination output ports according to the destination port filtering table of each queue of the port; the destination port filtering table is used to record whether packet congestion occurs at a destination output port. Specifically, if the queue is a unicast queue, the destination port filtering table contains at most one destination output port, and after filtering through the destination port filtering table the input port may no longer need to send the request information of this unicast queue; if the queue is a multicast queue, the destination port filtering table contains multiple destination ports, and when request information is sent for a multicast cell, the input port may still send the request information of this multicast queue to multiple destination output ports after filtering through the destination port filtering table.
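  • A hedged sketch of this filtering step is given below; the build_requests name and its arguments are illustrative assumptions, not identifiers from the application. The idea is simply that the request set for a queue is the head-of-line cell's destination set minus the ports recorded as congested in that queue's destination port filtering table:

```python
def build_requests(hol_dest_ports, filter_table):
    """Destination output ports that this queue should actually request in the
    current round: the head-of-line cell's destinations minus the ports already
    recorded as congested in the queue's destination port filtering table."""
    return set(hol_dest_ports) - set(filter_table)

# Unicast queue: at most one destination, so filtering may leave nothing to request at all.
print(build_requests({3}, {3}))         # set()  -> no request sent for this unicast queue
# Multicast queue: several destinations may survive the filter.
print(build_requests({0, 2, 5}, {2}))   # {0, 5} -> requests still go to two destination outputs
```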
  • Step 202: The destination output ports that receive the request information determine, according to their respective back pressure information, whether to return grant information to the input ports, so as to establish a matching relationship between each input port and the destination output port that returns the grant information.
  • Each output port obtains its own congestion status. After a destination output port receives the request information from the input ports, a congested destination output port no longer sends grant information to any input port. Specifically, each output port refers to its back pressure information to determine whether the output port is congested: if the back pressure information is set, the output port is in a congested state and does not return grant information to the input ports that send the request information; if the back pressure information is not set, the output port can receive cells sent by the input ports, so the output port can return grant information to an input port that sends the request information.
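  • The grant decision at an output port could look roughly like the following sketch (the grant function, its round-robin pointer, and its parameters are assumptions for illustration only): a backpressured output returns no grant at all, otherwise it polls one of the requesting inputs:

```python
def grant(requesting_inputs, backpressure_is_set, rr_pointer, num_inputs):
    """Return (granted input port or None, updated round-robin pointer).
    A congested output (backpressure bit set) grants nobody; otherwise the
    output polls the requesting inputs starting from its round-robin pointer."""
    if backpressure_is_set or not requesting_inputs:
        return None, rr_pointer
    for offset in range(num_inputs):
        candidate = (rr_pointer + offset) % num_inputs
        if candidate in requesting_inputs:
            return candidate, (candidate + 1) % num_inputs
    return None, rr_pointer

print(grant({0, 2}, backpressure_is_set=True, rr_pointer=0, num_inputs=4))   # (None, 0): no grant
print(grant({0, 2}, backpressure_is_set=False, rr_pointer=1, num_inputs=4))  # (2, 3): polled grant
```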
  • Step 203: According to the matching relationship, each input port dispatches cells to the destination output port that matches it.
  • If the cell is a multicast cell, each input port uses fan-out splitting to send copies of the multicast cell to the destination output ports with which the matching relationship has been established, updates the destination port table of the head-of-line cell of the multicast queue, and deletes from that table the destination output ports to which a cell copy has already been sent; if the destination port table of the head-of-line cell of the multicast queue becomes empty, the input port deletes the head-of-line cell of the multicast queue.
  • If the cell is a unicast cell, each input port sends a unicast cell copy to the destination output port with which the matching relationship has been established, and then deletes the head-of-line cell of the unicast queue.
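  • The fan-out-splitting dispatch described above could be sketched as follows (the dispatch function and the dict-based cell layout are illustrative assumptions): copies go only to the granted destinations, the head-of-line cell's destination port table shrinks accordingly, and the cell is dequeued once that table is empty (or immediately, for unicast):

```python
from collections import deque

def dispatch(queue, granted_ports, is_multicast):
    """Send copies of the head-of-line cell to the granted (matched) outputs.
    Fan-out splitting: only the served destinations are removed from the HOL
    cell's destination port table, and the cell is dequeued once that table is
    empty; a unicast cell is dequeued right after its single copy is sent."""
    if not queue:
        return []
    hol = queue[0]                         # e.g. {'id': 7, 'dests': {0, 2, 3}}
    served = hol['dests'] & set(granted_ports)
    hol['dests'] -= served                 # update the destination port table of the HOL cell
    if not is_multicast or not hol['dests']:
        queue.popleft()                    # unicast copy sent, or multicast fan-out finished
    return sorted(served)

mq = deque([{'id': 7, 'dests': {0, 2, 3}}])
print(dispatch(mq, granted_ports=[2, 3], is_multicast=True))  # [2, 3]; cell stays, dests == {0}
print(dispatch(mq, granted_ports=[0], is_multicast=True))     # [0]; table now empty, cell dequeued
print(len(mq))                                                 # 0
```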
  • Step 204: After cell scheduling, if an input port detects that the length of an input queue buffer exceeds a preset threshold or that the destination port filtering table is not empty, the input port deletes the head-of-line cell of the queue and accumulates the residual destination output ports of that cell into the destination port filtering table.
  • Specifically, each input port updates the destination port table of the head-of-line (Head Of Line, HOL) cell of a multicast queue according to the destination port filtering table of the multicast queue: the congested output ports recorded in the destination port filtering table are deleted from the destination port table of the head-of-line cell, so that the input ports no longer send request information to the congested destination output ports and no longer dispatch cells to the congested output ports, which prevents the congestion at those output ports from worsening further.
  • Step 205: If an input port detects that the deleted head-of-line cell of a queue is the last cell of a packet, it clears the destination port filtering table of that queue.
  • When an output port reassembles a packet, if one cell of the packet has been lost, the entire packet is discarded because reassembly fails. Cells therefore need to be discarded on a per-packet basis: only when an input port detects that the cell being deleted is the last cell of a packet does it clear the destination port filtering table of the queue to which the cells of that packet belong.
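  • A compact sketch combining steps 204 and 205 is given below (the post_schedule_cleanup name, the dict-based cell layout, and the threshold handling are assumptions for illustration): an over-threshold buffer or a non-empty filtering table causes the head-of-line cell to be dropped, its residual destinations are folded into the filtering table, and the table is cleared when the dropped cell ends a packet:

```python
from collections import deque

def post_schedule_cleanup(queue, filter_table, buffer_threshold):
    """Steps 204/205 in one helper: if the queue buffer is over its threshold or
    the destination port filtering table is non-empty, drop the head-of-line
    cell and fold its residual destinations into the filtering table; clear the
    table once the dropped cell is the last cell of its packet (per-packet drop)."""
    if not queue:
        return
    if len(queue) > buffer_threshold or filter_table:
        hol = queue.popleft()            # e.g. {'dests': {1, 3}, 'last': False}
        filter_table |= hol['dests']     # residual destinations now count as congested
        if hol['last']:
            filter_table.clear()         # whole packet is gone; start clean for the next packet

q = deque([{'dests': {1, 3}, 'last': False}, {'dests': {1, 3}, 'last': True}])
ft = {3}
post_schedule_cleanup(q, ft, buffer_threshold=10)   # filter table non-empty -> drop HOL, ft == {1, 3}
post_schedule_cleanup(q, ft, buffer_threshold=10)   # last cell of the packet dropped -> ft cleared
print(q, ft)                                        # deque([]) set()
```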
  • Step 204 and step 205 have no fixed chronological order; the switching device can execute them according to the actual situation of the scheduled packets, and when the condition of step 204 or step 205 is not satisfied, step 204 or step 205 may be skipped.
  • Further, if an input port maintains multiple queues, the input port enqueues the received cells belonging to the same packet consecutively into the same queue in the order of the cells within the packet.
  • In the method for implementing switching network flow control provided by this embodiment, each input port sends request information only to the destination output ports where no packet congestion occurs, and an output port refers to its back pressure information to determine whether to return grant information to the input ports that send the request information. Therefore, an output port where packet congestion occurs does not need to return flow control information to the input ports that send the request information, which reduces the amount of information transmitted between the input ports and the output ports, simplifies the design of the switching network, improves the data processing efficiency of the switching network, and avoids the low multicast scheduling efficiency caused by multicast head-of-line blocking.
  • FIG. 3 is a schematic flowchart of establishing a matching relationship applicable to an embodiment of the present invention. As shown in FIG. 3, after the input ports receive data cells, the process of establishing a matching relationship between each input port and each output port specifically includes the following steps:
  • Step 301 Each input port detects a queue state of a local virtual output queue, and uses queue state information of each input port as request information.
  • Step 302: Each input port that has not established a matching relationship sends request information to the destination output ports of the head-of-line cell of each data input queue of the port;
  • When sending the request information, each input port filters out the congested destination output ports according to the destination port filtering table of each queue; the destination port filtering table is used to record whether packet congestion occurs at a destination output port. Specifically, if the queue is a unicast queue, the destination port filtering table contains one destination output port; if the queue is a multicast queue, the destination port filtering table contains multiple destination ports, and when request information is sent for a multicast cell, each input port may still send request information to multiple destination output ports after filtering through the destination port filtering table.
  • Step 303: After receiving the request information from the input ports, each destination output port that has not established a matching relationship determines, according to its back pressure information, whether to return grant information to a selected input port;
  • Each output port obtains its own congestion status. After a destination output port receives the request information from the input ports, a congested destination output port no longer sends grant information to any input port. Specifically, each output port refers to its back pressure information to determine whether the output port is congested: if the back pressure information is set, the output port is in a congested state and does not return grant information to the input ports that send the request information; if the back pressure information is not set, the output port can receive cells sent by the input ports, so the output port can select, in a polling manner, one input port that sent request information to this output port and return grant information to it.
  • Step 304: Each input port that receives grant information selects, in a polling manner, one destination output port that returned grant information to this input port and sends acceptance information to it, so as to establish a matching relationship between the input port and that destination output port;
  • Step 305: Determine whether this iteration successfully matched any port; if the result is no, the maximal matching has already been established, and step 307 is performed; if the result is yes, step 306 is performed;
  • Step 306: Determine whether the number of iterations has reached the maximum iteration threshold; if the result is yes, step 307 is performed; if the result is no, the procedure continues with step 302.
  • Step 307: The iteration ends, and the matching result corresponding to the matching relationship is output.
  • The maximum iteration threshold is a value preset by the switching network, and it can be used to control how many times the switching network iterates to obtain the matching information; if the number of iterations equals the threshold, the iteration ends; if not, step 302 needs to be executed again until the maximal matching relationship is established.
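  • The iterative request/grant/accept procedure of steps 301 to 307 could be sketched as below. This is a simplified, single-traffic-class illustration under stated assumptions: the iterative_match function, the round-robin pointer handling, and the data layout are not taken from the application, and the real arbiter's polling and pointer-update rules may differ:

```python
def iterative_match(requests, backpressure, num_inputs, num_outputs, max_iters=3):
    """Iterative request/grant/accept matching for one scheduling round.
    requests[i] is the set of destination outputs requested by input i (already
    filtered by its destination port filtering tables); backpressure is the set
    of congested outputs, which never grant.  The loop stops when an iteration
    adds no new pair or the maximum iteration threshold is reached."""
    match = {}                       # input -> matched destination output
    taken = set()                    # outputs already matched in earlier iterations
    gptr = [0] * num_outputs         # per-output grant pointer over inputs
    aptr = [0] * num_inputs          # per-input accept pointer over outputs

    def rr_pick(cands, ptr, size):
        for off in range(size):
            c = (ptr + off) % size
            if c in cands:
                return c
        return None

    for _ in range(max_iters):
        # Request phase: only unmatched inputs ask unmatched, uncongested outputs.
        req_at = [set() for _ in range(num_outputs)]
        for i in range(num_inputs):
            if i in match:
                continue
            for o in requests[i] - backpressure - taken:
                req_at[o].add(i)
        # Grant phase: each requested output polls one requesting input.
        grants = [set() for _ in range(num_inputs)]
        for o in range(num_outputs):
            g = rr_pick(req_at[o], gptr[o], num_inputs)
            if g is not None:
                grants[g].add(o)
                gptr[o] = (g + 1) % num_inputs
        # Accept phase: each granted input polls one granting output; the pair is matched.
        progressed = False
        for i in range(num_inputs):
            a = rr_pick(grants[i], aptr[i], num_outputs)
            if a is not None:
                match[i] = a
                taken.add(a)
                aptr[i] = (a + 1) % num_outputs
                progressed = True
        if not progressed:
            break                    # no port matched in this iteration: maximal matching reached
    return match

# Input 2 requests only the congested output 2, so it stays unmatched;
# input 1 loses the poll for output 0 to input 0 in this round.
print(iterative_match([{0, 1}, {0}, {2}], backpressure={2},
                      num_inputs=3, num_outputs=3))   # {0: 0}
```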
  • Through the above process, the matching relationship between each input port and each output port can be established, and according to this matching relationship each input port can dispatch cells to the destination output port that matches it.
  • FIG. 4 is a schematic diagram of information sent by the input port and the output port of the embodiment shown in FIG. 3.
  • There are multiple input ports at the input side of the switching network and multiple output ports at the output side. For ease of description, this embodiment is described by taking the four input ports and four output ports shown in FIG. 4 as an example; the specific numbers do not limit the embodiments of the present invention.
  • each input port (11, 12, 13, 14) maintains two grant vectors, which are a unicast grant vector and a multicast grant vector, respectively.
  • The unicast grant vector is used to indicate that the received grant information is grant information of a unicast queue, and the multicast grant vector is used to indicate that the received grant information is grant information of a multicast queue. Each output port (01, 02, 03, 04) maintains two request vectors (Request Vector), which are a unicast request vector and a multicast request vector, respectively; the unicast request vector is used to indicate that the received request information is request (Request) information of a unicast queue, and the multicast request vector is used to indicate that the received request information is request (Request) information of a multicast queue.
  • The time slot indicator identifies the type of the current time slot: a value of 0 indicates a unicast time slot, in which unicast queues are scheduled preferentially, and a value of 1 indicates a multicast time slot, in which multicast queues are scheduled preferentially. Of course, 1 may also be used to indicate the unicast time slot and 0 the multicast time slot.
  • Before the matching relationship starts to be established, the queue state information of each input port is first obtained as the request (Request) information; each output port sends grant (Grant) information with reference to its back pressure information. Specifically, if packet blocking occurs at an output port, the output port has a back pressure signal, and the output port can refer to the back pressure signal to determine whether it needs to return grant information to the input ports that send the request information.
  • As shown in FIG. 4, the back pressure information in this embodiment can be implemented by a back pressure vector. If the back pressure vector is 1 in the current time slot, the back pressure information is set, the output port has a back pressure signal, and packet blocking occurs at the output port; if the back pressure vector is 0 in the current time slot, the back pressure information is not set, the output port has no back pressure signal, and the output port can return grant information to an input port.
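  • The per-slot state described above (request vectors at the output ports, grant vectors at the input ports, a back pressure vector, and a time slot indicator) could be laid out roughly as follows; the variable names and the bit-vector encoding are assumptions for illustration only:

```python
NUM_PORTS = 4

# Per-slot arbiter state as bit vectors indexed by port (illustrative layout).
slot_is_multicast = False                 # time slot indicator: 0 = unicast slot, 1 = multicast slot
backpressure = [0, 0, 1, 0]               # bit set => that output port is congested and must not grant

# Each output port keeps a unicast and a multicast request vector (who asked it);
# each input port keeps a unicast and a multicast grant vector (who granted it).
uni_requests = [[0] * NUM_PORTS for _ in range(NUM_PORTS)]   # uni_requests[output][input]
mc_requests  = [[0] * NUM_PORTS for _ in range(NUM_PORTS)]
uni_grants   = [[0] * NUM_PORTS for _ in range(NUM_PORTS)]   # uni_grants[input][output]
mc_grants    = [[0] * NUM_PORTS for _ in range(NUM_PORTS)]

def may_grant(output_port):
    """An output port only considers returning grant information when its
    back pressure bit for the current time slot is 0."""
    return backpressure[output_port] == 0

def preferred_requests(output_port):
    """The time slot indicator decides which traffic class is scheduled first."""
    first, second = (mc_requests, uni_requests) if slot_is_multicast else (uni_requests, mc_requests)
    return first[output_port], second[output_port]

print(may_grant(2), may_grant(3))   # False True: output port 2 is backpressured this slot
```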
  • FIG. 5 is a schematic structural diagram of an embodiment of a switching device according to the present invention. As shown in FIG. 5, the embodiment of the present invention includes: an input port processing module 51, an output port processing module 52, an arbitration module 53, and a cross-switch module 54.
  • The input port processing module 51 sends request information from each input port to the arbitration module 53; the output port processing module 52 sends back pressure information from each output port to the arbitration module 53; the arbitration module 53 establishes, according to the request information and the back pressure information, a matching relationship between each input port and the destination output port that returns grant information to that input port; and the crossbar switch module 54 dispatches, according to the matching relationship established by the arbitration module 53, the data cells of each input port to the destination output port that matches that input port.
  • In the cell switching device provided by this embodiment, the arbitration module 53 refers to the request information from the input port processing module 51 and the back pressure information from the output port processing module 52 to establish the matching relationship between each input port and the destination output port that returns grant information to that input port. Therefore, an output port where packet congestion occurs does not need to return flow control information to the input ports that send the request information, which reduces the amount of information transmitted between the input ports and the output ports, simplifies the design of the switching network, and improves the data processing efficiency of the switching network.
  • Further, on the basis of the embodiment shown in FIG. 5, the input port processing module 51 obtains the destination output ports of the head-of-line cell of each data input queue of the port, and when sending request information to the destination output ports, the input port processing module 51 is further configured to filter out the congested destination output ports according to the destination port filtering table of each queue, where the destination port filtering table is used to record whether packet congestion occurs at a destination output port.
  • Further, the input port processing module 51 is further configured to send, by using fan-out splitting, copies of a multicast cell to the destination output ports with which the matching relationship has been established, update the destination port table of the head-of-line cell of the multicast queue, and delete from that table the destination output ports to which a cell copy has already been sent; if the destination port table of the head-of-line cell of the multicast queue is empty, the head-of-line cell of the multicast queue is deleted. The input port processing module 51 is further configured to delete the head-of-line cell of a unicast queue after sending a unicast cell copy to the destination output port with which the matching relationship has been established. Further, the input port processing module 51 is further configured to, after cell scheduling, delete the head-of-line cell of a queue if it detects that the length of the input queue buffer exceeds a preset threshold, and accumulate the residual destination output ports into the destination port filtering table.
  • If it detects that the deleted head-of-line cell of a queue is the last cell of a packet, the input port processing module 51 is further configured to clear the destination port filtering table of that queue.
  • The input port processing module 51 is further configured to maintain multiple queues and to enqueue the received cells belonging to the same packet consecutively into the same queue in the order of the cells within the packet.
  • In addition, the arbitration module 53 is further provided with a time slot indicator, where the time slot indicator is used to identify the type of the current time slot; if the current time slot is a unicast time slot, unicast data is scheduled preferentially, and if the current time slot is a multicast time slot, multicast data is scheduled preferentially.
  • FIG. 6 is a schematic structural diagram of an embodiment of a switching system according to the present invention.
  • the embodiment of the present invention includes: an uplink management queue device 61, a switching device 62, and a downlink management queue device 63.
  • FIG. 6 only illustrates the case in which the switching system includes one switching device.
  • FIG. 7 is a schematic structural diagram of still another embodiment of the switching system according to the present invention, in which the switching system includes multiple switching devices and, correspondingly, multiple uplink management queue devices and multiple downlink management queue devices, where the uplink management queue devices are respectively connected to the input port processing module in each switching device and the downlink management queue devices are respectively connected to the output port processing modules.
  • the uplink management queue device and the downlink management queue device may be set as two independent devices, or may be set in a centralized management queue device.
  • If the switching system includes multiple switching devices, in order to reduce the complexity of packet reassembly at the downlink management queue devices, different cells belonging to the same packet need to be switched through the same switching device.
  • For the specific switching procedure, reference may be made to the descriptions of the embodiments shown in FIG. 1 to FIG. 3, and details are not repeated here.
  • FIG. 8 is a schematic structural diagram of a system according to an embodiment of the present invention.
  • the system includes: N line cards 81 and a switching network card 82.
  • The line card 81 may further include a packet processing module 811 and a switching network access management module 812; the switching network access management module 812 is responsible for maintaining the locally buffered cell queues, reporting the status of the cell queues to the switching network card 82, and, according to the arbitration result of the switching network card 82, reading cells from the cell queues and submitting them to the switching network card 82.
  • the switching network access management module 812 detects the local cell queue status and reports the cell queue status to the switching network as request information in each time slot.
  • Further, the switching network card 82 may further include a crossbar (Crossbar) 821 and an arbiter (Arbiter) 822, where the arbiter 822 receives the cell queue state information, establishes the matching relationship according to the method flows of the embodiments shown in FIG. 1 to FIG. 3, and then configures the state of the crossbar 821 according to the matching relationship; the crossbar 821 transfers the data cells of the input ports to the matching output ports according to the configured state.
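  • A rough sketch of how the arbiter's matching result might be turned into a crossbar configuration and used to move one cell per matched pair in a time slot is shown below (the configure_crossbar and transfer helpers are illustrative assumptions, not the actual crossbar interface):

```python
def configure_crossbar(match, num_ports):
    """Turn the arbiter's matching result (input -> output) into a crosspoint matrix."""
    xbar = [[False] * num_ports for _ in range(num_ports)]
    for i, o in match.items():
        xbar[i][o] = True
    return xbar

def transfer(xbar, input_hol_cells):
    """Deliver each matched input's head-of-line cell to its output port in this slot."""
    delivered = {}
    for i, row in enumerate(xbar):
        for o, crosspoint_on in enumerate(row):
            if crosspoint_on and input_hol_cells[i] is not None:
                delivered[o] = input_hol_cells[i]
    return delivered

xbar = configure_crossbar({0: 2, 1: 0}, num_ports=4)
print(transfer(xbar, input_hol_cells=['cellA', 'cellB', None, None]))  # {2: 'cellA', 0: 'cellB'}
```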
  • A person of ordinary skill in the art may understand that all or part of the steps of the foregoing method embodiments may be implemented by hardware related to program instructions. The foregoing program may be stored in a computer-readable storage medium, and when the program is executed, the steps of the foregoing method embodiments are performed; the foregoing storage medium includes any medium that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Description

Method for Implementing Flow Control in a Switching Network, Switching Device, and System

This application claims priority to Chinese Patent Application No. 201010113687.1, filed with the Chinese Patent Office on February 20, 2010 and entitled "Method for Implementing Flow Control in a Switching Network, Switching Device, and System", the entire contents of which are incorporated herein by reference.

TECHNICAL FIELD The embodiments of the present invention relate to the field of communications technologies, and in particular to a method for implementing flow control in a switching network, a switching device, and a system.

BACKGROUND In a switch fabric (Switch Fabric) of the Combined Input-Output Queued (CIOQ) architecture, a variable-length packet (Packet) received by a line card (Line Card) is segmented into fixed-length cells (Cell) that are buffered at the input side to form queues; N (N is a positive integer) unicast virtual output queues (Virtual Output Queue, VOQ for short) and k (k is a positive integer, 1 < k < 2^N) multicast virtual output queues are set up at the input side. Queue scheduling time is divided into fixed-length time slots. Within one time slot, an input port can transmit at most one cell and an output port can receive at most one cell. If multiple input ports need to send data to the same output port in the same time slot, a port conflict occurs.

When multicast packets are enqueued into the multicast virtual output queues according to their data flows, the number of possible multicast data flows, 2^N, is far larger than the number k of multicast virtual output queues, so multiple multicast data flows are inevitably enqueued into the same multicast virtual output queue, and cells belonging to different packets are interleaved in that queue: for one multicast virtual output queue, after several cells belonging to one multicast packet are enqueued consecutively, several cells belonging to another multicast packet follow them into the queue. This inevitably causes severe head-of-line blocking in multicast scheduling. To avoid, as far as possible, the head-of-line blocking in which all data behind the head of the queue cannot be scheduled because the data at the head of the queue cannot be scheduled, multicast scheduling generally adopts fan-out splitting (fanout split).

The switching network flow control mechanism in the prior art matches input ports with output ports through multiple iterations. If an output queue of the switching network becomes congested, the output port, being unable to receive more data, sends flow control information to the input ports; the switching network scheduling algorithm matches input ports with output ports, and an input port first filters out the congested output ports before sending data to the output ports.

In the prior art, the flow control information sent from the output ports of the switching network to the input ports occupies switching network bandwidth and therefore increases the burden on the switching network.

SUMMARY OF THE INVENTION An object of the embodiments of the present invention is to provide a method for implementing flow control in a switching network, a switching device, and a system, so as to improve the data processing efficiency of the switching network.
An embodiment of the present invention provides a method for implementing flow control in a switching network, including:

each input port sends request information to destination output ports where no packet congestion occurs;

the destination output ports that receive the request information determine, according to their respective back pressure information, whether to return grant information to the input ports, so as to establish a matching relationship between each input port and the destination output port that returns the grant information;

according to the matching relationship, each input port dispatches cells to the destination output port that matches it.

An embodiment of the present invention further provides a switching device, including an input port processing module, an output port processing module, an arbitration module, and a crossbar switch module, where

the input port processing module is configured to send request information from each input port to the arbitration module;

the output port processing module is configured to send back pressure information from each output port to the arbitration module;

the arbitration module is configured to establish, according to the request information and the back pressure information, a matching relationship between each input port and the destination output port that returns grant information to that input port;

the crossbar switch module is configured to dispatch, according to the matching relationship, the data cells of each input port to the destination output port that matches that input port. An embodiment of the present invention further provides a switching system, including an uplink management queue device and a downlink management queue device for scheduling data cells, and further including at least one switching device as described above; the uplink management queue device is connected to the input port processing module, and the downlink management queue device is connected to the output port processing module.

In the method, switching device, and system for implementing switching network flow control provided by the embodiments of the present invention, because an output port refers to its back pressure information when returning grant information to the input ports that send request information, an output port where packet congestion occurs does not need to return flow control information to the input ports that send the request information, which reduces the amount of information transmitted between the input ports and the output ports and improves the data processing efficiency of the switching network. BRIEF DESCRIPTION OF THE DRAWINGS To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the accompanying drawings in the following description are only some embodiments of the present invention, and a person of ordinary skill in the art may further obtain other drawings from these accompanying drawings without creative effort.
FIG. 1 is a schematic flowchart of an embodiment of a method for implementing flow control in a switching network according to the present invention;

FIG. 2 is a schematic flowchart of still another embodiment of a method for implementing flow control in a switching network according to the present invention;

FIG. 3 is a schematic flowchart of establishing a matching relationship applicable to an embodiment of the present invention;

FIG. 4 is a schematic diagram of information sent by the input ports and the output ports in the embodiment shown in FIG. 3; FIG. 5 is a schematic structural diagram of an embodiment of a switching device according to the present invention;

FIG. 6 is a schematic structural diagram of an embodiment of a switching system according to the present invention;

FIG. 7 is a schematic structural diagram of still another embodiment of a switching system according to the present invention;

FIG. 8 is a schematic structural diagram of a system to which the embodiments of the present invention are applicable. DETAILED DESCRIPTION The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only a part, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention. When a network device with a switching network function forwards a packet, it needs to divide the packet into multiple cells of fixed length. Different cells of the same packet are scheduled at the same input port, and one cell is scheduled in one time slot. The same input port may maintain cells of packets destined for multiple output ports. The cells belonging to the same packet need to be switched to the same destination output port, while the destination output ports to which cells belonging to different packets need to be switched may be the same or different. Before cells are switched, the queue state information of each input port and the back pressure information of the destination output ports corresponding to each input port need to be obtained.
FIG. 1 is a schematic flowchart of an embodiment of a method for implementing flow control in a switching network according to the present invention. As shown in FIG. 1, the embodiment of the present invention includes the following steps:

Step 101: Each input port sends request information to the destination output ports where no packet congestion occurs; Step 102: The destination output ports that receive the request information determine, according to their respective back pressure information, whether to return grant information to the input ports, so as to establish a matching relationship between each input port and the destination output port that returns the grant information;

Step 103: According to the matching relationship, each input port dispatches cells to the destination output port that matches it.

In the method for implementing switching network flow control provided by this embodiment of the present invention, each input port sends request information only to the destination output ports where no packet congestion occurs, and an output port refers to its back pressure information to determine whether to return grant information to the input ports that send the request information. Therefore, an output port where packet congestion occurs does not need to return flow control information to the input ports that send the request information, which reduces the amount of information transmitted between the input ports and the output ports, simplifies the design of the switching network, and improves the data processing efficiency of the switching network.
FIG. 2 is a schematic flowchart of still another embodiment of a method for implementing flow control in a switching network according to the present invention. As shown in FIG. 2, this embodiment includes the following steps:

Step 201: Each input port sends request information to the destination output ports where no packet congestion occurs; when sending the request information, each input port filters out the congested destination output ports according to the destination port filtering table of each queue of the port, where the destination port filtering table is used to record whether packet congestion occurs at a destination output port. Specifically, if the queue is a unicast queue, the destination port filtering table contains at most one destination output port, and when request information is sent for a unicast cell, the input port may no longer need to send the request information of this unicast queue after filtering through the destination port filtering table; if the queue is a multicast queue, the destination port filtering table contains multiple destination ports, and when request information is sent for a multicast cell, the input port may still send the request information of this multicast queue to multiple destination output ports after filtering through the destination port filtering table.

Step 202: The destination output ports that receive the request information determine, according to their respective back pressure information, whether to return grant information to the input ports, so as to establish a matching relationship between each input port and the destination output port that returns the grant information;

Each output port obtains its own congestion status. After a destination output port receives the request information from the input ports, a congested destination output port no longer sends grant information to any input port. Specifically, each output port refers to its back pressure information to determine whether the output port is congested: if the back pressure information is set, the output port is in a congested state and does not return grant information to the input ports that send the request information; if the back pressure information is not set, the output port can receive cells sent by the input ports, so the output port can return grant information to an input port that sends the request information.

Step 203: According to the matching relationship, each input port dispatches cells to the destination output port that matches it;

If the cell is a multicast cell, each input port uses fan-out splitting to send the multicast cell to the destination output ports with which the matching relationship has been established, updates the destination port table of the head-of-line cell of the multicast queue, and deletes the destination output ports to which a cell copy has already been sent; if the destination port table of the head-of-line cell of the multicast queue is empty, the input port deletes the head-of-line cell of the multicast queue.

If the cell is a unicast cell, each input port sends a unicast cell copy to the destination output port with which the matching relationship has been established, and deletes the head-of-line cell of the unicast queue.

Step 204: After cell scheduling, if an input port detects that the length of an input queue buffer exceeds a preset threshold or that the destination port filtering table is not empty, the input port deletes the head-of-line cell of the queue and accumulates the residual destination output ports into the destination port filtering table;

Specifically, each input port updates the destination port table of the head-of-line (Head Of Line, HOL for short) cell of a multicast queue according to the destination port filtering table of the multicast queue: the congested output ports are deleted from the destination port table of the head-of-line cell according to the destination port filtering table, so that the input ports no longer send request information to the congested destination output ports and no longer dispatch cells to the congested output ports, which prevents the congestion at the congested output ports from worsening further.

Step 205: If an input port detects that the deleted head-of-line cell of a queue is the last cell of a packet, it clears the destination port filtering table of that queue.

When an output port reassembles a packet, if one cell of the packet is lost, the entire packet is discarded because reassembly fails. Cells therefore need to be discarded on a per-packet basis, that is, only when an input port detects that the cell being deleted is the last cell of a packet does it clear the destination port filtering table of the queue to which the cells of that packet belong.

Step 204 and step 205 have no chronological order; the switching device can execute them according to the actual situation of the scheduled packets, and when the condition of step 204 or step 205 is not satisfied, step 204 or step 205 may be skipped.

Further, if an input port maintains multiple queues, the input port enqueues the received cells belonging to the same packet consecutively into the same queue in the order of the cells within the packet.

In the method for implementing switching network flow control provided by this embodiment of the present invention, each input port sends request information only to the destination output ports where no packet congestion occurs, and an output port refers to its back pressure information to determine whether to return grant information to the input ports that send the request information. Therefore, an output port where packet congestion occurs does not need to return flow control information to the input ports that send the request information, which reduces the amount of information transmitted between the input ports and the output ports, simplifies the design of the switching network, improves the data processing efficiency of the switching network, and avoids the low multicast scheduling efficiency caused by multicast head-of-line blocking.
To make the technical solutions of the embodiments of the present invention easier to understand, FIG. 3 is a schematic flowchart of establishing a matching relationship applicable to an embodiment of the present invention. As shown in FIG. 3, after the input ports receive data cells, the process of establishing a matching relationship between each input port and each output port specifically includes the following steps:

Step 301: Each input port detects the queue state of its local virtual output queues and uses the queue state information of each input port as the request information; Step 302: Each input port that has not established a matching relationship sends request information to the destination output ports of the head-of-line cell of each data input queue of the port;

When sending the request information, each input port filters out the congested destination output ports according to the destination port filtering table of each queue, where the destination port filtering table is used to record whether packet congestion occurs at a destination output port. Specifically, if the queue is a unicast queue, the destination port filtering table contains one destination output port; if the queue is a multicast queue, the destination port filtering table contains multiple destination ports, and when request information is sent for a multicast cell, each input port may still send request information to multiple destination output ports after filtering through the destination port filtering table.

Step 303: After receiving the request information from the input ports, each destination output port that has not established a matching relationship determines, according to its back pressure information, whether to return grant information to a selected input port;

Each output port obtains its own congestion status. After a destination output port receives the request information from the input ports, a congested destination output port no longer sends grant information to any input port. Specifically, each output port refers to its back pressure information to determine whether the output port is congested: if the back pressure information is set, the output port is in a congested state and does not return grant information to the input ports that send the request information; if the back pressure information is not set, the output port can receive cells sent by the input ports, so the output port can select, in a polling manner, one input port that sent request information to this output port and return grant information to it.

Step 304: Each input port that receives grant information selects, in a polling manner, one destination output port that returned grant information to this input port and sends acceptance information to it, so as to establish a matching relationship between the input port and the destination output port;

Step 305: Determine whether this iteration successfully matched any port; if the result is no, the maximal matching has already been established, and step 307 is performed; if the result is yes, step 306 is performed;

Step 306: Determine whether the number of iterations has reached the maximum iteration threshold; if the result is yes, step 307 is performed; if the result is no, the procedure continues with step 302.

Step 307: The iteration ends, and the matching result corresponding to the matching relationship is output.

The maximum iteration threshold is a value preset by the switching network, and it can be used to control how many times the switching network iterates to obtain the matching information; if the number of iterations equals the threshold, the iteration ends; if not, step 302 needs to be executed again until the maximal matching relationship is established.

Through the above process, the matching relationship between each input port and each output port can be established, and according to this matching relationship each input port can dispatch cells to the destination output port that matches it.

To understand the method flow described in FIG. 3 more clearly, FIG. 4 is a schematic diagram of information sent by the input ports and the output ports in the embodiment shown in FIG. 3. There are multiple input ports at the input side of the switching network and multiple output ports at the output side; for ease of description, this embodiment is described by taking the four input ports and four output ports shown in FIG. 4 as an example, but the specific numbers do not limit the embodiments of the present invention. Specifically, as shown in FIG. 4, each input port (11, 12, 13, 14) maintains two grant vectors (Grant Vector), which are a unicast grant vector and a multicast grant vector, respectively, where the unicast grant vector is used to indicate that the received grant information is grant information of a unicast queue and the multicast grant vector is used to indicate that the received grant information is grant information of a multicast queue; each output port (01, 02, 03, 04) maintains two request vectors (Request Vector), which are a unicast request vector and a multicast request vector, respectively, where the unicast request vector is used to indicate that the received request information is request (Request) information of a unicast queue and the multicast request vector is used to indicate that the received request information is request (Request) information of a multicast queue. The time slot indicator identifies the type of the current time slot: a value of 0 indicates a unicast time slot, in which unicast queues are scheduled preferentially, and a value of 1 indicates a multicast time slot, in which multicast queues are scheduled preferentially. Of course, 1 may also be used to indicate the unicast time slot and 0 the multicast time slot.

Before the matching relationship starts to be established, the queue state information of each input port is first obtained as the request (Request) information; each output port sends grant (Grant) information with reference to its back pressure information. Specifically, if packet blocking occurs at an output port, the output port has a back pressure signal, and the output port can refer to the back pressure signal to determine whether it needs to return grant information to the input ports that send the request information. As shown in FIG. 4, the back pressure information in this embodiment can be implemented by a back pressure vector: if the back pressure vector is 1 in the current time slot, the back pressure information is set, the output port has a back pressure signal, and packet blocking occurs at the output port; if the back pressure vector is 0 in the current time slot, the back pressure information is not set, the output port has no back pressure signal, and the output port can return grant information to an input port.
FIG. 5 is a schematic structural diagram of an embodiment of a switching device according to the present invention. As shown in FIG. 5, the embodiment of the present invention includes an input port processing module 51, an output port processing module 52, an arbitration module 53, and a crossbar switch module 54.

The input port processing module 51 sends request information from each input port to the arbitration module 53; the output port processing module 52 sends back pressure information from each output port to the arbitration module 53; the arbitration module 53 establishes, according to the request information and the back pressure information, a matching relationship between each input port and the destination output port that returns grant information to that input port; and the crossbar switch module 54 dispatches, according to the matching relationship established by the arbitration module 53, the data cells of each input port to the destination output port that matches that input port.

In the cell switching device provided by this embodiment of the present invention, the arbitration module 53 refers to the request information from the input port processing module 51 and the back pressure information from the output port processing module 52 to establish the matching relationship between each input port and the destination output port that returns grant information to that input port. Therefore, an output port where packet congestion occurs does not need to return flow control information to the input ports that send the request information, which reduces the amount of information transmitted between the input ports and the output ports, simplifies the design of the switching network, and improves the data processing efficiency of the switching network.

Further, on the basis of the embodiment shown in FIG. 5, the input port processing module 51 obtains the destination output ports of the head-of-line cell of each data input queue of the port, and when sending request information to the destination output ports, the input port processing module 51 is further configured to filter out the congested destination output ports according to the destination port filtering table of each queue, where the destination port filtering table is used to record whether packet congestion occurs at a destination output port.

Further, the input port processing module 51 is further configured to send, by using fan-out splitting, copies of a multicast cell to the destination output ports with which the matching relationship has been established, update the destination port table of the head-of-line cell of the multicast queue, and delete the destination output ports to which a cell copy has already been sent; if the destination port table of the head-of-line cell of the multicast queue is empty, the head-of-line cell of the multicast queue is deleted. The input port processing module 51 is further configured to delete the head-of-line cell of a unicast queue after sending a unicast cell copy to the destination output port with which the matching relationship has been established. Further, the input port processing module 51 is further configured to, after cell scheduling, delete the head-of-line cell of a queue if it detects that the length of the input queue buffer exceeds a preset threshold, and accumulate the residual destination output ports into the destination port filtering table.

If it detects that the deleted head-of-line cell of a queue is the last cell of a packet, the input port processing module 51 is further configured to clear the destination port filtering table of that queue;

The input port processing module 51 is further configured to maintain multiple queues and to enqueue the received cells belonging to the same packet consecutively into the same queue in the order of the cells within the packet.

In addition, the arbitration module 53 is further provided with a time slot indicator, where the time slot indicator is used to identify the type of the current time slot; if the current time slot is a unicast time slot, unicast data is scheduled preferentially, and if the current time slot is a multicast time slot, multicast data is scheduled preferentially.
FIG. 6 is a schematic structural diagram of an embodiment of a switching system according to the present invention. As shown in FIG. 6, the embodiment of the present invention includes an uplink management queue device 61, a switching device 62, and a downlink management queue device 63. FIG. 6 only illustrates the case in which the switching system includes one switching device. FIG. 7 is a schematic structural diagram of still another embodiment of the switching system according to the present invention; this embodiment may also include multiple switching devices. As shown in FIG. 7, the system includes multiple switching devices and, correspondingly, multiple uplink management queue devices and multiple downlink management queue devices, where the uplink management queue devices are respectively connected to the input port processing module in each switching device and the downlink management queue devices are respectively connected to the output port processing modules. For the architecture of a switching system that includes one switching device, reference may be made to the description of the embodiment shown in FIG. 5. In this embodiment, the uplink management queue device and the downlink management queue device may be arranged as two independent devices, or may be arranged centrally in one management queue device.

If the switching system includes multiple switching devices, in order to reduce the complexity of packet reassembly at the downlink management queue devices, different cells belonging to the same packet need to be switched through the same switching device. For the specific switching procedure, reference may be made to the descriptions of the embodiments shown in FIG. 1 to FIG. 3, and details are not repeated here.

FIG. 8 is a schematic structural diagram of a system to which the embodiments of the present invention are applicable. As shown in FIG. 8, the system includes N line cards (Line Card) 81 and a switching network card 82. The line card 81 may further include a packet processing module 811 and a switching network access management module 812; the switching network access management module 812 is responsible for maintaining the locally buffered cell queues, reporting the status of the cell queues to the switching network card 82, and, according to the arbitration result of the switching network card 82, reading cells from the cell queues and submitting them to the switching network card 82.

Further, the switching network access management module 812 detects the local cell queue status and reports the cell queue status to the switching network once in each time slot as the request information.

Further, the switching network card 82 may further include a crossbar (Crossbar) 821 and an arbiter (Arbiter) 822, where the arbiter 822 receives the cell queue state information, establishes the matching relationship according to the method flows of the embodiments shown in FIG. 1 to FIG. 3, and then configures the state of the crossbar 821 according to the matching relationship; the crossbar 821 transfers the data cells of the input ports to the matching output ports according to the configured state.

A person of ordinary skill in the art may understand that all or part of the steps of the foregoing method embodiments may be implemented by hardware related to program instructions. The foregoing program may be stored in a computer-readable storage medium, and when the program is executed, the steps of the foregoing method embodiments are performed; the foregoing storage medium includes any medium that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc. Finally, it should be noted that the foregoing embodiments are merely intended to describe the technical solutions of the present invention rather than to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that modifications may still be made to the technical solutions described in the foregoing embodiments, or equivalent replacements may be made to some of the technical features therein, and such modifications or replacements do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims

1. A method for implementing flow control in a switching network, comprising:
sending, by each input port, request information to destination output ports where no packet congestion occurs;
determining, by the destination output ports that receive the request information and according to their respective back pressure information, whether to return grant information to the input ports, so as to establish a matching relationship between each input port and the destination output port that returns the grant information; and
dispatching, by each input port according to the matching relationship, cells to the destination output port that matches the input port.

2. The method according to claim 1, further comprising:
obtaining, by each input port, the destination output ports of the head-of-line cell of each data input queue of the port, and when sending request information to the destination output ports, filtering out the congested destination output ports according to a destination port filtering table of each queue, wherein the destination port filtering table is used to record whether packet congestion occurs at a destination output port.

3. The method according to claim 1, further comprising:
obtaining, by each output port, the congestion status of the port, wherein after a destination output port receives the request information from the input ports, a congested destination output port no longer sends grant information to any input port.

4. The method according to claim 1, further comprising:
sending, by each input port by using fan-out splitting, copies of a multicast cell to the destination output ports with which the matching relationship has been established, updating the destination port table of the head-of-line cell of the multicast queue, and deleting the destination output ports to which a cell copy has already been sent;
deleting, by each input port, the head-of-line cell of the multicast queue if the destination port table of the head-of-line cell of the multicast queue is empty; and
sending, by each input port, a unicast cell copy to the destination output port with which the matching relationship has been established, and deleting the head-of-line cell of the unicast queue.

5. The method according to claim 1, further comprising:
after cell scheduling, if an input port detects that the length of an input queue buffer exceeds a preset threshold or that the destination port filtering table is not empty, deleting, by the input port, the head-of-line cell of the queue, and accumulating the residual destination output ports into the destination port filtering table.

6. The method according to claim 1, further comprising:
if an input port detects that the deleted head-of-line cell of a queue is the last cell of a packet, clearing the destination port filtering table of the queue.

7. The method according to any one of claims 1 to 6, further comprising:
if each input port maintains multiple queues, enqueuing the received cells belonging to the same packet consecutively into the same queue in the order of the cells within the packet.

8. A switching device, comprising an input port processing module, an output port processing module, an arbitration module, and a crossbar switch module, wherein
the input port processing module is configured to send request information from each input port to the arbitration module;
the output port processing module is configured to send back pressure information from each output port to the arbitration module;
the arbitration module is configured to establish, according to the request information and the back pressure information, a matching relationship between each input port and the destination output port; and
the crossbar switch module is configured to dispatch, according to the matching relationship, the data cells of each input port to the destination output port that matches the input port.

9. The device according to claim 8, wherein
the input port processing module is further configured to obtain the destination output ports of the head-of-line cell of each data input queue of the port, and, when sending request information to the destination output ports, filter out the congested destination output ports according to a destination port filtering table of each queue, wherein the destination port filtering table is used to record whether packet congestion occurs at a destination output port.

10. The device according to claim 8, wherein
the input port processing module is further configured to send, by using fan-out splitting, copies of a multicast cell to the destination output ports with which the matching relationship has been established, update the destination port table of the head-of-line cell of the multicast queue, and delete the destination output ports to which a cell copy has already been sent; and if the destination port table of the head-of-line cell of the multicast queue is empty, delete the head-of-line cell of the multicast queue; and
the input port processing module is further configured to delete the head-of-line cell of the unicast queue after sending a unicast cell copy to the destination output port with which the matching relationship has been established.

11. The device according to claim 8, wherein
the input port processing module is further configured to, after cell scheduling, delete the head-of-line cell of a queue if it detects that the length of the input queue buffer exceeds a preset threshold, and accumulate the residual destination output ports into the destination port filtering table.

12. The device according to any one of claims 8 to 11, wherein
the input port processing module is further configured to clear the destination port filtering table of a queue if it detects that the deleted head-of-line cell of the queue is the last cell of a packet; and
the input port processing module is further configured to maintain multiple queues and to enqueue the received cells belonging to the same packet consecutively into the same queue in the order of the cells within the packet.

13. A switching system, comprising an uplink management queue device and a downlink management queue device for scheduling data cells, and further comprising at least one switching device according to claim 8, wherein the uplink management queue device is connected to the input port processing module, and the downlink management queue device is connected to the output port processing module.
PCT/CN2010/076746 2010-02-20 2010-09-09 交换网流控实现方法、交换设备及系统 WO2011100878A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP20100846000 EP2528286B1 (en) 2010-02-20 2010-09-09 Method, switching device and system for realizing flow controlling in a switching network
US13/589,890 US8797860B2 (en) 2010-02-20 2012-08-20 Method for implementing flow control in switch fabric, switching device, and switching system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201010113687.1A CN102164067B (zh) 2010-02-20 2010-02-20 交换网流控实现方法、交换设备及系统
CN201010113687.1 2010-02-20

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US13/589,890 Continuation US8797860B2 (en) 2010-02-20 2012-08-20 Method for implementing flow control in switch fabric, switching device, and switching system

Publications (1)

Publication Number Publication Date
WO2011100878A1 true WO2011100878A1 (zh) 2011-08-25

Family

ID=44465050

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2010/076746 WO2011100878A1 (zh) 2010-02-20 2010-09-09 交换网流控实现方法、交换设备及系统

Country Status (4)

Country Link
US (1) US8797860B2 (zh)
EP (1) EP2528286B1 (zh)
CN (1) CN102164067B (zh)
WO (1) WO2011100878A1 (zh)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9654206B1 (en) 2013-04-04 2017-05-16 Lockheed Martin Corporation Hub enabled single hop transport forward access
CN103312603B (zh) * 2013-05-29 2017-02-15 龙芯中科技术有限公司 网络拥塞信息传输方法和装置
US20150085871A1 (en) * 2013-09-25 2015-03-26 RIFT.io Inc. Dynamically scriptable ip packet processing engine
CN105337866B (zh) * 2014-06-30 2019-09-20 华为技术有限公司 一种流量切换方法及装置
CN104104617B (zh) * 2014-08-07 2017-10-17 曙光信息产业(北京)有限公司 一种报文仲裁方法及装置
CN105245471A (zh) * 2015-09-25 2016-01-13 京信通信技术(广州)有限公司 报文发送方法及报文发送装置
CN107204940B (zh) * 2016-03-18 2020-12-08 华为技术有限公司 芯片和传输调度方法
WO2018058625A1 (zh) 2016-09-30 2018-04-05 华为技术有限公司 一种检测报文反压的方法及装置
CN108259365A (zh) * 2017-01-24 2018-07-06 新华三技术有限公司 组播报文转发方法和装置
CN109314673B (zh) * 2017-04-24 2022-04-05 华为技术有限公司 一种客户业务传输方法和装置
CN107743101B (zh) * 2017-09-26 2020-10-09 杭州迪普科技股份有限公司 一种数据的转发方法及装置
CN109660463A (zh) * 2017-10-11 2019-04-19 华为技术有限公司 一种拥塞流识别方法及网络设备
US11863451B2 (en) * 2022-05-16 2024-01-02 Huawei Technologies Co., Ltd. Hardware accelerated temporal congestion signals

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1816016A (zh) * 2005-02-04 2006-08-09 三星电子株式会社 用于减少ip分组丢失的路由方法和设备
US20060291458A1 (en) * 1999-03-05 2006-12-28 Broadcom Corporation Starvation free flow control in a shared memory switching device
CN101340393A (zh) * 2008-08-14 2009-01-07 杭州华三通信技术有限公司 组播流控方法、系统及现场可编程门阵列

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6201792B1 (en) 1998-05-14 2001-03-13 3Com Corporation Backpressure responsive multicast queue
CA2239133C (en) 1998-05-28 2007-08-28 Newbridge Networks Corporation Multicast methodology and apparatus for backpressure - based switching fabric
US6519225B1 (en) 1999-05-14 2003-02-11 Nortel Networks Limited Backpressure mechanism for a network device
CA2292828A1 (en) * 1999-12-22 2001-06-22 Nortel Networks Corporation Method and apparatus for traffic flow control in data switches
ATE331369T1 (de) * 2000-03-06 2006-07-15 Ibm Schaltvorrichtung und verfahren
US7023840B2 (en) * 2001-02-17 2006-04-04 Alcatel Multiserver scheduling system and method for a fast switching element
US7068672B1 (en) * 2001-06-04 2006-06-27 Calix Networks, Inc. Asynchronous receive and transmit packet crosspoint
US7151744B2 (en) * 2001-09-21 2006-12-19 Slt Logic Llc Multi-service queuing method and apparatus that provides exhaustive arbitration, load balancing, and support for rapid port failover
US6954811B2 (en) * 2002-07-19 2005-10-11 Calix Networks, Inc. Arbiter for an input buffered communication switch
US7366166B2 (en) * 2003-04-25 2008-04-29 Alcatel Usa Sourcing, L.P. Data switching using soft configuration
US7894343B2 (en) * 2003-06-19 2011-02-22 Polytechnic University Packet sequence maintenance with load balancing, and head-of-line blocking avoidance in a switch
US7376034B2 (en) * 2005-12-15 2008-05-20 Stec, Inc. Parallel data storage system
US20070237082A1 (en) * 2006-03-31 2007-10-11 Woojong Han Techniques for sharing connection queues and performing congestion management
US7782780B1 (en) * 2006-05-30 2010-08-24 Integrated Device Technology, Inc. System and method for arbitration of multicast data packets using holds
GB2461693B (en) * 2008-07-07 2012-08-15 Micron Technology Inc Switching method
US8300650B2 (en) * 2009-06-16 2012-10-30 New Jersey Institute Of Technology Configuring a three-stage Clos-network packet switch

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060291458A1 (en) * 1999-03-05 2006-12-28 Broadcom Corporation Starvation free flow control in a shared memory switching device
CN1816016A (zh) * 2005-02-04 2006-08-09 三星电子株式会社 用于减少ip分组丢失的路由方法和设备
CN101340393A (zh) * 2008-08-14 2009-01-07 杭州华三通信技术有限公司 组播流控方法、系统及现场可编程门阵列

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP2528286A4 *

Also Published As

Publication number Publication date
EP2528286A4 (en) 2012-12-05
US8797860B2 (en) 2014-08-05
CN102164067A (zh) 2011-08-24
US20120314577A1 (en) 2012-12-13
EP2528286B1 (en) 2015-04-22
EP2528286A1 (en) 2012-11-28
CN102164067B (zh) 2013-11-06

Similar Documents

Publication Publication Date Title
WO2011100878A1 (zh) 交换网流控实现方法、交换设备及系统
EP1117213B1 (en) Packet switch device and scheduling control method
EP0981878B1 (en) Fair and efficient scheduling of variable-size data packets in an input-buffered multipoint switch
JP3866425B2 (ja) パケットスイッチ
CA2255418C (en) Ring interface and ring network bus flow control system
JP4796149B2 (ja) 相互接続の待ち時間を低減するための方法及びシステム
JP3846880B2 (ja) データ・パケット・スイッチのマルチキャスト・トラフィックを制御するためのシステム及び方法
US8121122B2 (en) Method and device for scheduling unicast and multicast traffic in an interconnecting fabric
EP2239895A1 (en) Space-Space-Memory (SSM) Clos-Network Packet Switch
US20060053117A1 (en) Directional and priority based flow control mechanism between nodes
JP3908483B2 (ja) 通信装置
CN104717159A (zh) 一种基于存储转发交换结构的调度方法
US20080273546A1 (en) Data switch and a method of switching
US11646978B2 (en) Data communication method and apparatus
US20120209941A1 (en) Communication apparatus, and apparatus and method for controlling collection of statistical data
US7990873B2 (en) Traffic shaping via internal loopback
WO2012103704A1 (zh) 组播复制方法、装置及系统
JP4568364B2 (ja) 相互接続ファブリックにおいてユニキャスト・トラフィック及びマルチキャスト・トラフィックをスケジューリングする方法、装置、及びコンピュータ・プログラム(相互接続ファブリックにおいてユニキャスト・トラフィック及びマルチキャスト・トラフィックをスケジューリングする方法及び装置)
US20040071144A1 (en) Method and system for distributed single-stage scheduling
CN110430146B (zh) 基于CrossBar交换的信元重组方法及交换结构
JP4630231B2 (ja) パケット処理システム、パケット処理方法、およびプログラム
JP6249156B2 (ja) プル型ネットワーク中継装置、及びネットワーク中継方法
Lien et al. Generalized dynamic frame sizing algorithm for finite-internal-buffered networks
Chrysos et al. Towards low-cost high-performance all-optical interconnection networks
Xi et al. Packet-mode scheduling with proportional fairness for input-queued switches

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10846000

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2010846000

Country of ref document: EP