EP1222780A1 - Hierarchical output-queued packet-buffering system and method - Google Patents

Hierarchical output-queued packet-buffering system and method

Info

Publication number
EP1222780A1
Authority
EP
European Patent Office
Prior art keywords
packet
queues
level
priority
buffer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP00973429A
Other languages
German (de)
English (en)
French (fr)
Inventor
Robert Ryan
Leon K. Woo
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Enterasys Networks Inc
Original Assignee
Tenor Networks Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tenor Networks Inc filed Critical Tenor Networks Inc
Publication of EP1222780A1 publication Critical patent/EP1222780A1/en
Withdrawn legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00 Data switching networks
    • H04L12/54 Store-and-forward switching systems
    • H04L12/56 Packet switching systems
    • H04L12/5601 Transfer mode dependent, e.g. ATM
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H04L47/24 Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L47/2441 Traffic characterised by specific attributes, e.g. priority or QoS relying on flow classification, e.g. using integrated services [IntServ]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/50 Queue scheduling
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/50 Queue scheduling
    • H04L47/52 Queue scheduling by attributing bandwidth to queues
    • H04L47/521 Static queue service slot or fixed bandwidth allocation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/50 Queue scheduling
    • H04L47/62 Queue scheduling characterised by scheduling criteria
    • H04L47/6205 Arrangements for avoiding head of line blocking
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/50 Queue scheduling
    • H04L47/62 Queue scheduling characterised by scheduling criteria
    • H04L47/6215 Individual queue per QOS, rate or priority
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00 Packet switching elements
    • H04L49/30 Peripheral units, e.g. input or output ports
    • H04L49/3027 Output queuing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00 Packet switching elements
    • H04L49/90 Buffering arrangements
    • H04L49/9047 Buffering arrangements including multiple buffers, e.g. buffer pools
    • H04L49/9052 Buffering arrangements including multiple buffers, e.g. buffer pools with buffers of different sizes
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00 Data switching networks
    • H04L12/54 Store-and-forward switching systems
    • H04L12/56 Packet switching systems
    • H04L12/5601 Transfer mode dependent, e.g. ATM
    • H04L2012/5678 Traffic aspects, e.g. arbitration, load balancing, smoothing, buffer management
    • H04L2012/5679 Arbitration or scheduling
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00 Data switching networks
    • H04L12/54 Store-and-forward switching systems
    • H04L12/56 Packet switching systems
    • H04L12/5601 Transfer mode dependent, e.g. ATM
    • H04L2012/5678 Traffic aspects, e.g. arbitration, load balancing, smoothing, buffer management
    • H04L2012/5681 Buffer or queue management
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00 Packet switching elements
    • H04L49/20 Support for services
    • H04L49/205 Quality of Service based
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00 Packet switching elements
    • H04L49/25 Routing or path finding in a switch fabric
    • H04L49/253 Routing or path finding in a switch fabric using establishment or release of connections between ports
    • H04L49/254 Centralised controller, i.e. arbitration or scheduling
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00 Packet switching elements
    • H04L49/30 Peripheral units, e.g. input or output ports
    • H04L49/3009 Header conversion, routing tables or routing tags
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00 Packet switching elements
    • H04L49/50 Overload detection or protection within a single switching element
    • H04L49/505 Corrective measures
    • H04L49/508 Head of Line Blocking Avoidance
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00 Packet switching elements
    • H04L49/90 Buffering arrangements
    • H04L49/9023 Buffering arrangements for implementing a jitter-buffer

Definitions

  • the present invention relates generally to communication systems, and in particular to movement of data flows in packet-based communication architectures.
  • Data communication involves the exchange of data between two or more entities
  • the data can be, for example, information transferred
  • the protocols define how the packets are constructed and treated as they travel from source to
  • bandwidth information-carrying capacity at high speeds with substantial reliability.
  • Bandwidth is further increased by "multiplexing" strategies, which allow multiple data streams to be sent over the same communication medium without interfering with each other.
  • TDM time-division multiplexing
  • time slot i.e., a short window of availability recurring at fixed intervals (with other time slots scheduled during the intervals).
  • Each time slot represents a separate communication channel.
  • time slots are then multiplexed onto higher speed lines in a predefined bandwidth hierarchy.
  • DWDM dense wavelength division multiplexing
  • the channels are different wavelengths of light, which may be carried simultaneously over the same fiber without
  • networks are designed to balance traffic across different branches as well as to other networks, so that
  • Packet routing is handled by communication devices such as switches, routers, and bridges.
  • a communication device 150 receives information (in the form of packets/frames, cells, or TDM frames) from a communication
  • the communication device 150 can contain a number of network interface cards (NICs), such as NIC 160 and NIC 180, each having
  • Input ports 162, 164, and 166 receive information from the communication network 110 and transfer them to a number of packet processing engines (not shown) that process the packets and prepare them for transmission at one of the output ports 168, 170, and 172, which correspond to a
  • An ideal communication device would be capable of aggregating incoming data from numerous input channels and outputting
  • congestion, i.e., high quality of service, or QoS
  • QoS quality of service
  • a switch 200 includes a series of p input ports denoted as IN₁ ... INₚ and a series of p output ports denoted as OUT₁ ... OUTₚ.
  • a typical switch is configured to accommodate multiple plug-in network interface cards, with each card carrying a fixed number of input and output ports.
  • each input port is directly connected to every output port; as a result, packets can travel between ports with minimal delay.
  • An incoming packet is examined to
  • Full-mesh switches can also be used to implement an output-buffered architecture that can accommodate rich QoS mechanisms; for example, some customers may pay higher fees for better service guarantees, and different kinds of traffic may be accorded different priorities.
  • the output ports output the packets in accordance with the priority levels associated with their respective queues. As shown in Fig. 2A, for example, a series of n priority queues 205₁, 205₂, ... 205ₙ is associated with output port OUT₁, and a distributed scheduler module 210 selects packets from these queues for transmission in accordance with their queue-level priorities.
  • Proportional fairness recognizes that packet size can vary, so that if prioritization were applied strictly on a per-packet basis, larger
  • a switch 250 based on a partial-mesh design is depicted in Fig. 2B.
  • the switch 250 also contains a series of p input ports and a complementary series of p output ports. In this case, however, each input port
  • a central scheduling module 255 connects input ports to output ports on an as-needed basis.
  • partial-mesh architectures support high aggregate bandwidths, but will block, or congest, when certain traffic patterns appear at the
  • output queues 260 organized as p sets of q queues - that is, q priority queues for each output port 1 through p. In this way, incoming packets can be prioritized before they have a chance to cause
  • the present invention utilizes a hierarchically organized output-queuing system that
  • the architecture of the present invention facilitates output-
  • a packet-buffering system and method incorporating aspects of the present invention is used in transferring packets from a series of input ports to a series of output ports in a communication device that is coupled to a communications network.
  • a first packet buffer is organized into a first series of queues.
  • the first-series queues can
  • Each first-series priority queue set is also associated with one of the output ports of the
  • a second packet buffer (and, if desired, additional packet buffers) is also organized into a series of queues that can be grouped into priority queue sets
  • the first packet buffer receives packets from the input ports of the communication device at the aggregate network rate (i.e., the overall transmission rate of the network itself).
  • received packets are then examined by an address lookup engine to ascertain their forwarding
  • the packets are transferred at the aggregate network rate to first-series queues having priority levels consistent
  • second-series queues at a rate less than the aggregate network rate. These second-series queues are part of the second-series priority queue set whose priorities are consistent with those of the received packets and which are also associated with the designated output ports. The order in which the packets are transferred from the first-series queues to the second-series queues is based
  • any of various dequeuing systems associated with that second packet buffer, together with a scheduler, may schedule and transfer the packets to the designated output ports. Alternatively (and as discussed below), the packets may be transferred to additional, similarly organized packet
  • the type of memory selected for use as the first packet buffer should have performance characteristics that include relatively fast access times, e.g., embedded ASIC packet buffers,
  • the first-series queues have a relatively shallow
  • bandwidth means the speed at which the queues can absorb
  • the second packet buffer is able to receive packets from the first packet buffer at less
  • the queue depth of the second-series queues is typically larger than the queue depth of the first-series queues. Consequently, the performance characteristics of the memory forming the second packet buffer do not require access times as fast as those of the first packet buffer (e.g., field-configurable memory elements such as DRAM,
  • packet buffers is equal to or greater than a sum of the first packet-buffer bandwidths, although the individual second packet buffer bandwidths are less than the aggregate first buffer bandwidth.
  • second packet buffers can exhibit substantially similar performance characteristics.
  • a homogeneous memory can be organized to accommodate both first-series and second-series
  • the present invention can accommodate a third packet
  • This third packet buffer is coupled to and receives packets from at least one of the second packet buffers for subsequent transfer to a designated output port.
  • This third packet buffer would also comprise third-series queues grouped as third-series priority queue sets so that third-series
  • the sum of the third packet-buffer bandwidths would generally be equal to or greater than that of the corresponding second packet-buffer bandwidths and the sum of third packet-buffer depths would generally exceed the sum of the second packet-buffer depths.
  • packets may be aggregated into queue flows with a
  • the hierarchical memory architecture of the present invention overcomes
  • the benefits of the present invention not only include enhancing the scalability of full-mesh (output-queued) systems while avoiding head-of-line blocking; they also extend to partial-mesh systems.
  • queued packet-buffering systems can be interconnected by a partial-mesh interconnect and still preserve many of the QoS features of the singular system.
  • FIG. 1 schematically illustrates a prior-art communication device coupling a communication network to other networks, such as LANs, MANs, and WANs;
  • FIG. 2A schematically illustrates a prior-art, full-mesh interconnect system implementing
  • FIG. 2B schematically illustrates a prior-art, partial-mesh interconnect system exhibiting
  • FIG. 2C schematically illustrates a prior-art, partial-mesh interconnect system
  • FIG. 3A schematically illustrates a hierarchical queue system in accordance with an embodiment of the present invention
  • FIG. 3B schematically illustrates several components in a network interface card that are
  • FIG. 4 provides a flow diagram of the steps performed when operating the network interface card of FIG. 3B, in accordance with one embodiment of the present invention
  • FIG. 5 illustrates the memory, packet, and queue structure of the hierarchical queue system of the network interface card of FIG. 3B, in accordance with one embodiment of the
  • FIG. 6 provides a flow diagram of the steps performed by the dequeue and hierarchical queue system of FIG. 5, in accordance with one embodiment of the present invention
  • FIG. 7 illustrates the memory, packet, and queue structure of the hierarchical queue
  • FIG. 8 illustrates an embodiment of the hierarchical queue system in a partial-mesh
  • the present invention incorporates a hierarchical queue system 320 to transfer packets received over the communication network 110 from a plurality of input
  • the hierarchical queue system 320 buffers the received packets in a plurality of memory elements, such as a level-one memory 312, a level-two memory 314, and a level-X memory 316.
  • Level-one memory 312 must be fast enough to buffer at line rate the aggregate traffic of all input ports 302, 304, 306 without loss.
  • Level-one memory can typically be constructed of
  • memory bandwidth can be increased by making the memory width wider. But because the memory storage density is also limited by the technology of the day, making the memories wider necessitates that they become shallower. The resulting reduction in memory depth can be recovered by adding a plurality of level-two memories 314, 316 whose aggregate bandwidth is equal to or greater than the bandwidth of the level-one memory 312.
  • network environment may be achieved as memory technology improves, the problem resurfaces when trying to scale the communication device 150 at even higher packet-buffer bandwidths.
  • Hierarchical queue system 320 incorporates memory levels 314, 316 that are organized according to successively deeper packet-buffer depths (i.e., capable of storing more bytes) and that exhibit
  • level-two memory 314 and level-X memory 316 essentially make up for the sacrifice in packet-buffer depth in the level-one memory 312 through organization into deeper packet-buffer depths.
  • Hierarchical queue system 320 can exhibit substantially similar performance characteristics
  • level-two memory 314 and level- X memory 316 allow the use of denser memory types (i.e., greater packet-buffer depth) for the
  • system 320 of the present invention can be implemented in a wide variety of communication devices (e.g., switches and routers), in a shared memory accessible to one or more
  • NIC network interface card
  • the NIC 328 receives packets from the packet-based communication
  • the forwarding engine 330, together with the ALE 332, determines the destination output ports of the packets by looking up the
  • the modified packets are then routed to the full-mesh interconnect 311 via the
  • the hierarchical queue system 320 of the NIC 328 normally receives the modified packets via the full-mesh interconnect 311 so that it can funnel packets originally received at the input ports 162, 164, 166, 224, 226, 228 of any NIC installed within the communication device 150, including the packets received by the input ports 302, 304, 306 of its own NIC 328, to one or more of the output ports 322, 324, 326 of its own NIC 328.
  • packets received at input ports 302, 304, 306 are transferred directly to the
  • the forwarding engine 330 bypass the interconnect interface 310 and full-mesh interconnect 311 altogether.
  • forwarding engine 330 transfers the packets to the interconnect interface 310, which then directly forwards the packets to the hierarchical queue system 320, thus bypassing the full-mesh
  • the modified packets are received at a first-level memory 312 of the hierarchical queue system (step 418).
  • step 420, corresponding to memory elements organized into increasingly deeper queue depths as described below.
  • packets are scheduled for transmission to the selected output ports 322, 324, 326 (step 424).
  • the packets are then transmitted from the selected output ports 322, 324, 326 to a communication network such as the LAN 120, MAN 130, or WAN 140.
  • a forwarding engine 330 associated with the input port 302 is selected.
  • the selected forwarding engine parses the received packet header.
  • the forwarding engine 330 processes the packet header by checking the integrity of the
  • ALE 332 are used to report the processing activity involving this packet header to modules external to the selected forwarding engine, and communicating with the ALE 332 to obtain routing information for one of the output ports 322, 324, 326 associated with the destination of the packet (an illustrative sketch of this lookup step appears after this list).
  • the engine can modify the packet header to include routing information (e.g., by prepending a
  • the modified packet header is then written to a buffer of the forwarding engine 330 where it is
  • the modified packets 510 which are received at the first-level memory or first packet buffer 312 (step 610), comprise a plurality of packets having varying priority levels and designated for various output ports (i.e., physical or virtual ports) of the NIC 328.
  • packets 510 may include a plurality of high-priority packets 512, medium-priority packets 514, and low-priority packets 516, some of which are destined for output port 322 and others for one
  • the present invention examines the forwarding vectors and the packet header information in the received packets 510 to determine their destination output port 322 (step 612).
  • the received packets 510 for a particular output port 322 are organized (step 614) into groups of queues, or priority queue sets, that correspond, for example, to
  • a high-priority queue set 520 (including high-priority packets 512), a medium-priority queue set 522 (including medium-priority packets 514), and a low-priority queue set 524 (including low-priority packets 516)
  • the packets in the first-series priority queue sets 520, 522, 524 of the first packet buffer 312 are then funneled into second-series priority queue sets 530, 532, 534 in the second level
  • the second-series queue sets 530, 532, 534 are associated with the same output port 322 as the first-series priority queue sets 520, 522, 524.
  • the second-series queue sets 530, 532, 534 comprise second-series queues that have a greater buffer depth 536 than the corresponding first-series queues in the first-series queue sets so as to provide deeper buffering at a slower operating rate (and thus enable the use of less expensive memory as
  • buffer depth refers to the maximum
  • first packet buffer 312 operates at the aggregate network
  • the first packet-buffer 312 is able to receive packet data in the amount and rate that such data is provided by the communication network 110. In order to support these operating parameters while remaining non-blocking and output buffered, the first
  • the packet buffer 312 uses a wide data bus (to achieve high data rates) and a multiple bank architecture (to achieve high frame rates).
  • the first packet buffer 312 is also relatively shallow (e.g., tens of thousands of packets of storage) so that the first packet-buffer depth 526 of the first-
  • the second-series queues have a greater packet- buffer depth 536 (e.g., millions of packets of storage).
  • the second packet-buffer depth is often
  • a sum of the second packet-buffer bandwidths of all the second packet buffers can exceed the sum of the first packet-buffer bandwidths of all the first packet buffers.
  • the packet-handling capabilities of the second packet buffers are equal to, and may in fact be greater than, the capabilities of the first packet buffers.
  • individual second packet-buffer bandwidths are typically less than the aggregate bandwidth of the
  • queues in the hierarchical queue system 340 enables the use of different memory types for the first and second packet buffers and can thus result in significant cost savings without material
  • first and second packet buffers can be organized within the same pool of memory and exhibit the same performance characteristics (with just a difference in their buffer depths), but this implementation is not as cost effective.
  • the hierarchical queue system 320 incorporates more than two levels of packet buffering, such as a level-X memory 316. Similarly, the level-X memory 316 would provide a packet-buffer depth 542 that exceeds the depth 536 of the corresponding second packet buffer. Once the received packets 510 have been funneled down to the lowest level of memory (with the
  • the first packet buffer 312 receives packets in parallel from all of the NICs 160, 180, 328 of the communication device 150 via the
  • Enqueue engines 313 parse the forwarding vectors to determine whether the received packets are destined for this NIC 328. If the packets are destined for an output port 322, 326 of the NIC 328, the enqueue engines further determine the priority level for the received packets 510 and determine which of the queues (with a consistent priority
  • each memory level of the hierarchical queue system 320 will buffer the received packet.
  • the received packets 510 are then sorted by output port and priority level and grouped into first-series queues in the first packet buffer 312.
  • the packets in the first-series queues are then transferred to corresponding second-series queues in the second packet buffer 314 (an illustrative sketch of this two-level structure appears after this list).
  • the second packet buffer 314 provides the bulk of the
  • RED (Random Early Detection)
  • wRED, i.e., weighted RED (a generic wRED-style drop sketch appears after this list)
  • level-X memories 314, 316 facilitates the implementation of a richer set of QoS mechanisms.
  • the distributed scheduler 210 can donate bandwidth from idle high-priority queues to busy lower-priority queues that have packets to transmit.
  • the higher-priority queues are
  • the reverse may also be done (i.e., donating bandwidth from idle low-priority queues to higher-priority queues).
  • other QoS techniques may also be used, such as combining pure priority scheduling with Weighted Fair Queuing and bandwidth donation (see the scheduling sketch after this list).
  • the hierarchical queue system 320 can also be used to aggregate
  • the sorting burden on the first-level memory 710 is alleviated, because the first-level memory 710 need only sort through the prioritized queue flows to locate packets destined for the output port 322 associated with the first-level memory 710 rather than sort by both priority level and output
  • a level-zero memory 710 sorts the received packets 510 by priority level into priority queue sets 712,
  • a subset of the packets in the level-zero memory 710 that correspond to a particular output port 322 of the NIC 328 are then transferred to
  • the first-level memory 710, which organizes the packet data into priority queue sets 520, 522,
  • a communication device 810 includes a plurality of instances 820', 820'', 820''' of the hierarchical queue system of the present invention.
  • the communication device 810 receives packets from a full-mesh or partial-mesh interconnect 850.
  • Incoming packets enter a level-zero memory 840 and are prioritized/sorted by an enqueue engine 842.
  • the prioritized packets are routed to one of the plurality of instances of the hierarchical queue system 820', 820'', 820''' that is associated with a particular destination output port (not shown) of the communication device 810 for which the packets are destined.
  • the level-zero memory 840 will route the packets to a level-zero memory 880 of the communication device 870 via the full-mesh or partial-mesh interconnect 850.
  • the packets will then be prioritized/sorted by enqueue engine 882 and routed
  • the interconnection of the level-zero memories 840, 880 via a partial-mesh interconnect is
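
The bullets above describe the forwarding engine consulting the address lookup engine (ALE) and adding routing information (a forwarding vector) to each packet before it is handed toward the interconnect. The Python sketch below illustrates that general flow only; the function name, the ForwardingVector fields, and the dictionary-based lookup table are assumptions made for illustration, not details taken from the patent.

    from dataclasses import dataclass

    # Hypothetical field layout; the patent does not specify the forwarding
    # vector's exact contents.
    @dataclass
    class ForwardingVector:
        dest_nic: int       # NIC holding the destination output port
        output_port: int    # port index on that NIC
        priority: int       # priority level the enqueue engines should use

    def forward(packet_header: dict, payload: bytes, ale_table: dict):
        """Forwarding-engine step (sketch): check the header, consult an
        address-lookup table keyed by destination address, and prepend a
        forwarding vector naming the destination NIC, port, and priority."""
        if "dest_addr" not in packet_header:
            raise ValueError("malformed header")    # stand-in for integrity checks
        dest_nic, output_port = ale_table[packet_header["dest_addr"]]
        vector = ForwardingVector(dest_nic, output_port,
                                  priority=packet_header.get("priority", 2))
        return vector, payload                      # handed on toward the interconnect

    # Example with an assumed one-entry lookup table.
    ale = {"10.0.0.7": (1, 3)}
    print(forward({"dest_addr": "10.0.0.7", "priority": 0}, b"data", ale))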
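
Several of the bullets above describe sorting received packets by destination output port and priority into shallow first-series queues, then funneling them into deeper second-series queues associated with the same port and priority. The sketch below is a minimal Python model of that two-level organization; the class names, queue depths, and per-port funnel budget are assumed values chosen only to make the structure concrete.

    from collections import deque
    from dataclasses import dataclass

    # Illustrative sketch only. Priorities: 0 = high, 1 = medium, 2 = low.
    @dataclass
    class Packet:
        output_port: int
        priority: int
        payload: bytes

    class HierarchicalQueueSystem:
        """Two-level output-queued buffer for one NIC: a shallow, fast
        first-level buffer (per-port priority queue sets) feeding deeper
        second-level queues tied to the same output port and priority."""

        def __init__(self, num_ports: int, num_priorities: int = 3,
                     level1_depth: int = 64, level2_depth: int = 4096):
            # first-series queues: one shallow queue per (port, priority)
            self.level1 = [[deque(maxlen=level1_depth) for _ in range(num_priorities)]
                           for _ in range(num_ports)]
            # second-series queues: deeper queues, same (port, priority) layout
            self.level2 = [[deque(maxlen=level2_depth) for _ in range(num_priorities)]
                           for _ in range(num_ports)]

        def enqueue(self, pkt: Packet) -> bool:
            """Enqueue-engine step: sort the arriving packet by destination
            output port and priority into the matching first-series queue."""
            q = self.level1[pkt.output_port][pkt.priority]
            if len(q) == q.maxlen:          # first-level buffer full: drop (or mark)
                return False
            q.append(pkt)
            return True

        def funnel(self, port: int, budget: int) -> int:
            """Move up to `budget` packets for one output port from the
            first-series queues into the corresponding second-series queues,
            highest priority first (runs at less than the aggregate rate)."""
            moved = 0
            for prio, q1 in enumerate(self.level1[port]):
                q2 = self.level2[port][prio]
                while q1 and len(q2) < q2.maxlen and moved < budget:
                    q2.append(q1.popleft())
                    moved += 1
            return moved

    # Example: one arriving high-priority packet for output port 0.
    hqs = HierarchicalQueueSystem(num_ports=4)
    hqs.enqueue(Packet(output_port=0, priority=0, payload=b"frame"))
    hqs.funnel(port=0, budget=16)

In this sketch the funnel budget stands in for the lower write bandwidth of the second-level memory; in the architecture described above the two levels would map onto different memory technologies (e.g., fast embedded buffers feeding deeper DRAM).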
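
The description also mentions a scheduler that can donate bandwidth from idle queues to busy queues of other priorities. The snippet below sketches one simple round-based interpretation of that idea; the weights and the strictly top-down donation order are assumptions, and the patent does not prescribe this exact policy.

    from collections import deque

    def schedule_round(queues, weights):
        """One scheduling round over per-priority queues (index 0 = highest
        priority). Each queue receives weights[i] transmission slots; slots
        left unused by idle queues are donated to lower-priority queues.
        Returns the packets selected for transmission, in order."""
        selected = []
        donated = 0                      # slots inherited from higher-priority queues
        for weight, q in zip(weights, queues):
            credit = weight + donated
            while q and credit > 0:
                selected.append(q.popleft())
                credit -= 1
            donated = credit             # pass any leftover slots downward
        return selected

    # Example: the high-priority queue is idle, so its slots are donated.
    high, med, low = deque(), deque(["m1", "m2"]), deque(["l1", "l2", "l3"])
    print(schedule_round([high, med, low], weights=[4, 2, 1]))
    # -> ['m1', 'm2', 'l1', 'l2', 'l3']  (medium uses 2 of its 6 slots;
    #     the donated remainder lets all low-priority packets go)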
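
RED and weighted RED (wRED) are named above as congestion-management mechanisms that the deeper buffer levels can support. The following is a generic, textbook-style wRED drop check rather than the patent's specific mechanism; the thresholds, drop probabilities, and moving-average weight are assumed example values.

    import random

    def wred_should_drop(avg_depth, min_th, max_th, max_p):
        """Generic weighted-RED style admission check for one queue: accept
        below min_th, always drop at or above max_th, and in between drop
        with a probability that ramps linearly up to max_p."""
        if avg_depth < min_th:
            return False
        if avg_depth >= max_th:
            return True
        drop_p = max_p * (avg_depth - min_th) / (max_th - min_th)
        return random.random() < drop_p

    def ewma_depth(avg, sample, weight=0.002):
        """Exponentially weighted moving average of the instantaneous queue
        depth, as commonly used with RED; the weight is an assumed constant."""
        return (1.0 - weight) * avg + weight * sample

    # Assumed per-priority thresholds: lower priorities start dropping earlier.
    thresholds = {0: (400, 800, 0.02),   # high priority
                  1: (200, 600, 0.05),   # medium priority
                  2: (100, 400, 0.10)}   # low priority
    min_th, max_th, max_p = thresholds[2]
    print(wred_should_drop(250, min_th, max_th, max_p))   # drops with probability 0.05 here
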
EP00973429A 1999-10-06 2000-10-06 Hierarchical output-queued packet-buffering system and method Withdrawn EP1222780A1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US15792599P 1999-10-06 1999-10-06
US157925P 1999-10-06
PCT/US2000/027753 WO2001026309A1 (en) 1999-10-06 2000-10-06 Hierarchical output-queued packet-buffering system and method

Publications (1)

Publication Number Publication Date
EP1222780A1 (en) 2002-07-17

Family

ID=22565924

Family Applications (1)

Application Number Title Priority Date Filing Date
EP00973429A Withdrawn EP1222780A1 (en) 1999-10-06 2000-10-06 Hierarchical output-queued packet-buffering system and method

Country Status (5)

Country Link
EP (1) EP1222780A1 (ja)
JP (1) JP2003511909A (ja)
AU (1) AU1193401A (ja)
CA (1) CA2388348A1 (ja)
WO (1) WO2001026309A1 (ja)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103516619B (zh) * 2012-06-29 2017-11-17 Huawei Technologies Co., Ltd. Method and device for bandwidth adjustment in a network virtualization system

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5440523A (en) * 1993-08-19 1995-08-08 Multimedia Communications, Inc. Multiple-port shared memory interface and associated method
JP2682434B2 (ja) * 1994-04-05 1997-11-26 NEC Corporation Output-buffered ATM switch
JP3673025B2 (ja) * 1995-09-18 2005-07-20 Toshiba Corporation Packet transfer apparatus
JP2827998B2 (ja) * 1995-12-13 1998-11-25 NEC Corporation ATM switching method
DE19617816B4 (de) * 1996-05-03 2004-09-09 Siemens Ag Method for the optimized transmission of ATM cells over connection sections
US5831980A (en) * 1996-09-13 1998-11-03 Lsi Logic Corporation Shared memory fabric architecture for very high speed ATM switches
US6324165B1 (en) * 1997-09-05 2001-11-27 Nec Usa, Inc. Large capacity, multiclass core ATM switch architecture

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
None *
See also references of WO0126309A1 *

Also Published As

Publication number Publication date
WO2001026309A1 (en) 2001-04-12
AU1193401A (en) 2001-05-10
JP2003511909A (ja) 2003-03-25
CA2388348A1 (en) 2001-04-12

Similar Documents

Publication Publication Date Title
US6850490B1 (en) Hierarchical output-queued packet-buffering system and method
US7099275B2 (en) Programmable multi-service queue scheduler
US8023521B2 (en) Methods and apparatus for differentiated services over a packet-based network
US7936770B1 (en) Method and apparatus of virtual class of service and logical queue representation through network traffic distribution over multiple port interfaces
US6680933B1 (en) Telecommunications switches and methods for their operation
US7796610B2 (en) Pipeline scheduler with fairness and minimum bandwidth guarantee
US20030048792A1 (en) Forwarding device for communication networks
US7065089B2 (en) Method and system for mediating traffic between an asynchronous transfer mode (ATM) network and an adjacent network
US7023856B1 (en) Method and system for providing differentiated service on a per virtual circuit basis within a packet-based switch/router
US20030198241A1 (en) Allocating buffers for data transmission in a network communication device
US7385993B2 (en) Queue scheduling mechanism in a data packet transmission system
GB2339371A (en) Rate guarantees through buffer management
KR20060023579A (ko) 시스템 패브릭에서의 개방 루프 정체 제어를 위한 방법,장치, 제품 및 시스템
US7197051B1 (en) System and method for efficient packetization of ATM cells transmitted over a packet network
US7382792B2 (en) Queue scheduling mechanism in a data packet transmission system
US7324536B1 (en) Queue scheduling with priority and weight sharing
WO2001026309A1 (en) Hierarchical output-queued packet-buffering system and method
EP1521411A2 (en) Method and apparatus for request/grant priority scheduling
JP3570991B2 (ja) Frame discarding mechanism for packet switching
Song et al. Two scheduling algorithms for input-queued switches guaranteeing voice QoS
Li System architecture and hardware implementations for a reconfigurable MPLS router
Katevenis et al. ATLAS I: A Single-Chip ATM Switch with HIC Links and Multi-Lane Back-Pressure
Li et al. Performance evaluation of crossbar switch fabrics in core routers
Li et al. Architecture and performance of a multi-Tbps protocol independent switching fabric
Pi et al. An integrated scheduling and buffer management scheme for packet-switched routers

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20020503

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE

AX Request for extension of the european patent

Free format text: AL;LT;LV;MK;RO;SI

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: ENTERASYS NETWORKS, INC.

17Q First examination report despatched

Effective date: 20061115

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20070327