WO2013083191A1 - Queuing apparatus - Google Patents

Queuing apparatus

Info

Publication number
WO2013083191A1
WO2013083191A1 (PCT/EP2011/072086)
Authority
WO
WIPO (PCT)
Prior art keywords
queuing
queue
memory
queues
sub
Application number
PCT/EP2011/072086
Other languages
English (en)
French (fr)
Inventor
Yaron Shachar
Rami Zecharia
Original Assignee
Huawei Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Huawei Technologies Co., Ltd. filed Critical Huawei Technologies Co., Ltd.
Priority to EP11794131.0A (EP2783277A1)
Priority to CN201180075404.1A (CN103988167A)
Priority to PCT/EP2011/072086 (WO2013083191A1)
Publication of WO2013083191A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 5/00: Methods or arrangements for data conversion without changing the order or content of the data handled
    • G06F 5/06: Methods or arrangements for data conversion without changing the order or content of the data handled for changing the speed of data flow, i.e. speed regularising or timing, e.g. delay lines, FIFO buffers; over- or underrun control therefor
    • G06F 5/065: Partitioned buffers, e.g. allowing multiple independent queues, bidirectional FIFO's
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/50: Queue scheduling
    • H04L 47/56: Queue scheduling implementing delay-aware scheduling
    • H04L 47/564: Attaching a deadline to packets, e.g. earliest due date first
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00: Packet switching elements
    • H04L 49/90: Buffering arrangements
    • H04L 49/901: Buffering arrangements using storage descriptor, e.g. read or write pointers
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 5/00: Methods or arrangements for data conversion without changing the order or content of the data handled
    • G06F 5/06: Methods or arrangements for data conversion without changing the order or content of the data handled for changing the speed of data flow, i.e. speed regularising or timing, e.g. delay lines, FIFO buffers; over- or underrun control therefor
    • G06F 5/10: Methods or arrangements for data conversion without changing the order or content of the data handled for changing the speed of data flow, i.e. speed regularising or timing, e.g. delay lines, FIFO buffers; over- or underrun control therefor having a sequence of storage locations each being individually accessible for both enqueue and dequeue operations, e.g. using random access memory

Definitions

  • the invention relates to a queuing apparatus having a queuing engine and to a method for queuing Packet Descriptors.
  • High-speed queuing systems are required in many applications, especially in high-speed routers and switches of networks. When many inputs are connected to one output, it is necessary to provide a queuing engine, especially in cases where the ingress data rate becomes higher than the egress data rate.
  • Conventional queuing engines provide either ingress queuing or use higher-frequency queuing systems. However, both approaches have disadvantages. Ingress queues use much more memory and have disadvantages like higher head-of-line blocking (HOL blocking) and the necessity of providing a complex traffic management.
  • HOL blocking: head-of-line blocking.
  • the use of high-frequency queuing systems using a high-frequency clock causes an increase of power consumption and moreover leads to scalability problems in terms of timing closure.
  • a queuing apparatus having a queuing engine is provided, the queuing engine comprising a predetermined number, K, of queues, Q, wherein
  • each queue has a number, N, of sub-queues, SQ, associated to a corresponding number, N, of input lanes of said queuing engine,
  • each sub-queue, SQ, is used for storing Packet Descriptors applied by each associated input lane of said queuing engine and
  • a shared descriptor memory adapted to store all Descriptors of the sub-queues, SQ, of the predetermined number, K, of queues, Q, of this queuing engine,
  • each sub-queue, SQ, is adapted to store a maximum number, M, of Packet Descriptors applied to the associated input lane of said sub-queue, SQ, and wherein each sub-queue, SQ, can use the entire shared descriptor memory,
  • each enqueuing unit is adapted to enqueue Packet Descriptors applied to the respective input lane during each system clock cycle of the system clock signal, CLK, applied to the queuing engine into the corresponding sub-queues, SQ, of the respective input lane.
  • the queuing engine further comprises a dequeuing unit adapted to dequeue a descriptor from any of the queues, Q, during the system clock cycle of the system clock signal, CLK, applied to said queuing engine.
  • the queuing apparatus further comprises a scheduler adapted to select a queue, Q, for being dequeued by the dequeuing unit of the queuing engine.
  • one of the sub-queues, SQ, of the selected queue, Q is selected by the memory controller using a sequencing function, SF.
  • the sequencing function, SF, used by the memory controller comprises a First-in-First-Out, FIFO, sequencing function.
  • the sequencing function, SF used by the memory controller comprises a Round Robin sequencing function, wherein a different sub-queue, SQ, is selected among the N sub-queues, SQ, of a queue, Q, each time a queue, Q, is selected by the scheduler for being dequeued by the dequeuing unit of the queuing engine.
  • the sequencing function, SF used by the memory controller comprises a deficit Round Robin sequencing function.
  • a shared descriptor memory comprises a multi-write-single read memory system.
  • the multi-write-single read memory system comprises a control logic adapted to receive a number, n, of write requests and to receive a read request within a clock cycle of the system clock signal, CLK.
  • the multi-write-single read memory system further comprises n+1 memory banks adapted to store data, wherein n is an integer number.
  • a control logic of the multi-write-single read memory system is adapted to control memory bank occupancy levels, MBOLs, of each memory bank such that the differences between the memory bank occupancy levels, MBOLs, of the memory banks are minimized.
  • the queuing engine further comprises a memory controller adapted to receive queued data comprising Descriptors from said enqueuing unit and to supply dequeuing data comprising Descriptors to the dequeuing unit.
  • the memory controller is connected to the control logic of the multi-write-single read memory system forming the shared descriptor memory of the queuing engine.
  • a memory bank of the multi-write-single read memory system is formed by a single port random access memory RAM.
  • the queuing engine comprises a next pointer memory formed by a multi-write-single-read memory system connected to the memory controller.
  • the queuing engine further comprises a free buffer pool memory formed by a multi-write-single read memory system connected to the memory controller.
  • a queuing engine comprises a time stamp memory formed by a multi-write-single read memory system connected to the memory controller.
  • a time stamp memory is provided for storing time stamps, TS, attached to Packet Descriptors of packets received via an input lane of the queuing engine.
  • the enqueuing units and the dequeuing units of the queuing engine have access to a queue information memory which stores information data of queues, Q.
  • for each received packet, a time stamp, TS, is attached to its Packet Descriptor, wherein the time stamp, TS, indicates an arrival time of the respective packet.
  • a Packet Descriptor comprises a memory address to address payload data of the received packet stored in a shared data memory of the queuing apparatus.
  • a traffic management device comprising a queuing apparatus having a queuing engine according to the first aspect of the present invention.
  • an integrated circuit comprising a queuing apparatus having a queuing engine according to the first aspect of the present invention.
  • Fig. 1 shows a block diagram of a possible implementation of a queuing apparatus having a queuing engine according to the first aspect of the present invention
  • Fig. 2 shows a block diagram of a possible implementation of a queuing engine provided in the queuing apparatus as shown in Fig. 1;
  • Fig. 3 shows a diagram for illustrating a logical view of a single queue employed by the queuing apparatus according to the first aspect of the present invention
  • Fig. 4 shows a diagram for illustrating an aggregation of queues of a butterfly queuing mechanism as employed by a queuing apparatus according to the first aspect of the present invention.
  • the queuing apparatus 1 comprises in the shown implementation as its core element a queuing engine 2 connected to a scheduler 3 via a data bus.
  • the queuing apparatus 1 further comprises a shared data memory 4 connected to a data storage unit 5 and to a data retrieval unit 6.
  • the queuing engine 2 is adapted to receive N Descriptors from the data storage unit 5 and to output one descriptor to the data retrieval unit 6.
  • a Packet Descriptor is a set of information that describes the packet. Packet Descriptors can hold all kinds of information on a data packet. For instance, the Packet Descriptor comprises a pointer to a data-memory in which the packet is stored.
  • instead of the packet itself, only the Packet Descriptor is sent to the queuing engine.
  • the packet itself is stored in a data-memory. There could be systems in which this kind of descriptor would be enough. However, there could be cases in which the descriptor can hold more information, e.g. a packet ID. In systems in which the packet length varies from one packet to another, the packet length might be added to the Packet Descriptor in order to enable the scheduler 3 to compute exactly the amount of bytes really scheduled.
  • a Packet Descriptor can include the header of a packet itself.
  • the Packet Descriptor can include one or several data fields of the header within the packet. There are many variants of Packet Descriptors; the above are merely examples.
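As an illustration of such a variant, a Packet Descriptor can be modelled as a small record. The field names below are examples chosen for this sketch, not taken from the patent; only the pointer into the shared data memory is essential.

```python
from dataclasses import dataclass

@dataclass
class PacketDescriptor:
    # Illustrative fields only; the patent requires at least a pointer
    # into the shared data memory, the rest are optional extensions.
    data_ptr: int        # address of the payload in the shared data memory
    length: int          # lets the scheduler count the bytes really scheduled
    packet_id: int = 0   # optional packet identifier
    time_stamp: int = 0  # arrival time, used by the FIFO sequencing function

d = PacketDescriptor(data_ptr=0x1000, length=1500, time_stamp=7)
assert d.length == 1500
```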
  • the data storage unit 5 is connected via N input lanes of the queuing engine 2 to supply N Descriptors, e.g. Packet Descriptors, as shown in Fig. 1 to the queuing engine 2.
  • the data packets are stored in the shared data memory 4 and a reference to the stored data (usually a pointer) is put inside a Packet Descriptor which is sent to the queuing engine 2.
  • the queuing engine 2 comprises a predetermined number, K, of queues Q, wherein each queue Q has a number, N, of sub-queues, SQ, associated to the corresponding number N of input lanes connected to the queuing engine 2.
  • Each sub-queue, SQ is used for storing Packet Descriptors applied by each associated input lane of the queuing engine 2.
  • the scheduler 3 is adapted to select a queue among the K queues for dequeue. It is recognized by a person skilled in the art that there are many state-of-the-art implementations of the scheduler 3. On each cycle the queuing engine 2 pops the Packet Descriptor at the head of the queue that is selected by the scheduler 3. This Packet Descriptor is then sent to the data retrieval unit 6, which looks up the data packet of the popped Packet Descriptor from the shared data memory 4 and sends it to the output lane.
  • a queue is a particular kind of collection in which the entities in the collection are kept in order and the principal operations on the collection are the addition of entities to the queue and the removal of entities from the queue. For instance, for a First-In-First-Out (FIFO) queue, the first entity added to the queue will be the first one to be removed. This is equivalent to the requirement that once an entity is added, all entities that were added before it have to be removed before the new entity can be removed.
  • a queue is an example of a linear data structure. Taking the FIFO queue as an example, an operation of dequeuing is the removal of one or more entities from the front terminal position of the queue, while an operation of enqueuing is the addition of one or more entities at the rear terminal position of the queue. It is clear to a person skilled in the art that the operations of dequeuing and enqueuing apply to any type of queue and are not limited to FIFO queues.
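The enqueue and dequeue operations described above can be sketched in a few lines of Python; this is a generic FIFO illustration, not the patent's hardware implementation:

```python
from collections import deque

# Minimal FIFO queue: enqueue at the rear, dequeue from the front.
q = deque()

def enqueue(item):
    q.append(item)        # add at the rear terminal position

def dequeue():
    return q.popleft()    # remove from the front terminal position

enqueue("D1")
enqueue("D2")
enqueue("D3")
assert dequeue() == "D1"  # first in, first out
assert dequeue() == "D2"
```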
  • the input lane is a bus of data, in the form of packets that streams into the queuing engine 2.
  • each path of enqueuing is called an input lane.
  • An implementation of the queuing engine 2 shown in Fig. 1 is illustrated in more detail by the block diagram of Fig. 2.
  • for each of the N input lanes, a corresponding enqueuing unit 7-1, 7-2 . . . 7-N is provided.
  • Each enqueuing unit 7-i is adapted to enqueue Packet Descriptors applied to the respective input lane during each system clock cycle of the system clock signal CLK to the queuing engine 2 into a corresponding sub-queue SQ of the respective input lane.
  • the queuing engine 2 further comprises a dequeuing unit 8 adapted to dequeue a descriptor from any of the queues Q during the system clock cycle of the system clock signal CLK applied to the queuing engine 2.
  • the queuing engine 2 comprises in the shown implementation of Fig. 2 a memory controller 9 adapted to receive queued data comprising Descriptors from the enqueuing units 7 and to supply dequeuing data comprising Descriptors to the dequeuing unit 8.
  • the queuing engine 2 comprises a shared descriptor memory 10 adapted to store all Descriptors of the sub-queues SQ of the predetermined number, K, of queues, Q, of the queuing engine 2, wherein during each system clock cycle of the system clock signal CLK applied to the queuing engine 2 up to N input lanes can request a write operation to queues Q and up to one read operation from any queue Q.
  • the shared descriptor memory 10 can comprise a multi-write-single read memory system comprising a control logic adapted to receive a number, n, of write requests WRs and to receive a read request RR within a clock cycle of the system clock signal CLK.
  • the multi-write-single read memory system can comprise n+1 memory banks adapted to store data, wherein n is an integer number.
  • the control logic of the multi-write-single read memory system can be adapted to control the memory bank occupancy levels MBOLs of each memory bank such that the differences between the memory bank occupancy levels MBOLs of the memory banks are minimized.
  • the memory controller 9 as shown in Fig. 2 can be connected to the control logic of the multi-write-single read memory system forming the shared descriptor memory 10 of the queuing engine 2.
  • Each memory bank of the multi-write-single read memory system can be formed by a single port random access memory RAM.
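One way such control logic could keep the memory bank occupancy levels balanced is to direct each write to the least-occupied bank that is still free in the current cycle, while the bank serving the single read is busy. The following Python sketch is an assumption about the policy, since the patent only states the balancing goal; all names are illustrative.

```python
def assign_writes(occupancy, n_writes, read_bank=None):
    """Assign each of n_writes to a distinct single-port bank,
    skipping the bank busy with this cycle's read, and always
    picking the least-occupied remaining bank so that the
    memory bank occupancy levels (MBOLs) stay balanced."""
    free = [b for b in range(len(occupancy)) if b != read_bank]
    chosen = []
    for _ in range(n_writes):
        b = min(free, key=lambda i: occupancy[i])  # least-occupied bank
        chosen.append(b)
        occupancy[b] += 1
        free.remove(b)  # single-port RAM: one access per bank per cycle
    return chosen

# n = 3 write requests plus one read per cycle -> n + 1 = 4 banks suffice.
occ = [5, 2, 7, 2]
banks = assign_writes(occ, n_writes=3, read_bank=0)  # bank 0 serves the read
assert sorted(banks) == [1, 2, 3]  # each write hits a distinct bank
```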
  • the enqueuing units 7-i and the dequeuing unit 8 of the queuing engine 2 can have access to a queue information memory 11 which stores information about the queues Q.
  • the queue information memory 11 can hold details on each of the sub-queues SQ.
  • the detail can be a combination of the size of the sub-queue SQ, a read pointer and a write pointer.
  • the queue information memory 11 comprises at least the head of queue and tail of queue pointers.
  • the queue information memory 11 can further comprise information about the queue size and state information.
  • the queue information memory 11 can store further information about each sub-queue SQ.
  • Each enqueuing unit 7-i of the queuing engine 2 receives a descriptor, determines the queue, Q, it needs to insert the data into, and reads the queue control information, such as head of queue, tail of queue and queue size, from the queue information memory 11.
  • the enqueuing unit 7-i then sends a command to the memory controller 9 requesting to store the descriptor and to update the next pointer memory at the same address as the newly stored descriptor, receiving in return the vacant address that was used for the descriptor and the next pointer.
  • the enqueuing unit 7-i then can update the queue information memory 11 with the latest data. Concurrently, in case that the queue, Q, changed its state, for example from empty to non-empty, the enqueuing unit 7-i updates the scheduler 3 accordingly.
  • the dequeuing unit 8 receives in a possible implementation a queue number from the scheduler 3 and reads the queue information memory 11. According to the received queue information such as the queue head the dequeuing unit 8 can request the memory controller 9 to retrieve the Packet Descriptor at the head of the queue from shared descriptor memory 10 and send it to the output of the queuing engine 2. Further, the dequeuing unit 8 can update the scheduler 3 about the dequeue process. In a further possible embodiment further memories are connected to the memory controller 9. In a possible implementation the queuing engine 2 comprises a next pointer memory that can also be formed by a multi-write-single-read memory system connected to the memory controller 9. The next pointer memory is used to maintain multi-linked lists in one memory.
  • the queuing engine 2 further comprises a free buffer pool memory which can be formed by a multi-write-single-read memory system connected to the memory controller 9.
  • the free buffer pool memory can be used to track descriptor buffers that are vacant and can be used for incoming Packet Descriptors. Upon enqueuing the buffer is marked as occupied and upon dequeuing the buffer is marked as vacant.
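The cooperation of the shared descriptor memory, the next pointer memory and the free buffer pool described above can be sketched as multi-linked lists held in one shared array. This Python model is illustrative only: it omits the per-cycle parallelism and uses assumed names throughout.

```python
class SharedDescriptorQueues:
    """Sketch of K linked-list queues sharing one descriptor array,
    with a parallel next-pointer array and a free buffer pool."""

    def __init__(self, size, num_queues):
        self.desc = [None] * size        # shared descriptor memory
        self.nxt = [None] * size         # next pointer memory
        self.free = list(range(size))    # free buffer pool (vacant addresses)
        self.head = [None] * num_queues  # per-queue control information
        self.tail = [None] * num_queues

    def enqueue(self, q, descriptor):
        addr = self.free.pop()           # buffer becomes occupied
        self.desc[addr] = descriptor
        self.nxt[addr] = None
        if self.head[q] is None:
            self.head[q] = addr          # queue was empty
        else:
            self.nxt[self.tail[q]] = addr  # link behind the old tail
        self.tail[q] = addr

    def dequeue(self, q):
        addr = self.head[q]
        descriptor = self.desc[addr]
        self.head[q] = self.nxt[addr]    # advance the head pointer
        if self.head[q] is None:
            self.tail[q] = None
        self.free.append(addr)           # buffer is marked vacant again
        return descriptor

eng = SharedDescriptorQueues(size=8, num_queues=2)
eng.enqueue(0, "A"); eng.enqueue(1, "B"); eng.enqueue(0, "C")
assert eng.dequeue(0) == "A"
assert eng.dequeue(0) == "C"
assert eng.dequeue(1) == "B"
```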
  • the queuing engine further comprises a time stamp memory which can be formed by a multi-write-single-read memory system connected to the memory controller 9.
  • the time stamp memory is provided for storing time stamps TS attached to the Packet Descriptors of packets received via an input lane by the queuing engine 2.
  • a time stamp TS is attached to its Packet Descriptor, wherein the time stamp TS indicates an arrival time of the respective packet.
  • the Packet Descriptor can comprise a memory address to address the received packet stored in the shared data memory 4 of the queuing apparatus 1. It is a preferred option that the Packet Descriptor holds the pointer to the data packet inside the shared data memory, otherwise there will be no way to pull the data packet information when the Packet Descriptor is taken out of the queue.
  • a queuing mechanism provided by the queuing apparatus 1 as shown in Fig. 1 fulfils a function to receive control words or Packet Descriptors describing each data segment such as a packet and to store them in a set of queues Q. Under control of a decision engine, i.e. the scheduler 3, Packet Descriptors are dequeued from the head of one of the queues Q out of the queuing apparatus. The data segments themselves can be stored in the shared data memory 4 prior to the enqueuing process.
  • the scheduler 3 as shown in Fig. 1 is adapted to select a queue Q for being dequeued by the dequeuing unit 8 of the queuing engine 2.
  • the scheduler 3 is configured to select a queue Q.
  • the scheduler 3 does not even need to know that a queue is represented by a set of sub-queues SQ.
  • the memory controller 9 inside the queuing engine 2 chooses the correct sub-queue SQ of the queue that was decided by the scheduler 3.
  • the queuing engine 2 selects the sub-queue SQ according to one of the following sequencing functions SF: a First-in-First-Out FIFO sequencing function, a Round Robin sequencing function or a deficit Round Robin sequencing function.
  • the sequencing function SF comprises a First-in-First-Out FIFO sequencing function, wherein the sub-queue SQ among the N sub-queues SQ of the selected queue Q is selected which comprises the descriptor in its head of queue to which a minimum time stamp TS is attached.
  • This mechanism allows for packets to be selected based on the arrival time. The first arriving packet to any sub-queue SQ is selected to be dequeued. In this mode, as shown in Fig. 4, a time stamp TS is required to be attached to each arriving Packet Descriptor of any sub-queue SQ.
  • This implementation mimics a logical FIFO for each of the queues Q.
  • the sequencing function SF comprises a Round Robin sequencing function, wherein a different sub-queue SQ is selected among the N sub-queues of a queue Q each time a queue Q is selected by the scheduler 3 for being dequeued by the dequeuing unit 8 of the queuing engine 2.
  • This implementation can be used, if there is a need to sequence the arriving data traffic between different sources.
  • a different sub-queue SQ is selected each time the queue Q is selected for being dequeued.
  • the sequencing function SF comprises a deficit Round Robin sequencing function. If the sub-queues SQ contain packets of different sizes, a plain Round Robin mechanism may not be fair. The deficit Round Robin mechanism performs the Round Robin fairly, such that each sub-queue SQ accounts for bytes or data volume and not packets.
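A minimal Python sketch of the deficit Round Robin idea, counting bytes rather than packets; the quantum value, the per-round structure and all names are assumptions made for illustration, since the patent does not spell out the algorithm:

```python
from collections import deque

def drr_round(sub_queues, deficits, quantum):
    """One deficit Round Robin round over N sub-queues of packet sizes:
    each non-empty sub-queue earns a byte quantum and may dequeue head
    packets while its deficit covers them, so fairness is measured in
    bytes rather than packets."""
    served = []
    for i, sq in enumerate(sub_queues):
        if not sq:
            continue
        deficits[i] += quantum
        while sq and sq[0] <= deficits[i]:  # head packet fits in the credit
            size = sq.popleft()
            deficits[i] -= size
            served.append((i, size))
        if not sq:
            deficits[i] = 0  # an emptied sub-queue forfeits leftover credit
    return served

sqs = [deque([300, 300]), deque([500])]
deficits = [0, 0]
out = drr_round(sqs, deficits, quantum=500)
# Lane 0 sends one 300-byte packet (deficit 500 -> 200, the second 300-byte
# packet must wait); lane 1 sends its 500-byte packet.
assert out == [(0, 300), (1, 500)]
```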
  • the queuing mechanism employed by the queuing engine 2 makes it possible to insert several Packet Descriptors together into the same or different queues Q within the same clock cycle, while also allowing a dequeue in the same clock cycle.
  • the number of lanes, i.e. the maximum number of enqueue operations during the same clock cycle, is denoted by the parameter N.
  • the number of queues Q in the system is denoted by the parameter K.
  • a butterfly queuing engine 2 is provided having a shared pool of memory for data and replicated control information.
  • Each queue Q in the queuing mechanism provided by the queuing engine 2 is represented by a set of N sub-queues SQ.
  • Each sub-queue SQ represents a source such as an input port of the queuing apparatus 1.
  • the Packet Descriptors are stored in the relevant sub-queue SQ such that N arriving packets can be stored at the same time into N separate sub-queues SQ.
  • when a queue Q is selected by the scheduler 3 for a dequeue, one of the sub-queues SQ of the queue Q can be selected by the memory controller 9 inside the queuing engine 2.
  • the selection between the sub-queues SQ of a queue Q can be performed by using different methods or sequencing functions SF comprising a First-In-First-Out FIFO or a Round Robin sequencing function.
  • the queuing engine 2 can use other selection functions SF for selection of a sub-queue SQ within a queue Q as well.
  • Fig. 3 shows a diagram for illustrating a logical view on a single queue Qi.
  • a corresponding sub-queue SQ within a queue Q is provided for storing Packet Descriptors D, wherein each sub-queue SQ is connected to a multiplexing logic MUX controlled by a sub-queue select logic SQSL as shown in Fig. 3 which evaluates for example time stamps TS associated with the data Descriptors D.
  • the MUX and the SQSL, being logical elements in the logical view shown in Fig. 3, are used to logically represent the functionality performed by the memory controller 9 of Fig. 2.
  • Fig. 4 is a logical view (conceptual view) of the scheme that is implemented by the queuing engine 2 as shown in Fig. 2.
  • Fig. 2 is an implementation of the logical view illustrated in Figure 4.
  • An aggregation of K queues Q, each having N sub-queues SQ, is illustrated by the diagram of Fig. 4.
  • the input lane i corresponds to the sub-queue SQ i of each queue Q.
  • a corresponding sub-queue SQ i within each queue Q is provided for storing Packet Descriptors D, wherein each sub-queue SQ is connected to a multiplexing logic MUX controlled by a sub-queue select logic SQSL as shown in Fig. 3.
  • a time stamp TS is attached to each sub-queue SQ within each queue Q.
  • the time stamps TS attached to each sub-queue SQ constitute inputs of the SQSL.
  • the SQSL can evaluate time stamps TS associated with the Packet Descriptors D.
  • An output of each queue Q is coupled to a further MUX.
  • Figure 4 shows that each sub-queue SQ is selected according to a certain function SQSL, while each queue Q is selected according to the decision that is passed to the queuing engine 2 from the scheduler 3.
  • the queuing engine 2 uses N x K queues which are implemented using a shared descriptor memory. This kind of high-level division between the different queues Q makes it possible to share a descriptor memory between the different queues Q.
  • the shared descriptor memory is implemented using a plurality of multi-write-single-read memories.
  • the multi-write-single-read memories can be used to hold a Packet Descriptor database which forms a major part of the stored data.
  • the rest of the data, such as queue control information like a queue read and write pointer, is duplicated per lane.
  • the queue control information forms only a minor part of the stored data and comprises a constant data volume which is not dependent on data traffic and is scalable.
  • when a queue Q is selected by the scheduler 3 for a dequeue phase, there can be packets ready to be delivered from more than one sub-queue SQ of the selected queue Q. Therefore, one sub-queue SQ is selected by the memory controller 9 inside the queuing engine 2, and the selection between the sub-queues SQ can be performed by a predetermined sequencing function SF.
  • the sequencing function SF used by the memory controller 9 can be changed in response to a control signal, e.g. to change an operation mode of the queuing apparatus 1.
  • the queuing apparatus 1 shown in Fig. 1 can be implemented in a traffic management device according to a second aspect of the present invention.
  • the traffic management device can be, for example, a switch device or a router device of a network.
  • the queuing apparatus 1 as shown in Fig. 1 can be integrated in an integrated circuit forming a third aspect of the present invention.
  • the integrated circuit can be a chip comprising a queuing apparatus 1 as shown in Fig. 1.
  • the invention further provides according to a fourth aspect a method for queuing Packet Descriptors received on each system clock cycle of the system clock signal CLK from a number N of input lanes concurrently using a queuing engine 2, wherein the queuing engine 2 comprises a predetermined number K of queues Q each having a number N of sub-queues SQ associated to the corresponding input lanes of the queuing engine 2.
  • All Descriptors of the sub-queues SQ of the predetermined number K of queues Q are stored in a shared descriptor memory 10 of the queuing engine 2 being adapted to store up to N Descriptors and to retrieve one descriptor per system clock cycle of the system clock signal CLK.
  • during each system clock cycle of the system clock signal CLK, Packet Descriptors applied to the N input lanes are written in parallel into the shared descriptor memory 10 and a single descriptor is read from the shared descriptor memory 10.
  • the method for queuing Descriptors can be performed by executing a control program having instructions to perform the method according to the fourth aspect of the present invention.
  • This control program can be stored in a possible implementation in a program memory.
  • this control program can be loaded from a data carrier storing such a control program.
  • a method and apparatus for sharing memory in a queuing system that can withstand multiple writes and a single read in every system clock cycle can be used in a VLSI application.
  • the queuing apparatus 1 uses a multi-port shared memory to create a queuing system. It also reduces the silicon area needed when integrating the queuing apparatus 1 on a chip. Furthermore, a lower power consumption is achieved by using a lower frequency.
  • the queue size is limited only by the total descriptor memory.
  • a single queue Q can use the entire descriptor memory.
  • Different inputs of the queuing apparatus 1 can send data to the same queue Q within the same clock cycle.
  • a single queue Q is represented by N sub-queues SQ.
  • the input lane is connected to a specific sub-queue SQ.
  • a queue Q is selected by the scheduler 3 and a sub-queue SQ is selected using a predetermined sequencing function SF.
  • the queuing apparatus 1 allows for a less complex scheduler 3, since the scheduler 3 only sees K queues Q.
  • each of the lanes inserts its incoming packets into its own set of sub-queues SQ.
  • a time stamp TS can be used to define an order between sub-queues SQ of the same queue Q.
  • each queue Q comprises four sub-queues SQ.
  • all four head time stamps TS are read and compared by an evaluation unit controlling a multiplexing logic as shown in Fig. 4.
  • the lane number can serve in a possible implementation as a differentiator.
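The FIFO sequencing function with the lane number as differentiator can be sketched as follows; the function name and the use of None for an empty sub-queue are assumptions of this illustration, not the patent's notation:

```python
def select_sub_queue(head_timestamps):
    """FIFO sequencing function: among the sub-queues of the selected
    queue, pick the one whose head descriptor carries the minimum time
    stamp; on equal time stamps the lower lane number serves as the
    differentiator. None marks an empty sub-queue."""
    candidates = [(ts, lane) for lane, ts in enumerate(head_timestamps)
                  if ts is not None]  # skip empty sub-queues
    ts, lane = min(candidates)        # minimum TS, then minimum lane number
    return lane

# Four sub-queues per queue; lanes 1 and 3 share the earliest time stamp 17,
# so the lane number breaks the tie in favour of lane 1.
assert select_sub_queue([42, 17, None, 17]) == 1
```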
  • in the queuing apparatus 1 according to the present invention, only the queue head/tail memory is duplicated, while the main Packet Descriptor memory is not duplicated, thus saving memory space.

PCT/EP2011/072086, filed 2011-12-07: Queuing apparatus, WO2013083191A1 (en)

Priority Applications (3)

Application Number    Priority Date  Filing Date  Title
EP11794131.0A         2011-12-07     2011-12-07   Queuing apparatus (EP2783277A1)
CN201180075404.1A     2011-12-07     2011-12-07   Queuing apparatus (CN103988167A)
PCT/EP2011/072086     2011-12-07     2011-12-07   Queuing apparatus (WO2013083191A1)

Applications Claiming Priority (1)

Application Number    Priority Date  Filing Date  Title
PCT/EP2011/072086     2011-12-07     2011-12-07   Queuing apparatus (WO2013083191A1)

Publications (1)

Publication Number Publication Date
WO2013083191A1 (en) 2013-06-13

Family

ID=45315788

Family Applications (1)

Application Number    Priority Date  Filing Date  Title
PCT/EP2011/072086     2011-12-07     2011-12-07   Queuing apparatus (WO2013083191A1)

Country Status (3)

Country Link
EP (1) EP2783277A1 (zh)
CN (1) CN103988167A (zh)
WO (1) WO2013083191A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001084773A2 (en) * 2000-04-27 2001-11-08 Mosaid Technologies Incorporated Port packet queuing
US20020110139A1 (en) * 2001-02-15 2002-08-15 Jihad Boura Logical multicast packet handling
US20070147404A1 (en) * 2005-12-27 2007-06-28 Lucent Technologies, Inc. Method and apparatus for policing connections using a leaky bucket algorithm with token bucket queuing
EP1811370A2 (en) * 2005-12-30 2007-07-25 STMicroelectronics Pvt. Ltd. A first-in first-out (FIFO) memory architecture providing multiport functionality
US20090034549A1 (en) * 2007-08-01 2009-02-05 Texas Instruments Incorporated Managing Free Packet Descriptors in Packet-Based Communications

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5524265A (en) * 1994-03-08 1996-06-04 Texas Instruments Incorporated Architecture of transfer processor
SE508050C2 (sv) * 1995-11-09 1998-08-17 Ericsson Telefon Ab L M Anordning och förfarande vid paketförmedling
US6134638A (en) * 1997-08-13 2000-10-17 Compaq Computer Corporation Memory controller supporting DRAM circuits with different operating speeds


Also Published As

Publication number Publication date
CN103988167A (zh) 2014-08-13
EP2783277A1 (en) 2014-10-01

Similar Documents

Publication Publication Date Title
US11082366B2 (en) Method and apparatus for using multiple linked memory lists
US8861515B2 (en) Method and apparatus for shared multi-bank memory in a packet switching system
US6272567B1 (en) System for interposing a multi-port internally cached DRAM in a control path for temporarily storing multicast start of packet data until such can be passed
EP1774714B1 (en) Hierarchal scheduler with multiple scheduling lanes
EP0886939B1 (en) Efficient output-request packet switch and method
US20050243829A1 (en) Traffic management architecture
US9602436B2 (en) Switching device
US9769092B2 (en) Packet buffer comprising a data section and a data description section
EP2526478B1 (en) A packet buffer comprising a data section an a data description section
US20070297330A1 (en) Scalable Link-Level Flow-Control For A Switching Device
US7483377B2 (en) Method and apparatus to prioritize network traffic
US6598132B2 (en) Buffer manager for network switch port
US7110405B2 (en) Multicast cell buffer for network switch
US10021035B1 (en) Queuing methods and apparatus in a network device
EP1488600B1 (en) Scheduling using quantum and deficit values
CN114531488A (zh) 一种面向以太网交换器的高效缓存管理系统
EP2754050B1 (en) A method and apparatus for storing data
US8156265B2 (en) Data processor coupled to a sequencer circuit that provides efficient scalable queuing and method
WO2013083191A1 (en) Queuing apparatus
Guesmi et al. Design of a priority active queue management
Alisafaee et al. Architecture of an embedded queue management engine for high-speed network devices
Feng Design of per Flow Queuing Buffer Management and Scheduling for IP Routers
WO2010040983A1 (en) Switching device

Legal Events

Date Code Title Description
121   Ep: the epo has been informed by wipo that ep was designated in this application
      Ref document number: 11794131; Country of ref document: EP; Kind code of ref document: A1
NENP  Non-entry into the national phase
      Ref country code: DE
REEP  Request for entry into the european phase
      Ref document number: 2011794131; Country of ref document: EP
WWE   Wipo information: entry into national phase
      Ref document number: 2011794131; Country of ref document: EP