US20050243829A1 - Traffic management architecture
- Publication number
- US20050243829A1 (application US10/534,346)
- Authority
- US
- United States
- Prior art keywords
- processor
- packets
- sorting
- packet
- exit order
- Prior art date
- Legal status: Abandoned
Classifications
- H04L 47/32—Flow control; Congestion control by discarding or delaying data units, e.g. packets or frames
- G06F 15/80—Architectures of general purpose stored program computers comprising an array of processing units with common control, e.g. single instruction multiple data processors
- G06F 9/466—Multiprogramming arrangements; Transaction processing
- H04L 47/2441—Traffic characterised by specific attributes, e.g. priority or QoS, relying on flow classification, e.g. using integrated services [IntServ]
- H04L 47/50—Queue scheduling
- H04L 47/562—Queue scheduling implementing delay-aware scheduling; Attaching a time tag to queues
- H04L 47/60—Queue scheduling implementing hierarchical scheduling
- H04L 47/6215—Queue scheduling characterised by scheduling criteria; Individual queue per QOS, rate or priority
- H04L 47/624—Queue scheduling characterised by scheduling criteria; Altering the ordering of packets in an individual queue
- H04L 49/90—Packet switching elements; Buffering arrangements
- H04L 49/9042—Buffering arrangements; Separate storage for different parts of the packet, e.g. header and payload
Abstract
Description
- The present invention concerns the management of traffic, such as data and communications traffic, and provides an architecture for a traffic manager that surpasses known traffic management schemes in terms of speed, efficiency and reliability.
- The problem that modern traffic management schemes have to contend with is the sheer volume of traffic. Data arrives at a traffic handler from multiple sources at unknown rates and volumes and has to be received, sorted and passed on "on the fly" to the next items of handling downstream. Received data may be associated with a number of attributes by which priority allocation, for example, is applied to individual data packets or streams, depending on the class of service offered to an individual client. Some traffic may therefore have to be queued whilst later-arriving but higher-priority traffic is processed. A router's switch fabric can deliver packets from multiple ingress ports to one of a number of egress ports. The linecard connected to this egress port must then transmit these packets over some communication medium to the next router in the network. The rate of transmission is normally limited to a standard rate. For instance, an OC-768 link would transmit packets over an optical fibre at a rate of 40 Gbits/s.
- With many independent ingress paths delivering packets for transmission at egress, the time-averaged rate of delivery cannot exceed 40 Gbits/s for this example. Although over time the input and output rates are equivalent, the short-term delivery of traffic by the fabric is "bursty" in nature, with rates often peaking above the 40 Gbits/s threshold. Since the rate of receipt can be greater than the rate of transmission, short-term packet queueing is required at egress to prevent packet loss. A simple FIFO queue is adequate for this purpose in routers which provide a flat grade of service to all packets. However, more complex schemes are required in routers which provide Traffic Management. In a converged internetwork, different end-user applications require different grades of service in order to run effectively. Email can be carried on a best-effort service where no guarantees are made regarding rate of delivery or delay. Real-time voice data has a much more demanding requirement for reserved transmission bandwidth and guaranteed minimum delay in delivery. This cannot be achieved if all traffic is buffered in the same FIFO queue. A queue per so-called "Class of Service" is required so that traffic routed through higher-priority queues can bypass that in lower-priority queues. Certain queues may also be assured a guaranteed portion of the available output line bandwidth. At first sight the traffic handling task appears to be straightforward. Packets are placed in queues according to their required class of service. For every forwarding treatment that a system provides, a queue must be implemented. These queues are then managed by the following mechanisms:
-
- Queue management assigns buffer space to queues and prevents overflow
- Measures are implemented to cause traffic sources to slow their transmission rates if queues become backlogged
- Scheduling controls the de-queuing process by dividing the available output line bandwidth between the queues.
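- As an illustration of the scheduling mechanism just listed, the sketch below divides the output bandwidth between queues using deficit round robin; the class names, quantum and packet sizes are assumed example values and are not taken from the specification.

```python
from collections import deque

def drr_schedule(queues, quantum=1500, rounds=4):
    """Minimal deficit round robin (DRR) sketch: each backlogged queue earns a
    quantum of byte credit per round and may dequeue packets while it has credit."""
    deficits = {name: 0 for name in queues}
    sent = []
    for _ in range(rounds):
        for name, q in queues.items():
            if not q:
                deficits[name] = 0              # an empty queue forfeits its credit
                continue
            deficits[name] += quantum
            while q and q[0] <= deficits[name]:
                size = q.popleft()
                deficits[name] -= size
                sent.append((name, size))       # (queue, packet size in bytes)
    return sent

# Hypothetical per-class queues holding packet sizes in bytes.
queues = {"voice": deque([200, 200, 200]),
          "video": deque([1200, 1200]),
          "best_effort": deque([1500, 1500, 1500])}
print(drr_schedule(queues))
```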
- Different service levels can be provided by weighting the amount of bandwidth and buffer space allocated to different queues, and by prioritised packet dropping in times of congestion. Weighted Fair Queuing (WFQ), Deficit Round Robin (DRR) scheduling and Weighted Random Early Detect (WRED) are just a few of the many algorithms which might be employed to perform these scheduling and congestion avoidance tasks. In reality, system realisation is confounded by some difficult implementation issues:
-
- High line speeds can cause large packet backlogs to develop rapidly during brief congestion events. Large memories, of the order of 500 MBytes to 1 GByte, are required for 40 Gbits/s line rates.
- The packet arrival rate can be very high due to overspeed in the packet delivery from the switch fabric. This demands high data read and write bandwidth into memory. More importantly, high address bandwidth is also required.
- The processing overhead of some scheduling and congestion avoidance algorithms is high.
- Priority queue ordering for some (FQ) scheduling algorithms is a non-trivial problem at high speeds.
-
- A considerable volume of state must be maintained in support of scheduling and congestion avoidance algorithms, to which low latency access is required. The volume of state increases with the number of queues implemented.
- As new standards and algorithms emerge, the specification is a moving target. To find a flexible (ideally programmable) solution is therefore a high priority.
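- One reason a programmable solution is attractive is that such algorithms are straightforward to express in software. Purely as a hedged illustration of the per-queue state a congestion avoidance algorithm must maintain, the sketch below makes a simplified WRED-style drop decision; the thresholds, averaging weight and maximum drop probability are arbitrary example values.

```python
import random

class WredState:
    """Per-queue state for a simplified WRED-style drop decision (illustrative only)."""
    def __init__(self, min_th=10, max_th=40, max_p=0.1, weight=0.25):
        self.avg = 0.0                       # exponentially weighted average depth
        self.min_th, self.max_th = min_th, max_th
        self.max_p, self.weight = max_p, weight

    def should_drop(self, current_depth):
        # Update the moving average of the instantaneous queue depth.
        self.avg += self.weight * (current_depth - self.avg)
        if self.avg < self.min_th:
            return False                     # light load: always enqueue
        if self.avg >= self.max_th:
            return True                      # heavy load: always drop
        # Drop probability ramps linearly between the two thresholds.
        p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        return random.random() < p

state = WredState()
print([state.should_drop(depth) for depth in (10, 30, 50, 70, 90)])
```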
- In a conventional approach to traffic scheduling, one might typically place packets directly into an appropriate queue on arrival, and then subsequently dequeue packets from those queues into an output stream.
- FIG. 1 shows the basic layout of the current approach to traffic management. It can be thought of as a "queue first, think later" strategy. Data received at the input 1 is split into a number of queues in parallel channels 2.1 to 2.n. A traffic scheduler processor 3 receives the data from the parallel channels and sorts them into order. The order may be determined by the priority attributes, for example, mentioned above. State is stored in memory 4 accessible by the processor. The output from the processor represents the new order as determined by the processor in dependence on the quality of service attributes assigned to the data at the outset.
- The traffic scheduler 3 determines the order of de-queuing. Since the scheduling decision can be processing-intensive as the number of input queues increases, queues are often arranged into small groups which are locally scheduled into an intermediate output queue.
- This output queue is then the input queue to a following scheduling stage. The scheduling problem is thus simplified using a “divide-and-conquer” approach, whereby high performance can be achieved through parallelism between groups of queues in a tree type structure, or so-called hierarchical link sharing scheme.
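- A minimal sketch of this divide-and-conquer arrangement is given below, with two locally scheduled groups feeding a second-stage scheduler; the group membership and the simple round-robin policy are illustrative assumptions only.

```python
from collections import deque

def round_robin(queues):
    """Drain a group of queues in round-robin order (a stand-in for any local scheduler)."""
    out = deque()
    while any(queues):
        for q in queues:
            if q:
                out.append(q.popleft())
    return out

# Stage 1: groups of input queues are locally scheduled into intermediate queues.
group_a = [deque(["a1", "a2"]), deque(["a3"])]
group_b = [deque(["b1"]), deque(["b2", "b3"])]
intermediate = [round_robin(group_a), round_robin(group_b)]

# Stage 2: the intermediate queues are themselves scheduled onto the output line.
print(list(round_robin(intermediate)))      # interleaves the two groups
```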
- This approach works in hardware up to a point. For the exceptionally large numbers of input queues (of the order of 64 k) required for per-flow traffic handling, the first stage becomes unmanageably wide, to the point that it is impractical to implement the required number of schedulers.
- Alternatively, in systems which aggregate all traffic into a small number of queues, parallelism between hardware schedulers cannot be exploited. It then becomes extremely difficult to implement a single scheduler—even in optimised hardware—that can meet the required performance point.
- With other congestion avoidance and queue management tasks to perform in addition to scheduling, it is apparent that a new approach to traffic handling is required. The "queue first, think later" strategy often fails, and data simply has to be jettisoned. There is therefore a need for an approach to traffic management that does not suffer from the same defects as the prior art and does not introduce its own fallibilities.
- In one aspect, the invention provides a system comprising means for sorting incoming data packets in real time before said packets are stored in memory.
- In another aspect, the invention provides a data packet handling system, comprising means whereby incoming data packets are assigned an exit order before being stored in memory.
- In yet another aspect, the invention provides a method for sorting incoming data packets in real time, comprising sorting the packets into an exit order before storing them in memory.
- The sorting means may be responsive to information contained within a packet and/or within a table and/or information associated with a data packet stream in which said packet is located, whereby to determine an exit order number for that packet. The packets may be inserted into one or more queues by a queue manager adapted to insert packets into the queue means in exit order. There may be means to drop certain packets before being output from said queue means or before being queued in the queue means.
- The system may be such that the sorting means and the queue means process only packet records containing information about the packets, whereas data portions of the packets are stored in the memory for output in accordance with an exit order determined for the corresponding packet record.
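- The separation can be pictured with the sketch below; the field names and widths are hypothetical and serve only to show a fixed-length record that refers to a separately stored data portion.

```python
from dataclasses import dataclass

@dataclass
class PacketRecord:
    """Fixed-length record carrying only what the sorting stage needs; the data
    portion itself stays in packet memory and is referenced by a handle."""
    flow_id: int       # identifies the stream the packet belongs to
    length: int        # size of the stored data portion in bytes
    cos: int           # class-of-service index used for table lookups
    exit_order: int    # exit order number assigned by the sorting stage
    data_handle: int   # pointer/handle into packet memory

record = PacketRecord(flow_id=7, length=1500, cos=3, exit_order=0, data_handle=0x4A00)
print(record)
```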
- The sorting means preferably comprises a parallel processor, such as an array processor, more preferably a SIMD processor.
- There may be further means to provide access for the parallel processors to shared state. A state engine may control access to the shared state.
- Tables of information for sorting said packets or said packet records may be provided, wherein said tables are stored locally to each processor or to each processor element of a parallel processor. The tables may be the same on each processor or on each processor element of a parallel processor. The tables may be different on different processors or on different processor elements of a parallel processor.
- The processors or processor elements may share information from their respective tables, such that: (a) the information held in the table for one processor is directly accessible by a different processor or the information held in the table in one processor element may be accessible by other processing element(s) of the processor; and (b) processors may have access to tables in other processors or processor elements have access to other processor elements in the processor, whereby processors or processor elements can perform table lookups on behalf of other processor(s) or processor elements of the processor.
- The invention also encompasses a computer system, comprising a data handling system as previously specified; a network processing system, comprising a data handling system as previously specified; and a data carrier containing program means adapted to perform a corresponding method.
- The invention will be described with reference to the following drawings, in which:
- FIG. 1 is a schematic representation of a prior art traffic handler, and
- FIG. 2 is a schematic representation of a traffic handler in accordance with the invention.
- The present invention turns current thinking on its head.
- FIG. 2 shows schematically the basic structure underlying the new strategy for effective traffic management. It could be described as a "think first, queue later™" strategy.
- Packet data (traffic) received at the input 20 has the header portions stripped off and record portions of fixed length generated therefrom, containing information about the data, so that the record portions and the data portions can be handled separately. Thus, the data portions take the lower path and are stored in Memory Hub 21. At this stage, no attempt is made to organise the data portions in any particular order. However, the record portions are passed to a processor 22, such as a SIMD parallel processor, comprising one or more arrays of processor elements (PEs). Typically, each PE contains its own processor unit, local memory and register(s).
- In contrast to the prior architecture outlined in FIG. 1, the present architecture shares state 23 in the PE arrays under the control of a State Engine (not shown) communicating with the PE array(s). It should be emphasised that only the record portions are processed in the PE array. The record portions are all the same length, so their handling is predictable, at least in terms of length.
- The record portions are handled in the processor 22. Here, information about the incoming packets is distributed amongst the PEs in the array. This array basically performs the same function as the processor 3 in the prior art (FIG. 1), but the operations are spread over the PE array for vastly more rapid processing. This processing effectively "time-stamps" the packet records to indicate when the corresponding data should be exited, assuming that it should actually be exited and not jettisoned, for example. The results of this processing are sent to the orderlist manager 24, which is an "intelligent" queue system that places the record portions in the appropriate exit order, for example in bins allocated to groups of data exit order numbers. The manager 24 is preferably dynamic, so that new data packets with exit numbers having a higher priority than those already in an appropriate exit number bin can take over the position previously allocated. It should be noted that the PE array 22 simply calculates the order in which the data portions are to be output; the record portions themselves do not have to be put in that order. In other words, the PEs do not have to maintain the order of packets being processed, nor sort them before they are queued.
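- One way to picture such an orderlist manager is sketched below: records are dropped into bins keyed by ranges of exit order numbers and drained in ascending order, so a late-arriving record with a smaller exit number overtakes records already held. The bin width and data structures are assumptions for illustration.

```python
import heapq
from collections import defaultdict

class OrderlistManager:
    """Sketch of an exit-order queue: records are binned by exit-number range
    and read out in ascending exit order. The bin width is an example value."""
    def __init__(self, bin_width=16):
        self.bin_width = bin_width
        self.bins = defaultdict(list)          # bin index -> heap of (exit_order, record)

    def insert(self, exit_order, record):
        heapq.heappush(self.bins[exit_order // self.bin_width], (exit_order, record))

    def drain(self):
        for index in sorted(self.bins):
            while self.bins[index]:
                yield heapq.heappop(self.bins[index])[1]

olm = OrderlistManager()
olm.insert(40, "pkt-A")
olm.insert(3, "pkt-B")
olm.insert(5, "pkt-C")            # arrives last but exits before pkt-A
print(list(olm.drain()))          # ['pkt-B', 'pkt-C', 'pkt-A']
```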
- Previous systems in which header and data portions were treated as one entity became unwieldy, slow and cumbersome because of the innate difficulty of preserving the integrity of the whole packet yet still providing enough bandwidth to handle the combination. In the present invention, it is only necessary for the Memory Hub 21 to provide sufficient bandwidth to handle just the data portions. The memory hub can keep up with packets streaming in at real-time rates. The memory hub can nevertheless divide larger data portions into fragments, if necessary, and store them in physically different locations, provided, of course, there are pointers to the different fragments to ensure read out of the entire content of such data packets.
- In order to overcome the problem of sharing state over all the PEs in the array, multiple PEs are permitted to access (and modify) the state variables. Such access is under the control of a State Engine (not shown), which automatically handles the "serialisation" problem of parallel access to shared state.
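- The fragmentation idea can be sketched as below: a large data portion is split into fixed-size fragments stored at arbitrary free locations and chained together by pointers so that the whole packet can be read back, and freed, in order. The fragment size and the simple address allocator are assumptions, not details from the specification.

```python
FRAGMENT_SIZE = 64        # bytes per fragment; an arbitrary example value

class MemoryHub:
    """Stores data portions as chains of fixed-size fragments linked by pointers."""
    def __init__(self):
        self.store = {}                 # address -> (fragment bytes, next address or None)
        self.next_free = 0              # trivial address allocator for the sketch

    def write(self, data: bytes) -> int:
        """Store one data portion and return the address of its first fragment."""
        count = (len(data) + FRAGMENT_SIZE - 1) // FRAGMENT_SIZE
        addresses = [self.next_free + i for i in range(count)]
        self.next_free += count
        for i, addr in enumerate(addresses):
            chunk = data[i * FRAGMENT_SIZE:(i + 1) * FRAGMENT_SIZE]
            nxt = addresses[i + 1] if i + 1 < count else None
            self.store[addr] = (chunk, nxt)
        return addresses[0]

    def read(self, head: int) -> bytes:
        """Follow the pointer chain, reassembling the data portion and freeing fragments."""
        out, addr = bytearray(), head
        while addr is not None:
            chunk, addr = self.store.pop(addr)
            out.extend(chunk)
        return bytes(out)

hub = MemoryHub()
head = hub.write(b"x" * 200)            # 200-byte data portion -> 4 linked fragments
assert hub.read(head) == b"x" * 200
```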
- The output 25, in dependence on the exit order queue held in the Orderlist Manager 24, instructs the Memory Hub 21 to read out the corresponding packets in that required order, thereby releasing memory locations for newly received data packets in the process.
- The chain-dotted line 26 enclosing the PE array 22, shared state/State Engine 23 and Orderlist Manager 24 signifies that this combination of elements can be placed on a single chip and that this chip can be replicated, so that there may be one or two (or more) chips interfacing with single input 20, output 25 and Memory Hub 21. As is customary, the chip will also include necessary additional components, such as a distributor and a collector per PE array to distribute data to the individual PEs and to collect processed data from the PEs, plus semaphore block(s) and interface elements.
- The following features are significant to the new architecture:
-
- There are no separate, physical stage one input queues.
- Packets are effectively sorted directly into the output queue on arrival. A group of input queues thus exists in the sense of being interleaved together within the single output queue.
- These interleaved “input queues” are represented by state in the queue state engine. This state may track queue occupancy, finish time/number of the last packet in the queue etc. Occupancy can be used to determine whether or not a newly arrived packet should be placed in the output queue or whether it should be dropped (congestion management). Finish numbers are used to preserve the order of the “input queues” within the output queue and determine an appropriate position in the output queue for newly arrived packets (scheduling).
- Scheduling and congestion avoidance decisions are thus made "on the fly" prior to enqueuing (i.e. "Think first, queue later"™), as sketched below.
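- A minimal sketch of this arrival path is given below, assuming a WFQ-style finish number, a simple occupancy limit per virtual "input queue" and a sorted orderlist; none of the constants or policies are taken from the specification.

```python
import bisect

class VirtualQueueState:
    """State kept per interleaved 'input queue': occupancy and last finish number."""
    def __init__(self, weight=1.0, max_occupancy=64):
        self.weight, self.max_occupancy = weight, max_occupancy
        self.occupancy, self.last_finish = 0, 0.0

class ThinkFirstScheduler:
    """Drop and scheduling decisions are made before the packet is enqueued;
    the record is then inserted directly into a single ordered output list."""
    def __init__(self):
        self.queues = {}
        self.orderlist = []                 # kept sorted by finish number
        self.virtual_time = 0.0

    def on_arrival(self, flow_id, length, weight=1.0):
        state = self.queues.setdefault(flow_id, VirtualQueueState(weight))
        if state.occupancy >= state.max_occupancy:
            return False                    # congestion management: drop before enqueuing
        # A WFQ-style finish number preserves per-flow order within the orderlist.
        start = max(self.virtual_time, state.last_finish)
        state.last_finish = start + length / state.weight
        state.occupancy += 1
        bisect.insort(self.orderlist, (state.last_finish, flow_id, length))
        return True

    def dequeue(self):
        finish, flow_id, length = self.orderlist.pop(0)
        self.virtual_time = finish
        self.queues[flow_id].occupancy -= 1
        return flow_id, length

sched = ThinkFirstScheduler()
for flow, size in [(1, 1500), (2, 200), (1, 1500), (2, 200)]:
    sched.on_arrival(flow, size, weight=1.0 if flow == 1 else 4.0)
print([sched.dequeue() for _ in range(4)])   # the lighter, higher-weight flow exits first
```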
- This technique is made possible by the deployment of a high performance data flow processor which can perform the required functions at wire speed. Applicant's array processor is ideal for this purpose, providing a large number of processing cycles per packet for packets arriving at rates as high as one every couple of system clock cycles.
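- To put "one every couple of system clock cycles" in context, the back-of-envelope calculation below assumes a 40 Gbits/s line, 40-byte minimum-size packets and an illustrative 250 MHz system clock; none of these figures are specified here.

```python
line_rate_bps = 40e9            # OC-768 payload rate, as in the example above
min_packet_bits = 40 * 8        # 40-byte minimum-size packet (assumption)
clock_hz = 250e6                # illustrative system clock rate (assumption)

packets_per_second = line_rate_bps / min_packet_bits     # 125 million packets/s
cycles_per_packet = clock_hz / packets_per_second        # = 2 clock cycles per packet
print(packets_per_second, cycles_per_packet)
```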
- Ancillary Features
- Class of Service (CoS) Tables:
- CoS parameters are used in scheduling and congestion avoidance calculations. They are conventionally read by processors as a fixed group of values from a class of service table in a shared memory. This places further demands on system bus and memory access bandwidth. The table size also limits the number of different classes of service which may be stored.
- An intrinsic capability of Applicant's array processor is rapid, parallel local memory access. This can be used to advantage as follows:
-
- The Class of Service table is mapped into each PE's memory. This means that passive state does not require lookup from external memory. The enormous internal memory addressing bandwidth of the SIMD processor is utilised.
- By performing multiple lookups into local memories in a massively parallel fashion, instead of single large lookups from a shared external table, a huge number of different Class of Service combinations becomes available from a relatively small volume of memory.
- Table sharing between PEs—PEs can perform proxy lookups on behalf of each other. A single CoS table can therefore be split across two PEs, thus halving the memory requirement.
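- The table-sharing idea is sketched below: each PE holds half of a class-of-service table in its local memory and proxies lookups for the other half to its paired neighbour, so one table serves two PEs. The table contents and the pairing scheme are illustrative assumptions.

```python
class ProcessorElement:
    """Holds only the half of the CoS table covering its own index range and
    proxies lookups for the other half to a paired neighbour PE."""
    def __init__(self, table_half):
        self.local_table = table_half      # CoS index -> parameters, in local memory
        self.neighbour = None              # paired PE, set after construction

    def lookup(self, cos_index):
        if cos_index in self.local_table:
            return self.local_table[cos_index]
        # Proxy lookup: the neighbouring PE holds the other half of the table.
        return self.neighbour.local_table[cos_index]

# Hypothetical 4-entry CoS table split between two PEs, halving each PE's share.
full_table = {0: {"weight": 1}, 1: {"weight": 2}, 2: {"weight": 4}, 3: {"weight": 8}}
pe0 = ProcessorElement({k: full_table[k] for k in (0, 1)})
pe1 = ProcessorElement({k: full_table[k] for k in (2, 3)})
pe0.neighbour, pe1.neighbour = pe1, pe0

print(pe0.lookup(3))                       # served by proxy from pe1's table half
```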
- It can thus be appreciated that the present invention is capable of providing the following key features, marking considerable improvements over the prior art:
-
- Traditional packet scheduling involves parallel enqueuing and then serialised scheduling from those queues. For high performance traffic handling we have turned this around. Arriving packets are first processed in parallel and subsequently enqueued in a serial orderlist. This is referred to as “Think First Queue Later”™
- The deployment of a single pipeline parallel processing architecture (Applicant's array processor) is innovative in a Traffic Handling application. It provides the wire speed processing capability which is essential for the implementation of this concept.
- An alternate form of parallelism (compared to independent parallel schedulers) is thus exploited in order to solve the processing issues in high speed Traffic Handling.
Claims (47)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GBGB0226249.1A GB0226249D0 (en) | 2002-11-11 | 2002-11-11 | Traffic handling system |
PCT/GB2003/004893 WO2004045162A2 (en) | 2002-11-11 | 2003-11-11 | Traffic management architecture |
Publications (1)
Publication Number | Publication Date |
---|---|
US20050243829A1 true US20050243829A1 (en) | 2005-11-03 |
Family
ID=9947583
Family Applications (5)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/534,343 Expired - Fee Related US7843951B2 (en) | 2002-11-11 | 2003-11-11 | Packet storage system for traffic handling |
US10/534,308 Expired - Fee Related US7522605B2 (en) | 2002-11-11 | 2003-11-11 | Data packet handling in computer or communication systems |
US10/534,346 Abandoned US20050243829A1 (en) | 2002-11-11 | 2003-11-11 | Traffic management architecture |
US10/534,430 Expired - Fee Related US7882312B2 (en) | 2002-11-11 | 2003-11-11 | State engine for data processor |
US12/955,684 Expired - Fee Related US8472457B2 (en) | 2002-11-11 | 2010-11-29 | Method and apparatus for queuing variable size data packets in a communication system |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/534,343 Expired - Fee Related US7843951B2 (en) | 2002-11-11 | 2003-11-11 | Packet storage system for traffic handling |
US10/534,308 Expired - Fee Related US7522605B2 (en) | 2002-11-11 | 2003-11-11 | Data packet handling in computer or communication systems |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/534,430 Expired - Fee Related US7882312B2 (en) | 2002-11-11 | 2003-11-11 | State engine for data processor |
US12/955,684 Expired - Fee Related US8472457B2 (en) | 2002-11-11 | 2010-11-29 | Method and apparatus for queuing variable size data packets in a communication system |
Country Status (5)
Country | Link |
---|---|
US (5) | US7843951B2 (en) |
CN (4) | CN100557594C (en) |
AU (4) | AU2003283559A1 (en) |
GB (5) | GB0226249D0 (en) |
WO (4) | WO2004045162A2 (en) |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030041163A1 (en) * | 2001-02-14 | 2003-02-27 | John Rhoades | Data processing architectures |
US20060156316A1 (en) * | 2004-12-18 | 2006-07-13 | Gray Area Technologies | System and method for application specific array processing |
US20060206761A1 (en) * | 2003-09-12 | 2006-09-14 | Jeddeloh Joseph M | System and method for on-board timing margin testing of memory modules |
US20070300105A1 (en) * | 2004-06-04 | 2007-12-27 | Micron Technology Inc. | Memory hub tester interface and method for use thereof |
US20090080379A1 (en) * | 2007-09-25 | 2009-03-26 | Mitsuhiro Takashima | Communication Equipment |
US7913122B2 (en) | 2003-08-19 | 2011-03-22 | Round Rock Research, Llc | System and method for on-board diagnostics of memory modules |
US20110069716A1 (en) * | 2002-11-11 | 2011-03-24 | Anthony Spencer | Method and apparatus for queuing variable size data packets in a communication system |
US20110170557A1 (en) * | 2010-01-08 | 2011-07-14 | Nvidia Corporation | System and Method for Traversing a Treelet-Composed Hierarchical Structure |
US8589643B2 (en) | 2003-10-20 | 2013-11-19 | Round Rock Research, Llc | Arbitration system and method for memory responses in a hub-based memory system |
US20140376564A1 (en) * | 2013-06-19 | 2014-12-25 | Huawei Technologies Co., Ltd. | Method and apparatus for implementing round robin scheduling |
US20170013332A1 (en) * | 2014-02-27 | 2017-01-12 | National Institute Of Information And Communications Technology | Optical delay line and electronic buffer merged-type optical packet buffer control device |
US10972398B2 (en) * | 2016-08-27 | 2021-04-06 | Huawei Technologies Co., Ltd. | Method and apparatus for processing low-latency service flow |
Families Citing this family (50)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6944636B1 (en) * | 2004-04-30 | 2005-09-13 | Microsoft Corporation | Maintaining time-date information for syncing low fidelity devices |
US8316431B2 (en) * | 2004-10-12 | 2012-11-20 | Canon Kabushiki Kaisha | Concurrent IPsec processing system and method |
US20060101210A1 (en) * | 2004-10-15 | 2006-05-11 | Lance Dover | Register-based memory command architecture |
US7855974B2 (en) * | 2004-12-23 | 2010-12-21 | Solera Networks, Inc. | Method and apparatus for network packet capture distributed storage system |
US20100195538A1 (en) * | 2009-02-04 | 2010-08-05 | Merkey Jeffrey V | Method and apparatus for network packet capture distributed storage system |
US7392229B2 (en) * | 2005-02-12 | 2008-06-24 | Curtis L. Harris | General purpose set theoretic processor |
US7746784B2 (en) * | 2006-03-23 | 2010-06-29 | Alcatel-Lucent Usa Inc. | Method and apparatus for improving traffic distribution in load-balancing networks |
US8065249B1 (en) | 2006-10-13 | 2011-11-22 | Harris Curtis L | GPSTP with enhanced aggregation functionality |
US7774286B1 (en) | 2006-10-24 | 2010-08-10 | Harris Curtis L | GPSTP with multiple thread functionality |
US8166212B2 (en) * | 2007-06-26 | 2012-04-24 | Xerox Corporation | Predictive DMA data transfer |
US7830918B2 (en) * | 2007-08-10 | 2010-11-09 | Eaton Corporation | Method of network communication, and node and system employing the same |
US8004998B2 (en) * | 2008-05-23 | 2011-08-23 | Solera Networks, Inc. | Capture and regeneration of a network data using a virtual software switch |
US8625642B2 (en) | 2008-05-23 | 2014-01-07 | Solera Networks, Inc. | Method and apparatus of network artifact indentification and extraction |
US8521732B2 (en) | 2008-05-23 | 2013-08-27 | Solera Networks, Inc. | Presentation of an extracted artifact based on an indexing technique |
US20090292736A1 (en) * | 2008-05-23 | 2009-11-26 | Matthew Scott Wood | On demand network activity reporting through a dynamic file system and method |
JP5300355B2 (en) * | 2008-07-14 | 2013-09-25 | キヤノン株式会社 | Network protocol processing apparatus and processing method thereof |
US9213665B2 (en) * | 2008-10-28 | 2015-12-15 | Freescale Semiconductor, Inc. | Data processor for processing a decorated storage notify |
US8627471B2 (en) * | 2008-10-28 | 2014-01-07 | Freescale Semiconductor, Inc. | Permissions checking for data processing instructions |
EP2409143A4 (en) | 2009-03-18 | 2017-11-08 | Texas Research International, Inc. | Environmental damage sensor |
US8266498B2 (en) | 2009-03-31 | 2012-09-11 | Freescale Semiconductor, Inc. | Implementation of multiple error detection schemes for a cache |
WO2011060368A1 (en) * | 2009-11-15 | 2011-05-19 | Solera Networks, Inc. | Method and apparatus for storing and indexing high-speed network traffic data |
US20110125748A1 (en) * | 2009-11-15 | 2011-05-26 | Solera Networks, Inc. | Method and Apparatus for Real Time Identification and Recording of Artifacts |
US8295287B2 (en) * | 2010-01-27 | 2012-10-23 | National Instruments Corporation | Network traffic shaping for reducing bus jitter on a real time controller |
US8990660B2 (en) | 2010-09-13 | 2015-03-24 | Freescale Semiconductor, Inc. | Data processing system having end-to-end error correction and method therefor |
US8504777B2 (en) | 2010-09-21 | 2013-08-06 | Freescale Semiconductor, Inc. | Data processor for processing decorated instructions with cache bypass |
US8667230B1 (en) | 2010-10-19 | 2014-03-04 | Curtis L. Harris | Recognition and recall memory |
KR20120055779A (en) * | 2010-11-23 | 2012-06-01 | 한국전자통신연구원 | System and method for communicating audio data based zigbee and method thereof |
KR20120064576A (en) * | 2010-12-09 | 2012-06-19 | 한국전자통신연구원 | Apparatus for surpporting continuous read/write in asymmetric storage system and method thereof |
US8849991B2 (en) | 2010-12-15 | 2014-09-30 | Blue Coat Systems, Inc. | System and method for hypertext transfer protocol layered reconstruction |
US8666985B2 (en) | 2011-03-16 | 2014-03-04 | Solera Networks, Inc. | Hardware accelerated application-based pattern matching for real time classification and recording of network traffic |
US8566672B2 (en) | 2011-03-22 | 2013-10-22 | Freescale Semiconductor, Inc. | Selective checkbit modification for error correction |
US8607121B2 (en) | 2011-04-29 | 2013-12-10 | Freescale Semiconductor, Inc. | Selective error detection and error correction for a memory interface |
US8990657B2 (en) | 2011-06-14 | 2015-03-24 | Freescale Semiconductor, Inc. | Selective masking for error correction |
US9100291B2 (en) | 2012-01-31 | 2015-08-04 | Db Networks, Inc. | Systems and methods for extracting structured application data from a communications link |
US9525642B2 (en) | 2012-01-31 | 2016-12-20 | Db Networks, Inc. | Ordering traffic captured on a data connection |
US9092318B2 (en) * | 2012-02-06 | 2015-07-28 | Vmware, Inc. | Method of allocating referenced memory pages from a free list |
US9665233B2 (en) * | 2012-02-16 | 2017-05-30 | The University Utah Research Foundation | Visualization of software memory usage |
EP2944055A4 (en) | 2013-01-11 | 2016-08-17 | Db Networks Inc | Systems and methods for detecting and mitigating threats to a structured data storage system |
CA2928595C (en) * | 2013-12-04 | 2019-01-08 | Db Networks, Inc. | Ordering traffic captured on a data connection |
US10210592B2 (en) | 2014-03-30 | 2019-02-19 | Teoco Ltd. | System, method, and computer program product for efficient aggregation of data records of big data |
WO2016145405A1 (en) * | 2015-03-11 | 2016-09-15 | Protocol Insight, Llc | Intelligent packet analyzer circuits, systems, and methods |
KR102449333B1 (en) | 2015-10-30 | 2022-10-04 | 삼성전자주식회사 | Memory system and read request management method thereof |
WO2017164804A1 (en) * | 2016-03-23 | 2017-09-28 | Clavister Ab | Method for traffic shaping using a serial packet processing algorithm and a parallel packet processing algorithm |
JP6943942B2 (en) | 2016-03-23 | 2021-10-06 | クラビスター アクティエボラーグ | A method of performing traffic shaping by using a sequential packet processing algorithm and a parallel packet processing algorithm. |
WO2018081582A1 (en) * | 2016-10-28 | 2018-05-03 | Atavium, Inc. | Systems and methods for random to sequential storage mapping |
CN107656895B (en) * | 2017-10-27 | 2023-07-28 | 上海力诺通信科技有限公司 | Orthogonal platform high-density computing architecture with standard height of 1U |
RU2718215C2 (en) * | 2018-09-14 | 2020-03-31 | Общество С Ограниченной Ответственностью "Яндекс" | Data processing system and method for detecting jam in data processing system |
US11138044B2 (en) * | 2018-09-26 | 2021-10-05 | Micron Technology, Inc. | Memory pooling between selected memory resources |
US11093403B2 (en) | 2018-12-04 | 2021-08-17 | Vmware, Inc. | System and methods of a self-tuning cache sizing system in a cache partitioning system |
EP3866417A1 (en) * | 2020-02-14 | 2021-08-18 | Deutsche Telekom AG | Method for an improved traffic shaping and/or management of ip traffic in a packet processing system, telecommunications network, network node or network element, program and computer program product |
Family Cites Families (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4914650A (en) * | 1988-12-06 | 1990-04-03 | American Telephone And Telegraph Company | Bandwidth allocation and congestion control scheme for an integrated voice and data network |
JP2596718B2 (en) * | 1993-12-21 | 1997-04-02 | インターナショナル・ビジネス・マシーンズ・コーポレイション | How to manage network communication buffers |
SE9803901D0 (en) * | 1998-11-16 | 1998-11-16 | Ericsson Telefon Ab L M | a device for a service network |
US6246682B1 (en) * | 1999-03-05 | 2001-06-12 | Transwitch Corp. | Method and apparatus for managing multiple ATM cell queues |
US6574231B1 (en) * | 1999-05-21 | 2003-06-03 | Advanced Micro Devices, Inc. | Method and apparatus for queuing data frames in a network switch port |
US6671292B1 (en) * | 1999-06-25 | 2003-12-30 | Telefonaktiebolaget Lm Ericsson (Publ) | Method and system for adaptive voice buffering |
DE60038538T2 (en) * | 2000-02-28 | 2009-06-25 | Alcatel Lucent | Switching device and conciliation procedure |
US7139282B1 (en) * | 2000-03-24 | 2006-11-21 | Juniper Networks, Inc. | Bandwidth division for packet processing |
US6647477B2 (en) * | 2000-10-06 | 2003-11-11 | Pmc-Sierra Ltd. | Transporting data transmission units of different sizes using segments of fixed sizes |
US6871780B2 (en) * | 2000-11-27 | 2005-03-29 | Airclic, Inc. | Scalable distributed database system and method for linking codes to internet information |
US20020126659A1 (en) * | 2001-03-07 | 2002-09-12 | Ling-Zhong Liu | Unified software architecture for switch connection management |
US6728857B1 (en) * | 2001-06-20 | 2004-04-27 | Cisco Technology, Inc. | Method and system for storing and retrieving data using linked lists |
US20030145086A1 (en) * | 2002-01-29 | 2003-07-31 | O'reilly James | Scalable network-attached storage system |
US6862639B2 (en) * | 2002-03-11 | 2005-03-01 | Harris Corporation | Computer system including a receiver interface circuit with a scatter pointer queue and related methods |
US7239608B2 (en) * | 2002-04-26 | 2007-07-03 | Samsung Electronics Co., Ltd. | Router using measurement-based adaptable load traffic balancing system and method of operation |
US20040039884A1 (en) * | 2002-08-21 | 2004-02-26 | Qing Li | System and method for managing the memory in a computer system |
WO2005036839A2 (en) * | 2003-10-03 | 2005-04-21 | Avici Systems, Inc. | Rapid alternate paths for network destinations |
US7668100B2 (en) * | 2005-06-28 | 2010-02-23 | Avaya Inc. | Efficient load balancing and heartbeat mechanism for telecommunication endpoints |
-
2002
- 2002-11-11 GB GBGB0226249.1A patent/GB0226249D0/en not_active Ceased
-
2003
- 2003-11-11 CN CNB200380108223XA patent/CN100557594C/en not_active Expired - Fee Related
- 2003-11-11 GB GB0511588A patent/GB2413031B/en not_active Expired - Fee Related
- 2003-11-11 WO PCT/GB2003/004893 patent/WO2004045162A2/en not_active Application Discontinuation
- 2003-11-11 AU AU2003283559A patent/AU2003283559A1/en not_active Abandoned
- 2003-11-11 CN CN2003801085295A patent/CN1736068B/en not_active Expired - Fee Related
- 2003-11-11 AU AU2003283545A patent/AU2003283545A1/en not_active Abandoned
- 2003-11-11 GB GB0511589A patent/GB2412035B/en not_active Expired - Fee Related
- 2003-11-11 US US10/534,343 patent/US7843951B2/en not_active Expired - Fee Related
- 2003-11-11 CN CN2003801084790A patent/CN1736066B/en not_active Expired - Fee Related
- 2003-11-11 WO PCT/GB2003/004854 patent/WO2004045160A2/en not_active Application Discontinuation
- 2003-11-11 GB GB0511587A patent/GB2412537B/en not_active Expired - Fee Related
- 2003-11-11 US US10/534,308 patent/US7522605B2/en not_active Expired - Fee Related
- 2003-11-11 CN CN2003801085308A patent/CN1736069B/en not_active Expired - Fee Related
- 2003-11-11 WO PCT/GB2003/004867 patent/WO2004044733A2/en not_active Application Discontinuation
- 2003-11-11 WO PCT/GB2003/004866 patent/WO2004045161A1/en not_active Application Discontinuation
- 2003-11-11 AU AU2003283539A patent/AU2003283539A1/en not_active Abandoned
- 2003-11-11 US US10/534,346 patent/US20050243829A1/en not_active Abandoned
- 2003-11-11 GB GB0509997A patent/GB2411271B/en not_active Expired - Fee Related
- 2003-11-11 US US10/534,430 patent/US7882312B2/en not_active Expired - Fee Related
- 2003-11-11 AU AU2003283544A patent/AU2003283544A1/en not_active Abandoned
2010
- 2010-11-29 US US12/955,684 patent/US8472457B2/en not_active Expired - Fee Related
Patent Citations (52)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5187780A (en) * | 1989-04-07 | 1993-02-16 | Digital Equipment Corporation | Dual-path computer interconnect system with zone manager for packet memory |
US5751987A (en) * | 1990-03-16 | 1998-05-12 | Texas Instruments Incorporated | Distributed processing memory chip with embedded logic having both data memory and broadcast memory |
US5280483A (en) * | 1990-08-09 | 1994-01-18 | Fujitsu Limited | Traffic control system for asynchronous transfer mode exchange |
US6094715A (en) * | 1990-11-13 | 2000-07-25 | International Business Machines Corporation | SIMD/MIMD processing synchronization |
US5822608A (en) * | 1990-11-13 | 1998-10-13 | International Business Machines Corporation | Associative parallel processing system |
US5768275A (en) * | 1994-08-31 | 1998-06-16 | Brooktree Corporation | Controller for ATM segmentation and reassembly |
US5513134A (en) * | 1995-02-21 | 1996-04-30 | Gte Laboratories Incorporated | ATM shared memory switch with content addressing |
US5633865A (en) * | 1995-03-31 | 1997-05-27 | Netvantage | Apparatus for selectively transferring data packets between local area networks |
US6160814A (en) * | 1997-05-31 | 2000-12-12 | Texas Instruments Incorporated | Distributed shared-memory packet switch |
US20010021967A1 (en) * | 1997-06-30 | 2001-09-13 | Tetrick Raymond S. | Method and apparatus for arbitrating deferred read requests |
US5956340A (en) * | 1997-08-05 | 1999-09-21 | Ramot University Authority For Applied Research And Industrial Development Ltd. | Space efficient fair queuing by stochastic Memory multiplexing |
US6088771A (en) * | 1997-10-24 | 2000-07-11 | Digital Equipment Corporation | Mechanism for reducing latency of memory barrier operations on a multiprocessor system |
US6052375A (en) * | 1997-11-26 | 2000-04-18 | International Business Machines Corporation | High speed internetworking traffic scaler and shaper |
US6097403A (en) * | 1998-03-02 | 2000-08-01 | Advanced Micro Devices, Inc. | Memory including logic for operating upon graphics primitives |
US20020051458A1 (en) * | 1998-04-24 | 2002-05-02 | Avici Systems | Composite trunking |
US20020075882A1 (en) * | 1998-05-07 | 2002-06-20 | Marc Donis | Multiple priority buffering in a computer network |
US6314489B1 (en) * | 1998-07-10 | 2001-11-06 | Nortel Networks Limited | Methods and systems for storing cell data using a bank of cell buffers |
US6356546B1 (en) * | 1998-08-11 | 2002-03-12 | Nortel Networks Limited | Universal transfer method and network with distributed switch |
US6829218B1 (en) * | 1998-09-15 | 2004-12-07 | Lucent Technologies Inc. | High speed weighted fair queuing system for ATM switches |
US6396843B1 (en) * | 1998-10-30 | 2002-05-28 | Agere Systems Guardian Corp. | Method and apparatus for guaranteeing data transfer rates and delays in data packet networks using logarithmic calendar queues |
US6993027B1 (en) * | 1999-03-17 | 2006-01-31 | Broadcom Corporation | Method for sending a switch indicator to avoid out-of-ordering of frames in a network switch |
US6643298B1 (en) * | 1999-11-23 | 2003-11-04 | International Business Machines Corporation | Method and apparatus for MPEG-2 program ID re-mapping for multiplexing several programs into a single transport stream |
US7342887B1 (en) * | 1999-11-24 | 2008-03-11 | Juniper Networks, Inc. | Switching device |
US6662263B1 (en) * | 2000-03-03 | 2003-12-09 | Multi Level Memory Technology | Sectorless flash memory architecture |
US20010021174A1 (en) * | 2000-03-06 | 2001-09-13 | International Business Machines Corporation | Switching device and method for controlling the routing of data packets |
US6907041B1 (en) * | 2000-03-07 | 2005-06-14 | Cisco Technology, Inc. | Communications interconnection network with distributed resequencing |
US20010024446A1 (en) * | 2000-03-21 | 2001-09-27 | Craig Robert George Alexander | System and method for adaptive, slot-mapping input/output queuing for TDM/TDMA systems |
US20020031086A1 (en) * | 2000-03-22 | 2002-03-14 | Welin Andrew M. | Systems, processes and integrated circuits for improved packet scheduling of media over packet |
US20020064156A1 (en) * | 2000-04-20 | 2002-05-30 | Cyriel Minkenberg | Switching arrangement and method |
US20050163049A1 (en) * | 2000-05-17 | 2005-07-28 | Takeki Yazaki | Packet shaper |
US20020036984A1 (en) * | 2000-06-02 | 2002-03-28 | Fabio Chiussi | Method and apparatus for guaranteeing data transfer rates and enforcing conformance with traffic profiles in a packet network |
US20020012348A1 (en) * | 2000-07-26 | 2002-01-31 | Nec Corporation | Router device and priority control method for use in the same |
US20020118689A1 (en) * | 2000-09-27 | 2002-08-29 | Luijten Ronald P. | Switching arrangement and method with separated output buffers |
US20020062415A1 (en) * | 2000-09-29 | 2002-05-23 | Zarlink Semiconductor N.V. Inc. | Slotted memory access method |
US20040213291A1 (en) * | 2000-12-14 | 2004-10-28 | Beshai Maged E. | Compact segmentation of variable-size packet streams |
US7035212B1 (en) * | 2001-01-25 | 2006-04-25 | Optim Networks | Method and apparatus for end to end forwarding architecture |
US7382787B1 (en) * | 2001-07-30 | 2008-06-03 | Cisco Technology, Inc. | Packet routing and switching device |
US6996117B2 (en) * | 2001-09-19 | 2006-02-07 | Bay Microsystems, Inc. | Vertical instruction and data processing in a network processor architecture |
US20050167648A1 (en) * | 2001-09-21 | 2005-08-04 | Chang-Hasnain Connie J. | Variable semiconductor all-optical buffer using slow light based on electromagnetically induced transparency |
US20030081623A1 (en) * | 2001-10-27 | 2003-05-01 | Amplify.Net, Inc. | Virtual queues in a single queue in the bandwidth management traffic-shaping cell |
US20070171900A1 (en) * | 2001-11-13 | 2007-07-26 | Beshai Maged E | Data Burst Scheduling |
US20040022094A1 (en) * | 2002-02-25 | 2004-02-05 | Sivakumar Radhakrishnan | Cache usage for concurrent multiple streams |
US7126959B2 (en) * | 2002-03-12 | 2006-10-24 | Tropic Networks Inc. | High-speed packet memory |
US20030174699A1 (en) * | 2002-03-12 | 2003-09-18 | Van Asten Kizito Gysbertus Antonius | High-speed packet memory |
US20030179644A1 (en) * | 2002-03-19 | 2003-09-25 | Ali Anvar | Synchronous global controller for enhanced pipelining |
US20030188056A1 (en) * | 2002-03-27 | 2003-10-02 | Suresh Chemudupati | Method and apparatus for packet reformatting |
US20030227925A1 (en) * | 2002-06-07 | 2003-12-11 | Fujitsu Limited | Packet processing device |
US20040044815A1 (en) * | 2002-08-28 | 2004-03-04 | Tan Loo Shing | Storage replacement |
US7499456B2 (en) * | 2002-10-29 | 2009-03-03 | Cisco Technology, Inc. | Multi-tiered virtual local area network (VLAN) domain mapping mechanism |
US20050265368A1 (en) * | 2002-11-11 | 2005-12-01 | Anthony Spencer | Packet storage system for traffic handling |
US7522605B2 (en) * | 2002-11-11 | 2009-04-21 | Clearspeed Technology Plc | Data packet handling in computer or communication systems |
US20040117715A1 (en) * | 2002-11-23 | 2004-06-17 | Sang-Hyuck Ha | Method and apparatus for controlling turbo decoder input |
Cited By (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8200686B2 (en) | 2001-02-14 | 2012-06-12 | Rambus Inc. | Lookup engine |
US20070217453A1 (en) * | 2001-02-14 | 2007-09-20 | John Rhoades | Data Processing Architectures |
US8127112B2 (en) * | 2001-02-14 | 2012-02-28 | Rambus Inc. | SIMD array operable to process different respective packet protocols simultaneously while executing a single common instruction stream |
US20030041163A1 (en) * | 2001-02-14 | 2003-02-27 | John Rhoades | Data processing architectures |
US20110083000A1 (en) * | 2001-02-14 | 2011-04-07 | John Rhoades | Data processing architectures for packet handling |
US7917727B2 (en) * | 2001-02-14 | 2011-03-29 | Rambus, Inc. | Data processing architectures for packet handling using a SIMD array |
US7856543B2 (en) * | 2001-02-14 | 2010-12-21 | Rambus Inc. | Data processing architectures for packet handling wherein batches of data packets of unpredictable size are distributed across processing elements arranged in a SIMD array operable to process different respective packet protocols at once while executing a single common instruction stream |
US8472457B2 (en) | 2002-11-11 | 2013-06-25 | Rambus Inc. | Method and apparatus for queuing variable size data packets in a communication system |
US20110069716A1 (en) * | 2002-11-11 | 2011-03-24 | Anthony Spencer | Method and apparatus for queuing variable size data packets in a communication system |
US7913122B2 (en) | 2003-08-19 | 2011-03-22 | Round Rock Research, Llc | System and method for on-board diagnostics of memory modules |
US7958412B2 (en) | 2003-09-12 | 2011-06-07 | Round Rock Research, Llc | System and method for on-board timing margin testing of memory modules |
US7689879B2 (en) | 2003-09-12 | 2010-03-30 | Micron Technology, Inc. | System and method for on-board timing margin testing of memory modules |
US20060206761A1 (en) * | 2003-09-12 | 2006-09-14 | Jeddeloh Joseph M | System and method for on-board timing margin testing of memory modules |
US8589643B2 (en) | 2003-10-20 | 2013-11-19 | Round Rock Research, Llc | Arbitration system and method for memory responses in a hub-based memory system |
US7823024B2 (en) * | 2004-06-04 | 2010-10-26 | Micron Technology, Inc. | Memory hub tester interface and method for use thereof |
US20070300105A1 (en) * | 2004-06-04 | 2007-12-27 | Micron Technology Inc. | Memory hub tester interface and method for use thereof |
US20090193225A1 (en) * | 2004-12-18 | 2009-07-30 | Gray Area Technologies, Inc. | System and method for application specific array processing |
US20060156316A1 (en) * | 2004-12-18 | 2006-07-13 | Gray Area Technologies | System and method for application specific array processing |
US20090080379A1 (en) * | 2007-09-25 | 2009-03-26 | Mitsuhiro Takashima | Communication Equipment |
US8208432B2 (en) * | 2007-09-25 | 2012-06-26 | Hitachi Kokusai Electric Inc. | Communication equipment |
US20110170557A1 (en) * | 2010-01-08 | 2011-07-14 | Nvidia Corporation | System and Method for Traversing a Treelet-Composed Hierarchical Structure |
US8472455B2 (en) * | 2010-01-08 | 2013-06-25 | Nvidia Corporation | System and method for traversing a treelet-composed hierarchical structure |
US20140376564A1 (en) * | 2013-06-19 | 2014-12-25 | Huawei Technologies Co., Ltd. | Method and apparatus for implementing round robin scheduling |
US9571413B2 (en) * | 2013-06-19 | 2017-02-14 | Huawei Technologies Co., Ltd. | Method and apparatus for implementing round robin scheduling |
US20170013332A1 (en) * | 2014-02-27 | 2017-01-12 | National Institute Of Information And Communications Technology | Optical delay line and electronic buffer merged-type optical packet buffer control device |
US9961420B2 (en) * | 2014-02-27 | 2018-05-01 | National Institute Of Information And Communications Technology | Optical delay line and electronic buffer merged-type optical packet buffer control device |
US10972398B2 (en) * | 2016-08-27 | 2021-04-06 | Huawei Technologies Co., Ltd. | Method and apparatus for processing low-latency service flow |
US11616729B2 (en) | 2016-08-27 | 2023-03-28 | Huawei Technologies Co., Ltd. | Method and apparatus for processing low-latency service flow |
Similar Documents
Publication | Title |
---|---|
US20050243829A1 (en) | Traffic management architecture | |
US6959002B2 (en) | Traffic manager for network switch port | |
US8077618B2 (en) | Using burst tolerance values in time-based schedules | |
US7426185B1 (en) | Backpressure mechanism for switching fabric | |
US6785236B1 (en) | Packet transmission scheduling with threshold based backpressure mechanism | |
US6687781B2 (en) | Fair weighted queuing bandwidth allocation system for network switch port | |
US8644327B2 (en) | Switching arrangement and method with separated output buffers | |
US7190674B2 (en) | Apparatus for controlling packet output | |
JP4605911B2 (en) | Packet transmission device | |
US7990858B2 (en) | Method, device and system of scheduling data transport over a fabric | |
KR102082020B1 (en) | Method and apparatus for using multiple linked memory lists | |
US8325736B2 (en) | Propagation of minimum guaranteed scheduling rates among scheduling layers in a hierarchical schedule | |
US7346067B2 (en) | High efficiency data buffering in a computer network device | |
US7483429B2 (en) | Method and system for flexible network processor scheduler and data flow | |
US20050018601A1 (en) | Traffic management | |
GB2339371A (en) | Rate guarantees through buffer management | |
US7522620B2 (en) | Method and apparatus for scheduling packets | |
US7116680B1 (en) | Processor architecture and a method of processing | |
US7269180B2 (en) | System and method for prioritizing and queuing traffic | |
US8879578B2 (en) | Reducing store and forward delay in distributed systems | |
US7623456B1 (en) | Apparatus and method for implementing comprehensive QoS independent of the fabric system | |
US11824791B2 (en) | Virtual channel starvation-free arbitration for switches | |
Benet et al. | Providing in-network support to coflow scheduling | |
EP1774721B1 (en) | Propagation of minimum guaranteed scheduling rates | |
Feng | Design of per Flow Queuing Buffer Management and Scheduling for IP Routers |
Legal Events
Code | Title | Description |
---|---|---|
AS | Assignment | Owner name: CLEARSPEED TECHNOLOGY PLC, UNITED KINGDOM. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNOR: SPENCER, ANTHONY; REEL/FRAME: 016267/0314. Effective date: 20050629 |
AS | Assignment | Owner name: CLEARSPEED TECHNOLOGY LIMITED, UNITED KINGDOM. Free format text: CHANGE OF NAME; ASSIGNOR: CLEARSPEED TECHNOLOGY PLC; REEL/FRAME: 024576/0975. Effective date: 20090729 |
AS | Assignment | Owner name: RAMBUS INC., CALIFORNIA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNOR: CLEARSPEED TECHNOLOGY LTD; REEL/FRAME: 024964/0861. Effective date: 20100818 |
STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |