WO2002023823A9 - Apparatus and methods for processing packets in a broadband data stream - Google Patents

Apparatus and methods for processing packets in a broadband data stream

Info

Publication number
WO2002023823A9
Authority
WO
Grant status
Application
Patent type
Prior art keywords
packet
data
module
channel
packet header
Prior art date
Application number
PCT/US2001/028276
Other languages
French (fr)
Other versions
WO2002023823A1 (en)
Inventor
Rob Liston
Robert Colvin
Jim Jacobsen
Glenn Gracon
Original Assignee
Vivace Networks Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date

Links

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00 Routing or path finding of packets in data switching networks
    • H04L45/74 Address processing for routing
    • H04L45/745 Address table lookup or address filtering
    • H04L45/7453 Address table lookup or address filtering using hashing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00 Packet switching elements
    • H04L49/10 Switching fabric construction
    • H04L49/103 Switching fabric construction using shared central buffer, shared memory, e.g. time switching
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00 Application independent communication protocol aspects or techniques in packet data networks
    • H04L69/22 Header parsing or analysis
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00 Packet switching elements
    • H04L49/30 Peripheral units, e.g. input or output ports
    • H04L49/3009 Header conversion, routing tables or routing tags
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00 Packet switching elements
    • H04L49/30 Peripheral units, e.g. input or output ports
    • H04L49/3018 Input queuing

Abstract

A switch (100) separates the packet headers from the payload, stores the payload using a delay line, and schedules the order in which packets are transmitted using a scheduler (106, 122).

Description

APPARATUS AND METHODS FOR PROCESSING PACKETS IN A BROADBAND DATA STREAM

RELATED APPLICATIONS

This application relates to an application entitled "Apparatus and Methods for Managing Packets in a Broadband Data Stream" filed on ________ bearing serial no. ________, an application entitled "Apparatus and Methods for Scheduling Packets in a Broadband Data Stream" filed on ________ bearing serial no. ________, and an application entitled "Apparatus and Methods for Establishing Virtual Private Networks in a Broadband Network" filed on ________ bearing serial no. ________. These related applications are hereby incorporated by reference for all purposes.

FIELD OF THE INVENTION

This invention relates to apparatus and methods for packet processing in a data stream. In particular, this invention relates to apparatus and methods for packet processing in a broadband data stream.

BACKGROUND OF THE INVENTION

As the Internet evolves into a worldwide commercial data network for electronic commerce and managed public data services, customer demands have increasingly focused on the need for advanced Internet Protocol (IP) services to enhance content hosting, broadcast video, and application outsourcing. To remain competitive, network operators and Internet service providers (ISPs) must resolve two main issues: meeting continually increasing backbone traffic demands and providing a suitable Quality of Service (QoS) for that traffic. Currently, many ISPs have implemented various virtual path techniques to meet these new challenges. Generally, the existing virtual path techniques require a collection of physical overlay networks and equipment. The most common existing virtual path techniques are: optical transport, asynchronous transfer mode (ATM)/frame relay (FR) switched layer, and narrowband internet protocol virtual private networks (IP VPN).

The optical transport technique is the most widely used virtual path technique. Under this technique, an ISP uses point-to-point broadband bit pipes to custom design a point-to-point circuit or network per customer. Thus, this technique requires the ISP to create a new circuit or network whenever a new customer is added. Once a circuit or network for a customer is created, the available bandwidth for that circuit or network remains static.

The ATM/FR switched layer technique provides QoS and traffic engineering via point-to-point virtual circuits. Thus, compared to the optical transport technique, this technique does not require the creation of dedicated physical circuits or networks. Although this technique is an improvement over the optical transport technique, it has several drawbacks. One major drawback of the ATM/FR technique is that this type of network is not scalable. In addition, the ATM/FR technique requires that a virtual circuit be established every time a request to send data is received from a customer.

The narrowband IP VPN technique uses best effort delivery and encrypted tunnels to provide secured paths to the customers. One major drawback of best effort delivery is the lack of any guarantee that a packet will be delivered at all. Thus, this technique is not a good candidate for transmitting critical data.

Thus, it is desirable to provide apparatus and methods that reduce operating costs for service providers by collapsing multiple overlay networks into a multi-service IP backbone. In particular, it is desirable to provide apparatus and methods that allow an ISP to build the network once and sell such network multiple times to multiple customers.

In addition, data packets coming across a network may be encapsulated in different protocol headers or have nested or stacked protocols. Examples of existing protocols are: IP, ATM, FR, IPv4, multi-protocol label switching (MPLS), and Ethernet. Thus, it is further desirable to provide apparatus that are programmable to accommodate existing protocols and to anticipate any future protocols.

SUMMARY OF THE INVENTION

An exemplary method for processing data packets in a data stream comprises the steps of: receiving a set of data packets, each of the data packets having a packet header and a data portion; separating the packet header and the data portion; generating at least one unique identifier; appending the at least one unique identifier to the packet header to obtain a modified packet header; and combining the modified packet header with the data portion.
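The exemplary method above can be sketched in a few lines. This is an illustrative model only, not the patent's hardware implementation; all names and the composition of the unique identifier are hypothetical stand-ins.

```python
from collections import deque

def process_packets(packets):
    """packets: list of (header, payload) tuples.
    Separates header and payload, tags the header with a unique
    identifier, and recombines them in their original order."""
    delay_line = deque()   # holds payloads while headers are processed
    out = []
    for flow_position, (header, payload) in enumerate(packets):
        delay_line.append(payload)          # payload bypasses header parsing
        uid = {                             # hypothetical identifier fields:
            "input_conn": hash(header) & 0xFFFF,  # stand-in input connection ID
            "dest_card": 0,                       # stand-in destination card
            "output_conn": flow_position,         # stand-in output connection ID
        }
        modified_header = (header, uid)     # append identifier to the header
        out.append((modified_header, delay_line.popleft()))  # recombine
    return out
```

Because the delay line is a FIFO and headers are parsed in arrival order, payloads rejoin their own headers and packet order within the flow is preserved.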

In one embodiment, the packet header and the data portion of a data packet are separated by inputting the packet header into a parser array having a set of engines and inputting the data portion into a delay line. In another embodiment, the modified packet header is combined with the data portion by selectively outputting the modified packet header from an engine in the parser array to the delay line and combining the modified packet header and the data portion in the delay line. In yet another embodiment, the selectively outputting step includes the step of utilizing a multiplexing architecture in the parser array. In yet another embodiment, the at least one unique identifier is generated by assigning the data packet to a data flow position and specifying the data flow position in the at least one unique identifier. In an exemplary embodiment, the at least one unique identifier includes an input connection identifier, a destination card, and an output connection identifier.

In one embodiment, the exemplary method further comprises the step of processing header packets of the set of data packets by substantially concurrently: generating a first key for a content addressable memory based on a first packet header in a first module; and performing a set of fixed functions on a second packet header in a second module. In another embodiment, the exemplary method further comprises the steps of substantially concurrently: generating a second key for the content addressable memory based on the second packet header in the first module; and performing the set of fixed functions on the first packet header in the second module.

In yet another embodiment, the exemplary method further comprises the steps of: determining a channel in a multi-channel memory to write the data packet; writing the data packet into the channel; generating a schedule for retrieving the data packet from the channel; and reading the data packet from the channel based on the schedule. In one embodiment, the schedule is generated by determining a bandwidth for a selected channel in the multi-channel memory and scheduling a data packet retrieval from the selected channel in accordance with the bandwidth.

An exemplary apparatus for processing data packets in a data stream, each of the data packets having a packet header and a data portion, comprises a parser array for processing the packet header and a delay line for storing the data portion. The parser array generates at least one unique identifier and appends the at least one unique identifier to the packet header to obtain a modified packet header, and the modified packet header is combined with the data portion in the delay line.

In one embodiment, the parser array includes a set of engines. In one embodiment, the set of engines is connected by a multiplexing architecture that delivers engine output values to the delay line in a coordinated manner. Each of the engines includes a programmable module and a fixed function module. The fixed function module performs a set of fixed functions. For example, the set of fixed functions includes content addressable memory look-ups, cyclic redundancy checks, and read/write functions. In an exemplary embodiment, the programmable module executes a set of instructions to parse the packet header to generate a key for performing a look-up in an external content addressable memory, perform the look-up, generate the at least one unique identifier, and append the unique identifier to the packet header. In another exemplary embodiment, the programmable module includes a packet offset register for indexing instruction source and destination addresses in the packet header.

In one embodiment, each engine in the set of engines is configured to process two data packets substantially concurrently, such that a first data packet is processed by the programmable module and a second data packet is processed by the fixed function module, and the programmable module swaps data packets with the fixed function module when the programmable module completes processing of the first data packet.

In an exemplary embodiment, the apparatus further comprises a first logic module for receiving data packets, a multi-channel memory for storing the data packets, a second logic module for retrieving the data packets from the multi-channel memory, and a scheduler for scheduling data packet retrieval. In one embodiment, a data packet received by the first logic module is stored into a channel in the multi-channel memory, retrieved by the second logic module from the channel in the multi-channel memory based on instructions from the scheduler, and sent to the parser array and the delay line by the second logic module. In another embodiment, each channel in the multi-channel memory has a predetermined bandwidth and the scheduler schedules the data packet retrieval from a channel based on the predetermined bandwidth.

BRIEF DESCRIPTION OF THE DRAWINGS

FIGURE 1 schematically illustrates an exemplary traffic management system in accordance with an embodiment of the invention.

FIGURE 2 schematically illustrates an exemplary packet processor in accordance with an embodiment of the invention.

FIGURE 3 schematically illustrates an exemplary TDM FIFO in accordance with an embodiment of the invention.

FIGURE 4 schematically illustrates an exemplary engine in accordance with an embodiment of the invention.

FIGURE 5 schematically illustrates an exemplary packet classification device in accordance with an embodiment of the invention.

DETAILED DESCRIPTION OF THE INVENTION

Figure 1 schematically illustrates a traffic management system 100 for managing packet traffic in a network. In the ingress direction, the traffic management system 100 comprises a packet processor 102, a packet manager 104, a packet scheduler 106, a switch interface 112, and a switch fabric 114. The packet processor 102 receives packets from physical input ports 108 in the ingress direction.

In the ingress direction, the packet processor 102 receives incoming packets from the input ports 108 and, after some processing, stores the packets in a buffer 116 managed by the packet manager 104. In an exemplary embodiment, the packet processor 102 removes packet encapsulations from incoming packets when appropriate. For example, the packet processor 102 performs cyclic redundancy check (CRC)/checksum verifications and/or protocol specific modifications (e.g., time-to-live (TTL) checking and decrementing) on packet headers of the incoming packets. After a packet is stored in the buffer 116, a copy of the packet's identification information (or a packet identifier) is sent from the packet manager 104 to the packet scheduler 106 to be processed for traffic control. The packet scheduler 106 performs policing and congestion management processes on any received packet identifier. The packet scheduler 106 sends instructions to the packet manager 104 to either drop a packet due to congestion or send a packet according to a schedule. If a packet is to be sent, the packet identifier of that packet is shaped and queued by the packet scheduler 106. Typically, the packet scheduler 106 modifies a packet identifier to include a time slot designation. The packet scheduler 106 then sends the modified packet identifier to the packet manager 104. Upon receipt of a modified packet identifier, the packet manager 104 transmits the packet identified by the packet identifier to the switch interface 112 during the designated time slot to be sent out via the switch fabric 114.

In the egress direction, outgoing packets arrive through the switch fabric 114 and switch interface 118, and go through similar processes in a packet manager 120, a packet scheduler 122, a buffer 124, and a packet processor 126. In an exemplary embodiment, the egress packet processor 126 adds encapsulations to outgoing packets when appropriate. For example, the packet processor 126 performs protocol CRC/checksum verifications, protocol specific modifications, asynchronous transfer mode (ATM) segmentation, frame relay (FR) fragmentation, and/or internet protocol (IP) fragmentation when appropriate. Finally, egress packets exit the system through output ports 128. Operational differences between ingress and egress are configurable. The packet manager 104 and the packet scheduler 106 are described in more detail in the related applications referenced above.

Figure 2 illustrates an exemplary embodiment of the packet processor 102. The packet processor 102 includes an input packet over SONET interface 202, an input time division multiplexing (TDM) FIFO buffer 204a, a packet classification device 205, an aligner 212, a statistics collector 224, an output TDM FIFO buffer 204b, and an output packet over SONET interface 222. In the egress direction, the packet processor also includes an IP fragmentator 214, a FR fragmentator 216, and an ATM segmentator 218. The packet classification device 205 includes a parser array 206 and a delay line 210.

In one embodiment, the egress packet processor 126 also includes a packet over SONET polling module 228 and a flow control (FC) interface 230 for sending flow control information to the packet scheduler 106 on a per channel basis, both between the egress physical line device and the egress packet processor 126 and between the egress packet processor 126 and the egress packet scheduler 122.

In an exemplary embodiment, the input standard packet over SONET interface 202 is a standard FIFO interface that monitors and sends requests to the parser array 206 through the input TDM FIFO buffer 204a. The input TDM FIFO buffer 204a divides a main pipe/channel into standard SONET pipes/channels to facilitate parallel processing. For example, a 10 Gb/s pipe/channel can be divided into 192 standard SONET pipes/channels of roughly 52 Mb/s each. Thus, multiple data packets entering the TDM FIFO buffer 204a can be substantially simultaneously stored in the multiple standard SONET pipes/channels. An exemplary embodiment of the TDM FIFO buffer 204 is provided in Figure 3, which is discussed below. After a data packet exits from the input TDM FIFO buffer 204a, the data packet is processed by the packet classification device 205. Because many existing protocols require that the order of data packets within a data flow be maintained during traffic management, data packets entering the packet classification device 205 via a signal line 201 must exit the packet classification device 205 in the same order via a signal line 203.

In the packet classification device 205, the packet header of a data packet is separated from the data portion of the data packet. The packet header enters the parser array 206 to be processed and the data portion enters the delay line 210. At the parser array 206, the packet header is parsed to obtain information for generating keys to perform look-ups in an external content addressable memory (CAM) 208. In an exemplary embodiment, the parser array 206 generates at least one unique identifier (e.g., input connection ID, destination card, and output connection ID) based on the look-ups. The generated at least one unique identifier is attached to the packet header. The modified packet header is then reunited with the data portion in the delay line 210. In an exemplary embodiment, the parser array 206 acts in accordance with a set of instructions executed in the parser array 206.
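The key-generation and CAM look-up step can be modeled in software. In this sketch the external CAM 208 is simply a dictionary; the header field names, the key format, and the identifier fields are illustrative assumptions, not the patent's actual encoding.

```python
# Hypothetical CAM contents: a key parsed from the header maps to the
# unique identifier (input connection ID, destination card, output
# connection ID) that classifies the packet into a data flow.
CAM = {
    ("10.0.0.1", "10.0.0.2"): {"input_conn": 7, "dest_card": 2, "output_conn": 41},
}

def classify(header):
    """Parse a header dict, build a CAM key, look it up, and tag the header."""
    key = (header["src"], header["dst"])   # key generated from parsed fields
    uid = CAM.get(key)                     # the CAM look-up
    if uid is None:
        return None                        # flow not found in the CAM
    modified = dict(header)
    modified["uid"] = uid                  # attach the unique identifier
    return modified
```

A real CAM returns a match in constant time in hardware; the dictionary stands in for that behavior only functionally.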

In an exemplary embodiment, an alignment of the modified packet header and the data portion is further performed by the aligner 212. The IP fragmentator 214, the FR fragmentator 216, and the ATM segmentator 218 are standard components well known in the art for splitting a large data packet into industry standard data packets in the egress direction. The output TDM FIFO buffer 204b performs the same functions as the input TDM FIFO buffer 204a at the output end of the process. In an exemplary embodiment, the output packet over SONET interface 222 is a standard FIFO interface.

The statistics collector 224 collects statistics of the traffic flow for record keeping purposes. In an exemplary embodiment, the statistics collector 224 reads and writes to an external memory 226. After a data packet is processed by the packet processor 102, the data packet emerges with a modified packet header (including at least one unique identifier) that allows the data packet to be efficiently managed and scheduled by the packet manager 104 and packet scheduler 106, respectively.

Figure 3 schematically illustrates an exemplary TDM FIFO buffer 204. The exemplary TDM FIFO buffer 204 includes push logic 302, a channelized FIFO memory 304, pop logic 306, a pointer logic controller 308, a TDM scheduler 310, and a full flags memory monitor 312. The channelized FIFO memory 304 is reconfigurable and is capable of storing data in multiple channels. In an exemplary embodiment, each channel in the channelized FIFO memory 304 has its own space address, size, write pointer, and read pointer. Each channel can be located by its space address via its write pointer or read pointer stored in the pointer logic controller 308. For example, when data arrives at the TDM FIFO buffer 204 via a data signal on line 301 and an enable signal on line 303, the push logic 302 looks up a channel space address and a write pointer in the pointer logic controller 308 and writes the received data into a channel based on the space address via the write pointer. The push logic 302 then increments the space address to the next available channel and updates the space address and the write pointer stored in the pointer logic controller 308.

Because the TDM FIFO buffer 204 uses a channelized FIFO memory 304, it is capable of storing data at every clock cycle. After data has been written into an appropriate channel, it is pushed out on a first-in-first-out basis and read by the pop logic 306 using the channel's read pointer stored in the pointer logic controller 308. Typically, the pop logic 306 is controlled by the TDM scheduler 310. In an exemplary embodiment, the TDM scheduler 310 services the channelized FIFO memory 304 on a bandwidth-proportional basis. For example, if all of the channels in the channelized FIFO memory 304 have the same bandwidth, the channels in the memory 304 are serviced on a round robin basis. If, for example, one channel in the channelized FIFO memory 304 has ten times the bandwidth of all the other channels, the channel having the wide bandwidth is serviced ten times more often than the other channels. In an exemplary embodiment, the pop logic 306 accounts for per channel flow control, such that no data is outputted for channels that are flow controlled. Each time a channel is serviced, the pop logic 306 accesses the data at the front of that channel's queue via its read pointer. The memory monitor 312 monitors the channelized FIFO memory 304, such that when the memory 304 is full, no more data is inputted into the TDM FIFO buffer 204 via the data signal line 301.
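The bandwidth-proportional servicing described above can be sketched as a weighted round robin over per-channel queues. This is a behavioral model under assumed integer channel weights; the patent does not specify the scheduler's actual algorithm, and all class and method names are hypothetical.

```python
from collections import deque

class ChannelizedFifo:
    """Toy model of the channelized FIFO memory (304) serviced by a
    TDM scheduler (310) in proportion to per-channel bandwidth."""

    def __init__(self, bandwidths):
        # one FIFO queue per channel
        self.queues = [deque() for _ in bandwidths]
        # a channel with weight w appears w times in the service schedule,
        # so it is serviced w times as often as a weight-1 channel
        self.schedule = [ch for ch, bw in enumerate(bandwidths) for _ in range(bw)]
        self.slot = 0

    def push(self, channel, data):
        self.queues[channel].append(data)   # write via the channel's write pointer

    def pop(self):
        """Service the next scheduled channel; skip channels with no data."""
        for _ in range(len(self.schedule)):
            ch = self.schedule[self.slot]
            self.slot = (self.slot + 1) % len(self.schedule)
            if self.queues[ch]:
                return ch, self.queues[ch].popleft()  # front of that channel's queue
        return None   # all channels empty
```

Per-channel flow control could be modeled by having `pop` also skip channels that are currently flow controlled, as the pop logic 306 does.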

Referring back to Figure 2, the parser array 206 includes multiple engines to support a variety of packet protocols. Figure 4 schematically illustrates an exemplary engine 400. The engine 400 includes a foreground module 402 and a background module 404. Generally, the background module 404 includes a CPU, a CAM, and packet I/O interfaces (not shown). In an exemplary embodiment, the background module 404 is a fixed function module for implementing CAM look-ups, CPU read/write, cyclic redundancy check (CRC)/checksum, and/or other data packet input/output functions. The foreground module 402 is programmable and includes a control module 406, an instruction memory 408, a datapath module 410, a register file 412, and a multiplexer 414. In an exemplary embodiment, the register file 412 is dual ported and has a master-slave architecture. The instruction memory 408 stores a set of instructions that, when executed, allow the foreground module 402 to generate a key for accessing the external CAM 208 (see Figure 2). In one embodiment, the set of instructions, when executed, also allows the foreground module 402 to add or remove encapsulations and make protocol specific modifications to the packet, such as TTL checking/decrementing. In an exemplary embodiment, the control module 406 performs a five stage pipeline operation including fetching an instruction from the instruction memory 408, decoding the fetched instruction, reading the register file 412, executing the instruction via the datapath module 410, and writing the executed results into the register file 412. To optimize the pipeline operation, in an exemplary embodiment, the instructions in the instruction memory 408 are designed to conserve as many gates as possible, thereby simplifying the instruction logic code. For example, the instructions are designed to have the same length; thus, some instructions may include null fields in order to achieve that length. In addition, in one embodiment, logic code is simplified by aligning fields in an instruction to fields in another instruction. Thus simplified, the control module 406 can decode such logic code more efficiently.
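The five stage sequence (fetch, decode, register read, execute, write back) and the fixed-length instruction format can be illustrated with a toy interpreter. The instruction encoding here, fixed-length tuples where a `nop` simply carries null fields, is an assumption in the spirit of the description, not the engine's real instruction set.

```python
def run_program(instructions, registers):
    """Toy model of the control module's pipeline stages, executed
    sequentially per instruction for clarity."""
    for instr in instructions:              # fetch from instruction memory
        op, dst, a, b = instr               # decode a fixed-length instruction
        x, y = registers[a], registers[b]   # read the register file
        if op == "add":                     # execute via the datapath
            result = x + y
        elif op == "nop":                   # null-field instruction, same length
            continue                        # nothing to write back
        registers[dst] = result             # write back to the register file
    return registers
```

Because every instruction decodes the same fields in the same positions, the decode stage needs only one unpacking path, which mirrors the gate-saving rationale of aligning fields across instructions.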

In an exemplary embodiment, the engine 400 is capable of handling two packet headers at one time. While the foreground module 402 is generating a key for a first packet header, a second packet header is processed by the fixed function background module 404. When the foreground module 402 finishes generating a key for the first packet header, the foreground module 402 swaps packet headers with the background module 404. Thus, the foreground module 402 begins generating a key for the second packet header while the background module 404 begins to perform fixed functions on the first packet header. The swapping or switching process between the foreground module 402 and the background module 404 is facilitated by the multiplexer 414. In an exemplary embodiment, a key that is generated by the foreground module 402 is used by the background module 404 to perform look-ups in the external CAM 208. The results of CAM look-ups can be used by the engines 400 to generate additional keys or to create at least one unique identifier that is added to the original packet header. Such a unique identifier (e.g., input connection ID, destination card, and output connection ID) classifies a data packet into a proper data flow and is later used by the rest of the traffic management system 100 to manage and schedule the data packet. In an exemplary embodiment, the parser array 206 includes twenty-four programmable engines 400. Thus, the parser array 206 is capable of processing forty-eight packet headers substantially simultaneously. In an exemplary embodiment, the target performance for the packet processor 102 is 25M packets per second, assuming an average of two external CAM look-ups per packet header. Generally, data packets from the same data flow are of the same size and require the same processing time. However, data packets from different data flows may require different processing times. For example, an IP packet header may require two CAM look-ups while an ATM packet header may require one CAM look-up. A processed (or modified) packet header of a data packet is re-aligned with the data portion of the data packet in the delay line 210 (see Figure 2).
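The foreground/background swap amounts to a two-stage pipeline: each header passes through the programmable foreground (key generation) and then the fixed-function background (CAM look-up), with the two modules always holding different headers. The sketch below is a sequential software model of that overlap; the stage functions are placeholders, not the engine's actual operations.

```python
def run_engine(headers, fg_stage, bg_stage):
    """Two-stage pipeline model of engine 400: at any instant the
    foreground works on one header while the background finishes the
    previous one; the swap hands the keyed header to the background."""
    background = None   # header currently held by the background module
    done = []
    for h in headers:
        keyed = fg_stage(h)                       # foreground keys the new header...
        if background is not None:
            done.append(bg_stage(background))     # ...while background finishes the prior one
        background = keyed                        # swap: keyed header moves to background
    if background is not None:
        done.append(bg_stage(background))         # drain the last header
    return done
```

In hardware the two stages run truly concurrently, so the engine sustains roughly one header per stage time once the pipeline is full; the model preserves only the ordering and hand-off behavior.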

Figure 5 schematically illustrates an exemplary packet classification device 205. The packet classification device 205 includes a parser array 206 and a delay line 210. The parser array 206 includes twenty-four engines 400. Data packets enter the packet classification device 205 via the signal line 201 and exit via the signal line 203. When data packets are received by the packet classification device 205, the packet headers of the data packets are entered into the parser array 206 and the data portions of the data packets are entered into the delay line 210. At the input end of the parser array 206, any entering packet header is processed by any available engine 400. After a packet header is processed by an engine 400, the modified packet header exits the parser array 206 via a signal line 502 to be reunited with an appropriate data portion in the delay line 210. If all twenty-four engines 400 complete processing in a sequential manner, the modified packet headers can be tunneled elegantly via the signal line 502 into the delay line 210. However, in reality, multiple engines 400 are likely to complete packet header processing and attempt to output data on the signal line 502 at the same time. In the worst case scenario, all twenty-four engines 400 may attempt to output via the signal line 502 at the same time. Due to physical limitations on the size of the signal line 502, a multiplexing process is employed to allow data to pass to the delay line 210 in an orderly fashion.

In an exemplary embodiment, a so-called "daisy chain" multiplexing scheme is employed to solve this problem. A daisy chain multiplexer comprises a series of nodes. Each node includes an input port and an output port. The nodes are connected in series, such that the output of each node feeds the input of the next node. At each node, a selection is made whether to pass data from input to output, or to inject local data into the output. When the nodes are controlled in such a way that only one node is injecting local data at any given time, a node's local data will ripple through the other nodes and appear at the final output. This is a way to implement a wide multiplexer with a small amount of logic and wiring, at the expense of the delay required to propagate the data through the various nodes. A person skilled in the art will appreciate that other multiplexing embodiments can also be applied to allow output data from multiple engines to pass through an output line in an orderly manner.
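The daisy chain's one-injector-per-cycle behavior can be modeled directly: when several engines have output pending at once, only one node injects each cycle and the rest pass data through. The lowest-numbered-ready arbitration policy below is an assumption; the patent only requires that a single node inject at a time.

```python
def drain_engines(pending):
    """pending: one output queue per engine/node along the chain.
    Each loop iteration models one cycle of the daisy chain: the first
    node with data injects its local value onto the chained line, and
    every downstream node passes it through to the final output."""
    order = []
    while any(pending):
        for node, queue in enumerate(pending):   # walk the chain of nodes
            if queue:
                order.append(queue.pop(0))       # sole injector this cycle
                break                            # all other nodes pass through
    return order
```

The model shows the trade-off noted above: the wide multiplexer costs only one compare-and-select per node, but simultaneous completions are serialized, one per propagation cycle.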

The foregoing examples illustrate certain exemplary embodiments of the invention from which other embodiments, variations, and modifications will be apparent to those skilled in the art. The invention should therefore not be limited to the particular embodiments discussed above, but rather is defined by the claims.


Claims

WHAT IS CLAIMED IS:
1. A method for processing data packets in a data stream, comprising the steps of: receiving a set of data packets, each of said data packets having a packet header and a data portion; separating said packet header and said data portion; generating at least one unique identifier; appending said at least one unique identifier to said packet header to obtain a modified packet header; and combining said modified packet header with said data portion.
2. The method of claim 1, wherein said separating step includes the steps of: inputting said packet header into a parser array having a set of engines; and inputting said data portion into a delay line.
3. The method of claim 2, wherein said combining step includes the steps of: selectively outputting said modified packet header from an engine in said parser array to said delay line; and combining said modified packet header and said data portion in said delay line.
4. The method of claim 3, wherein said selectively outputting step includes the step of utilizing a multiplexing architecture in said parser array.
5. The method of claim 1, wherein said generating step includes the steps of: assigning said data packet to a data flow position; and specifying said data flow position in said at least one unique identifier.
6. The method of claim 1, wherein said generating step includes the steps of: generating an input connection identifier; generating a destination card; and generating an output connection identifier.
7. The method of claim 1, further comprising the step of processing header packets of said set of data packets by substantially concurrently: (1) generating a first key for a content addressable memory based on a first packet header in a first module; and (2) performing a set of fixed functions on a second packet header in a second module.
8. The method of claim 7, further comprising the steps of substantially concurrently: (1) generating a second key for said content addressable memory based on said second packet header in said first module; and (2) performing said set of fixed functions on said first packet header in said second module.
9. The method of claim 1, further comprising the steps of: determining a channel in a multi-channel memory to write said data packet; writing said data packet into said channel; generating a schedule for retrieving said data packet from said channel; and reading said data packet from said channel based on said schedule.
10. The method of claim 9, wherein said step of generating a schedule includes the steps of: determining a bandwidth for a selected channel in said multi-channel memory; and scheduling a data packet retrieval from said selected channel in accordance with said bandwidth.
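Claims 9 and 10 describe writing packets into channels of a multi-channel memory and reading them back on a schedule weighted by each channel's bandwidth. A minimal sketch under stated assumptions: the channel-selection policy (shortest queue) and the round-based grant model are illustrative choices, not mandated by the claims:

```python
from collections import deque

class MultiChannelMemory:
    def __init__(self, bandwidths):
        # bandwidths: reads granted to each channel per scheduling round.
        self.channels = [deque() for _ in bandwidths]
        self.bandwidths = bandwidths

    def write(self, packet):
        # Determining a channel: shortest queue (one plausible policy).
        ch = min(range(len(self.channels)), key=lambda i: len(self.channels[i]))
        self.channels[ch].append(packet)
        return ch

    def schedule_round(self):
        # Reading based on the schedule: each channel gets up to its
        # bandwidth's worth of reads per round.
        out = []
        for ch, bw in enumerate(self.bandwidths):
            for _ in range(bw):
                if self.channels[ch]:
                    out.append(self.channels[ch].popleft())
        return out
```

A channel with bandwidth 2 is drained twice as fast per round as a channel with bandwidth 1, which is the "in accordance with said bandwidth" behavior of claim 10.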
11. An apparatus for processing data packets in a data stream, each of said data packets having a packet header and a data portion, comprising: a parser array for processing said packet header; and a delay line for storing said data portion; wherein said parser array generates at least one unique identifier and appends said at least one unique identifier to said packet header to obtain a modified packet header; and wherein said modified packet header is combined with said data portion in said delay line.
12. The apparatus of claim 11, wherein said parser array includes a set of engines.
13. The apparatus of claim 12, wherein each of said engines includes a programmable module and a fixed function module.
14. The apparatus of claim 13, wherein said fixed function module performs a set of fixed functions.
15. The apparatus of claim 14, wherein said set of fixed functions includes content addressable memory look-ups, cyclic redundancy checks, and read/write functions.
16. The apparatus of claim 12, wherein said set of engines is connected by a multiplexing architecture that delivers engine output values to said delay line in a coordinated manner.
17. The apparatus of claim 13, wherein each engine in said set of engines is configured to process two data packets substantially concurrently, such that a first data packet is processed by said programmable module and a second data packet is processed by said fixed function module and said programmable module swaps data packets with said fixed function module when said programmable module completes processing of said first data packet.
18. The apparatus of claim 13, wherein said programmable module executes a set of instructions to parse said packet header to generate a key for performing a look-up in an external content addressable memory, perform said look-up, generate said at least one unique identifier, and append said at least one unique identifier to said packet header.
19. The apparatus of claim 13, wherein said programmable module includes a packet offset register for indexing instruction source and destination addresses in said packet header.
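Claim 19's packet offset register lets instructions address header fields relative to a per-packet offset, so the same microcode can operate on headers that begin at different positions. A hypothetical sketch of the idea; the class, method names, and byte layout below are invented for illustration:

```python
class OffsetRegisterFile:
    """Models a packet offset register: loads index the header
    relative to a movable base offset (illustrative only)."""
    def __init__(self, header):
        self.header = header
        self.offset = 0  # the packet offset register

    def load(self, rel_addr, length):
        # Read a field at (offset + rel_addr), as an instruction
        # source/destination address would be resolved.
        base = self.offset + rel_addr
        return self.header[base:base + length]

    def advance(self, n):
        # Move the base past a preamble or outer encapsulation layer.
        self.offset += n
```

With the offset advanced past an outer layer, the same relative addresses then pick out the inner header's fields.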
20. The apparatus of claim 11, wherein said at least one unique identifier includes an input connection identifier, a destination card, and an output connection identifier.
21. The apparatus of claim 11, further comprising: a first logic module for receiving data packets; a multi-channel memory for storing said data packets; a second logic module for retrieving said data packets from said multi-channel memory; and a scheduler for scheduling data packet retrieval; wherein a data packet received by said first logic module is stored into a channel in said multi-channel memory, retrieved by said second logic module from said channel in said multi-channel memory based on instructions from said scheduler, and sent to said parser array and said delay line by said second logic module.
22. The apparatus of claim 21, wherein each channel in said multi-channel memory has a predetermined bandwidth.
23. The apparatus of claim 22, wherein said scheduler schedules said data packet retrieval from a channel based on said predetermined bandwidth.
PCT/US2001/028276 2000-09-13 2001-09-10 Apparatus and methods for processing packets in a broadband data stream WO2002023823A9 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US66124400 2000-09-13
US09/661,244 2000-09-13

Publications (2)

Publication Number Publication Date
WO2002023823A1 (en) 2002-03-21
WO2002023823A9 (en) 2003-10-30

Family

ID=24652779

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2001/028276 WO2002023823A9 (en) 2000-09-13 2001-09-10 Apparatus and methods for processing packets in a broadband data stream

Country Status (1)

Country Link
WO (1) WO2002023823A9 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9014003B2 (en) * 2011-11-08 2015-04-21 Futurewei Technologies, Inc. Decoupled and concurrent packet processing and packet buffering for load-balancing router architecture

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6091705A (en) * 1996-12-20 2000-07-18 Sebring Systems, Inc. Method and apparatus for a fault tolerant, software transparent and high data integrity extension to a backplane bus or interconnect
US6111673A (en) * 1998-07-17 2000-08-29 Telcordia Technologies, Inc. High-throughput, low-latency next generation internet networks using optical tag switching

Also Published As

Publication number Publication date Type
WO2002023823A1 (en) 2002-03-21 application

Similar Documents

Publication Publication Date Title
US6975617B2 (en) Network monitoring system with built-in monitoring data gathering
US6529508B1 (en) Methods and apparatus for packet classification with multiple answer sets
US5418779A (en) High-speed switched network architecture
US7299487B1 (en) Control program, for a co-processor in a video-on-demand system, which uses transmission control lists to send video data packets with respective subgroups of internet protocol headers
US6628615B1 (en) Two level virtual channels
US6571291B1 (en) Apparatus and method for validating and updating an IP checksum in a network switching system
US5905725A (en) High speed switching device
US7099275B2 (en) Programmable multi-service queue scheduler
US6775284B1 (en) Method and system for frame and protocol classification
US7453892B2 (en) System and method for policing multiple data flows and multi-protocol data flows
US7035212B1 (en) Method and apparatus for end to end forwarding architecture
US6813243B1 (en) High-speed hardware implementation of red congestion control algorithm
US6633576B1 (en) Apparatus and method for interleaved packet storage
US7327748B2 (en) Enterprise switching device and method
US20060140130A1 (en) Mirroring in a network device
US6778546B1 (en) High-speed hardware implementation of MDRR algorithm over a large number of queues
US6721316B1 (en) Flexible engine and data structure for packet header processing
US20030193927A1 (en) Random access memory architecture and serial interface with continuous packet handling capability
US20040037313A1 (en) Packet data service over hyper transport link(s)
US6434145B1 (en) Processing of network data by parallel processing channels
US20050135355A1 (en) Switching device utilizing internal priority assignments
US20030076832A1 (en) Data path optimization algorithm
US5936966A (en) Data receiving device which enables simultaneous execution of processes of a plurality of protocol hierarchies and generates header end signals
US6714553B1 (en) System and process for flexible queuing of data packets in network switching
US20030061338A1 (en) System for multi-layer broadband provisioning in computer networks

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PH PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct app. not ent. europ. phase
COP Corrected version of pamphlet

Free format text: PAGES 1/5-5/5, DRAWINGS, REPLACED BY NEW PAGES 1/5-5/5; DUE TO LATE TRANSMITTAL BY THE RECEIVING OFFICE

NENP Non-entry into the national phase in:

Ref country code: JP