EP1260067A1 - Broadband mid-network server - Google Patents

Broadband mid-network server

Info

Publication number
EP1260067A1
Authority
EP
European Patent Office
Prior art keywords
packet
processing
server
module
protocol
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP01908601A
Other languages
German (de)
English (en)
Inventor
Jean Pierre Bordes
Otto Andreas Schmid
Curtis Davis
Monier Maher
Manju Hegde
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Celox Networks Inc
Original Assignee
Celox Networks Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Celox Networks Inc filed Critical Celox Networks Inc
Publication of EP1260067A1


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/30Peripheral units, e.g. input or output ports
    • H04L49/3009Header conversion, routing tables or routing tags
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/20Support for services
    • H04L49/205Quality of Service based
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/25Routing or path finding in a switch fabric
    • H04L49/253Routing or path finding in a switch fabric using establishment or release of connections between ports
    • H04L49/255Control mechanisms for ATM switching fabrics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/30Peripheral units, e.g. input or output ports
    • H04L49/3081ATM peripheral units, e.g. policing, insertion or extraction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/50Overload detection or protection within a single switching element
    • H04L49/501Overload detection
    • H04L49/503Policing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/60Software-defined switches
    • H04L49/602Multilayer or multiprotocol switching, e.g. IP switching
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/60Software-defined switches
    • H04L49/608ATM switches adapted to switch variable length packets, e.g. IP packets
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04QSELECTING
    • H04Q11/00Selecting arrangements for multiplex systems
    • H04Q11/04Selecting arrangements for multiplex systems for time-division multiplexing
    • H04Q11/0428Integrated services digital network, i.e. systems for transmission of different types of digitised signals, e.g. speech, data, telecentral, television signals
    • H04Q11/0478Provisions for broadband connections
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/54Store-and-forward switching systems 
    • H04L12/56Packet switching systems
    • H04L12/5601Transfer mode dependent, e.g. ATM
    • H04L2012/5638Services, e.g. multimedia, GOS, QOS
    • H04L2012/5665Interaction of ATM with other protocols
    • H04L2012/5667IP over ATM
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/30Peripheral units, e.g. input or output ports
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/35Switches specially adapted for specific applications
    • H04L49/351Switches specially adapted for specific applications for local area network [LAN], e.g. Ethernet switches
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/40Constructional details, e.g. power supply, mechanical construction or backplane

Definitions

  • the present invention relates to internetworked communication systems, and especially (but not exclusively) to a highly scalable broadband mid-network server for performing mid-network processing functions including routing functions, per user processing, encryption, bandwidth distribution and traffic shaping.
  • IP Internet Protocol
  • VoIP Voice over IP
  • VoIP and Real Time Video are envisioned to be two significant applications for propelling Internet growth to the next level. VoIP can be defined as the ability to make telephone calls and send faxes over IP networks.
  • Real Time Video is a "direct-to-user" technique in which a video signal is transmitted to the client device and presentation of the video begins after a short delay for data buffering, which eliminates the need for significant client-side storage capacity. It is also expected to become popular with businesses.
  • Webconferencing requires high bandwidth, since it involves a continuous transfer of image information together with voice. It also requires real-time traffic handling because it is usually implemented as an interactive application.
  • Virtual Private Network services allow a private network to be configured within a public network. This is one of the drivers for Internet access amongst businesses. To allow Virtual Private Networks to coexist on the public Internet, and to encourage business use of the Internet, great care must be taken with respect to security and authentication issues, and tunneling protocols such as L2TP and IPSec must be efficiently supported.
  • The number of subscribers handled by one system and the different qualities of service provided will make service provider administration more complex. To make provisioning of broadband access more attractive to service providers, subscriber management and usage accounting must be simplified, and differentiated services must be provided.
  • Broadband makes it possible to provide different amounts of bandwidth to users and to smaller Internet Service Providers. To make wholesaling of IP connectivity possible, and to simplify service and repair functions, the ability to support multiple service providers with one mid-network server must be provided.
  • a large number of connections are serviced with a broadband mid-network server.
  • In order to ensure that service is not interrupted, the broadband server must have very high availability. Such availability is also required for mission-critical business applications.
  • the inventors hereof have succeeded at designing and developing a broadband mid-network server that, in the most preferred embodiment, satisfies all of the requirements described above.
  • This inventive server provides reliable, secure, fast, flexible, high-bandwidth, and easily managed access to the Internet so as to accommodate all current Internet services including email, file transfer, web surfing and e-commerce, as well as the new value added services such as VoIP and Real Time Video.
  • the broadband mid-network server of the present invention has been designed to scale not only in bandwidth, but also in processing power and state space.
  • the architecture allows a service provider to configure the cards chosen for use in the available chassis space to suit his particular application.
  • a service provider could increase the number of IPE cards at the expense of fewer line cards (as few as one line card). With a single line card, the maximum amount of processing power would be available to a service provider. In the preferred embodiment described in detail below, this configuration would provide 240 processors and 39 gigabytes of memory. This would allow for a greater number and complexity of value added services, which require more processing power. Alternatively, a greater number of line cards could be selected for use in a chassis, which would be desirable to handle greater traffic and throughput at the expense of fewer value added services.
  • the broadband mid-network server of the present invention includes the ability to distribute traffic across a number of Internet processing engines and, more specifically, across a number of protocol processing units provided in each engine (the bandwidth to which can be coordinated), to provide the compute power and state space required for performing per user processing for a large number of users.
  • One important feature of the present invention is a unique architectural philosophy, which provides that processing be performed as close to the physical layer as warranted by considerations of flexibility, cost and complexity.
  • This architectural philosophy maintains balance between two kinds of processing which are important to scaling bandwidth with value-added services in broadband networks: time-consuming, repetitive processing; and flexible processing which must be easy to program by third parties.
  • time-consuming, repetitive processing, which has proved to create a bottleneck in the processor-based servers of the prior art, is addressed by the inventive architecture through specialized hardware, and results in dramatic increases in speed and decreases in delay.
  • the broadband mid-network server of the present invention provides a system that is currently unrivalled in performance and which can become the prime mover of Internet services such as managed, secure VPNs, Voice over IP and Real Time Video.
  • Fig. 1 illustrates a single shelf broadband mid-network server according to one embodiment of the present invention;
  • Fig. 2 is a functional block diagram of the preferred server shown in Fig. 1;
  • Fig. 3 is a functional block diagram of an exemplary line card shown in Figs. 1 and 2;
  • Fig. 4 is a functional block diagram of an exemplary IPE card shown in Figs. 1 and 2;
  • Fig. 5 illustrates routed distribution to an IPE card;
  • Fig. 6 illustrates the processing flow on an IPE card;
  • Fig. 7 illustrates a protocol processing platform according to the present invention;
  • Fig. 8 is a functional block diagram of an exemplary buffer access controller;
  • Fig. 9 illustrates the format of a cell received at an input to a BAC from a PIC;
  • Fig. 10 is a functional block diagram of a preferred packet manager;
  • Fig. 11 is an illustration of the deployment of a broadband mid-network server at a Service Provider POP;
  • Fig. 12 is an illustration of the different kinds of links an ISP may want on a secure segment;
  • Fig. 13 is an illustration of the system wide bandwidth distribution functions;
  • Fig. 14 is an illustration of the multi-level policing and multi-level shaping that occurs in the system;
  • Fig. 15 is an illustration of router distribution, two level policing, routing and two level shaping;
  • Fig. 16 is a functional block diagram of a preferred packet inspector;
  • Fig. 17 is an illustration of the preferred Distributor Flow Unit;
  • Fig. 18 is a summary of the highlights of the DFU.

Detailed Description of the Preferred Embodiments
  • the mid-network processor of the present invention is preferably implemented in a single shelf system as shown generally in Fig. 1, and is indicated generally by reference character 300.
  • the mid-network processor 300 is provided with a number of physical connection ("PHY") cards 302-316 through which packets may enter and exit the mid-network processor 300 according to a particular communication protocol, as is known in the art.
  • PHY physical connection
  • the mid-network processor 300 supports the POS, ATM, and Gigabit Ethernet layer two protocols, although the mid-network processor may readily be configured to support additional protocols, as will be apparent.
  • the PHY cards 302-316 are each associated with line cards 322-336, respectively, as shown in Fig. 1.
  • each PHY card is media specific.
  • each PHY card is provided with connectors and other components necessary to interface with the communication media connected thereto, and over which packets enter and exit the PHY card.
  • Each line card is configured to process packets of the type received from its associated PHY card, as explained more fully below.
  • the preferred mid-network processor 300 shown in Fig. 1 is also provided with a number of Internet Processing Engine ("IPE") cards 340-354, as well as two flash memory modules 360, 362 and four switch fabric modules 364-368. As appreciated by those skilled in the art, the number of switch fabric cards required is a function of the switch fabric card design as well as the desired redundancy and overall performance.
  • Fig. 1 also illustrates a midplane 370 that is provided for interconnecting the various cards described above.
  • the preferred mid-network processor 300 utilizes a card-based approach to facilitate maintenance and expansion of the mid-network processor 300, as necessary, but this is clearly not a limitation of the present invention.
  • Fig. 2 is a functional block diagram of the preferred mid-network processor 300 shown in Fig. 1 (although, to simplify the illustration, Fig. 2 does not show the PHY cards 310-316, the line cards 330-336 and the IPE cards 346-354 shown in Fig. 1).
  • Packets enter the mid-network processor 300 via the PHY cards, as is known in the art.
  • Each PHY card then delivers its packets to its associated line card through the midplane 370.
  • After performing initial processing of the packet, the line card delivers the packet, again through the midplane, to the switch fabric which, in turn, delivers the packet to one of the IPE cards for performing certain mid-network processing functions, such as routing functions, per user processing, encryption, and bandwidth distribution.
  • After performing mid-network processing for the packet delivered thereto, the IPE card sends the packet back into the switch fabric, typically for delivery to one of the line cards for some additional processing before allowing the packet to exit the mid-network processor 300 through one of the PHY cards.
  • a single IPE card may be insufficient to complete the necessary mid-network processing functions for a packet delivered thereto.
  • Upon performing some processing, the IPE card will deliver the packet to another IPE card (rather than to one of the line cards) via the switch fabric for further processing.
  • Although a packet will typically be processed by only one IPE card, it is possible to process a packet in multiple IPE cards, if necessary.
  • all of the line cards contain identical hardware, but are independently programmable.
  • all of the IPE cards contain identical hardware, but are independently programmable. This contributes to the scalability and elegantly simple design of the preferred mid-network processor 300. Additional processing power can be provided to the mid-network processor by simply adding additional IPE cards.
  • additional users can be supported by the mid-network processor 300 by adding additional line cards and PHY cards, and perhaps additional IPE cards to provide additional processing for the newly added users, if necessary.
  • the flash memory cards are provided for storing configuration data used by the IPE cards during system initialization.
  • "packet" refers to any type of packet that enters or exits the mid-network processor 300, including packets input to the mid-network processor 300 in the form of cells (such as ATM cells) via an interleaved or non-interleaved cell stream.
  • cells such as ATM cells
  • each line card used in the preferred mid-network processor 300 performs a number of functions. Initially, the line card converts packets (possibly of varying lengths) delivered thereto into fixed length cells. In this preferred embodiment, each line card converts input packets (including packets represented by individual cells) into 64 byte cells. The line card then examines the stream of fixed length cells "on the fly” to obtain important control information, including the protocol encapsulation sequence for each packet and those portions of the packet which should be captured for processing. This control information is then used on the line card to reassemble the packet, and to format the reassembled packet into one of a limited number of protocol types that are supported by the IPE cards.
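The cell-segmentation step described above can be sketched in software as follows. This is a minimal illustrative sketch: the function name and the zero-padding of the final cell are assumptions, since the patent does not specify how a partial final cell is filled out.

```python
CELL_SIZE = 64  # fixed cell length in bytes, as in the preferred embodiment

def segment_packet(packet: bytes) -> list[bytes]:
    """Split a variable-length packet into fixed 64-byte cells.

    The final cell is zero-padded here for illustration; real hardware
    would also carry a length indication so the packet can be
    reassembled exactly.
    """
    cells = []
    for offset in range(0, len(packet), CELL_SIZE):
        chunk = packet[offset:offset + CELL_SIZE]
        # pad the last chunk out to a full cell
        cells.append(chunk.ljust(CELL_SIZE, b"\x00"))
    return cells
```

For example, a 130-byte packet becomes three cells: two full 64-byte cells and one cell carrying 2 payload bytes plus padding.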
  • any given line card can be configured to support packets having a number of protocol layers and protocol encapsulation sequences
  • the line card is configured to convert these packets into generally non-encapsulated packets (or, stated another way, into packets having an encapsulation sequence of one) of a type that is supported by each of the IPE cards.
  • the line card then sends the reassembled and formatted packet into the switch fabric (in the form of contiguous fixed length cells) for delivery to one of the IPE cards that was designated by the line card for further processing of that particular packet.
  • Although the fixed length cells which comprise a packet are arranged back to back when the packet is delivered to the switch fabric by a line card, the cells may become interleaved with other cells destined for the same IPE card during the course of traversing the switch fabric.
  • the cell stream provided by the switch fabric to any given IPE card may be an interleaved cell stream.
  • the IPE card will first examine this cell stream "on the fly" (much like the cell stream examination conducted by the line cards, explained above) to ascertain important control information. The IPE card then processes this control information to perform routing look-ups and other mid-network processing functions for each packet delivered thereto.
  • the control information is also used by the IPE card to reassemble each packet, and to format each packet according to the packet's destination interface.
  • the IPE card then sends the reassembled and formatted packet back into the switch fabric in the form of contiguous fixed length cells for delivery to one of the line cards (or for delivery to another IPE card, in the case where additional mid-network processing functions must be performed for the packet in question).
  • Although the cells of any given packet may enter the switch fabric in a back to back arrangement, these cells may become interleaved with other cells during the course of traversing the switch fabric.
  • the stream of cells provided by the switch fabric to any given line card may be an interleaved cell stream. Accordingly, a line card will first examine this cell stream "on the fly" to ascertain important control information that will be used primarily to reassemble packets, and to format the reassembled packets for their destination interfaces. Additional processing of outbound packets is also conducted on the line card for PHY scheduling and bandwidth distribution purposes.
  • Fig. 3 illustrates an exemplary line card 380 used in the preferred mid-network processor 300 of the present invention.
  • the line card 380 preferably includes an ingress side (i.e., the left half of Fig. 3) and an egress side (i.e., the right half of Fig. 3).
  • the packets are first provided to a packet inspector chip (“PIC") 400 which converts the packets (which may already be represented by individual cells such as ATM cells) into fixed length cells.
  • PIC packet inspector chip
  • the fixed length cells are 64 byte cells that are 8 bytes wide and 8 bytes long.
  • a "cell time,” in the context of cells propagating within the preferred mid-network processor 300, corresponds to 8 clock cycles, as appreciated by those skilled in the art.
  • the PIC 400 then examines the stream of fixed length cells "on the fly" to identify the "classification" (that is, the protocol encapsulation sequence), capture matrix, and other control information for each packet (as described more fully in copending Application No. 09/494,235, filed January 30, 2000, entitled "Device and Method for Packet Inspection," the disclosure of which is incorporated herein by reference). More specifically, the preferred PIC 400 generates a control cell for each examined cell of a packet, and each control cell represents the control information that has been determined thus far for the corresponding packet. Thus, the PIC 400 outputs both the stream of fixed length cells that was produced before this stream was examined "on the fly" therein, as well as corresponding control cells.
  • As shown in Fig. 3, these control and data cells are then provided by the PIC 400 to four preferably identical buffer access controllers ("BACs") 402-408.
  • Each BAC stores a different quarter (i.e., 25%) of the data cells received from the PIC 400 in its corresponding cell buffer ("CB").
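The quarter-striping of data across the four BACs can be sketched as follows, reading the description (as the IPE card discussion later makes explicit) as splitting each 64-byte cell into four 16-byte quarters, one per BAC. The function names are illustrative assumptions:

```python
NUM_BACS = 4
QUARTER = 64 // NUM_BACS  # 16 bytes stored per BAC for each cell

def stripe_cell(cell: bytes) -> list[bytes]:
    """Split one 64-byte cell into four quarters, one per BAC buffer."""
    assert len(cell) == 64
    return [cell[i * QUARTER:(i + 1) * QUARTER] for i in range(NUM_BACS)]

def unstripe_cell(quarters: list[bytes]) -> bytes:
    """Reassemble a cell by concatenating its four BAC quarters in order."""
    return b"".join(quarters)
```

Striping each cell evenly across the four buffers keeps the memory load on the BACs balanced regardless of traffic pattern.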
  • Each control cell output by the PIC 400 also includes a protocol processing unit (“PPU") identifier which identifies a PPU associated with a particular BAC for processing that control cell.
  • PPU protocol processing unit
  • each PPU, in this preferred embodiment, preferably comprises two general purpose central processing units ("CPUs"), as shown in Fig. 3.
  • CPUs general purpose central processing units
  • a PPU could comprise one or more network processors, digital signal processors, or any programmable processors.
  • the BACs 402-408 each examine the PPU identifiers contained in the control cells delivered thereto over a bus by the PIC 400.
  • each control cell output by the PIC 400 is acted on by only one BAC and its associated PPU.
  • the size of the control cell is much smaller than the typical size of a packet. This can significantly increase the utilization of the processor by reducing the I/O bandwidth which is the typical limiting factor in processor use.
  • all control cells corresponding to a specific packet are processed by the same BAC PPU on the line card 380.
  • The assignment of a PPU by the PIC 400 for any given packet is performed according to configuration and control information received by the PIC 400 from a master PPU ("MPPU") 410, and can be changed by the MPPU 410 over time as necessary for PPU load balancing on the line card 380.
  • the PIC 400 also keeps track of the available memory addresses in the cell buffers associated with the BACs using a free buffer (“FB") list 412, and also keeps track of where each data cell is stored in the cell buffers with respect to other cells of the same packet using a link list 414.
  • MPPU master PPU
  • FB free buffer
  • When a control cell is processed within a particular BAC PPU, the PPU produces a new control cell to be provided to a packet manager ("PM") 420, which is in communication with the PIC 400 and the BACs 402-408. Included in this control cell provided to the PM 420 is a dequeue pointer which designates the location of the first cell of a packet that is to be dequeued and sent to the PM 420, along with the second and subsequent cells of that packet (if applicable). The packet manager 420 then forwards this dequeue pointer back to the PIC 400, which, in turn, provides instructions to the BACs 402-408 to dequeue each quarter cell of the designated packet in sequence using the information previously stored by the PIC 400 in the link list 414. Thus, the designated packet is reassembled as it is dequeued and delivered to the packet manager 420.
  • PM packet manager
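The free-buffer-list and link-list bookkeeping described above can be modeled as follows. This is a simplified software sketch with assumed names; the actual PIC maintains these structures in hardware and the cells of a packet may land at arbitrary buffer addresses:

```python
class CellBufferManager:
    """Illustrative model of the PIC's free-buffer (FB) list and link
    list: the FB list holds unused buffer addresses, and the link list
    records, for each address, the address of the next cell belonging
    to the same packet."""

    def __init__(self, num_buffers: int):
        self.free_list = list(range(num_buffers))  # available addresses
        self.next_cell = {}                        # link list: addr -> next addr

    def enqueue_packet(self, num_cells: int) -> int:
        """Allocate buffers for a packet's cells; return the head address
        (which later serves as the dequeue pointer)."""
        addrs = [self.free_list.pop(0) for _ in range(num_cells)]
        for a, b in zip(addrs, addrs[1:]):
            self.next_cell[a] = b
        self.next_cell[addrs[-1]] = None           # end of packet
        return addrs[0]

    def dequeue_packet(self, head: int) -> list[int]:
        """Walk the link list from the dequeue pointer, returning each
        buffer to the free list and yielding addresses in packet order."""
        order = []
        addr = head
        while addr is not None:
            order.append(addr)
            nxt = self.next_cell.pop(addr)
            self.free_list.append(addr)
            addr = nxt
        return order
```

Walking the link list from the dequeue pointer is what lets the packet be reassembled in order even though its cells were buffered out of contiguous memory.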
  • the packet manager 420 stores the cells of the reassembled packet in its own cell buffer 422 (using a free buffer list 424 and link list 426).
  • the packet manager 420 processes the control information it received for that packet from one of the BAC PPUs and then formats the packet according to this control information by modifying or augmenting the packet header as the cells of the packet are dequeued from the cell buffer 422.
  • This process and additional details of the preferred packet manager 420 are described more fully in copending Application No. 09/494,236 filed January 30, 2000 entitled “Device and Method for Packet Formatting," the disclosure of which is incorporated herein by reference.
  • the packet manager 420 also appends a header to each of the 64 byte cells that constitute the reassembled and formatted packet, and these headers will be used by the switch fabric for routing the cells therethrough.
  • the packet manager 420 then forwards the cells of the packet in sequence to a UDASL 430, which is provided for managing cell traffic into and out of the switch fabric for the line card 380.
  • the UDASL 430 then forwards the packet cells into the switch fabric for delivery to an IPE card that will perform mid-network processing functions for the packet in question.
  • This IPE card is preferably designated by the BAC PPU that prepared and forwarded control information to the packet manager 420.
  • Also provided is a 9-port Ethernet switch 450, which provides for interprocessor communications between the eight PPUs on the line card 380 (i.e., 4 PPUs on the ingress side and 4 PPUs on the egress side) and the MPPU 410 for purposes of load balancing, hardware monitoring and bandwidth distribution, and for sharing user and configuration information.
  • the bandwidth distribution process and the preferred hardware are described more fully in copending Application No. 09/515,028 filed February 29, 2000 entitled “Method and Device for Distributing Bandwidth, " the disclosure of which is incorporated herein by reference.
  • Fig. 4 illustrates an exemplary IPE card 500 used in the preferred mid-network processor 300 of the present invention.
  • the hardware layout of the IPE card 500 is similar to the hardware layout on the ingress side (and the egress side) of the line card 380 shown in Fig. 3. That is, the IPE card 500 is also provided with a UDASL 501 that delivers a typically interleaved cell stream received from the switch fabric to a PIC 502.
  • the present invention provides, amongst other things, an inventive hardware module that can be programmed to perform requisite processing either on the ingress side or the egress side of a line card, or on an IPE card. This contributes to the configurability and scalability of the preferred mid-network processor 300, which can be reconfigured as necessary (both through programming and/or by adding additional line cards and/or IPE cards) to accommodate additional users and/or to provide additional processing power.
  • the PIC 502 provided on the preferred IPE card 500 is used to inspect the stream of fixed length cells provided thereto by the switch fabric "on the fly" to ascertain control information for each packet to be processed on the IPE card. In most cases, this control information was added to the packet by the PM 420 on the ingress side of the line card that forwarded the packet to this particular IPE card.
  • the PIC 502 outputs the stream of data cells to the four BACs 504-510, each of which is configured to store a different quarter of each data cell in its corresponding cell buffer (note that each BAC on the preferred IPE card 500 has two PPUs associated therewith, whereas only one PPU is associated with each BAC on the preferred line card 380).
  • the PIC 502 also outputs control cells to the BACs 504-510, where each control cell contains a PPU identifier that designates one of the two PPUs associated with a particular BAC for processing that control cell on the IPE card to perform mid-network processing functions for the corresponding packet.
  • all control cells corresponding to a specific packet are processed by the same BAC PPU on the IPE card 500.
  • the PPU that processed control information for that packet on the ingress side of the line card is also responsible for determining to which IPE card and, more specifically, to which PPU on a particular IPE card, the packet should be sent for further processing.
  • After a BAC PPU on the IPE card processes the control information for a particular packet, the PPU sends a control cell back to the PM 512, which then cooperates with the PIC 502 to dequeue the quarter cells of that packet in sequence from the cell buffers associated with the BACs 504-510.
  • Upon receiving the constituent cells of a reassembled packet and storing these cells in its own cell buffer 514 (using a link list 516 and a free buffer list 518), the PM 512 processes the control cell received from the BAC PPU to format the reassembled packet according to its destination interface before forwarding the reassembled, formatted packet back into the switch fabric for delivery to its destination line card (or another IPE card, in the case where additional processing of the packet is required).
  • Also provided is a 9-port Ethernet switch 550 which, like the Ethernet switch provided on the preferred line card 380, provides for interprocessor communications between the eight PPUs and an MPPU 530 on the IPE card 500 for purposes of load balancing, hardware monitoring and bandwidth distribution, and for sharing user and configuration information.
  • the egress side of the exemplary line card 380 is also provided with a PIC 600, four BACs 602-608, and a PM 610.
  • Upon receiving a possibly interleaved stream of fixed length cells from the switch fabric via the UDASL 430, the PIC 600 examines this cell stream "on the fly" to ascertain control information (including control information that may have been added to the packet header by the PM 512 on an exemplary IPE card 500).
  • the PIC 600 then forwards the data cells to the BACs 602-608 for storage in their corresponding cell buffers, and forwards corresponding control cells for each packet to one of the BAC PPUs (typically assigned by an IPE card BAC PPU that previously processed control information for the same packet) for further processing.
  • the assigned BAC PPU then performs additional packet processing, primarily for traffic shaping, PHY card scheduling and bandwidth distribution on that PHY card.
  • Upon processing the control information received from the PIC 600, this BAC PPU produces and forwards a control cell to the packet manager 610, which, in turn, dequeues the quarter cells of the corresponding packet in sequence from the cell buffers associated with the BACs 602-608, in cooperation with the PIC 600.
  • the PM 610 then stores the constituent cells of the reassembled packet in its own cell buffer 612 (using a link list 614 and a free buffer list 616), and formats the packet for its intended destination before forwarding the reassembled, formatted packet to the PHY card associated with this line card for outputting the packet from the mid-network processor 300.
• CardId: An 8-bit number that uniquely identifies an IPE or Line Card in the system.
• FlowId: A 10-bit number whose lower (least significant) 8 bits contain a CardId, and whose upper (most significant) 2 bits identify the priority (class) of the traffic sent through the switch fabric to this card using this FlowId. (In the switch fabric, this field is 12 bits, but our implementation only uses the least significant 10 bits.)
• User: A datalink (layer 2) interface. Examples include ATM virtual circuits, PPP sessions (over SONET, Ethernet, or ATM), and MPLS label switched paths.
• UserId: A 32-bit value that can be used as a system-wide pointer to user configuration and state information. Since multiple cards (one or more IPEs and one Line Card) can store information about a user, it is possible to have multiple UserIds that refer to a single user.
• the upper (most significant) 8 bits of the value represent the CardId of the card which contains the user information being identified.
  • the next 4 bits represent the PPUID of the PPU on the card where the information is stored, and the lower (least significant) 20 bits represent the CID assigned by that card to the user.
• the CID is used as an index into the PPU's table of user information.
• LCUserId: A UserId in which the CardId identifies a Line Card.
• Primary UserId: A UserId in which the CardId and PPUID identify the PPU on an IPE which has the primary responsibility for managing a user.
• Small User: A user whose ingress packet stream is processed entirely by a single IPE PPU. Small users do not have Secondary UserIds.
• Large User: A user whose configured bandwidth is too high for his ingress packet stream to be processed by a single IPE PPU. All large users have one or more Secondary UserIds.
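The identifier layouts above (an 8-bit CardId inside a 10-bit FlowId, and a 32-bit UserId split into CardId, PPUID, and CID fields) can be sketched as bit-packing helpers. This is an illustrative sketch only; the function names are not from the patent.

```python
def make_userid(card_id: int, ppu_id: int, cid: int) -> int:
    """Pack a 32-bit UserId: CardId (8 bits) | PPUID (4 bits) | CID (20 bits)."""
    assert 0 <= card_id < 256 and 0 <= ppu_id < 16 and 0 <= cid < (1 << 20)
    return (card_id << 24) | (ppu_id << 20) | cid

def split_userid(userid: int):
    """Recover (CardId, PPUID, CID) from a 32-bit UserId."""
    return (userid >> 24) & 0xFF, (userid >> 20) & 0xF, userid & 0xFFFFF

def make_flowid(priority: int, card_id: int) -> int:
    """Pack a 10-bit FlowId: priority class (upper 2 bits) | CardId (lower 8 bits)."""
    assert 0 <= priority < 4 and 0 <= card_id < 256
    return (priority << 8) | card_id
```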
• Logical Link: A group of users of the same type (e.g., a group of ATM Virtual Circuits). If the Logical Link is a group of PPPoE sessions over ATM, the Logical Link must be an ATM Virtual Circuit.
• CSIX Header: The header of a CSIX (i.e., Common Switch Interface) cell.
• the CSIX Header is separate from the 64-byte cell payload.
• Cell Header: The first two bytes of the 64-byte payload of a CSIX cell.
• PIE Header: The 6 bytes immediately following the Cell Header of the first cell of a packet.

Overview:
  • the server system preferably comprises one or more rack mountable system units (i.e., shelves).
  • the system also contains at least one line card, exactly as many PHY cards as line cards, and at least as many IPE cards as line cards.
  • each shelf of the system contains preferably three switch fabric cards and two flash disk cards.
  • Each line card is uniquely associated with a particular PHY card.
  • Each IPE card can be thought of as an independent router, with one or more IP addresses associated with it.
  • Each Layer 2 (datalink) interface (referred to as a "user") provided by a line card is associated with exactly one IPE card (more specifically, exactly one PPU on one IPE card) . Different users from the same line card can be associated with different PPUs on different IPE cards, and a particular PPU can have users from multiple line cards.
• PPP/Ethernet/ATM
• the inner-most levels of encapsulation, each of which is a layer 2 interface (user) in its own right, can be associated with different PPUs within an IPE card, or even with PPUs on different IPE cards, thus causing traffic from the outer levels of encapsulation to be split among multiple PPUs or IPE cards.
  • outer layers can be encapsulated layer 3 traffic as well as layer 2 traffic (for example, an Ethernet/ATM virtual circuit can carry IP as well as PPPoE packets) . In this case, all the layer 3 traffic will be associated with a single PPU (a user) , but the encapsulated layer 2 datalinks (users) can each be associated with a different IPE card.
  • the set of all users on the system is preferably distributed as evenly as possible across all the IPE cards in the system.
  • the MPPU stores the per-user information for the users assigned to that IPE and distributes those users across its PPUs.
  • Each PPU stores a copy of the per-user information assigned to it.
  • each user is associated with one and only one IPE card and one and only one PPU on that IPE.
• This PPU's copy of the user's configuration and state information can be uniquely identified on a system-wide basis by the Primary UserId.
  • the architecture of this preferred implementation is based on line cards, PHY cards, a switching fabric, internet processing engines (IPE) and flash memory modules, as was described generally above.
  • the line cards terminate the link protocol and distribute the received packets based on user, tunnel or logical link information to a particular IPE through the switching fabric.
  • the procedure of forwarding a packet to a particular IPE and PPU will be denoted as "routed distribution.”
  • a midplane is also used to connect the different cards.
  • the preferred line card and the preferred IPE card were described above with reference to Figs. 3 and 4.
• the system is comprised of a set of hardware components, as described, which can be used to cost-effectively configure a system for a wide variety of applications and throughput requirements.
  • the preferred switch fabric and scheduler support cell switching at OC-192 speeds, and the switch fabric is both fully redundant and highly scalable.
• the preferred IPE cards have the following attributes: high performance protocol processing engine; manages users, tunnels and secure segment groups; supports policing and traffic shaping; implements highly sophisticated QoS with additional support for differentiated services; supports distributed bandwidth management processing; supports distributed logical link management; and is able to perform NAT, packet filtering and firewalling.
• the preferred line cards have the following attributes: packet lookup processing; protocol identification; scheduling; support for distributed bandwidth management processing; multi-I/F support (ATM, GE, POS); and AAL-5 processing (CRC check and generation).
• the preferred PHY cards have the following attributes: line termination for rates up to OC-192c; ATM layer processing; ATM-SONET mapping; and POS-SONET mapping.
• the overall system preferably has the following attributes: high availability; 1+1 switch fabric and scheduler redundancy; 1+1 control system unit redundancy; all field replaceable units are hot-swappable; N+1 AC power supply redundancy; and N+1 fan redundancy.
• Routed distribution forwards a packet to a particular PPU within an IPE.
• the key benefits of this approach are: incremental provisioning of compute power per packet; load distribution based on the packet computation needs of a particular user or tunnel; user and tunnel configuration information maintained by one single processor, thus minimizing inter-process communication needs; and portability of single-processor application software onto the system.
  • Fig. 5 illustrates the distribution of packets to a particular IPE.
  • a packet is received from a line card.
  • the line card examines the packet and forwards the packet based on the IP source or destination address, the user session ID, or the tunnel ID.
• the IPE receives the packets and hands them over to the PPU specified by the line card.
  • the line cards and the IPE host the flexible protocol- processing platform.
  • This platform is comprised of a data path processing engine and the already mentioned protocol- processing unit.
  • the separation of data path processing from protocol processing leads to the separation of memory and compute intensive applications from the flexible protocol processing requirements.
  • a clearly defined interface in the form of dual-port memory modules and data structures containing protocol specific information allows the deployment of general-purpose CPU modules for supporting the ever changing requirements of packet forwarding based on multilayer protocol layers.
  • the protocol-processing platform can be configured for multiple purposes and environments. That is, it supports a variable number of general purpose CPUs which are used in the context of this architecture as Protocol Processing Units (PPU) .
  • One of these CPUs is denoted as the Master Protocol Processing Unit (MPPU) .
  • the data path processing unit extracts, in the packet inspector, all necessary information from the received packets or cells and passes this information on to a selected PPU via one of the buffer access controller devices.
  • the cells themselves are stored in the cell buffer and linked together as linked lists of cells, which form a packet.
  • the packet is then forwarded either as a whole or segmented based on the configured interface.
  • Each PPU is associated with one dual-ported memory, where one port is controlled by the data-path processing unit and the other by the corresponding PPU.
  • Each dual-ported memory contains two ring buffers, where one ring buffer is used to forward protocol specific information from the data path to the PPU and the other is used for the other direction.
  • the ring buffer for passing on protocol specific information to the PPU is called the receive buffer.
  • the other buffer is called the send buffer.
  • Two pointers are maintained for each ring buffer.
  • the write pointer for the receive buffer is maintained by the data path processing unit while the read pointer is set by the PPU.
• the send buffer's write pointer is controlled by the PPU and the read pointer by the data path processing unit.
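The receive and send ring buffers with split pointer ownership can be sketched as a single-producer/single-consumer ring, where one side advances only the write pointer and the other only the read pointer. The class and method names here are illustrative, not from the specification.

```python
class RingBuffer:
    """SPSC ring mirroring the dual-port memory buffers: the producer
    (data path or PPU, depending on direction) owns the write pointer,
    the consumer owns the read pointer. One slot is kept empty so that
    'full' and 'empty' can be distinguished without shared state."""

    def __init__(self, size: int):
        self.slots = [None] * size
        self.size = size
        self.wr = 0   # advanced only by the producer
        self.rd = 0   # advanced only by the consumer

    def put(self, item) -> bool:
        nxt = (self.wr + 1) % self.size
        if nxt == self.rd:          # buffer full
            return False
        self.slots[self.wr] = item
        self.wr = nxt
        return True

    def get(self):
        if self.rd == self.wr:      # buffer empty
            return None
        item = self.slots[self.rd]
        self.rd = (self.rd + 1) % self.size
        return item
```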
  • the PHY card terminates the incoming transmission line. It also performs clock recovery and clock synthesis. Optical signals are converted into a parallel electrical signal which is then an input to a physical framer device which maps the incoming bit stream into the transmitted physical frame. Finally the physical layer of the corresponding link protocol processes the physical frames. In addition, link layer protocol processing is performed in order to provide a common packet interface to the line card. On the transmission side, the packets or cells are mapped into physical frames. These frames are then encoded into the corresponding physical layer format and sent over the optical fiber to the receiving peer.
  • the physical layer format is preferably either SONET or Gigabit Ethernet.
  • the link layer format is preferably GE, ATM or PPP for POS.
  • the line card performs packet forwarding for the egress and ingress path. Full duplex 10 Gbit/s throughput is provided.
  • the line card interfaces to the PHY cards and the switch fabric card.
  • the Line Card is preferably configured for either POS-PHY or UTOPIA III interface to the PHY card.
  • the Line Card preferably hosts two Protocol Internet Engine (PIE) chip sets. On the ingress side, one PIE chip set supports four protocol-processing units (PPU) and one MPPU.
• the four PPUs perform routed distribution to the various IPEs in the system. They also provide traffic shaping and scheduling of flows to the switching fabric. The remaining MPPU is used for overall control and supports the distributed bandwidth allocation protocol of the switching fabric.
  • the Packet Inspector first examines incoming cells or packets and the protocol information is extracted based on matched patterns in the data flow. This information is then made available to the PPU which is responsible for processing the incoming packet.
  • Cells or packets from a PHY card are processed by a particular PPU based on a chosen configuration. This configuration depends upon the configuration of the PHY card itself and upon the protocol supported by the PHY card.
• the other PIE chip set, processing the egress flow, is preferably responsible for cell assembly from the switch fabric and packet scheduling for multiple physical ports. Additional support for AAL5 processing is provided for ATM flows.
  • the MPPU from the ingress path is shared for configuration, maintenance and cell extraction of the egress flow.
  • the communication channel provides signaling and connection setup control for the ATM PHY card.
  • the PHY card informs the Line Card about the physical layer status and reports alarm and error conditions.
• the ingress packet processing preferably involves: Packet Assembly for ATM traffic (AAL5 processing); Protocol Identification; Packet Data Inspection; Routed Distribution; Scheduling of traffic flows through the switching fabric; Buffer management for ingress cell buffers; and Cell scheduling for the switch fabric.
  • the egress packet processing preferably involves: Traffic Shaping; Packet Assembly for switch fabric flow; MPHY Buffering; Cell Scheduling for ATM with multiple physical interfaces with AAL5 processing (CPCS, SAR) ; and Packet Scheduling for POS with multiple physical interfaces.
• the Internet Processing Engine provides the functionality for protocol processing, user management, tunnel management and secure segmentation. It receives the packets from the switching fabric, enforces the service level agreements (SLAs), performs packet classification, filtering and forwarding, and finally schedules the packet for transmission to the requested interface.
• the PI is part of the Protocol Internet Engine (PIE) chip set, which consists of the Packet Inspector, the Buffer Access Controller, and the Packet Manager. Together with the sixteen PPUs and the MPPU, the PIE chip set provides a powerful protocol processing unit.
• the PIE chip extracts the relevant protocol information and forwards it to the PPUs and the MPPU based on the routed distribution decision made in the Line Cards.
• the chosen PPU processes this information and performs all necessary packet processing. Besides forwarding and filtering, this includes policing and packet formatting.
• the MPPU controls the IPE and negotiates the switch fabric bandwidth allocation with the other units in the system. It also provides bandwidth management for the configured logical links.
  • the MPPU manages its connections by assigning users and tunnels to individual PPUs for forwarding processing from the Line Card to a particular IPE. Once a connection between the MPPU and Line Card is set up, all packets belonging to such a connection are forwarded from the Line-Card to the chosen PPU.
  • a PPU is chosen based on the already assigned connections, their bandwidth and the bandwidth and QoS required for the new connection. Connectionless traffic (Internet to Internet) is mapped onto an internal connection. If more bandwidth is needed than one PPU can manage, the packets will be distributed over multiple PPUs.
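The PPU selection described above (based on the already assigned connections and their bandwidth) might look like the following sketch, assuming a simple "most spare bandwidth that still fits" policy; the patent does not specify the selection policy at this level of detail, and the function name is illustrative.

```python
def choose_ppu(ppu_load: dict, ppu_capacity: float, required_bw: float):
    """Pick the PPU with the most spare bandwidth that can still accept
    the new connection. Returns None if no single PPU can carry it, in
    which case the traffic would be distributed over multiple PPUs
    (the 'large user' case)."""
    best, spare = None, -1.0
    for ppu, load in ppu_load.items():
        free = ppu_capacity - load
        if free >= required_bw and free > spare:
            best, spare = ppu, free
    return best
```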
• the functionality of the IPEs include: User Management; Tunnel Management; Logical Link Management; Support for Secure Segmentation; Policing; QoS Control with Diff Service Support; Buffer Management; IPv4, IPv6 Forwarding; Packet Classification; Packet Filtering with support for user
  • Protocol Internet Engine Chip Set (PIE) :
  • the Protocol Internet Engine provides the data path processing capabilities for the server system at OC-192c rates.
• the PIE chip set comprises three chips. Together with an interface controller and multiple general purpose CPUs, these chips form a very high performance packet processing system.
• Each cell is preferably transferred into the buffer through four buffer access controllers ("BACs") in order to increase the bandwidth to the PPUs and to increase the bandwidth to the external cell buffers.
  • Different portions of the same cell are written to the cell buffers attached to the different BACs. However, the captured portion of the data is sent to just one of the PPUs.
  • the preferred BAC unit is shown in Fig. 8.
• the RSU receives incoming data, reformats the data to an internal format, performs a parity check for incoming data, and also performs synchronization control.
  • the preferred format of a cell received by the BAC from the packet inspector is shown in Fig. 9.
  • the Cell Filter unit extracts control information from the cell and sends the cell data to the BAU along with the indication of which portion of the cell has to be stored in this cell buffer.
• the CFU also sends the cell data stream to the PTU, which translates the PPUID to the appropriate PPU, and thence to the CCU where, based on the PPUID and the capture matrix, the control cell is extracted from the data cell and stored in the CBU.
  • the CMU then transmits the control cell to the appropriate PPUs through a dual port RAM interface.
  • the control cell corresponding to the packet is sent by the PPU which processed that user to the PM along with the dequeue pointer. This is received by the BEU of the PM, as shown in Figure 8.
  • the control cell data stream (shown as the narrow arrow in Fig. 8) then goes to the ICU where it is stored while the DSU does deficit round robin scheduling of the data packets corresponding to the control packets in order to distribute bandwidth equitably to the BACs for sending out packets.
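The deficit round robin scheduling performed by the DSU can be illustrated with a generic DRR sketch; each queue's deficit grows by a quantum per round and shrinks by the size of each packet it sends, so bandwidth is shared in proportion to the quanta. The per-BAC queues and quantum value here are hypothetical.

```python
from collections import deque

def drr_schedule(queues, quantum, rounds):
    """Deficit round robin over per-destination packet queues.
    Each queue entry is (packet_name, size_in_bytes); a packet is sent
    only when the queue's accumulated deficit covers its size."""
    deficits = [0] * len(queues)
    sent = []
    for _ in range(rounds):
        for i, q in enumerate(queues):
            if not q:
                deficits[i] = 0      # empty queues do not bank credit
                continue
            deficits[i] += quantum
            while q and q[0][1] <= deficits[i]:
                name, size = q.popleft()
                deficits[i] -= size
                sent.append(name)
    return sent
```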
  • the dequeue pointer corresponding to the packet to be dequeued is sent to the PIU from where it is transmitted to the PI where it is received at the PIU and passed on to the BMU.
  • the dequeue pointers are stored in a FIFO while the previous packets are being dequeued.
• the dequeue pointer information is passed on to the BACs, and the BAU in the BACs dequeues the packet and passes it through the PMU to the packet manager.
  • a packet is dequeued by dequeuing all the cells comprising the packet which are held in the form of a linked list.
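Dequeuing a packet held as a linked list of cells, with each freed cell index returned to the free buffer list, can be sketched as follows; the dictionary-based cell store and function name are illustrative, not the hardware's actual data layout.

```python
def dequeue_packet(cell_buffer, next_cell, free_list, head):
    """Walk the linked list of cells that make up one packet, starting
    at the dequeue pointer 'head'. Returns the cell payloads in order
    and recycles each visited cell index onto the free buffer list."""
    payload = []
    idx = head
    while idx is not None:
        payload.append(cell_buffer[idx])
        nxt = next_cell[idx]       # follow the link before recycling
        free_list.append(idx)      # cell slot becomes available again
        idx = nxt
    return payload
```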
  • Data packets from the data packet stream (shown as the thick arrow in Fig. 8) undergo AAL5 processing (should they need it) in the APU, and are stored in the IDU buffer.
• the FAU reformats packets into 64 bit slices and controls dequeuing from both the IDU and the DSU's DPRAM in accordance with the PFU.
  • a sequence number is used at the beginning of both the data and the control cells.
• Both the control and data streams enter the PFU where they are formatted and sent to the TIU to be sent to the PHY cards or the switch fabric.
  • the PIE chip set can be configured for multiple purposes and environments. That is, it supports a variable number of general purpose CPUs which are used in the context with the PIE chip set as Protocol Processing Units (PPU) . One of these CPUs is reserved for maintenance and control purposes and is denoted as MPPU.
  • the PIE chip set implements all necessary functions in order to hide all data path processing from the actual protocol processing functionality.
  • the PIE chip set extracts all necessary information from the received packets or cells and passes this information on to a selected PPU.
  • the cells are then stored in the cell buffer and linked together as linked lists of cells, which form a packet.
  • the packet is then forwarded to the MPHY scheduler as a whole or segmented based on the configured interface .
  • Each PIE chip set is differently configured.
• the PIE chip set on the IPE supports as many as 8 PPUs and 1 MPPU. The PIE chip set on the ingress side of the Line Card supports 4 PPUs and 1 MPPU, as does the chip set on the egress side of the Line Card.
• the characteristics of the preferred PIE are as follows: Three-Chip Chip Set; Full Data-path processing in hardware; Support for distributed protocol processing by general purpose CPU modules; Highly scalable compute power per packet (up to 64 PPUs can be supported); Flexible interface support with MPHY scheduling; AAL-5 Processing; SAR Sublayer: Assembly and Segmentation for up to 256K connections; CPCS Sublayer: CRC 32 generation and check, padding control, and length field control; Internal Packet Processing; Checksum computation and check; Length field control; Padding control; Micro-programmable Packet Inspection Engine; Supports any layer packet inspection; Supports byte matched pattern processing; Supports bit matched pattern processing; Results are made available to protocol processing units; Supports extraction of any portion of packet for protocol processing; IPv4/IPv6 Header Checksum; Congestion Avoidance Support; EPD; PPD; Internal Back-pressure control; Linked List Control; Supports up to 8 million 64 byte cells (initially a million); Links cells together to form a packet
• the preferred PIE supports: Packet Classification: Based on Layer 3, 4, ... Information (any layer); Packet Filtering; User programmable filters; Group filters; Firewall processing; Packet Forwarding; IPv4 Lookup Processing; IPv6 Lookup Processing; Tunnel Forwarding; Buffer Management; Dynamic Thresholding on a per user and assigned rate basis; Support for up to 8 million Cell Buffers (initially a million); Congestion avoidance with Early Packet Discard (EPD), Partial Packet Discard (PPD), Selective Packet Discard; Policing; Per User and Logical Link; Enforcing traffic contracts based on SLA; Traffic Shaping; Per User and Logical Link; Support for traffic contracts based on SLAs; Support for Real-time traffic (low delay traffic); QoS Control; Supported for differentiated services; Multiple priorities per user; Flow based queuing (not initially supported); Bandwidth Management; Distributed processing for allocation of bandwidth on
  • Traffic Management for an Internet access system is complex due to the involvement of various system interfaces.
  • a system might be connected to users, the Internet backbone, a Local Area Network with file and Web servers, and a Metropolitan Area Network (MAN) which gives access to local TV and media servers as shown in Fig. 11.
• Each link has different link properties with respect to available bandwidth and cost per megabyte. This means that a user's share of bandwidth on a particular link has to be based on the properties of that link.
  • a user might get more bandwidth share on the MAN link than on the backbone link due to the fact that more bandwidth at a cheaper price is available on the MAN link than on the backbone link. The same is true for bandwidth wholesaling of the preferred system to multiple ISPs who would like to resell bandwidth to their customers.
  • the enabling technology for this model is Secure Segmentation.
  • a logical link group can be assigned to a secure segment based on the bandwidth needs of the considered secure segment for a particular link as shown in Fig. 12. This means that not only user allocation has to be considered but also logical link bandwidth needs to be included. Therefore, bandwidth is distributed based on traffic class, user, and logical link group. This supports the wholesaling model and takes into account over-subscription requirements in order to support QoS including differentiated services.
  • the preferred system represents a highly distributed system.
  • resources have to be allocated based on the requirements of the traffic of each component. That means in general that each component has to take part in a distributed computation method in order to allocate the resources.
  • the traffic management requirements for bandwidth allocation within the preferred system will have to include bandwidth negotiation for the various flows through the switching fabric.
• Buffer management and QoS control are an integral part of the overall traffic management scheme implemented in the preferred server system. Due to the large buffers the system has to maintain in various places in the distributed system, a sophisticated buffer management scheme has to be implemented and supported by QoS control in order to support differentiated services and other traffic flow specific requirements.

Policing - Traffic Shaping:
  • Policing and Traffic Shaping have closely related functionality. Policing ensures that the incoming stream does conform to the negotiated link parameters for a logical link group as well as the user of the incoming link. Traffic shaping enforces the link parameters for the outgoing traffic stream based on the outgoing user, the logical link group and the link itself. Fig. 14 is intended to illustrate the need for policing as well as traffic shaping. An incoming traffic stream is shaped (policed) in order to enforce the traffic contracts of a user for the considered link and logical link. Before the traffic is forwarded to another link, the traffic contracts for this particular link have to be enforced. This traffic contract might be much different from the traffic contract of the incoming stream. Consider the case where a user requests information over the Internet backbone link.
  • the bandwidth allocated on this link for this user might be 500 Kbit/s.
• the logical link bandwidth for the corresponding secure segment might be set to 10 Mbit/s. If the user's access link to the system uses an ATM connection with an assigned rate of 1 Mbit/s and no policing is enforced, the user could use the full 1 Mbit/s. This is possible since the traffic shaped onto the user ATM link allows the user to transmit at the higher rate. Therefore, two mechanisms are necessary: one to police the incoming traffic and the other to shape the traffic for a particular link.
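The policing step described above (enforcing a negotiated rate on the incoming stream) can be illustrated with a generic token-bucket conformance check. The patent does not specify the exact policing algorithm, so this is a sketch under that assumption; the rate and bucket parameters are hypothetical.

```python
def police(packets, rate_bps, bucket_bits):
    """Token-bucket policer: a packet conforms if enough tokens (bits)
    have accumulated at 'rate_bps' since the previous arrival, up to a
    burst limit of 'bucket_bits'. Non-conforming packets are dropped.
    'packets' is a list of (arrival_time_seconds, size_in_bits)."""
    tokens, last_t = bucket_bits, 0.0
    passed = []
    for t, size_bits in packets:
        tokens = min(bucket_bits, tokens + (t - last_t) * rate_bps)
        last_t = t
        if size_bits <= tokens:
            tokens -= size_bits
            passed.append((t, size_bits))
    return passed
```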
  • Fig. 15 shows the schematic implementation of the policer and traffic shaper in an IPE within the preferred server system. A received cell is assigned to a particular user data structure assigned to the incoming link for the considered user.
• the policing information can be directly obtained from the user who is sending a packet based on the connection identifier, the corresponding session ID, or the IP source address. However, if the packet on the incoming connection cannot be directly associated with a user or logical link group, the packet is classified to determine the user and/or logical link group for which it is destined. Based on the obtained user and logical link information, the incoming traffic stream is policed by queuing up the packets and enforcing the negotiated traffic contract.
• If the packet conforms to the incoming link requirements, the packet is shaped based on the user parameters and logical link parameters for the outgoing link. These parameters are obtained from the user connection itself if a session ID can be associated with it. If the packet comes from a user and is forwarded across the Internet to a remote terminal, then the shaping parameters are obtained from the sending user for the corresponding link and the associated logical link group. For connectionless traffic, which cannot be directly associated with users, a logical link group can be assigned based on the IP destination address and/or source address. This allows managing traffic flows between networks.

Switch Fabric Bandwidth Management and Scheduling:
• Attached hereto as Exhibit A are details of the manner in which the preferred server system is programmed so as to minimize inter-IPE card communications.
• Line Cards do not perform any traffic policing. Policing is performed, in distributed fashion, by all the IPE cards in the system. If, during testing, it is determined that the Line Cards have enough processor and I/O bandwidth to perform policing, this function might be moved to the Line Cards in a future version of the software. Also, Line Cards do not perform any routing table lookups.
  • One operation that must be performed is determining if the destination IP address is one of the IP addresses of our system. This can be done using a simple hash table. A full CIDR routing search is not necessary, since we are only looking for an exact match.
• the result of the lookup (if successful) is the CardId of the IPE that the address belongs to. If a match is found and the CardId is equal to the CardId of the IPE that the packet is about to be forwarded to, the packet must be forwarded with the Destination PPU bit set. This is so that when the packet is received, the PI can select the packet to be captured in its entirety (as long as it is not part of a non-encrypted tunnel).
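The exact-match lookup described above can be sketched with an ordinary hash map, since no CIDR longest-prefix search is needed; the function names are illustrative.

```python
def build_own_ip_table(ipe_addresses):
    """Map each of the system's own IP addresses to the CardId of the
    IPE that owns it. A plain hash map suffices because only exact
    matches matter (no CIDR longest-prefix search)."""
    return {ip: card_id
            for card_id, ips in ipe_addresses.items()
            for ip in ips}

def lookup_destination(table, dst_ip, forwarding_card_id):
    """Return (card_id, dest_ppu_bit): the bit is set when the packet
    is addressed to the very IPE it is about to be forwarded to."""
    card_id = table.get(dst_ip)
    return card_id, card_id is not None and card_id == forwarding_card_id
```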
• the UserId should be determined based on the IPsec Security Parameter Index (SPI) rather than on the hash of the source and destination IP addresses in the IP header of the packet.
  • the following information is sent to the IPE PPU along with the packet payload:
• IP checksum, AAL5 CRC, internal parity.
• This field tells the IPE the type of encapsulation this packet has. The choices are: IPC (inter-processor communications), IP, PPP, Ethernet, ATM, or MPLS.
• This 4-bit field can be used to give additional information to the IPE about the encapsulation of this packet. It specifies which stage in the IPE PI will be the first to inspect the packet.
• This bit is set for IP packets whose destination address is equal to one of the IP addresses of the IPE card that the packet is being sent to.
• The PPU identifier of the IPE PPU that the packet is being sent to.
• the PI uses the VPI/VCI and PHYID to calculate the LC CID, which is used by the PI as an index into the hardware connection table.
• the PI reads (amongst other things) a LC PPUID which selects the LC PPU that the control information for the packet should be sent to.
  • the LC CID is also used by the LC PPU as an index into a software connection table.
• this connection table is used to determine the UserId (which consists of a Destination CardId, Destination PPUID, and IPE CID) that is sent to the IPE in the PIE Header of the packet.
• a determination of priority is made based on the protocols found in the packet. Alternatively, the priority could be read from the connection table. This priority is used to determine the two most significant bits of the Destination FlowId when the packet is forwarded to an IPE.
  • the ATM cell headers and the AAL5 trailer and padding are removed (by the PM) before forwarding the packet.
  • the IP packet is forwarded to the IPE.
  • the IPE CID is determined by reading the software connection table.
  • Each ATM LC must have a standard globally unique Ethernet MAC address permanently assigned to it.
  • Each Ethernet/ATM VC should be configurable as to whether or not it is in "promiscuous" mode - that is, whether or not it should discard unicast packets not sent to its MAC address.
• PPPoE session packets have Ethernet type 0x8864.
  • the IPE CID is determined by reading the software connection table, and the Initial PID is set to indicate an Ethernet packet.
  • the PPPoE header is removed, and the PPPoE Session ID (from the PPPoE header) is used to index into a PPPoE session table, from which the IPE CID can be retrieved.
  • the Initial PID is set to indicate a PPP packet.
  • the PPP/PPPoE protocol type is IP
  • the PPP header is also removed before the packet is forwarded to the IPE, and the Initial PID is set to indicate an IP packet.
  • the PPP protocol type is IP
• the PPP header is removed, and the Initial PID is set to indicate IP.
• the PPP header is kept, and the Initial PID is set to indicate PPP.
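The PPP protocol-type check above can be sketched as follows; 0x0021 is the standard PPP protocol number for IPv4, while the function name and byte-level framing assumptions (a bare 2-byte protocol field) are illustrative.

```python
PPP_PROTOCOL_IP = 0x0021   # standard PPP protocol number for IPv4

def strip_ppp_header(frame: bytes):
    """If the PPP protocol field indicates IP, strip the 2-byte protocol
    header and mark the payload as an IP packet (Initial PID = IP);
    otherwise keep the PPP framing and mark it as PPP."""
    proto = int.from_bytes(frame[:2], "big")
    if proto == PPP_PROTOCOL_IP:
        return "IP", frame[2:]
    return "PPP", frame
```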
• the IPE CID is determined by reading the software connection table.
• the top-of-stack shim label (in the AAL5 PDU) is replaced with the VPI/VCI of the virtual circuit.
• the VPI/VCI can be deduced from the LC CID.
  • the IPE CID is determined by reading the software connection table.
• the PI DFU control registers can be programmed (by the MPPU) with the LC CIDs of up to 4 large virtual circuits. For these circuits, if the packet contains an IP header, the PI DFU will replace the LC PPUID read from the hardware connection table with a LC PPUID read from a hash table which is indexed by a hash of the source and destination IP addresses of the packet (calculated by the PI DFU).
  • any of the entries (circuits) in the software VC connection table can be marked for distribution across multiple IPE PPUs. These are known as large users, and need not be the same virtual circuits that are distributed by the DFU as explained above. For these circuits, if the packet contains an IP header, a new hash is calculated over the source and destination IP addresses of the packet and used to select one of several UserIds (Destination CardId, Destination PPUID, and IPE CID) that are sent to the IPE in the PIE Header of the packet.
  • the UserId is selected using a different means, as described in the IPsec protocol processing section below.
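The large-user distribution described above can be sketched as follows. The CRC-style hash and the `UserId` tuple layout are assumptions for illustration; the patent specifies only that a hash over the source and destination IP addresses selects one of several UserIds.

```python
# Illustrative sketch (not the patent's exact hash): distribute a large
# user's traffic across several IPE PPUs by hashing the source and
# destination IP addresses and selecting one of several UserIds.
import zlib
from typing import NamedTuple, Sequence

class UserId(NamedTuple):
    dest_card_id: int   # Destination CardId
    dest_ppu_id: int    # Destination PPUID
    ipe_cid: int        # IPE CID sent in the PIE Header

def select_userid(src_ip: bytes, dst_ip: bytes,
                  userids: Sequence[UserId]) -> UserId:
    """Pick a UserId deterministically from the (src, dst) address pair,
    so every packet of a given flow lands on the same IPE PPU."""
    h = zlib.crc32(src_ip + dst_ip)
    return userids[h % len(userids)]
```

Because the hash is deterministic, packets of one flow always select the same UserId, which preserves per-flow ordering while spreading distinct flows over the PPUs.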
  • the following information is sent to the IPE PPU along with the packet payload. In the CSIX Header: the Destination FlowId, sent in the CSIX Header of every cell of the packet to identify where the switch fabric should send it as well as the priority (class) of the packet.
  • IP checksum, internal parity.
  • This 3 bit field tells the IPE the type of encapsulation this packet has.
  • the choices are: IPC, IP, PPP, or MPLS
  • This 4 bit field can be used to give additional information to the IPE about the encapsulation of this packet. It specifies which stage in the IPE PI will be the first to inspect the packet.
  • This bit is set for IP packets whose destination address is equal to one of the IP addresses of the IPE card that packet is being sent to.
  • each PPP/SONET PHY comprises a single user.
  • each MPLS Label Switched Path represents an additional user.
  • the LC CID is simply the PHYID.
  • the PI DFU control registers can be programmed (by the MPPU) with the LC CIDs of up to 4 PHYs. For these PHYs, if the packet contains an IP header, the PI DFU will replace the LC PPUID read from the hardware connection table with a LC PPUID read from a hash table which is indexed by a hash of the source and destination IP addresses of the packet (calculated by the PI DFU). This capability of the PI DFU must be used for OC-192c and OC-48c PHYs in order to distribute the load over multiple LC PPUs. For OC-12c and smaller PHYs, the PI DFU need not be used. Instead, the PI uses the LC CID as index into the hardware connection table. The PI reads (amongst other things) a LC PPUID which selects the LC PPU that the control information for the packet should be sent to.
  • a determination of priority (one of four classes) is made. This priority is used to determine the two most significant bits of the Destination FlowId when the packet is forwarded to an IPE.
  • the LC PPU uses the LC CID (which is really just the PHYID) as an index into a software PHY table.
  • This table provides the Primary UserId, which determines where the packet is sent as well as the IPE CID that is sent in the PIE Header of the packet.
  • the Initial PID is set to indicate a PPP packet.
  • the LC PPU uses the LC CID (which is really just the PHYID) to index into and read from the software PHY table. From this the LC determines whether this is a small user or a large user. For small users, the Primary UserId is also read from the PHY table. This determines where the packet is sent as well as the IPE CID that is sent in the PIE Header of the packet.
  • a hash is calculated over the source and destination IP addresses of the packet and used to select either the Primary UserId or one of several Secondary UserIds.
  • the selected UserId determines where the packet is sent as well as the IPE CID that is sent in the PIE Header of the packet.
  • the PPP header is removed before the packet is forwarded to the IPE, and the Initial PID is set to indicate an IP packet.
  • the LC CID only identifies the PHYID. Therefore, when the LC PI identifies an MPLS packet, the top of stack label must be captured in order to identify the user. For each POS PHY, the LC PPU must maintain a table of MPLS LSPs. The LC CID selects which table, and the top of stack label is used to index into the table. For small users, the Primary UserId that corresponds to the LSP can then be read from the table. For large users, however, a similar process to the one described above for IP is used. A hash is calculated over the source and destination IP addresses of the packet and used to select either the Primary UserId or one of several Secondary UserIds. The selected UserId determines where the packet is sent as well as the IPE CID that is sent in the PIE Header of the packet.
  • the PPP header is removed before the packet is forwarded to the IPE, and the Initial PID is set to indicate an MPLS packet.
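The per-PHY MPLS lookup described above can be sketched as follows. The table layout and helper names are illustrative assumptions; the MPLS shim header format (20-bit label in the top bits of a 32-bit entry) follows RFC 3032.

```python
# Sketch of the per-PHY MPLS lookup: the LC CID selects which LSP table
# to use, and the top-of-stack label indexes into that table to find the
# user. Table contents and names are illustrative assumptions.

def top_of_stack_label(mpls_header: bytes) -> int:
    """Extract the 20-bit label from the first MPLS shim header
    (the label occupies the top 20 bits of the 32-bit entry)."""
    word = int.from_bytes(mpls_header[:4], "big")
    return word >> 12

def lookup_lsp_user(lsp_tables: dict, lc_cid: int, mpls_header: bytes):
    """LC CID selects the table; the top-of-stack label indexes into it.
    Returns the Primary UserId for the LSP, or None if unknown."""
    label = top_of_stack_label(mpls_header)
    return lsp_tables[lc_cid].get(label)
```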
  • Initial PID: This 3 bit field tells the IPE the type of encapsulation this packet has. The choices are: IPC, Ethernet, PPP, IP, or MPLS.
  • This 4 bit field can be used to give additional information to the IPE about the encapsulation of this packet. It specifies which stage in the IPE PI will be the first to inspect the packet.
  • This bit is set for IP packets whose destination address is equal to one of the IP addresses of the IPE card that packet is being sent to.
  • Destination PPUID: the PPU identifier of the IPE PPU that the packet is being sent to.
  • each PHY comprises a single user.
  • each MPLS Label Switched Path (LSP) or PPPoE session represents an additional user.
  • the LC CID is simply the PHYID.
  • the PI DFU control registers can be programmed (by the MPPU) with the LC CIDs of up to 4 PHYs. For these PHYs, if the packet contains an IP header, the PI DFU will replace the LC PPUID read from the hardware connection table with a LC PPUID read from a hash table which is indexed by a hash of the source and destination IP addresses of the packet (calculated by the PI DFU). This capability of the PI DFU must be used for 10 Gigabit Ethernet Cards in order to distribute the load over multiple LC PPUs. For 1 Gigabit and smaller PHYs, the PI DFU need not be used.
  • the PI uses the LC CID as index into the hardware connection table.
  • the PI reads (amongst other things) a LC PPUID which selects the LC PPU that the control information for the packet should be sent to.
  • a determination of priority (one of four classes) is made. This priority is used to determine the two most significant bits of the Destination FlowId when the packet is forwarded to an IPE.
  • the LC PPU uses the LC CID (which is really just the PHYID) to index into and read from the software PHY table. From this the LC determines whether this is a small user or a large user. For small users, the Primary UserId is also read from the PHY table. This determines where the packet is sent as well as the IPE CID that is sent in the PIE Header of the packet.
  • a hash is calculated over the source and destination IP addresses of the packet and used to select either the Primary UserId or one of several Secondary UserIds.
  • the selected UserId determines where the packet is sent as well as the IPE CID that is sent in the PIE Header of the packet.
  • the packet is forwarded to the IPE with the Ethernet MAC header intact, and the Initial PID is set to indicate an Ethernet packet.
  • For PPPoE Session packets, the Ethernet and PPPoE headers are removed, and the PPPoE Session ID (from the PPPoE header) is used to index into a PPPoE session table, from which the UserId (IPE CardId, IPE PPUID and IPE CID) can be retrieved.
  • a unique PPPoE Session table can be maintained for each PHY, and the LC CID can be used to select which session table to use.
  • the PPP header is also removed, and the Initial PID is set to indicate IP; otherwise, the PPP header is kept, and the Initial PID is set to indicate PPP.
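The PPPoE session lookup described above can be sketched as follows. The PPPoE header layout (version/type, code, Session ID, length) follows RFC 2516; the session-table contents and function names are illustrative assumptions.

```python
# Sketch of the PPPoE session lookup: strip the Ethernet and PPPoE
# headers and use the PPPoE Session ID (bytes 2-3 of the PPPoE header,
# per RFC 2516) to index a session table. Table contents are assumed.
import struct

ETH_HDR_LEN = 14    # destination MAC + source MAC + EtherType
PPPOE_HDR_LEN = 6   # ver/type, code, session ID, length

def pppoe_session_lookup(frame: bytes, session_table: dict):
    """Return (UserId, remaining payload) for a PPPoE session frame."""
    pppoe = frame[ETH_HDR_LEN:ETH_HDR_LEN + PPPOE_HDR_LEN]
    _ver_type, _code, session_id, _length = struct.unpack("!BBHH", pppoe)
    userid = session_table[session_id]   # -> (CardId, PPUID, IPE CID)
    payload = frame[ETH_HDR_LEN + PPPOE_HDR_LEN:]
    return userid, payload
```

A separate `session_table` can be kept per PHY, selected by the LC CID, as the following bullet notes.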
  • the LC CID only identifies the PHYID that the packet was received on. Therefore, when the LC PI identifies an MPLS packet, the top of stack label must be captured in order to identify the user. For each Ethernet PHY, the LC PPU must maintain a table of MPLS LSPs. The LC CID selects which table, and the top of stack label is used to index into the table. For small MPLS users, the Primary UserId that corresponds to the LSP can then be read from the table. For large users, however, a similar process to the one described above for IP is used. A hash is calculated over the source and destination IP addresses of the packet and used to select either the Primary UserId or one of several Secondary UserIds. The selected UserId determines where the packet is sent as well as the IPE CID that is sent in the PIE Header of the packet.
  • the Ethernet header is removed before the packet is forwarded to the IPE, and the Initial PID is set to indicate an MPLS packet.
  • Ethernet protocols other than IP, MPLS, and PPPoE Session.
  • the LC PPU uses the LC CID (which is really just the PHYID) as an index into a software PHY table.
  • This table provides the Primary UserId, which determines where the packet is sent as well as the IPE CID that is sent in the PIE Header of the packet.
  • these packets are sent to the IPE PPU identified by the Primary UserId. No distribution is performed for these packets.
  • the Initial PID is set to indicate an Ethernet packet.
  • Line cards perform all the traffic shaping for the system.
  • Sent in the Cell Header to allow the LC to reassemble the packet. This is simply the identification of the IPE card (in the least significant 8 bits) and the priority (class) in the most significant two bits.
  • the priority MUST be the same as is specified in the Destination FlowId of this packet.
  • Initial PID: This 3 bit field tells the LC the type of encapsulation this packet has. The choices are: IPC (for inter-processor communications), IP, Ethernet, PPP, or MPLS.
  • Destination PPUID: the PPU identifier of the LC PPU that the packet is being sent to.
  • the Destination PPUID selects the LC PPU that will process the packet.
  • the LC CID is used by the LC PPU as an index into a software connection table. This connection table provides the shaping parameters, any additional encapsulation that must be added by the LC, the PHYID, and the ATM VPI/VCI for the packet.
  • the priority (one of four classes) is based on the two most significant bits of the Source FlowId in the Cell Header. The priority is used by the Traffic Shaper and the Scheduler to determine when to forward the packet to the PHY.
  • the ATM cell headers and the AAL5 trailer and padding are always added (by the PM) before forwarding the packet to the PHY card.
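The AAL5 padding added by the PM follows a fixed rule: the payload plus the 8-byte CPCS trailer must fill an integral number of 48-byte ATM cell payloads (per ITU-T I.363.5). A minimal sketch of that arithmetic, omitting the trailer's CRC for brevity:

```python
# Sketch of the AAL5 framing step: compute how many padding bytes are
# needed so that payload + pad + 8-byte trailer is a multiple of the
# 48-byte ATM cell payload. Only the padding arithmetic is shown.

AAL5_TRAILER = 8    # CPCS-UU, CPI, length, CRC-32
CELL_PAYLOAD = 48   # bytes of payload per ATM cell

def aal5_pad_length(payload_len: int) -> int:
    """Bytes of zero padding inserted before the AAL5 trailer."""
    return (-(payload_len + AAL5_TRAILER)) % CELL_PAYLOAD
```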
  • the desired encapsulation for the packet can be either IP/PPP/PPPoE/Ethernet/ATM, IP/PPP/ATM or IP/ATM.
  • the PPU can determine which it is from the connection table. If the encapsulation should be IP/PPP/PPPoE/Ethernet/ATM, the connection table will provide the necessary information to add the missing headers. If the encapsulation should be IP/PPP/ATM, a PPP header identifying the protocol as IP is added. Also, the entry in the connection table may specify that an LLC header should also be added.
  • the connection table may specify that an LLC header should be added to the beginning of the packet. Otherwise the packet is sent as is.
  • the desired encapsulation may be either PPP/PPPoE/Ethernet/ATM or PPP/ATM.
  • the PPU can determine which it is from the connection table. If it is PPP/ATM, the packet is sent as is, otherwise, the connection table will provide the necessary information to add a PPPoE header and an Ethernet Header.
  • the VPI/VCI is obtained from the connection table.
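The PPPoE/Ethernet re-encapsulation step above can be sketched as follows. The PPPoE header fields follow RFC 2516 and EtherType 0x8864 marks a PPPoE session frame; where the MAC addresses and Session ID come from (here, function arguments standing in for connection-table entries) is an assumption for illustration.

```python
# Sketch of re-adding encapsulation from connection-table data for the
# PPP/PPPoE/Ethernet case: prepend a PPPoE session header (RFC 2516)
# and an Ethernet header to a PPP packet.
import struct

def add_pppoe_ethernet(ppp_packet: bytes, dst_mac: bytes, src_mac: bytes,
                       session_id: int) -> bytes:
    # PPPoE session header: ver/type 0x11, code 0, session ID, length.
    pppoe = struct.pack("!BBHH", 0x11, 0x00, session_id, len(ppp_packet))
    # Ethernet header with the PPPoE session EtherType 0x8864.
    eth = dst_mac + src_mac + struct.pack("!H", 0x8864)
    return eth + pppoe + ppp_packet
```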
  • the following information is received from the IPE PPU along with the packet payload. In the CSIX Header: the Destination FlowId.
  • This 3 bit field tells the LC the type of encapsulation this packet has.
  • the choices are: IPC (for inter-processor communications), IP, PPP, or MPLS.
  • Destination PPUID: the PPU identifier of the LC PPU that the packet is being sent to.
  • the Destination PPUID selects the LC PPU that will process the packet.
  • the LC CID is used by the LC PPU as an index into a software connection table. This connection table provides the shaping parameters, and the PHYID for the packet.
  • a PPP header identifying the packet as an IP packet is added.
  • a PPP header identifying the packet as a MPLS packet is added.
  • This 3 bit field tells the LC the type of encapsulation this packet has.
  • the choices are: IPC (for inter-processor communications), Ethernet, PPP, IP, or MPLS
  • Destination PPUID: the PPU identifier of the LC PPU that the packet is being sent to.
  • the Destination PPUID selects the LC PPU that will process the packet.
  • the LC CID is used by the LC PPU as an index into a software connection table. This connection table provides the shaping parameters, and the PHYID for the packet.
  • IP/Ethernet packets are sent using this type because the IPE, not the LC, implements ARP, and therefore adds the Ethernet header to all IP packets before sending them to the LC.
  • the desired encapsulation is PPP/PPPoE/Ethernet
  • the connection table provides the necessary information to add a PPPoE header and an Ethernet header.
  • the desired encapsulation is IP/PPP/PPPoE/Ethernet
  • a PPP header indicating an IP packet is added.
  • the connection table then provides the necessary information to add a PPPoE header and an Ethernet header.
  • the connection table provides the information needed to add an Ethernet header (the destination MAC address is all that is required from the connection table).
  • All packets received by an IPE card from the Line Cards (or from other IPEs) will be of one of the following types.
  • the Initial PID field in the PIE Header will identify which one of these types each packet corresponds to. If there are more than 8 such types, the Initial Stage field in the PIE Header can be used to select a different stage to begin inspection, each of which allows 8 additional protocols to be identified by the Initial PID field.
  • the IPE CID and PPUID in the PIE Header of the received packet combine with the FlowId to give the UserId. Only the least significant 18 of the 20 bits of the IPE CID are used.
  • the PI should be programmed to capture these packets to a PPU (as specified in the Destination PPU field in the PIE Header) in their entirety.
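The Initial PID / Initial Stage mechanism above can be sketched as a two-level dispatch: the 3-bit Initial PID names one of up to 8 protocols, and the Initial Stage selects which set of 8 meanings applies. The specific stage and PID assignments below are illustrative assumptions, not the patent's actual encoding.

```python
# Sketch of protocol identification from the PIE Header: each inspection
# stage gives the 3-bit Initial PID another set of up to 8 meanings.
# The assignments here are hypothetical, for illustration only.

STAGE_PROTOCOLS = {
    0: ["IPC", "IP", "PPP", "MPLS", "Ethernet"],  # assumed first stage
    1: ["L2TP", "IPsec"],                         # hypothetical extra stage
}

def initial_protocol(initial_stage: int, initial_pid: int) -> str:
    """Resolve (Initial Stage, Initial PID) to a protocol name."""
    if not 0 <= initial_pid <= 7:
        raise ValueError("Initial PID is a 3-bit field")
    return STAGE_PROTOCOLS[initial_stage][initial_pid]
```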
  • IP/ATM
  • IP/PPP/ATM
  • IP/PPP/SONET
  • IP/PPP/PPPoE/Ethernet
  • IP/PPP/PPPoE/Ethernet/ATM.
  • the IPE CID uniquely identifies the PPPoE Session ID, or the ATM Virtual Circuit that the packet was received on, as well as the PHY/LC that it was received on.
  • the IPE CID will identify only the PHY/LC that the packet was received on, that is, it will be constant for all IP/PPP/SONET packets received from a particular PHY/LC.
  • This category consists of all PPP packets received whose PPP protocol type was not IP or MPLS. These packets can come from a POS LC, an ATM LC, or an Ethernet LC. For those PPP sessions that will be tunneled using L2TP, the IPE must add a new PPP header to the IP/PPP and MPLS/PPP packets, since for those protocols, the PPP header will have been removed by the Line Card.
  • PPP/SONET
  • PPP/ATM
  • PPP/PPPoE/Ethernet
  • PPP/PPPoE/Ethernet/ATM.
  • the IPE CID uniquely identifies the PPPoE Session ID, or the ATM Virtual Circuit that the packet was received on, as well as the PHY/LC that it was received on. In the case of PPP/SONET, the IPE CID will identify only the PHY/LC that the packet was received on, that is, it will be constant for all PPP/SONET packets received from a particular PHY/LC.
  • ARP/Ethernet
  • IP/Ethernet
  • PPPoE Discovery/Ethernet
  • ARP/Ethernet/ATM
  • IP/Ethernet/ATM
  • PPPoE Discovery/Ethernet/ATM.
  • For Ethernet/ATM, the IPE CID uniquely identifies the ATM Virtual Circuit that the packet was received on as well as the PHY/LC that it was received on. In the case of Native Ethernet, the IPE CID will identify only the PHY/LC that the packet was received on, that is, it will be constant for all packets received from a particular PHY/LC.
  • This category consists of packets which begin with an MPLS label stack. These can come from a POS LC, an ATM LC or an Ethernet LC.
  • MPLS/PPP/SONET
  • MPLS/Ethernet
  • MPLS/ATM
  • the Line Card will have replaced the top of stack shim label with the real label because the real label was encoded as the ATM VPI/VCI in the packet received from the network.
  • MPLS/PPP/PPPoE/Ethernet
  • MPLS/PPP/ATM
  • MPLS/PPP/PPPoE/Ethernet/ATM
  • MPLS/Ethernet/ATM.
  • the IPE CID uniquely identifies the incoming top of stack MPLS label, as well as the PHY/LC that it was received on.
  • the top of stack label has a one to one correspondence with the ATM Virtual Circuit that the packet was received on.
  • the following table shows the first two layers of protocols that must be identified by the PI on the IPE for each packet that passes through it.
  • All IP packets received by the IPE will fall into one of two categories: those for which the destination IP address is equal to one of the addresses of the IPE, and those for which it isn't. In the case of the latter, the packet must be forwarded or discarded by the PPU. But for the former, it must be determined whether or not the packet can be processed entirely by the PPU, or whether it must be sent to the MPPU for further processing. If it must be sent to the MPPU, it must be captured in its entirety.
  • All IP packets received, regardless of their encapsulation, must have their destination IP address captured and examined. All routing table searches are performed by the IPE cards. If the destination address is one of the system's IP addresses, but not one of the IPE card's addresses, the packet must be forwarded with the Destination PPU bit set.
  • Each L2TP tunnel is handled entirely by a particular IPE card.
  • Each session within the tunnel must be handled entirely by a particular PPU. This requirement comes primarily from the need to support sequence numbers on the data sessions:
  • RFC-2661 "Each peer maintains separate sequence numbers for the control connection and each individual data session within a tunnel. "
  • LAC: L2TP Access Concentrator.
  • Any PPP user can be selected for L2TP tunneling by the IPE MPPU. If a user is selected for tunneling, then the PPU receiving PPP packets from that user must encapsulate those packets, first with an L2TP header, then a UDP header, and finally an IP header.
  • the IP header's destination address will be that of the configured LNS, and the source address will be one of the IP addresses of the IPE.
  • the resulting IP packet can then be forwarded using the standard IP forwarding procedure to the appropriate Line Card for transmission. It should be evident that tunneled PPP users on different IPE cards will be placed in separate tunnels even if being tunneled to the same destination LNS.
  • IP packets received from the LNS will be sent by the receiving Line Card to the IPE PPU associated with the ingress interface (user).
  • This PPU may well be on a different IPE card than the one handling the tunnel. This is easily determined from the destination IP address of the packet. In this case, the PPU receiving the packet from the Line Card must forward the packet to the IPE card handling the tunnel.
  • the L2TP Session ID can be used to identify which PPU on that IPE card should receive the packet (this PPUID must be sent in the PIE Header so that the receiving PI will know which PPU should receive the packet). This is done by always encoding the PPUID of the PPU handling a particular session in the most significant four bits of the L2TP Session ID.
  • the PPU to which the packet is sent can in turn de-encapsulate the PPP packet and forward it to the PPP user identified by the L2TP Session ID.
  • LNS: L2TP Network Server.
  • L2TP packets received from the LAC will be forwarded, either by a Line Card or another IPE, to the IPE handling the tunnel. This is because the destination IP address of the packet will be equal to one of the IP addresses of the IPE handling the tunnel.
  • the PPU that should process the L2TP session is identified using the most significant four bits of the L2TP Session ID.
  • the PPU will de-encapsulate the PPP packet, then process the PPP packet as if it was received from a PPP user. From this point on, the processing is the same as for a "real" PPP user.
  • packets which, when their destination IP address is looked up in the routing table, yield a destination PPP user that is associated with a L2TP tunnel instead of with a Line Card, must be sent to the IPE PPU handling the PPP user. This is because of the sequence number requirement of L2TP mentioned above.
  • Once received by this PPU, the packet must have a PPP header added, as is the case with a normal PPP user.
  • a L2TP header is added, followed by a UDP header and an IP header.
  • the IP destination address is that of the LAC at the other end of the tunnel.
  • the resulting IP packet can then be forwarded using the standard IP forwarding procedure to the appropriate Line Card for transmission.
  • SA: IPsec Security Association.
  • a Security Association is a unidirectional, "simplex" connection that provides security services to the traffic carried by it.
  • Every PPU must have a copy of the SPD for every user from which it receives packets.
  • For every UserId (Primary or Secondary) that points to a particular IPE PPU, the PPU must have a pointer to an SPD. If a user's traffic is split among multiple PPUs (i.e., a large user), then they should have identical SPDs configured for the user, and each will create its own set of Security Associations for its share of the user's traffic. Every packet received must be processed using the SPD of the user the packet is received from.
  • Tunneled packets:
  • the SPI is the field in the IPsec header that, along with the destination IP address, identifies the SA. Traffic from a small user will always be directed by the receiving Line Card to a particular PPU.
  • This PPU uses the SPI to identify the SA, and thus has access to the information it needs to decapsulate the packet.
  • the Line Card must detect IPsec packets whose IP destination address is one of the addresses that belongs to the IPE card identified by the user's Primary UserId. Rather than select a UserId (primary or secondary) based on the hash of the source and destination IP addresses of the packet, the LC must use the SPI in the IPsec header to select the UserId, and thus the IPE PPU, to send the packet to.
  • the most significant 4 bits of an SPI always contain the PPUID identifying the PPU that is handling the SA identified by that SPI.
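The SPI-based selection above can be sketched as follows. The ESP header layout (32-bit SPI in the first four bytes) follows RFC 4303; treating the top four SPI bits as the handling PPUID is the convention this section describes, and the function name is an illustrative assumption.

```python
# Sketch of the LC-side rule for inbound IPsec: instead of hashing the
# source/destination addresses, extract the SPI (first 4 bytes of the
# ESP header, per RFC 4303) and read the handling PPUID from its most
# significant 4 bits.

def ppu_from_ipsec(esp_header: bytes) -> int:
    """Return the PPUID encoded in the top 4 bits of the SPI."""
    spi = int.from_bytes(esp_header[:4], "big")
    return spi >> 28
```

This keeps all packets of one SA on the one PPU that holds that SA's state, which a hash over addresses could not guarantee.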
  • the difficulty with outbound processing is that, as discussed earlier, the configuration information (and thus the SPD) associated with the egress user is not readily available.
  • the information must be requested from the PPU identified by the Primary UserId and stored in a cache. Each PPU sending to a user will thus create its own set of Security Associations.
  • the IPE card PPUs perform routing table searches for all packets that need forwarding.
  • the global Forwarding Information Base (FIB) is distributed to every PPU in the system, and contains IP unicast and multicast routing tables in a form that facilitates longest matching prefix searches (i.e., Patricia tries), as well as tables required for MPLS label based forwarding.
  • the Primary UserId identifies the IPE PPU that maintains the configuration and state information for the user.
  • the LCUserId contains the CardId of the Line Card that the packet must be forwarded to, as well as the PPUID and CID that should be sent in the PIE header of the packet to that Line Card.
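The longest-matching-prefix search the FIB supports can be sketched as follows. A real Patricia trie would avoid scanning every prefix; the linear dict-based version below is a deliberately simplified illustration of the same selection rule, with assumed next-hop values.

```python
# Sketch of a longest-matching-prefix search such as the FIB's Patricia
# trie performs. A linear scan over a dict of prefixes is shown for
# clarity; it selects the same route a trie would, just more slowly.
import ipaddress

def longest_prefix_match(fib: dict, dst_ip: str):
    """fib maps ipaddress.IPv4Network prefixes to next-hop info;
    return the next hop of the longest prefix covering dst_ip."""
    addr = ipaddress.ip_address(dst_ip)
    best = None
    for prefix, nexthop in fib.items():
        if addr in prefix and (best is None or
                               prefix.prefixlen > best[0].prefixlen):
            best = (prefix, nexthop)
    return best[1] if best else None
```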

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Quality & Reliability (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

A broadband network mid-server provides high-speed, reliable, flexible, high-bandwidth, easy-to-manage Internet access, supporting all current Internet services such as e-mail, file transfer, Web browsing and electronic commerce, as well as new enhanced services such as VoIP and real-time video. The preferred server is scalable in both bandwidth and processing power. The server is capable of distributing traffic among a plurality of Internet processing engines and, more specifically, among several protocol processing units provided in each engine (for which the bandwidth can be adapted), to produce the computing power and search space necessary to perform per-user processing for a large number of users.
EP01908601A 2000-03-03 2001-01-11 Serveur de milieu de reseaux a large bande Withdrawn EP1260067A1 (fr)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US51857500A 2000-03-03 2000-03-03
US518575 2000-03-03
US51852600A 2000-03-04 2000-03-04
US518526 2000-03-04
PCT/US2001/001003 WO2001067694A1 (fr) 2000-03-03 2001-01-11 Serveur de milieu de reseaux a large bande

Publications (1)

Publication Number Publication Date
EP1260067A1 true EP1260067A1 (fr) 2002-11-27

Family

ID=27059497

Family Applications (1)

Application Number Title Priority Date Filing Date
EP01908601A Withdrawn EP1260067A1 (fr) 2000-03-03 2001-01-11 Serveur de milieu de reseaux a large bande

Country Status (3)

Country Link
EP (1) EP1260067A1 (fr)
AU (1) AU2001236450A1 (fr)
WO (1) WO2001067694A1 (fr)

Families Citing this family (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7072332B2 (en) * 2001-09-27 2006-07-04 Samsung Electronics Co., Ltd. Soft switch using distributed firewalls for load sharing voice-over-IP traffic in an IP network
DE10147750A1 (de) 2001-09-27 2003-04-17 Siemens Ag Vorrichtung und Verfahren zur Vermittlung einer Mehrzahl von Signalen unter Verwendung einer mehrstufigen Protokollverarbeitung
EP1298940A3 (fr) 2001-09-27 2007-06-13 Alcatel Canada Inc. Système et méthode de configuration d'un élément de réseau
JP3891945B2 (ja) * 2002-05-30 2007-03-14 株式会社ルネサステクノロジ パケット通信装置
US8010405B1 (en) 2002-07-26 2011-08-30 Visa Usa Inc. Multi-application smart card device software solution for smart cardholder reward selection and redemption
US20040131072A1 (en) * 2002-08-13 2004-07-08 Starent Networks Corporation Communicating in voice and data communications systems
US8660427B2 (en) 2002-09-13 2014-02-25 Intel Corporation Method and apparatus of the architecture and operation of control processing unit in wavelenght-division-multiplexed photonic burst-switched networks
US8626577B2 (en) 2002-09-13 2014-01-07 Visa U.S.A Network centric loyalty system
US9852437B2 (en) 2002-09-13 2017-12-26 Visa U.S.A. Inc. Opt-in/opt-out in loyalty system
US8015060B2 (en) 2002-09-13 2011-09-06 Visa Usa, Inc. Method and system for managing limited use coupon and coupon prioritization
US7848649B2 (en) 2003-02-28 2010-12-07 Intel Corporation Method and system to frame and format optical control and data bursts in WDM-based photonic burst switched networks
US7298973B2 (en) * 2003-04-16 2007-11-20 Intel Corporation Architecture, method and system of multiple high-speed servers to network in WDM based photonic burst-switched networks
US7266295B2 (en) 2003-04-17 2007-09-04 Intel Corporation Modular reconfigurable multi-server system and method for high-speed networking within photonic burst-switched network
US7827077B2 (en) 2003-05-02 2010-11-02 Visa U.S.A. Inc. Method and apparatus for management of electronic receipts on portable devices
US7272310B2 (en) 2003-06-24 2007-09-18 Intel Corporation Generic multi-protocol label switching (GMPLS)-based label space architecture for optical switched networks
US8554610B1 (en) 2003-08-29 2013-10-08 Visa U.S.A. Inc. Method and system for providing reward status
US7051923B2 (en) 2003-09-12 2006-05-30 Visa U.S.A., Inc. Method and system for providing interactive cardholder rewards image replacement
US8407083B2 (en) 2003-09-30 2013-03-26 Visa U.S.A., Inc. Method and system for managing reward reversal after posting
US8005763B2 (en) 2003-09-30 2011-08-23 Visa U.S.A. Inc. Method and system for providing a distributed adaptive rules based dynamic pricing system
US7653602B2 (en) 2003-11-06 2010-01-26 Visa U.S.A. Inc. Centralized electronic commerce card transactions
US8838743B2 (en) 2004-02-13 2014-09-16 Intel Corporation Apparatus and method for a dynamically extensible virtual switch
DE102007003258B4 (de) * 2007-01-23 2008-08-28 Infineon Technologies Ag Verfahren zur Datenübermittlung in einer Sprachkommunikations-Linecard, Sprachkommunikations-Linecard und Signalverarbeitungsprozessor für eine Sprachkommunikations-Linecard
US7920557B2 (en) 2007-02-15 2011-04-05 Harris Corporation Apparatus and method for soft media processing within a routing switcher
US20110145082A1 (en) 2009-12-16 2011-06-16 Ayman Hammad Merchant alerts incorporating receipt data
US8429048B2 (en) 2009-12-28 2013-04-23 Visa International Service Association System and method for processing payment transaction receipts
WO2017009461A1 (fr) * 2015-07-15 2017-01-19 Lantiq Beteiligungs-GmbH & Co.KG Procédé et dispositif de traitement de paquets

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
DE69129851T2 (de) * 1991-09-13 1999-03-25 Ibm Konfigurierbare gigabit/s Vermittlunganpassungseinrichtung
JP3149845B2 (ja) * 1998-03-20 2001-03-26 日本電気株式会社 Atm通信装置
AU5567499A (en) * 1998-08-17 2000-03-06 Vitesse Semiconductor Corporation Packet processing architecture and methods

Non-Patent Citations (1)

Title
See references of WO0167694A1 *

Also Published As

Publication number Publication date
AU2001236450A1 (en) 2001-09-17
WO2001067694A1 (fr) 2001-09-13
WO2001067694A9 (fr) 2002-01-10

Similar Documents

Publication Publication Date Title
WO2001067694A1 (fr) Serveur de milieu de reseaux a large bande
US6611522B1 (en) Quality of service facility in a device for performing IP forwarding and ATM switching
US7151744B2 (en) Multi-service queuing method and apparatus that provides exhaustive arbitration, load balancing, and support for rapid port failover
US6195355B1 (en) Packet-Transmission control method and packet-transmission control apparatus
US7369568B2 (en) ATM-port with integrated ethernet switch interface
Aweya IP router architectures: an overview
US5467349A (en) Address handler for an asynchronous transfer mode switch
US5809024A (en) Memory architecture for a local area network module in an ATM switch
WO1998036608A2 (fr) Procede et appareil destines au multiplexage de donnees provenant d'usagers multiples d'un meme circuit virtuel
WO2000056113A1 (fr) Commutateur internet et procede afferent
Byrne et al. Evolution of metropolitan area networks to broadband ISDN
US20020159391A1 (en) Packet-transmission control method and packet-transmission control apparatus
US6952420B1 (en) System and method for polling devices in a network system
US6810039B1 (en) Processor-based architecture for facilitating integrated data transfer between both atm and packet traffic with a packet bus or packet link, including bidirectional atm-to-packet functionally for atm traffic
Tomonaga IP router for next-generation network
EP0905994A2 (fr) Procédé et dispositif de commande pour la transmission par paquets
KR20020069578A (ko) 인터넷 프로토콜을 사용하는 네트워크에서 서비스 품질우선순위를 지원하는 전송 시스템 및 방법
JPH09181726A (ja) Atmネットワークで接続を結合する方法及びシステム
Aoki et al. Next generation carriers Internet backbone node architecture (MSN Type-X)
Ojesanmi Asynchronous Transfer Mode (ATM) Network.
Durresi et al. Asynchronous Transfer Mode (ATM)
Gebali et al. Switches and Routers
Subramanian Frame Relay Networks-a survey
Chorafas et al. Appreciating the Implementation of Asynchronous Transfer Mode (ATM)

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20020923

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR

AX Request for extension of the european patent

Free format text: AL;LT;LV;MK;RO;SI

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20050802