WO2001031969A1 - Ethernet edge switch for cell-based networks


Info

Publication number
WO2001031969A1
Authority
WO
WIPO (PCT)
Prior art keywords
cell
ethernet
type
processor
atm
Application number
PCT/US2000/029350
Other languages
French (fr)
Other versions
WO2001031969A9 (en)
Inventor
Michael S. Cohen
Jaison Joseph
Harry J. Jones
Narayanan Bhattathiripad
T. Padmajyothi
Lawrence G. Roberts
Vinayak Bhat
Original Assignee
E-Cell Technologies
Application filed by E-Cell Technologies
Priority to AU12296/01A (AU1229601A)
Publication of WO2001031969A1
Publication of WO2001031969A9


Classifications

    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 12/5601 Transfer mode dependent, e.g. ATM
    • H04L 49/351 Switches specially adapted for specific applications for local area network [LAN], e.g. Ethernet switches
    • H04M 11/062 Simultaneous speech and data transmission using different frequency bands for speech and other data
    • H04Q 11/04 Selecting arrangements for multiplex systems for time-division multiplexing
    • H04Q 11/0478 Provisions for broadband connections
    • H04L 2012/561 Star topology, e.g. cross-connect, concentrator, subscriber group equipment, remote electronics
    • H04L 2012/5617 Virtual LANs; Emulation of LANs
    • H04L 2012/5618 Bridges, gateways [GW] or interworking units [IWU]
    • H04L 2012/5635 Backpressure, e.g. for ABR
    • H04L 2012/5651 Priority, marking, classes
    • H04L 2012/5665 Interaction of ATM with other protocols
    • H04L 49/205 Quality of Service based
    • H04Q 2213/13039 Asymmetrical two-way transmission, e.g. ADSL, HDSL
    • H04Q 2213/1305 Software aspects
    • H04Q 2213/13097 Numbering, addressing
    • H04Q 2213/13103 Memory
    • H04Q 2213/13106 Microprocessor, CPU
    • H04Q 2213/13109 Initializing, personal profile
    • H04Q 2213/13162 Fault indication and localisation
    • H04Q 2213/13164 Traffic (registration, measurement, ...)
    • H04Q 2213/13166 Fault prevention
    • H04Q 2213/1319 Amplifier, attenuation circuit, echo suppressor
    • H04Q 2213/13199 Modem, modulation
    • H04Q 2213/13204 Protocols
    • H04Q 2213/13216 Code signals, frame structure
    • H04Q 2213/13248 Multimedia
    • H04Q 2213/1329 Asynchronous transfer mode, ATM
    • H04Q 2213/13292 Time division multiplexing, TDM
    • H04Q 2213/13299 Bus
    • H04Q 2213/1332 Logic circuits
    • H04Q 2213/13322 Integrated circuits
    • H04Q 2213/1334 Configuration within the switch
    • H04Q 2213/13389 LAN, internet

Definitions

  • the present invention relates to a system which provides end-to-end Quality of Service (QoS) associated with ATM networks.
  • QoS Quality of Service
  • the present invention is directed to an Ethernet-type switch which preserves ATM QoS down to an Ethernet end-station.
  • LANs Local area networks
  • Ethernet-type networks have dominated the LAN market and have been continually enhanced (e.g., switched Ethernet, Fast Ethernet, and/or Gigabit Ethernet) to keep pace with the bandwidth intensive multimedia applications.
  • a user at a remote site 101 has traditionally been able to access her/his office 119, which includes accessing an office local area network (LAN) 119b, through a dial-up connection over a 33 Kbps or 56 Kbps modem 101b.
  • the dial-up connection is handled by a telephone central office (CO) 105 through a voice switch 107, which switches the "data" call through a public switched telephone network (PSTN) 111.
  • CO telephone central office
  • PSTN public switched telephone network
  • the voice switch 123 switches the call to the subscriber; in this case, the called line is associated with a modem in a modem pool 119a.
  • the end user at her/his remote site 101 can access the computing resources in his office 119. These resources include a multimedia server 119c and a PC 119d of the remote user.
  • a similar connection to Internet 115 by a user at a remote site 101 can be accomplished by connecting to an Internet Service Provider (ISP) 117 instead of modem pool 119a.
  • ISP Internet Service Provider
  • telecommuting from a remote office or accessing multimedia information from home over the Internet imposes an enormous strain on networking resources. It is common knowledge that the networking infrastructure is the bottleneck to the expedient transfer of information, especially bandwidth intensive multimedia data.
  • ATM Asynchronous Transfer Mode
  • An ATM network 113 is typically able to provide bandwidths to an ATM user at approximately 1.5 Mbps on a T1 line, 44.7 Mbps on a T3 line, and 155 Mbps over a fiber optic OC-3c line. Consequently, ATM networks are suitable to transport multimedia information.
  • ATM further provides a mechanism for establishing quality of service (QoS) classes during the virtual channel setup, thereby allotting a predetermined amount of bandwidth to the channel.
  • QoS classes define five broad categories that are outlined, for example, by the ATM Forum's UNI 3.0/3.1 specification. Class 1 specifies performance requirements and indicates that ATM's quality of service should be comparable with the service offered by standard digital connections. Class 2 specifies necessary service levels for packetized video and voice. Class 3 defines requirements for interoperability with other connection- oriented protocols, particularly frame relay. Class 4 specifies interoperability requirements for connectionless protocols, including IP, IPX, and SMDS. Class 5 is effectively a "best effort" attempt at delivery; it is intended for applications that do not require guarantees of service quality.
  • ATM networks carry fixed bandwidth services required for multimedia applications (constant bit rate (CBR) traffic) and guaranteed bandwidth services for high-priority data applications (variable bit rate (VBR) traffic).
  • CBR constant bit rate
  • VBR variable bit rate
  • the ATM Forum refers to services that make use of this otherwise idle bandwidth as available bit rate (ABR) services.
  • ABR flow control is an ATM layer service category for which the limiting ATM layer transfer characteristics provided by the network may change after establishing the network connection.
  • a flow control mechanism is specified which supports several types of feedback to control the source rate in response to changing ATM layer transfer characteristics. When the network becomes congested, the end-stations outputting ABR traffic are instructed to reduce their output rate.
  • the source (e.g., a user remote site 103) of a virtual circuit (VC) indicates the desired rate in a resource management cell (RM cell).
  • An RM cell is a standard 53-byte ATM cell used to transmit flow-control information.
  • the RM cell travels on the VC about which it carries information, and is therefore allowed to flow all the way to the destination end-station (e.g., PC 119d).
  • the destination reflects the RM cell, with an indicator to show that the RM cell is now making progress in the reverse direction.
  • the intermediate switches (e.g., switch 109) then identify within the reverse RM cell their respective maximum rates (the explicit rate allocated to the VC). After the source receives the reverse RM cell, the smallest rate identified in the reverse RM cell is then used for subsequent transmissions until a new reverse RM cell is received.
  • ATM has many recognized advantages and has dominated wide area networks (WANs) as the preferred backbone transport technology. Because of cost and performance factors, ATM faces stiff competition from both switched and shared-media high-speed LAN technologies, including Ethernet, Fast Ethernet, and Gigabit Ethernet. And although ATM typically offers QoS guarantees superior to the prioritization schemes of competing high-speed technologies, many users remain unable to take advantage of these features. If a remote user wishes to obtain the advantages of ATM, one solution would be to acquire an ATM switch on the premises as shown in Figure 1A. The remote site 103 would need to be equipped with an ATM switch 103a, whereby a PC 103b interfaces the ATM switch 103a via an ATM NIC 103c.
  • WANs wide area networks
  • the remote user would have to lease a T1 line or an OC-3c pipe from the Telco.
  • the leased line would terminate in an ATM switch 109 in the CO 105.
  • the CO ATM switch 109 is connected to the ATM network 113.
  • the remote user may quickly access multimedia information on the Internet by establishing a virtual channel that would terminate at ATM switch 125 in CO 121.
  • the CO 121 would of course have some means of communication with the ISP 117; typically routers (not shown) are used.
  • Figure IB illustrates an ATM to the desktop solution whereby the xDSL technology is utilized to extend ATM capability remotely.
  • a PC 103b is equipped with an ATM NIC 103c, which is attached to an xDSL modem 103d.
  • a telephone set 103e is linked to the xDSL modem 103d.
  • the xDSL modem is connected over twisted pair copper wire to the CO 105, terminating at the POTS splitter 117.
  • the POTS splitter 117 separates the data signals originating from the PC 103b from the voice signals.
  • an xDSL multiplexer (mux) 115 receives the data signals from the POTS splitter and uplinks these signals to the ATM switch 105.
  • Ethernet-type LANs constitute nearly all of the networking resources of business and residential users.
  • these legacy systems are still being enhanced and marketed; e.g., switched Ethernet, switched Fast Ethernet, and switched Gigabit Ethernet are significantly lower in cost than their ATM counterparts.
  • ATM technology requires a substantial investment in infrastructure, from cable plant to switches to network interface cards (NICs). This tremendous investment cost can be sustained in the wide area network (WAN) where costs can be spread out.
  • WAN wide area network
  • the investment in infrastructure is typically unsustainable which translates into retention of "legacy" LANs such as Ethernet.
  • One apparent disadvantage is the inability to ensure an end-to-end quality of service regarding the transmission of the multimedia information.
  • Another disadvantage of conventional systems is a lack of real-time, rate- based, flow control which can provide congestion management.
  • the Ethernet switch performs cell-to-Ethernet frame conversion, and vice versa.
  • the Ethernet-type switch has a multi-processor architecture that eliminates the high development cost associated with application-specific integrated circuit (ASIC) processors.
  • ASIC application specific integrated circuits
  • Such a switch facilitates the creation of a communications network that supports both native ATM applications, such as those based on the Winsock 2.0 Application Programming Interface (API), and traditional IP-based applications.
  • When Winsock 2.0-enabled applications are utilized, ATM cells are encapsulated into variable-length Ethernet frames; up to 31 ATM cells can be encapsulated into a single Ethernet frame. In so doing, ATM QoS and ABR/ER flow control are preserved, facilitating the timely transport of delay-sensitive multimedia traffic.
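
The arithmetic behind the 31-cell limit is straightforward if, as in Cells in Frames, the encapsulated cells share one small common header so that each cell contributes only its 48-byte payload: 31 payloads plus a few header bytes (31 × 48 = 1488) still fit within the 1500-byte Ethernet payload. The sketch below illustrates this packing; the 4-byte shared header and the function names are placeholders for illustration, not the exact layout defined by the CIF 1.0 specification.

```c
/* Sketch: packing ATM cell payloads into one Ethernet frame, CIF-style.
 * The shared 4-byte header is an illustrative placeholder; the exact
 * format is defined by the Cells in Frames 1.0 specification. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define ETH_MTU             1500
#define ATM_PAYLOAD_BYTES   48          /* payload carried per cell          */
#define CIF_HDR_BYTES       4           /* placeholder shared CIF/ATM header */
#define MAX_CELLS_PER_FRAME ((ETH_MTU - CIF_HDR_BYTES) / ATM_PAYLOAD_BYTES)  /* 31 */

/* Pack up to 'count' 48-byte payloads (all from the same VC) into 'frame'.
 * Returns the number of bytes written, or 0 if the request does not fit. */
static size_t cif_pack(uint8_t *frame, size_t frame_len,
                       uint8_t payloads[][ATM_PAYLOAD_BYTES],
                       unsigned count, uint32_t vc_header)
{
    if (count == 0 || count > MAX_CELLS_PER_FRAME)
        return 0;
    size_t need = CIF_HDR_BYTES + (size_t)count * ATM_PAYLOAD_BYTES;
    if (need > frame_len)
        return 0;
    memcpy(frame, &vc_header, CIF_HDR_BYTES);        /* shared header, once  */
    for (unsigned i = 0; i < count; i++)
        memcpy(frame + CIF_HDR_BYTES + i * ATM_PAYLOAD_BYTES,
               payloads[i], ATM_PAYLOAD_BYTES);
    return need;
}

int main(void)
{
    uint8_t payloads[MAX_CELLS_PER_FRAME][ATM_PAYLOAD_BYTES] = {{0}};
    uint8_t frame[ETH_MTU];
    size_t n = cif_pack(frame, sizeof frame, payloads,
                        MAX_CELLS_PER_FRAME, 0x00112233u);
    printf("max cells per frame = %d, packed %zu bytes\n",
           MAX_CELLS_PER_FRAME, n);
    return 0;
}
```
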
  • the Ethernet-type frames are of a cell-encapsulated type or a non-cell-encapsulated type.
  • a receive processor is coupled to the DMA engine for encapsulating and decapsulating the Ethernet-type frames, in which the receive processor supports cut-through and store-and-forward switching.
  • a transmit processor is coupled to the DMA engine and performs traffic scheduling according to priority of the Ethernet-type frame and processing of data that are to be transmitted via one of the Ethernet-type ports.
  • a cell processor is coupled to the internal bus for processing cells and for controlling segmentation and reassembly of the cells.
  • the cell processor executes a cell driver software in support of providing cell-based quality of service.
  • At least one cell interface is coupled to the cell processor and the internal bus for receiving the cells.
  • a communication processor is coupled to the internal bus for controlling the receive and transmit processors and supporting cell-based signaling.
  • a descriptor processor is coupled to the DMA engine for buffering a plurality of descriptors. The processors communicate through a plurality of mailboxes associated with each processor, in which the receive processor further encapsulates/decapsulates the received cells.
  • Another aspect of the present invention provides a method for providing quality of service in an Ethernet-type environment.
  • the method comprises receiving a plurality of Ethernet-type frames via an Ethernet-type port; receiving a plurality of cells via a cell port; encapsulating at least one cell into one of the plurality of Ethernet-type frames; and supporting ATM quality of service and ATM traffic management capabilities.
  • FIG. 2 is a block diagram depicting detailed aspects of a switch configured in accordance with the present invention.
  • Figure 3 is a functional block diagram depicting communication between processing elements of an embodiment of the present invention.
  • Figure 4 is a block diagram depicting a hardware embodiment of the switch of the present invention.
  • Figure 5 is a graphic representation of a network embodying the system of the present invention.
  • the end-station retains its legacy Ethernet NIC but achieves ATM capability over Ethernet through use of an Ethernet edge switch which employs a multi-processor architecture to interface an Ethernet environment with an ATM infrastructure.
  • FIG. 2 provides a high level description of the Ethernet switch, which generally comprises four basic components: a cell interface 401, a switching fabric 403, an Ethernet/Cell translator 405, and an Ethernet interface 407.
  • the Ethernet interface 407 receives standard Ethernet frames.
  • the Ethernet frame format conforms with all Ethernet type formats (e.g., IEEE 802.3/802.2, Ethernet II, Novell 802.3, and IEEE 802.3/802.2 SNAP).
  • These Ethernet frames are then sent to the Ethernet/Cell translator 405, which converts Ethernet frames into cells, as well as cells into Ethernet frames.
  • the Ethernet frames to cells conversion involves segmenting the Ethernet frames and reassembling them as 53-byte cells for transport over a cell-switching backbone such as an ATM network.
  • the conversion from cells into Ethernet frames is a simpler process, whereby the fixed length cells are encapsulated by the Ethernet frame, which can extend to 1500 bytes in length.
  • the cells are switched over the switching fabric to the cell interface for output into the cell-switching network.
  • the Ethernet/Cell translator 405 does not perform frame to cell conversion, but instead by-passes the switching fabric 403 and presents the Ethernet frame to the Ethernet interface 407 to be outputted.
  • the Ethernet frames may be forced to undergo conversion into cells, thereby negating the requirement for a separate routing circuitry for the Ethernet frames; that is, the Ethernet frame is sent to its destination MAC address via the switching fabric 403.
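
As a rough illustration of the frame-to-cell direction described above, the sketch below counts how many 53-byte cells a frame of a given length occupies. It assumes AAL5-style framing (an 8-byte trailer plus padding to a 48-byte boundary); the patent does not specify which adaptation layer translator 405 uses, so the numbers are indicative only.

```c
/* Sketch: segmenting a variable-length Ethernet frame into 53-byte cells.
 * AAL5-style framing (pad + 8-byte trailer) is assumed for illustration. */
#include <stdio.h>

#define CELL_HDR      5
#define CELL_PAYLOAD 48
#define AAL5_TRAILER  8

/* Number of 53-byte cells needed to carry 'frame_len' bytes of frame data. */
static unsigned cells_for_frame(unsigned frame_len)
{
    unsigned total = frame_len + AAL5_TRAILER;            /* data + trailer    */
    return (total + CELL_PAYLOAD - 1) / CELL_PAYLOAD;     /* round up to cells */
}

int main(void)
{
    unsigned lens[] = { 64, 512, 1500 };
    for (unsigned i = 0; i < 3; i++)
        printf("%u-byte frame -> %u cells (%u bytes on the wire)\n",
               lens[i], cells_for_frame(lens[i]),
               cells_for_frame(lens[i]) * (CELL_HDR + CELL_PAYLOAD));
    return 0;
}
```
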
  • the cell interface 401 comprises cell ports (not shown) for the inputting and outputting of data cells.
  • these ports are fiber optic connections, and the cell interface 401 handles OC-3c (155 Mbps) and OC-12 (622 Mbps) data rates. Lower data rates of DS1 (1.544 Mbps) and DS3 (44.7 Mbps) are also used; however, these rates typically employ copper cable connections.
  • the Ethernet switch 260 can be a Cells-in-Frames (CIF) Ethernet switch with ATM functionality.
  • CIF technology encapsulates ATM traffic within the frame structure of the existing LAN media (such as Ethernet) in accordance with the Cells in Frames Version 1.0 Specification, incorporated herein by reference.
  • CIF end stations (CIF ES) and CIF attachment devices (CIF AD), such as the Ethernet switch 260, can thereby exchange native ATM traffic over the same LAN media that serves the standard frame-based traffic (e.g., IP and IPX).
  • CIF describes the method for utilizing frame-based LAN media as another ATM physical layer.
  • the CIF protocol is a peer-to-peer protocol that maintains a virtual point-to-point link (the CIF link) between the CIF ES and the CIF AD that serves it.
  • the CIF link is a virtual point-to-point connection between the CIF ES and the CIF AD that is carried over the LAN connecting them. The two sides of this virtual link maintain local connections of the status of the link.
  • the CIF specification (version 1.0, incorporated herein by reference) describes the exact protocol for establishing the logical association between the CIF ES and the CIF AD and exchanging native ATM traffic encapsulated in specific Ethernet Version 2, IEEE 802.3, or IEEE 802.5 Token Ring frames.
  • the Ethernet switch 260 in one embodiment, employs the CIF technology.
  • the form factor of the Ethernet switch 260 moreover is a stackable unit such that the two units may reside within the same system rack.
  • the stackable nature enables easy connectivity between the two units. A more detailed description of the Ethernet switch 260 follows.
  • FIG. 3 depicts a functional block diagram of an embodiment of switch 260 of the present invention.
  • Switch 260 can be divided into the following major sections: an Ethernet interface 502; a DMA engine 504; a descriptor processor (DP) section 505; an ATM interface 506; an ATM processor (AP) section 507; a packet buffer 508; a receive processor (RP) section 510; a transmit processor (XP) section 512; and a communications processor (CP) section 514.
  • Ethernet interface 502 handles information between Ethernet NIC 714 (Fig. 5) and DMA engine 504.
  • DMA engine 504 performs high-speed transfers of packets between the packet buffer 508 and the network interfaces (Ethernet interface 502 and ATM interface 506) section.
  • DMA engine 504 also transfers packet pointer and other relevant information to receive processor 510.
  • Descriptor processor (DP) section 505 assists DMA engine 504 by providing it with free receive buffers per port for receiving packets.
  • Descriptor processor section 505 also frees transmitted buffers and links them to the free buffer chain.
  • ATM Processor (AP) section 507 runs the ATM driver code and handles frame reception from ATM interface 506 and forwarding of the same to packet buffer 508 (through DMA engine 504), and vice versa.
  • ATM processor 507 also performs all the house-keeping related to the pair of ATM ports 506b.
  • Receive processor (RP) section 510 is a RISC processor that performs the switching functionality. It analyzes the packet headers and makes forwarding decisions. The processor also handles switching of CIF and non-CIF packets and routing of IP packets. Transmit Processor (XP) section 512 is a RISC processor that handles transmission of packets (i.e., traffic scheduling) and frame descriptor management.
  • Communication Processor (CP) section 514 runs a real time operating system (RTOS) (e.g., VxWorks), and handles all the remaining functions such as ATM signaling, Spanning Tree Protocol, TCP/IP, SNMP, house-keeping, etc. Communications processor 514 has control over all the processors, coordinates the booting of each of the processors on power up, and also monitors their functioning.
  • RTOS real time operating system
  • Digital's StrongARM (SA-110) processor is suitable for all the RISC processors (receive processor 510, transmit processor 512, descriptor processor 675, and ATM processor 507); it is high performance (233 MHz internal operation), yet consumes extremely low power (about half a watt). Intel's Pentium processor, for example, running at 133 MHz is suitable for communications processor 514.
  • When a packet is received (from Ethernet interface 502 or ATM interface 506), DMA engine 504 passes the pointer to the packet and other relevant information to receive processor 510, while storing the complete packet in packet buffer 508.
  • Receive processor 510 reads the packet header from the packet buffer, does fast hash searches using the information in the header, and makes appropriate forwarding decisions. The decisions are communicated to transmit processor 512, which then handles queuing up of the packet in the appropriate transmit queue. In case receive processor 510 is not able to make the forwarding decision, or if the packet requires special handling, communications processor 514 is informed.
  • All the RISC processor sections handle high-speed tasks. Traditionally, ASICs are employed for these tasks, but the present invention does not use ASICs and therefore advantageously provides greater flexibility in the implementation of programming changes and upgrades.
  • the RISC processors run highly optimized code resident almost fully in on-chip caches.
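
A minimal sketch of the kind of hash-based lookup the receive processor performs when making a forwarding decision is shown below. The table size, the hash function, and the "escalate to the communications processor" fallback are illustrative assumptions, not the switch's actual forwarding database format.

```c
/* Sketch: a hash-based MAC forwarding lookup of the kind the receive
 * processor might perform.  Table layout and hash are assumptions. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define FDB_BUCKETS 1024
#define PORT_TO_CP  0xFF        /* "hand the frame to the communications processor" */

struct fdb_entry {
    uint8_t mac[6];
    uint8_t port;               /* egress port number */
    uint8_t valid;
};

static struct fdb_entry fdb[FDB_BUCKETS];

static unsigned mac_hash(const uint8_t mac[6])
{
    unsigned h = 0;
    for (int i = 0; i < 6; i++)
        h = h * 31 + mac[i];
    return h % FDB_BUCKETS;
}

/* Return the egress port for 'mac', or PORT_TO_CP when no entry is found
 * and the frame needs special handling. */
static uint8_t fdb_lookup(const uint8_t mac[6])
{
    struct fdb_entry *e = &fdb[mac_hash(mac)];
    if (e->valid && memcmp(e->mac, mac, 6) == 0)
        return e->port;
    return PORT_TO_CP;
}

int main(void)
{
    uint8_t mac[6] = { 0x00, 0x10, 0x20, 0x30, 0x40, 0x50 };
    struct fdb_entry *e = &fdb[mac_hash(mac)];
    memcpy(e->mac, mac, 6); e->port = 3; e->valid = 1;   /* learn the address */
    printf("forward to port %u\n", fdb_lookup(mac));
    return 0;
}
```
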
  • FIG. 4 depicts a block diagram of a hardware configuration 600 illustrating an embodiment of the present invention.
  • Hardware configuration 600 can be divided into the following major sections: communications processor section 514; a boot and mail box section 604; receive processor section 510; transmit processor section 512; packet buffer section 508; DMA engine section 504; Ethernet interface section 502; ATM interface section 616; a back-plane interface section 618; and a power supply section 620.
  • the communications processor section 514 comprises a communications processor (CP) 640 (which, for example, can be a Pentium processor running at 133 MHz and performing external bus accesses at 66 MHz). Section 514 also includes a local buffer 642 which can be created from a 16 MB DRAM.
  • a remote access controller 644 provides Ethernet (10Base-T) and serial ports and timers for debugging the hardware configuration 600.
  • a control bus logic portion 646 provides a control and status port and interface to boot and mailbox section 604, packet buffer section 508 and ATM interface section 616.
  • the communications processor 640 runs a real time operating system and is the master controller of all activities in hardware configuration 600.
  • Boot and mailbox section 604 comprises boot code and mail boxes for all the processor sections by including a boot PROM portion 650, a common boot portion 652, and a plurality of mailboxes 654.
  • Boot PROM 650 includes a 256KB EPROM used for storing the boot code used by communications processor 640.
  • Common boot portion 652 includes a 4MB flash memory which is expandable to 8MB and 256KB of synchronous SRAM (Static Random Access Memory) which contain arbitration logic. The flash memory in common boot portion 652 is used as central storage for runtime code for all processor sections.
  • Communications processor 640 boots up first from boot PROM portion 650, meanwhile keeping all other processor sections under reset.
  • communications processor 640 allows one processor section to boot up at a time; each processor section boots up and picks up its code from pre-defined areas of the flash memory of common boot portion 652.
  • the SRAM of common boot portion 652 is used for accommodating all the mail boxes 654.
  • Mail box size can vary from 8 to 32 commands deep based on the corresponding inter-processor section communication load.
  • Mail boxes 654 are used by the processor sections to exchange information amongst each other.
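
The sketch below models one such mailbox as a fixed-depth command ring in shared memory, sized within the 8-to-32-command range mentioned above. The 32-bit command word and the head/tail bookkeeping are assumptions made for illustration; the real mailbox format is not described at this level of detail.

```c
/* Sketch: a fixed-depth command mailbox of the kind held in the shared
 * SRAM of common boot portion 652.  Command format and indices are
 * illustrative assumptions only. */
#include <stdint.h>
#include <stdio.h>

#define MBOX_DEPTH 16                  /* the patent allows 8 to 32 commands */

struct mailbox {
    volatile uint32_t cmd[MBOX_DEPTH];
    volatile unsigned head;            /* written by the producer  */
    volatile unsigned tail;            /* written by the consumer  */
};

static int mbox_post(struct mailbox *m, uint32_t cmd)
{
    unsigned next = (m->head + 1) % MBOX_DEPTH;
    if (next == m->tail)
        return -1;                     /* mailbox full  */
    m->cmd[m->head] = cmd;
    m->head = next;
    return 0;
}

static int mbox_fetch(struct mailbox *m, uint32_t *cmd)
{
    if (m->tail == m->head)
        return -1;                     /* mailbox empty */
    *cmd = m->cmd[m->tail];
    m->tail = (m->tail + 1) % MBOX_DEPTH;
    return 0;
}

int main(void)
{
    struct mailbox rp_to_xp = { {0}, 0, 0 };
    uint32_t cmd;
    mbox_post(&rp_to_xp, 0xC0DE0001u);             /* e.g. "enqueue packet" */
    if (mbox_fetch(&rp_to_xp, &cmd) == 0)
        printf("XP received command 0x%08X\n", (unsigned)cmd);
    return 0;
}
```
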
  • Receive processor section 510 comprises a receive processor (RP) 660, a packet buffer interface 662 and a forwarding database 664.
  • Receive processor 660 can be implemented using a 233 MHz processor, such as the StrongARM SA-110 RISC processor that performs external bus accesses at 66 MHz.
  • Receive processor 660 fetches packet header information from the packet buffer interface 662 and performs necessary searches through tables to make packet forwarding decisions.
  • Transmit processor section 512 comprises a transmit processor (XP) 670 and a frame descriptor memory 672.
  • Transmit processor 670 can be implemented using an SA-110 (mentioned above) and also runs at 233 MHz and performs external bus accesses at 66 MHz.
  • Frame descriptor memory comprises a 512 KB SRAM and contains control logic and interfaces with a buffer descriptor memory 674 and a buffer descriptor cache which are both discussed in greater detail below where the DMA section 504 is more fully described.
  • transmit processor 670 updates the buffer descriptor cache with pointers to these packets for the DMA engine section 504 to perform the transmission operation.
  • Packet buffer section 508 comprises a packet buffer 676 and a packet buffer control and arbitration portion 678.
  • Packet buffer 676 includes a 4 MB SRAM that is expandable to 8MB used for central storage of packets received from Ethernet interface 502, ATM interface 506 and Back-plane interface 518.
  • the back-plane interface 518 is a 4.2 Gbps interface, and is treated by the controlling software as an Ethernet port having a lower priority than the ports of the Ethernet interface 502.
  • Packet buffer section 508 has access to a high speed 4 Gbps (64 bit, 66 MHz) bus 680 to meet the bandwidth requirements of the system.
  • the packet buffer interfaces with: 16 Ethernet ports through DMA engine section 504; two ATM ports through ATM interface section 616 and DMA engine section 504; back plane section 618 through DMA engine 504; receive processor 660; and communications processor 640. Since receive processor 660 reads and updates packet headers, extremely quick access to packet buffer 676 must be provided so as to avoid any delay. This is accomplished by providing receive processor 660 with cycle-steal type accesses.
  • the receive processor 660 to packet buffer interface logic is instructed by receive processor 660 to fetch/write a block from/into a particular location. The logic then does the fetch/write operation by stealing some cycles in between other DMA transfers, and informs receive processor 660 when the operation is complete.
  • Communications processor 640 is also provided cycle-steal type access to packet buffer 676, however with lesser priority than receive processor 660 since less bandwidth is required by communications processor 640.
  • the remaining bus masters of the remaining processor sections are given access to packet buffer 676 in a round- robin fashion by packet buffer arbitration logic 678.
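
The following sketch captures the arbitration order just described: the receive processor is served first via cycle stealing, the communications processor next, and the remaining bus masters in round-robin. The set of masters and the grant routine are a simplification of arbitration logic 678, not its actual implementation.

```c
/* Sketch: packet-buffer arbitration order -- receive processor first
 * (cycle steal), communications processor next, then the remaining bus
 * masters in round-robin.  A simplification of arbitration logic 678. */
#include <stdio.h>

enum master { RP, CP, DMA_ETH, DMA_ATM, DMA_BP, N_MASTERS };

static const char *name[] = { "RP", "CP", "DMA-Ethernet", "DMA-ATM", "DMA-backplane" };

/* pending[i] != 0 means master i is requesting the packet-buffer bus. */
static int grant(const int pending[N_MASTERS], int *rr_next)
{
    if (pending[RP]) return RP;                     /* highest priority */
    if (pending[CP]) return CP;                     /* lower than RP    */
    for (int i = 0; i < N_MASTERS - 2; i++) {       /* round-robin rest */
        int m = DMA_ETH + (*rr_next + i) % (N_MASTERS - 2);
        if (pending[m]) {
            *rr_next = (m - DMA_ETH + 1) % (N_MASTERS - 2);
            return m;
        }
    }
    return -1;                                      /* bus idle */
}

int main(void)
{
    int pending[N_MASTERS] = { 0, 1, 1, 0, 1 };
    int rr = 0, g;
    while ((g = grant(pending, &rr)) >= 0) {
        printf("grant -> %s\n", name[g]);
        pending[g] = 0;
    }
    return 0;
}
```
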
  • DMA engine section 504 handles data transfers between packet buffer 676 and all the network ports such as Ethernet ports 502, ATM ports 615 and backplane 618.
  • the buffer descriptor memory 674 is a synchronous SRAM and it contains a buffer descriptor for every 256 byte block in the packet buffer. Hence the size of buffer descriptor memory 674 will depend on the size of packet buffer and the size of each buffer descriptor.
  • Each network port will be allocated a portion of memory in the packet buffer for storing received packets. All the buffer descriptors for each port will be linked together.
  • For receiving data on a port, DMA engine 504 has to fetch the next available buffer descriptor from the descriptor memory 674 and then do the data transfer to the corresponding data buffer in the packet buffer. On completing reception of a full buffer (256 bytes) or on end of frame/error, the buffer descriptor has to be updated with some information, such as size of data in the buffer and a couple of flags.
  • a descriptor cache (an SRAM) is provided to make it easier and faster for DMA engine 504 to fetch receive and transmit buffer descriptors. For each port, a cache of 64 receive (Rx) and 64 transmit (Tx) descriptors is provided. This way DMA engine 504 can pick up descriptors from sequential locations in the cache instead of having to walk through the link pointers in buffer descriptor memory 674.
  • a descriptor processor (DP) 675 mainly handles the function of replenishing the descriptor cache (for all the ports) with available receive descriptors, as and when necessary.
  • the transmit descriptor cache 672 is updated by transmit processor 670, which handles all the packet transmission functions. However, on completion of transmission of a buffer, descriptor processor 675 frees the same and attaches it to the appropriate free buffer pool.
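
A simplified model of the per-port receive descriptor cache is sketched below: the DMA engine consumes its 64 cached descriptors sequentially while the descriptor processor tops the cache up with free descriptors. The descriptor fields and the refill policy are assumptions; only the 64-entry depth and the 256-byte buffer granularity come from the description above.

```c
/* Sketch: a per-port receive-descriptor cache of 64 entries, consumed
 * sequentially by the DMA engine and replenished by the descriptor
 * processor.  Field layout and refill policy are assumptions. */
#include <stdint.h>
#include <stdio.h>

#define RX_CACHE_DEPTH 64
#define BUF_BLOCK      256      /* packet buffer managed in 256-byte blocks */

struct buf_desc {
    uint32_t buf_addr;          /* offset of the 256-byte block             */
    uint16_t length;            /* filled in by the DMA engine on completion */
    uint16_t flags;             /* end-of-frame / error flags                */
};

struct rx_desc_cache {
    struct buf_desc d[RX_CACHE_DEPTH];
    unsigned fill;              /* next slot the DP will refill        */
    unsigned use;               /* next slot the DMA engine will use   */
    unsigned avail;             /* descriptors currently available     */
};

/* Descriptor processor: top up the cache with free descriptors. */
static void dp_replenish(struct rx_desc_cache *c, uint32_t *next_free_block)
{
    while (c->avail < RX_CACHE_DEPTH) {
        c->d[c->fill] = (struct buf_desc){ .buf_addr = (*next_free_block)++ * BUF_BLOCK };
        c->fill = (c->fill + 1) % RX_CACHE_DEPTH;
        c->avail++;
    }
}

/* DMA engine: take the next descriptor for an incoming burst, if any. */
static struct buf_desc *dma_take(struct rx_desc_cache *c)
{
    if (c->avail == 0)
        return NULL;
    struct buf_desc *d = &c->d[c->use];
    c->use = (c->use + 1) % RX_CACHE_DEPTH;
    c->avail--;
    return d;
}

int main(void)
{
    struct rx_desc_cache port0 = { .fill = 0, .use = 0, .avail = 0 };
    uint32_t next_block = 0;
    dp_replenish(&port0, &next_block);
    struct buf_desc *d = dma_take(&port0);
    if (d) { d->length = 200; d->flags = 0x1; }        /* end of frame */
    printf("first buffer at offset %u, %u descriptors left\n",
           (unsigned)(d ? d->buf_addr : 0), port0.avail);
    return 0;
}
```
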
  • On receiving the first burst of a packet, DMA engine 504 passes the packet pointer and the port number to the receive processor 660, which then reads the packet header from the packet buffer and starts the decision making process. DMA engine 504 also informs the receive processor when the packet is fully received. With these two features, it is possible to support both cut-through and store-and-forward switching methods; initially, however, the store-and-forward method is used. In addition, DMA engine 504 has interfaces with ATM interface section 616 and back-plane section 618, as well as the Ethernet interface section 502. Ethernet interface section 502 is comprised of a Media Access Control (MAC) chip, such as the Lucent LUC3M08 MAC chip, and a physical (PHY) block. The PHY block is modularly provided on a daughter board, so that other media types can be easily utilized.
  • MAC Media Access Control
  • PHY physical
  • ATM interface 616 is implemented as a plug-in daughter card.
  • a local storage of 1MB/2MB memory is provided to act as buffer between two different buses (a PCI bus 522 on the SAR side, and a packet buffer bus 524) to which ATM interface 506 is connected.
  • ATM interface section 616 comprises an ATM Processor section 507 and a Segmentation and Reassembly (SAR) section 506.
  • ATM Processor section 507 comprises an ATM Processor (AP) 694, 1 MB/2MB of ATM buffer memory 696, and interfaces to DMA engine 504 and SAR section 506 implemented as bus control logic 698.
  • the SAR section 506 includes the SAR chip, local memory for the SAR, PHY interface and ATM buffer interface.
  • ATM buffer 696 is required since a SAR data bus 699 is not compatible with the packet buffer bus, and interfacing it directly with packet buffer 508 will result in wastage of packet buffer bandwidth.
  • ATM processor 694 runs ATM driver code which handles all the interactions with SAR section 506.
  • ATM processor 694 configures SAR chips within SAR section 506 and sets up receive and transmit buffers for the SAR chips.
  • ATM processor 694 programs packet buffer interface logic to initiate a transfer to packet buffer 508.
  • For transmission, ATM processor 694 puts outgoing frames in a transmit queue of the destination ATM port 615.
  • ATM processor code resident in ATM memory 696 is upgradeable to handle new types of uplink modules.
  • ATM ports 615 support OC-3, T1/E1 and T3/E3 interfaces.
  • Adaptec's SAR chip is used in all these cases.
  • For T1/E1 and T3/E3, the same PCB layout may be used, with some components being selectively loaded.
  • back-plane interface section 618 includes, for example, a 2 Gbps (64 bit, 33 MHz) back-plane bus which is used for data transfer between switch modules. According to one embodiment, upgrading of this bus to 4 Gbps (66 MHz) is possible if required.
  • a distributed arbitration (present on each switch module) scheme is used, which provides a round-robin access priority to all the switch modules.
  • Back-plane interface section 618 comprises arbitration logic, a 64 bit bi-directional FIFO and interfaces with DMA engine 504.
  • the transfer over the back-plane essentially involves data movement from the packet buffer of one switch module to the packet buffer of another switch module. This implies that access to both packet buffers must be available to the back-plane logic at the same time if the transfer is to be made in a synchronous manner.
  • using the FIFO, however, the transfer can be made in an asynchronous and simple manner.
  • DMA engine 504 on the source switch module fills up the FIFO with data without waiting for the destination switch module to be ready. Over the back-plane section 618, the data in the FIFO is sent to the FIFO on the destination module after arbitrating for the back-plane bus. From there, DMA engine 504 on the destination switch module transfers the data to the packet buffer as and when it becomes available.
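
The back-plane handoff can be pictured with the toy model below: the source module's DMA engine fills a FIFO without waiting for the destination, which drains the FIFO when it is ready. The real hardware uses a 64-bit bi-directional FIFO and arbitrated bus transfers; the depth and the absence of arbitration here are simplifications.

```c
/* Sketch: the asynchronous back-plane handoff -- the source module fills
 * a FIFO without waiting for the destination, which drains it when ready.
 * A toy model; bus arbitration is not shown. */
#include <stdint.h>
#include <stdio.h>

#define FIFO_WORDS 64

struct bp_fifo {
    uint64_t w[FIFO_WORDS];
    unsigned head, tail;
};

static int fifo_put(struct bp_fifo *f, uint64_t word)    /* source-side DMA  */
{
    unsigned next = (f->head + 1) % FIFO_WORDS;
    if (next == f->tail) return -1;                      /* FIFO full        */
    f->w[f->head] = word;
    f->head = next;
    return 0;
}

static int fifo_get(struct bp_fifo *f, uint64_t *word)   /* destination DMA  */
{
    if (f->tail == f->head) return -1;                   /* nothing pending  */
    *word = f->w[f->tail];
    f->tail = (f->tail + 1) % FIFO_WORDS;
    return 0;
}

int main(void)
{
    struct bp_fifo fifo = { {0}, 0, 0 };
    for (uint64_t i = 0; i < 8; i++)
        fifo_put(&fifo, i);              /* source fills without waiting  */
    uint64_t w;
    unsigned drained = 0;
    while (fifo_get(&fifo, &w) == 0)     /* destination drains when ready */
        drained++;
    printf("transferred %u words over the back-plane model\n", drained);
    return 0;
}
```
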
  • Redundant, internal power supplies 520 provide suitable power for both the Telco and non-Telco environments using, respectively, a -48V input and universal AC input of 110-240V. In Figure 3, only one power supply 520 is shown for convenience. In the non-Telco environment, AC-DC supply modules are used, each having connectors feeding in 110-240 VAC. In the Telco environment, the AC power supply is replaced by two DC-DC supply modules, each having connectors feeding in -48V DC.
  • the Ethernet switch 260 works in conjunction with the end-user- workstation's software to quickly deliver multimedia information while ensuring an end-to-end negotiated quality of service that is free from delay inducing congestion.
  • the end-station executes shim software.
  • the shim comprises a protocol combination, or other suitable combination of protocols, to allow the implementation of CIF technology to bring native ATM services to desktops that are equipped with legacy Ethernet or Token Ring NICs by encapsulating cells into frames.
  • CIF can also be viewed as the inverse of ATM LAN Emulation (LANE).
  • LANE provides a way for legacy LAN media access controller-layer protocols like Ethernet and Token Ring, and all higher-layer protocols and applications, to work transparently across an ATM network.
  • LANE retains all Ethernet and Token Ring drivers and adapters; no modifications need to be made to Ethernet or Token Ring end stations.
  • CIF emulates ATM services over frame-based LANs.
  • CIF uses software at the workstation without requiring the procurement of a new NIC to support quality of service scheduling and ABR/ER flow control.
  • the shim resides as a layer in the end station to provide encapsulation of cells within Ethernet frames in the desktop for transport to the data network.
  • The shim supports multiple queues, a scheduler (not shown), the ER flow control, and header adjustment.
  • The shim comprises an ATM Adaptation Layer (AAL), which is the standards layer that allows multiple applications to have data converted to and from the ATM cell.
  • The AAL is the protocol that translates higher-layer services into the size and format of an ATM cell.
  • the CIF shim layer also includes a traffic management (TM) component that sets forth the congestion control requirements.
  • the TM component (not shown) can be implemented as TM 4.0.
  • the CIF shim layer also includes a frame segmentation and reassembly (SAR) sublayer (not shown), which converts protocol data units (PDUs) into appropriate lengths and formats them to fit the payload of an ATM cell.
  • On reception, the SAR sublayer extracts the payloads from the cells and converts them back into PDUs which can be used by applications higher up the protocol stack.
  • the shim adds the CIF header to packets before they are transmitted, and removes the header when they are received.
  • the shim manages the message queues by queuing outgoing data into multiple queues for QoS management. The shim also processes the RM cells for explicit rate flow control using the ABR flow control mechanism, and allows ATM signaling software to run both native ATM applications as well as standard IP applications.
  • The end station further comprises a device driver and a Network Device Interface Specification (NDIS) layer 609 located above the CIF shim layer 611.
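
A minimal sketch of the shim's multiple outgoing queues under a strict-priority scheduler is given below. Mapping the ATM service categories to a fixed priority order (CBR over VBR over ABR over UBR) is an assumption made for illustration; the actual scheduler follows the QoS negotiated for each VC.

```c
/* Sketch: the shim's multiple outgoing queues served by a simple
 * strict-priority scheduler.  The fixed priority order is an assumption;
 * the real scheduler follows each VC's negotiated QoS. */
#include <stdio.h>

enum svc { CBR, VBR, ABR, UBR, N_SVC };

struct queue { int depth; };            /* frames waiting per service class */

static const char *svc_name[] = { "CBR", "VBR", "ABR", "UBR" };

/* Pick the next queue to serve: highest-priority non-empty class. */
static int schedule(const struct queue q[N_SVC])
{
    for (int c = CBR; c < N_SVC; c++)
        if (q[c].depth > 0)
            return c;
    return -1;                          /* nothing to send */
}

int main(void)
{
    struct queue q[N_SVC] = { {0}, {2}, {5}, {1} };
    int c;
    while ((c = schedule(q)) >= 0) {
        printf("transmit one %s frame\n", svc_name[c]);
        q[c].depth--;
    }
    return 0;
}
```
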
  • the end station 714 includes Internet Protocol (IP) layer 607b which supports classical IP, LANE and MPOA for the interworking of dissimilar computers across a network.
  • IP layer 607b is a connectionless protocol that operates at the network layer (layer 3) of the OSI model.
  • Winsock 2.0 603 is the application program interface (API) layer, which enables developers to take advantage of ATM's QoS and traffic management features.
  • Application layer 601 can accommodate traditional as well as native ATM applications. Native ATM applications can be readily created with Winsock 2.0 API 603.
  • the shim arrangement guarantees that the services negotiated by the native ATM applications for the VCs are not arbitrarily disrupted by the traffic generated by the legacy applications. Forcing both the ATM and the legacy protocol traffic to go through CIF shim allows CIF shim to manage the transmission of all traffic according to the QoS specified for each traffic stream.
  • the CIF AD forwards CIF traffic from the conventional LAN onto the ATM infrastructure for delivery to an ATM attached end station or to another CIF AD.
  • the CIF ES is also required to run LANE, MPOA (Multiprotocol Over ATM), or Classical IP protocols.
  • Network data from a legacy application is first handled by the legacy protocols (e.g., TCP/IP), and then turned into ATM traffic by LANE, MPOA, or Classical IP.
  • the CIF ES function encapsulates the individual cells into CIF frames before data is finally transmitted on the wire to the CIF AD.
  • FIG. 5 illustrates a potential application of the present invention.
  • Various types of Ethernet systems exist such as switched Ethernet, Fast Ethernet, and Gigabit Ethernet.
  • the end-station 710 is equipped with an Ethernet NIC 712 residing in a personal computer with a host processor 714.
  • Ethernet NIC 712 is connected to a high-speed digital subscriber line (DSL) modem 720, which interfaces with a telephone line 722 via a CP (customer premise) POTS (plain old telephone service) splitter 721.
  • the telephone line is a twisted pair copper wire, which the conventional customer premises telephone 724 uses to connect with a telephone central office 740.
  • DSL digital subscriber line
  • a telephone central office or end office is shown in Figure 5 as a communications facility 740; however, any communication facility can be used (e.g., a wire closet in a separate building).
  • High-speed communication to remote users depends largely on the method of access to the networking infrastructure. Most users cannot bear the cost of leasing expensive outside lines that are needed to provide high speed communication to the Internet or to their offices.
  • the disclosed embodiment overcomes this dilemma by employing a high-speed, low cost subscriber interface that takes advantage of the legacy outside cable plant, such as standard twisted copper pair wiring and coaxial cables.
  • one embodiment utilizes digital subscriber line (DSL) technology to deliver the high bandwidth that the remote users demand. Because traditional copper cabling is used, the remote users do not have to upgrade their current physical connection - their POTS line is sufficient. Because the outside plant need not be revamped, telephone companies (Telcos) can readily implement DSL services.
  • the DSL modem 720 acts as the network access device to the central office.
  • a DSL multiplexer 752 provides termination of the DSL modem connection within communications facility 740.
  • DSL technology is categorized by the downstream and upstream bandwidths. The present invention could be applied to any of the various forms of DSL technology.
  • Rate Adaptive DSL, or RADSL, involves a rate negotiation between the customer premise DSL modem 720 and the Telco CO modem located within DSL MUX 752, which takes into account distance and line quality issues, yielding the maximum available rate for the line conditions encountered.
  • RADSL supports both Asymmetric DSL or ADSL, with a maximum downstream rate of 7.62 Mbps and a maximum upstream rate of 1.1 Mbps, which is ideal for very high speed Internet access and video-on-demand applications.
  • ADSL services can be delivered up to 18,000 feet from the central office over a single copper twisted pair.
  • RADSL also supports Symmetric DSL or SDSL, with a maximum bidirectional rate of about 1.1 Mbps, which is ideal for very high quality videoconferencing and remote LAN access.
  • HDSL high-bit-rate digital subscriber line
  • Telcos have traditionally used HDSL to provide local access to T1 services.
  • HDSL is already widely deployed within the Telco market as a low cost T-1 replacement.
  • VDSL or Very high bit-rate DSL requires a fiber-to-the curb local loop infrastructure, with asymmetric speeds up to 52 Mbps.
  • Other flavors of DSL (sometimes generically denoted xDSL) are characterized by whether the service is asymmetric or symmetric and by the bandwidth allocations for the upstream and downstream transmissions.
  • the central office 740 comprises a plain old telephone service (POTS) splitter 742 which receives the information transmitted across the twisted pair line 722 and "splits" the low frequencies, which carry voice signals, from the high frequencies, which carry data signals.
  • POTS splitter is a passband filter, whereby the low frequency information is carried by a voice line 744 to a voice switch 746 and ultimately to a public switched telephone network (PSTN) 748.
  • PSTN public switched telephone network
  • the voice line 744, voice switch 746 and PSTN 748 are each conventional, and are therefore not explained further so as not to detract from the focus of the disclosure of the present invention.
  • the data information, which is modulated using high frequency signals, is transmitted over a twisted pair cable 750 to a POTS splitter 742.
  • the POTS splitter 742 then passes the high frequency signals to a DSL multiplexer (DSL MUX) 752.
  • DSL MUX serves as the DSL modem termination point for numerous end users with DSL modems.
  • the DSL MUX252 aggregates all the DSL traffic and passes the multimedia information to the Ethernet switch 260.
  • the traffic can be of any data type including multimedia graphics, video, image, audio, and text.
  • Various embodiments of the DSL MUX 752 can be employed, ranging from 74 line stackable modules through the traditional high density chassis based approach.
  • Ethernet switch 260 is primarily an edge device that is connected to an ATM network 770 on which a conventional multimedia server (not shown) may be linked.
  • the ATM network 770 thus represents a fast and efficient delivery system for multimedia applications to which the end user desires access.
  • the Ethernet switch 260 communicates with the CO DSL MUX 752 relative to traffic information, in order to minimize congestion. Traditionally, end user access to an ATM network has been through a router.
  • end-station 710 houses an Ethernet NIC 712
  • connection to ATM network 770 proves difficult without the system of the present invention, which allows information residing on an ATM network to be transferred to an Ethernet end-station while still retaining all the multimedia benefits of ATM, including QoS and ABR/ER flow control.
  • An advantage associated with a DSL implementation is that the personal computer is constantly connected, much like a typical Ethernet LAN connection. That is, communication sessions are not initiated through a dial-up procedure.

Abstract

An Ethernet-type edge switch interfaces a cell-based network which transports multimedia information, i.e., textual, graphical, image, video, voice, and audio data. The Ethernet-type switch possesses a multi-processor architecture for providing quality of service and explicit rate flow control with resource management (RM) cell priority.

Description

ETHERNET EDGE SWITCH FOR CELL-BASED NETWORKS
BACKGROUND OF THE INVENTION
TECHNICAL FIELD
The present invention relates to a system which provides end-to-end Quality of Service (QoS) associated with ATM networks. In particular, the present invention is directed to an Ethernet-type switch which preserves ATM QoS down to an Ethernet end-station.
BACKGROUND ART
As the information age matures, it is enabled by a number of technological advances, such as the geometric growth of networked computing power and the prevalence of reliable and ubiquitous transmission media. Today's consumers in both the residential and business arena have been acclimated to a more graphical approach to communication. In particular, multimedia applications (which include textual, graphical, image, video, voice and audio information) have become increasingly popular and find usage in science, business, and entertainment. Local area networks (LANs) are essential to the productivity of the modern workplace; Ethernet-type networks have dominated the LAN market and have been continually enhanced (e.g., switched Ethernet, Fast Ethernet, and/or Gigabit Ethernet) to keep pace with the bandwidth intensive multimedia applications. A compelling example of the growth of information consumption is the dramatic increase in users of the World Wide Web, a multimedia-based information service provided via the Internet. Although initially a forum for academia to exchange ideas captured in ASCII text, the Internet has developed to become a global medium for users from all walks of life. These Internet users regularly exchange multimedia graphical, image, video, voice and audio information as well as text.
Furthermore, the business world has come to realize tremendous value in encouraging workers to telecommute. To avoid the idle commuting time, today's workers enjoy the convenience of working from home via their personal computers. As illustrated in Figure 1, a user at a remote site 101 (e.g., home) has traditionally been able to access her/his office 119, which includes accessing an office local area network (LAN) 119b, through a dial-up connection over a 33 Kbps or 56 Kbps modem 101b. The dial-up connection is handled by a telephone central office (CO) 105 through a voice switch 107, which switches the "data" call through a public switched telephone network (PSTN) 111. The data call terminates in a remote CO 121 at a voice switch 123. The voice switch 123 switches the call to the subscriber; in this case, the called line is associated with a modem in a modem pool 119a. Once connected to the modem pool 119a, the end user at her/his remote site 101 can access the computing resources in his office 119. These resources include a multimedia server 119c and a PC 119d of the remote user. A similar connection to Internet 115 by a user at a remote site 101 can be accomplished by connecting to an Internet Service Provider (ISP) 117 instead of modem pool 119a. Unfortunately, telecommuting from a remote office or accessing multimedia information from home over the Internet imposes an enormous strain on networking resources. It is common knowledge that the networking infrastructure is the bottleneck to the expedient transfer of information, especially bandwidth intensive multimedia data. As alluded to before, today's access methods are limited to standard analog modems, such as 101b, which have a maximum throughput of 56 Kbps on a clean line (i.e., a line not having any appreciable noise causing errors in bit rate transfer). Remote users may alternatively acquire basic rate (2B+D) Integrated Services Digital Network (ISDN) services at 128 kbps. Even at this speed, telecommuters may quickly grow impatient with slow response times as compared to the throughput of their LANs to which they have grown accustomed. On average, a typical Ethernet user can expect to achieve approximately 1 Mbps on a shared 10Base-T Ethernet LAN and up to 9+ Mbps in a full duplex switched Ethernet environment. In addition, Internet users are also demanding greater access speeds to cope with the various multimedia applications that are continually being developed. Fortunately, the communication industry has recognized the escalating demand.
Cell switching technology, such as Asynchronous Transfer Mode (ATM), was developed in part because of the need to provide a high-speed backbone network for the transport of various types of traffic, including voice, data, image, and video. An ATM network 113 is typically able to provide bandwidths to an ATM user at approximately 1.5 Mbps on a T1 line, 44.7 Mbps on a T3 line, and 155 Mbps over a fiber optic OC-3c line. Consequently, ATM networks are suitable to transport multimedia information.
ATM further provides a mechanism for establishing quality of service (QoS) classes during the virtual channel setup, thereby allotting a predetermined amount of bandwidth to the channel. QoS classes define five broad categories that are outlined, for example, by the ATM Forum's UNI 3.0/3.1 specification. Class 1 specifies performance requirements and indicates that ATM's quality of service should be comparable with the service offered by standard digital connections. Class 2 specifies necessary service levels for packetized video and voice. Class 3 defines requirements for interoperability with other connection-oriented protocols, particularly frame relay. Class 4 specifies interoperability requirements for connectionless protocols, including IP, IPX, and SMDS. Class 5 is effectively a "best effort" attempt at delivery; it is intended for applications that do not require guarantees of service quality.
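Purely for illustration, the five classes described above could be represented in software as a simple enumeration; the type name and comments below merely restate the foregoing description and are not drawn from any standard encoding.

```c
/* Illustrative only: the five broad QoS classes outlined above. */
typedef enum {
    ATM_QOS_CLASS_1 = 1,  /* service comparable to standard digital connections    */
    ATM_QOS_CLASS_2,      /* service levels for packetized video and voice         */
    ATM_QOS_CLASS_3,      /* interoperability with connection-oriented protocols   */
    ATM_QOS_CLASS_4,      /* interoperability with connectionless protocols        */
    ATM_QOS_CLASS_5       /* "best effort" delivery, no service-quality guarantees */
} atm_qos_class_t;
```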
In conventional data networks, such as the typical Ethernet LAN or X.25 WAN, there are no explicit negotiations between the network and the user specifying the traffic profile and quality of service expected. Rather, the network is expected to provide each user with a "fair share" of the available bandwidth.
However, in an ATM network, fair allocation of bandwidth requires users to adjust their transmission rates according to the feedback from the network. ATM networks carry fixed bandwidth services required for multimedia applications (constant bit rate (CBR) traffic) and guaranteed bandwidth services for high-priority data applications (variable bit rate (VBR) traffic). The remaining bandwidth, not used by guaranteed bandwidth services, must be shared fairly across all users. The ATM Forum refers to services that make use of this otherwise idle bandwidth as available bit rate (ABR) services.
Although these ABR applications must contend for the remaining available bandwidth and are not given specific throughput guarantees, they still require fair access to the available bandwidth with a minimum of cell loss. If ABR traffic had no mechanism to determine whether sufficient bandwidth were available to handle the transmission on the network and traffic were simply fed in, network congestion might result in dropped cells, and application traffic might be lost. ABR flow control is an ATM layer service category for which the limiting ATM layer transfer characteristics provided by the network may change after the network connection is established. A flow control mechanism is specified which supports several types of feedback to control the source rate in response to changing ATM layer transfer characteristics. When the network becomes congested, the end-stations outputting ABR traffic are instructed to reduce their output rate. It is expected that an end-system that adapts its traffic in accordance with the feedback will experience a low cell loss ratio and obtain a fair share of the available bandwidth according to a network-specific allocation policy. Cell delay variation is not controlled in this service, although admitted cells are not delayed unnecessarily.
In this end-to-end rate-based scheme, the source (e.g., a user at remote site 103) of a virtual circuit (VC) indicates the desired rate in a resource management cell (RM cell). An RM cell is a standard 53-byte ATM cell used to transmit flow-control information. The RM cell travels on the VC about which it carries information, and is therefore allowed to flow all the way to the destination end-station (e.g., PC 119d). The destination reflects the RM cell, with an indicator to show that the RM cell is now making progress in the reverse direction. The intermediate switches (e.g., switch 109) then identify within the reverse RM cell their respective maximum rates (the explicit rate allocated to the VC). After the source receives the reverse RM cell, the smallest rate identified in the reverse RM cell is used for subsequent transmissions until a new reverse RM cell is received.
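The explicit-rate mechanism just described can be summarized by the following minimal sketch in C. The structure layout and field names are illustrative assumptions and do not reproduce the TM 4.0 bit-level RM cell format.

```c
#include <stdint.h>

/* Illustrative RM cell summary; field names are assumptions, not the TM 4.0 format. */
typedef struct {
    uint16_t vci;            /* virtual circuit the RM cell travels on             */
    int      backward;       /* 0 = forward (source to destination), 1 = reflected */
    uint32_t explicit_rate;  /* ER, e.g. in cells/s; switches may only lower it    */
} rm_cell_t;

/* Each intermediate switch marks the reverse RM cell down to the rate it can allocate. */
void switch_mark_explicit_rate(rm_cell_t *rm, uint32_t max_rate_for_vc)
{
    if (rm->backward && rm->explicit_rate > max_rate_for_vc)
        rm->explicit_rate = max_rate_for_vc;
}

/* The source adopts the rate carried back by the reverse RM cell and uses it
 * for subsequent transmissions until the next reverse RM cell arrives. */
void source_on_reverse_rm(uint32_t *allowed_cell_rate, const rm_cell_t *rm)
{
    if (rm->backward)
        *allowed_cell_rate = rm->explicit_rate;
}
```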
ATM has many recognized advantages and has dominated wide area networks (WANs) as the preferred backbone transport technology. Because of cost and performance factors, however, ATM faces stiff competition from both switched and shared-media high-speed LAN technologies, including Ethernet, Fast Ethernet, and Gigabit Ethernet. And although ATM typically offers QoS guarantees superior to the prioritization schemes of competing high-speed technologies, many users remain unable to take advantage of these features. If a remote user wishes to obtain the advantages of ATM, one solution would be to acquire an ATM switch on the premises as shown in Figure 1A. The remote site 103 would need to be equipped with an ATM switch 103a, whereby a PC 103b interfaces with the ATM switch 103a via an ATM NIC 103c. In addition, the remote user would have to lease a T1 line or an OC-3c pipe from the Telco. The leased line would terminate in an ATM switch 109 in the CO 105. The CO ATM switch 109 is connected to the ATM network 113. With an ATM connection, the remote user may quickly access multimedia information on the Internet by establishing a virtual channel that would terminate at ATM switch 125 in CO 121. The CO 121 would of course have some means of communication with the ISP 117; typically routers (not shown) are used.
Alternatively, Figure 1B illustrates an ATM-to-the-desktop solution whereby xDSL technology is utilized to extend ATM capability remotely. At the customer premises 103, a PC 103b is equipped with an ATM NIC 103c, which is attached to an xDSL modem 103d. In addition, a telephone set 103e is linked to the xDSL modem 103d. The xDSL modem is connected over twisted pair copper wire to the CO 105, terminating at the POTS splitter 117. The POTS splitter 117 separates the data signals originating from the PC 103b from the voice signals. An xDSL multiplexer (mux) 115 receives the data signals from the POTS splitter and uplinks these signals to the ATM switch 105. Although the solution presented above provides a way to deliver ATM capabilities to the desktop, it disadvantageously requires the acquisition of ATM NICs by the remote users, and the xDSL modem must have a costlier ATM interface. Despite the many inherent advantages of ATM, Ethernet-type LANs constitute nearly all of the networking resources of business and residential users. Moreover, these legacy systems are still being enhanced and marketed; e.g., switched Ethernet, switched Fast Ethernet, and switched Gigabit Ethernet are significantly lower in cost than their ATM counterparts. ATM technology requires a substantial investment in infrastructure, from cable plant to switches to network interface cards (NICs). This tremendous investment cost can be sustained in the wide area network (WAN), where costs can be spread out. However, in the LAN environment, the investment in infrastructure is typically unsustainable, which translates into retention of "legacy" LANs such as Ethernet.
While a number of service providers (e.g., Telcos) employ ATM to establish point-to-point circuits, little has been done to utilize ATM for transporting multimedia information or services to the desktop, because doing so is simply not commercially practical. In essence, millions of users would be required to purchase expensive ATM network interface cards, and then possibly add very costly T1, T3, or OC-3c lines. As a result, service providers have not commercially implemented ATM in the delivery of multimedia information to the desktop.
One apparent disadvantage of these conventional arrangements is the inability to ensure an end-to-end quality of service for the transmission of the multimedia information.
Another disadvantage of conventional systems is a lack of real-time, rate- based, flow control which can provide congestion management.
Yet another disadvantage with the use of cell switching technology is the requirement that existing network interface devices, like Ethernet interfaces, be replaced with more costly and complex interfaces.
DISCLOSURE OF THE INVENTION

There is a need for an arrangement that enables the high-speed transmission of multimedia information to the desktop.
There is also a need for an arrangement that enables use of an Ethernet-type network interface device in the procurement of multimedia information from a cell switching network. There is also a need for an arrangement that ensures an end-to-end quality of service in the delivery of multimedia information to the desktop and one which has real-time flow control capabilities needed to eliminate congestion in order to speed delay-sensitive traffic through the network. There is also a need for providing the advantages of cell switching technology without having to replace legacy network interface cards.
There is also a need for an arrangement that employs off-the-shelf components to minimize development costs.
These and other needs are attained by the present invention, where an Ethernet switch performs cell-to-Ethernet frame conversion, and vice versa. The Ethernet-type switch has a multi-processor architecture that avoids the high development cost associated with application-specific integrated circuit (ASIC) processors. Such a switch facilitates the creation of a communications network that supports both native ATM applications, such as those based on the Winsock 2.0 Application Programming Interface (API), as well as traditional IP-based applications. Where Winsock 2.0 enabled applications are utilized, ATM cells are encapsulated into variable-length Ethernet frames; up to 31 ATM cells can be encapsulated into a single Ethernet frame. In so doing, ATM QoS and ABR/ER flow control are preserved, facilitating the timely transport of delay-sensitive multimedia traffic.
According to one aspect of the present invention, an Ethernet-type networking device for communicating with a cell-based network comprises a direct memory access (DMA) engine coupled to a plurality of Ethernet-type ports and an internal bus for controlling full-duplex data transfer of Ethernet-type frames. The Ethernet-type frames are of a cell-encapsulated type or a non-cell-encapsulated type. A receive processor is coupled to the DMA engine for encapsulating and decapsulating the Ethernet-type frames, in which the receive processor supports cut-through and store-and-forward switching. A transmit processor is coupled to the DMA engine and performs traffic scheduling according to priority of the Ethernet-type frame and processing of data that are to be transmitted via one of the Ethernet-type ports. A cell processor is coupled to the internal bus for processing cells and for controlling segmentation and reassembly of the cells. The cell processor executes cell driver software in support of providing cell-based quality of service. At least one cell interface is coupled to the cell processor and the internal bus for receiving the cells. A communication processor is coupled to the internal bus for controlling the receive and transmit processors and supporting cell-based signaling. A descriptor processor is coupled to the DMA engine for buffering a plurality of descriptors. The processors communicate through a plurality of mailboxes associated with each processor, in which the receive processor further encapsulates/decapsulates the received cells.
Another aspect of the present invention provides a method for providing quality of service in an Ethernet-type environment. The method comprises receiving a plurality of Ethernet-type frames via an Ethernet-type port; receiving a plurality of cells via a cell port; encapsulating at least one cell into one of the plurality of Ethernet-type frames; and supporting ATM quality of service and ATM traffic management capabilities. Additional advantages and novel features of the invention will be set forth in the description which follows, and in part will become apparent to those skilled in the art upon examination of the following or may be learned by practice of the invention. The advantages of the invention may be realized and attained by means of the instrumentalities and combinations particularly pointed out in the appended claims.
BRIEF DESCRIPTION OF THE DRAWINGS

Reference is made to the attached drawings, wherein elements having the same reference numeral designations represent like elements throughout and wherein: Figures 1A and 1B are graphic representations of prior art networks and the access methods;
Figure 2 is a block diagram depicting detailed aspects of a switch configured in accordance with the present invention;
Figure 3 is a functional block diagram depicting communication between processing elements of an embodiment of the present invention;
Figure 4 is a block diagram depicting a hardware embodiment of the switch of the present invention; and Figure 5 is a graphic representation of a network embodying the system of the present invention;
BEST MODE FOR CARRYING OUT THE INVENTION

The present invention retains the traditional low-cost and low-complexity Ethernet NIC but achieves ATM capability over Ethernet through use of an Ethernet edge switch, which employs a multi-processor architecture to interface an Ethernet environment with an ATM infrastructure.
Figure 2 provides a high level description of the Ethernet switch, which generally comprises four basic components: a cell interface 401, a switching fabric 403, an Ethernet/Cell translator 405, and an Ethernet interface 407. The Ethernet interface 407 receives standard Ethernet frames. The Ethernet frame format conforms to all Ethernet-type formats (e.g., IEEE 802.3/802.2, Ethernet II, Novell 802.3, and IEEE 802.3/802.2 SNAP). These Ethernet frames are then sent to the Ethernet/Cell translator 405, which converts Ethernet frames into cells, as well as cells into Ethernet frames. The Ethernet-frames-to-cells conversion involves segmenting the Ethernet frames and reassembling them as 53-byte cells for transport over a cell switching backbone such as an ATM network. The conversion from cells into Ethernet frames is a simpler process, whereby the fixed-length cells are encapsulated by the Ethernet frame, which can extend to 1500 bytes in length.
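The frame-to-cell direction of the translator can be sketched as follows. This is an illustrative sketch only: it splits a frame into 48-byte payloads, each carried behind a 5-byte cell header supplied by the caller, and omits the AAL trailer and CRC processing that a real segmentation/reassembly path would perform.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

enum { CELL_SIZE = 53, CELL_HDR = 5, CELL_PAYLOAD = 48 };

/* Split a frame into 53-byte cells: a caller-supplied 5-byte header followed by
 * 48 bytes of payload per cell, zero-padding the final cell. Returns the number
 * of cells written; cells_out must hold at least ceil(frame_len/48) cells. */
size_t segment_frame_into_cells(const uint8_t *frame, size_t frame_len,
                                const uint8_t header[CELL_HDR],
                                uint8_t *cells_out)
{
    size_t ncells = 0;
    for (size_t off = 0; off < frame_len; off += CELL_PAYLOAD, ncells++) {
        uint8_t *cell = cells_out + ncells * CELL_SIZE;
        size_t n = frame_len - off;
        if (n > CELL_PAYLOAD)
            n = CELL_PAYLOAD;
        memcpy(cell, header, CELL_HDR);                    /* 5-byte cell header  */
        memcpy(cell + CELL_HDR, frame + off, n);           /* up to 48 data bytes */
        memset(cell + CELL_HDR + n, 0, CELL_PAYLOAD - n);  /* pad the last cell   */
    }
    return ncells;
}
```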
Once converted from Ethernet frames to cells, the cells are switched over the switching fabric to the cell interface for output into the cell-switching network. In one embodiment, where Ethernet frames are routed to another Ethernet LAN, the Ethernet/Cell translator 405 does not perform frame-to-cell conversion but instead bypasses the switching fabric 403 and presents the Ethernet frame to the Ethernet interface 407 for output. Alternatively, the Ethernet frames may be forced to undergo conversion into cells, thereby negating the requirement for separate routing circuitry for the Ethernet frames; that is, the Ethernet frame is sent to its destination MAC address via the switching fabric 403. The cell interface 401 comprises cell ports (not shown) for the inputting and outputting of data cells. In a typical ATM implementation, these ports are fiber optic connections, and the cell interface 401 handles OC-3c (155 Mbps) and OC-12 (622 Mbps) data rates. Lower data rates of DS1 (1.544 Mbps) and DS3 (44.7 Mbps) are also used; however, these rates typically employ copper cable connections.
The above discussion describes a generic Ethernet switch. In accordance with the disclosed embodiment, the Ethernet switch 260 can be a Cells-in-Frames (CIF) Ethernet switch with ATM functionality. CIF technology encapsulates ATM traffic within the frame structure of the existing LAN media (such as Ethernet) in accordance with the Cells in Frames Version 1.0 Specification, incorporated herein by reference. CIF end stations (CIF ES) and CIF attachment devices (CIF AD), such as the Ethernet switch 260, can thereby exchange native ATM traffic over the same LAN media that serves the standard frame-based traffic (e.g., IP and IPX).
From a simplistic perspective, CIF describes the method for utilizing frame-based LAN media as another ATM physical layer. The CIF protocol is a peer-to-peer protocol that maintains a virtual point-to-point link (the CIF link) between the CIF ES and the CIF AD that serves it; the CIF link is carried over the LAN connecting them. The two sides of this virtual link maintain local records of the status of the link. The CIF specification (version 1.0, incorporated herein by reference) describes the exact protocol for establishing the logical association between the CIF ES and the CIF AD and exchanging native ATM traffic encapsulated in specific Ethernet Version 2, IEEE 802.3, or IEEE 802.5 Token Ring frames.
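For illustration, the cell-to-frame direction can be sketched as below, packing up to 31 cell payloads that share one set of header fields into a single Ethernet payload. The 4-byte header size and its contents are assumptions for the sketch, not the exact CIF 1.0 encoding.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

enum { CIF_HDR = 4, CELL_PAYLOAD = 48, MAX_CELLS_PER_FRAME = 31 };

/* Pack up to 31 cell payloads sharing one set of ATM header fields into the
 * Ethernet payload area: 4 + 31*48 = 1492 bytes, within the 1500-byte limit. */
size_t cif_pack(uint8_t *eth_payload, size_t payload_max,
                const uint8_t cif_hdr[CIF_HDR],
                const uint8_t (*cells)[CELL_PAYLOAD], size_t ncells)
{
    if (ncells > MAX_CELLS_PER_FRAME)
        ncells = MAX_CELLS_PER_FRAME;
    if (payload_max < CIF_HDR + ncells * CELL_PAYLOAD)
        return 0;                               /* caller's buffer too small   */
    memcpy(eth_payload, cif_hdr, CIF_HDR);      /* shared CIF/ATM header info  */
    for (size_t i = 0; i < ncells; i++)
        memcpy(eth_payload + CIF_HDR + i * CELL_PAYLOAD, cells[i], CELL_PAYLOAD);
    return CIF_HDR + ncells * CELL_PAYLOAD;     /* bytes occupied in the frame */
}
```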
The Ethernet switch 260, in one embodiment, employs the CIF technology. Moreover, the form factor of the Ethernet switch 260 is a stackable unit, such that two units may reside within the same system rack. In addition, the stackable nature enables easy connectivity between the two units. A more detailed description of the Ethernet switch 260 follows.
Figure 3 depicts a functional block diagram of an embodiment of switch 260 of the present invention. Switch 260 can be divided into the following major sections: an Ethernet interface 502; a DMA engine 504; a descriptor processor (DP) section 505; an ATM interface 506; an ATM processor (AP) section 507; a packet buffer 508; a receive processor (RP) section 510; a transmit processor (XP) section 512; and a communications processor (CP) section 514. Ethernet interface 502 handles information between Ethernet NIC 714 (Fig. 5) and DMA engine 504. DMA engine 504 performs high-speed transfers of packets between the packet buffer 508 and the network interfaces (Ethernet interface 502 and ATM interface 506). DMA engine 504 also transfers packet pointers and other relevant information to receive processor 510. Descriptor processor (DP) section 505 assists DMA engine 504 by providing it with free receive buffers per port for receiving packets. Descriptor processor section 505 also frees transmitted buffers and links them to the free buffer chain.
ATM Processor (AP) section 507 runs the ATM driver code and handles frame reception from ATM interface 506 and forwarding of the same to packet buffer 508 (through DMA engine 504), and vice versa. ATM processor 507 also performs all the housekeeping related to the pair of ATM ports 506b.
Three processors handle all the packet processing. Receive processor (RP) section 510 is a RISC processor that performs the switching functionality. It analyzes the packet headers and makes forwarding decisions. The processor also handles switching of CIF and non-CIF packets and routing of IP packets. Transmit processor (XP) section 512 is a RISC processor that handles transmission of packets (i.e., traffic scheduling) and frame descriptor management. Communications processor (CP) section 514 runs a real-time operating system (RTOS) (e.g., VxWorks) and handles all the remaining functions, such as ATM signaling, the Spanning Tree Protocol, TCP/IP, SNMP, housekeeping, etc. Communications processor 514 has control over all the processors, coordinates the booting of each of the processors on power up, and also monitors their functioning.
For example, Digital's StrongARM (SA-110) processor is suitable for all the RISC processors (receive processor 510, transmit processor 512, descriptor processor 675 and ATM processor 507). The StrongARM offers high performance (233 MHz internal operation), yet consumes extremely low power (about half a watt). Intel's Pentium processor running at 133 MHz, for example, is suitable for communications processor 514.
When a packet is received (from Ethernet interface 502 or ATM interface 506), DMA engine 504 passes the pointer to the packet and other relevant information to receive processor 510, while storing the complete packet in packet buffer 508. Receive processor 510 reads the packet header from the packet buffer, performs fast hash searches using the information in the header, and makes appropriate forwarding decisions. The decisions are communicated to transmit processor 512, which then handles queuing of the packet in the appropriate transmit queue. If receive processor 510 is not able to make the forwarding decision, or if the packet requires special handling, communications processor 514 is informed.
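The receive path described above can be summarized, purely for illustration, by the following pseudocode-style sketch; the helper functions stand in for the DMA, packet buffer, and mailbox interfaces and are hypothetical names, not part of the disclosed hardware.

```c
#include <stdint.h>

typedef struct { uint32_t pkt_ptr; uint8_t in_port; } rx_event_t;
typedef struct { uint8_t dst_mac[6]; uint8_t src_mac[6]; uint16_t type; } pkt_header_t;
typedef struct { int known; int special; uint8_t out_port; uint8_t priority; } fwd_decision_t;

/* Placeholders for the hardware/driver interfaces (not part of the disclosure). */
rx_event_t     dma_next_rx_event(void);
pkt_header_t   packet_buffer_read_header(uint32_t pkt_ptr);
fwd_decision_t hash_lookup(const pkt_header_t *hdr);
void           mailbox_send_to_xp(uint32_t pkt_ptr, uint8_t out_port, uint8_t prio);
void           mailbox_send_to_cp(uint32_t pkt_ptr, uint8_t in_port);

void receive_processor_loop(void)
{
    for (;;) {
        rx_event_t ev = dma_next_rx_event();            /* pointer + port from DMA  */
        pkt_header_t hdr = packet_buffer_read_header(ev.pkt_ptr);
        fwd_decision_t d = hash_lookup(&hdr);           /* fast hash search         */
        if (d.known && !d.special)
            mailbox_send_to_xp(ev.pkt_ptr, d.out_port, d.priority); /* queue for Tx */
        else
            mailbox_send_to_cp(ev.pkt_ptr, ev.in_port); /* slow path: CP handles it */
    }
}
```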
All the RISC processor sections (receive processor 510, transmit processor 512, descriptor processor 505 and ATM processor 507) handle high-speed tasks. Traditionally, ASICs are employed for these tasks, but the present invention does not use ASICs and therefore advantageously provides greater flexibility in the implementation of programming changes and upgrades. The RISC processors run highly optimized code resident almost fully in on-chip caches.
Figure 4 depicts a block diagram of a hardware configuration 600 illustrating an embodiment of the present invention. Hardware configuration 600 can be divided into the following major sections: communications processor section 514; a boot and mail box section 604; receive processor section 510; transmit processor section 512; packet buffer section 508; DMA engine section 504; Ethernet interface section 502; ATM interface section 616; a back-plane interface section 618; and a power supply section 620.
The communications processor section 514 comprises a communications processor (CP) 640 (which, for example, can be a Pentium™ processor running at 133 MHz and performing external bus accesses at 66 MHz). Section 514 also includes a local buffer 642, which can be created from a 16 MB DRAM. A remote access controller 644 provides Ethernet (10Base-T) and serial ports and timers for debugging the hardware configuration 600. A control bus logic portion 646 provides a control and status port and an interface to boot and mailbox section 604, packet buffer section 508 and ATM interface section 616. Overall, the communications processor 640 runs a real-time operating system and is the master controller of all activities in hardware configuration 600.
Boot and mailbox section 604 comprises boot code and mailboxes for all the processor sections, and includes a boot PROM portion 650, a common boot portion 652, and a plurality of mailboxes 654. Boot PROM 650 includes a 256KB EPROM used for storing the boot code used by communications processor 640. Common boot portion 652 includes a 4MB flash memory, which is expandable to 8MB, and 256KB of synchronous SRAM (Static Random Access Memory), which contain arbitration logic. The flash memory in common boot portion 652 is used as central storage for runtime code for all processor sections. Communications processor 640 boots up first from boot PROM portion 650, meanwhile keeping all other processor sections under reset. Communications processor 640 then allows one processor section to boot up at a time; each processor section boots up and picks up its code from pre-defined areas of the flash memory of common boot portion 652. The SRAM of common boot portion 652 is used for accommodating all the mailboxes 654. Mailbox size can vary from 8 to 32 commands deep, based on the corresponding inter-processor-section communication load. Mailboxes 654 are used by the processor sections to exchange information with each other. Receive processor section 510 comprises a receive processor (RP) 660, a packet buffer interface 662 and a forwarding database 664. Receive processor 660 can be implemented using a 233 MHz processor, such as the StrongARM SA-110 RISC processor, that performs external bus accesses at 66 MHz. Receive processor 660 fetches packet header information from the packet buffer interface 662 and performs the necessary searches through tables to make packet forwarding decisions.
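The power-up boot sequencing described above can be sketched as follows; the reset and ready helpers are hypothetical placeholders for the board's control logic.

```c
enum { RP_SEC, XP_SEC, DP_SEC, AP_SEC, NUM_SECTIONS };

/* Placeholders for the board control logic (not part of the disclosure). */
void hold_in_reset(int section);
void release_reset(int section);
void wait_until_booted(int section);   /* section fetches its code from the common flash */
void cp_boot_from_prom(void);

void cp_power_up_sequence(void)
{
    for (int s = 0; s < NUM_SECTIONS; s++)
        hold_in_reset(s);              /* all other processor sections stay in reset     */

    cp_boot_from_prom();               /* CP boots first from the 256KB boot PROM        */

    for (int s = 0; s < NUM_SECTIONS; s++) {
        release_reset(s);              /* allow one processor section to boot at a time  */
        wait_until_booted(s);
    }
}
```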
Transmit processor section 512 comprises a transmit processor (XP) 670 and a frame descriptor memory 672. Transmit processor 670 can be implemented using an SA-110 (mentioned above); it also runs at 233 MHz and performs external bus accesses at 66 MHz. Frame descriptor memory 672 comprises a 512 KB SRAM and contains control logic and interfaces with a buffer descriptor memory 674 and a buffer descriptor cache, both of which are discussed in greater detail below where the DMA section 504 is more fully described. Whenever packets are available for transmission, transmit processor 670 updates the buffer descriptor cache with pointers to these packets for the DMA engine section 504 to perform the transmission operation.
Packet buffer section 508 comprises a packet buffer 676 and a packet buffer control and arbitration portion 678. Packet buffer 676 includes a 4 MB SRAM, expandable to 8MB, used for central storage of packets received from Ethernet interface 502, ATM interface 506 and back-plane interface 518. The back-plane interface 518 is a 4.2 Gbps interface, and is treated by the controlling software as an Ethernet port having a lower priority than the ports of the Ethernet interface 502.
Data is stored in the packet buffer 676 in the form of 256-byte data buffers. Packet buffer section 508 has access to a high speed 4 Gbps (64-bit, 66 MHz) bus 680 to meet the bandwidth requirements of the system. The packet buffer interfaces with: 16 Ethernet ports through DMA engine section 504; two ATM ports through ATM interface section 616 and DMA engine section 504; back-plane section 618 through DMA engine 504; receive processor 660; and communications processor 640. Since receive processor 660 reads, and updates, packet headers, receive processor 660 must be provided extremely quick access to packet buffer 676 so as to avoid any delay. This is accomplished by providing receive processor 660 with cycle-steal type accesses. The receive-processor-to-packet-buffer interface logic is instructed by receive processor 660 to fetch/write a block from/into a particular location. The logic then does the fetch/write operation by stealing cycles in between other DMA transfers, and informs receive processor 660 when the operation is complete. Communications processor 640 is also provided cycle-steal type access to packet buffer 676, however with lesser priority than receive processor 660, since less bandwidth is required by communications processor 640. The remaining bus masters of the remaining processor sections are given access to packet buffer 676 in a round-robin fashion by packet buffer arbitration logic 678. DMA engine section 504 handles data transfers between packet buffer 676 and all the network ports, such as Ethernet ports 502, ATM ports 615 and back-plane 618. All the data is transferred to/from packet buffer 676 in 64-byte bursts. With the packet buffer being 64 bits wide, each burst of 64 bytes will require 8 clocks plus additional address clock(s). By pipelining subsequent bursts, the need for extra address clocks will be avoided to the extent possible. The buffer descriptor memory 674 is a synchronous SRAM and contains a buffer descriptor for every 256-byte block in the packet buffer. Hence the size of buffer descriptor memory 674 will depend on the size of the packet buffer and the size of each buffer descriptor. Each network port will be allocated a portion of memory in the packet buffer for storing received packets. All the buffer descriptors for each port will be linked together.
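The packet-buffer access priorities set out above (receive processor cycle-steal first, communications processor next, remaining bus masters round-robin) can be modeled, for illustration only, by a simple grant function; the number of other masters is an assumption.

```c
#define NUM_OTHER_MASTERS 4   /* remaining bus masters (assumed count for the sketch) */

/* Returns 0 for the receive processor, 1 for the communications processor,
 * 2+m for the m-th other master, or -1 when nothing is pending. */
int next_packet_buffer_grant(int rp_req, int cp_req,
                             const int other_req[NUM_OTHER_MASTERS], int *rr_next)
{
    if (rp_req) return 0;                         /* cycle-steal, highest priority */
    if (cp_req) return 1;                         /* cycle-steal, lower priority   */
    for (int i = 0; i < NUM_OTHER_MASTERS; i++) { /* round-robin over the rest     */
        int m = (*rr_next + i) % NUM_OTHER_MASTERS;
        if (other_req[m]) {
            *rr_next = (m + 1) % NUM_OTHER_MASTERS;
            return 2 + m;
        }
    }
    return -1;                                    /* bus idle                      */
}
```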
For receiving data on a port, DMA engine 504 has to fetch the next available buffer descriptor from the descriptor memory 674 and then do the data transfer to the corresponding data buffer in the packet buffer. On completing reception of a full buffer (256 bytes) or on end of frame/error, the buffer descriptor has to be updated with some information, such as size of data in the buffer and a couple of flags. A descriptor cache (an SRAM) is provided to make it easier and faster for
DMA engine 504 to fetch receive and transmit buffer descriptors. For each port, a cache of 64 receive (Rx) and 64 transmit (Tx) descriptors is provided. This way, DMA engine 504 can pick up descriptors from sequential locations in the cache instead of having to walk through the link pointers in buffer descriptor memory 674.
A descriptor processor (DP) 675 mainly handles the function of replenishing the descriptor cache (for all the ports) with available receive descriptors, as and when necessary. The transmit descriptor cache 672 is updated by transmit processor 670, which handles all the packet transmission functions. However, on completion of transmission of a buffer, descriptor processor 675 frees the same and attaches it to the appropriate free buffer pool.
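For illustration, the buffer descriptors and the per-port descriptor cache described above might be laid out as follows; the field names and widths are assumptions rather than the actual memory map.

```c
#include <stdint.h>

/* One descriptor per 256-byte buffer in the packet buffer, linked per port. */
typedef struct buf_desc {
    uint32_t buf_offset;   /* offset of the 256-byte buffer in the packet buffer */
    uint16_t byte_count;   /* valid bytes in the buffer (updated on completion)  */
    uint16_t flags;        /* e.g. end-of-frame, error, ownership                */
    uint32_t next;         /* link to the next descriptor for this port          */
} buf_desc_t;

/* Per-port descriptor cache read sequentially by the DMA engine. */
typedef struct {
    buf_desc_t rx[64];     /* replenished by the descriptor processor            */
    buf_desc_t tx[64];     /* filled in by the transmit processor                */
    uint8_t    rx_head;
    uint8_t    tx_head;
} port_desc_cache_t;
```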
On receiving the first burst of a packet, DMA engine 504 passes the packet pointer and the port number to the receive processor 660, which then reads the packet header from the packet buffer and starts the decision-making process. DMA engine 504 also informs the receive processor when the packet is fully received. With these two features, it is possible to support both cut-through and store-and-forward switching methods; however, the store-and-forward method is used initially. In addition, DMA engine 504 has interfaces with ATM interface section 616 and back-plane section 618, as well as the Ethernet interface section 502. Ethernet interface section 502 is comprised of a Media Access Control (MAC) chip, such as the Lucent LUC3M08 MAC chip, and a physical (PHY) block. The PHY block is modularly provided on a daughter board, so that other media types can be easily utilized.
ATM interface 616 is implemented as a plug-in daughter card. A local storage of 1MB/2MB memory is provided to act as a buffer between two different buses (a PCI bus 522 on the SAR side, and a packet buffer bus 524) to which ATM interface 506 is connected. ATM interface section 616 comprises an ATM processor section 507 and a Segmentation and Reassembly (SAR) section 506. ATM processor section 507 comprises an ATM processor (AP) 694, 1MB/2MB of ATM buffer memory 696 and interfaces to DMA engine 504 and SAR section 506 implemented as bus control logic 698. The SAR section 506 includes the SAR chip, local memory for the SAR, a PHY interface and an ATM buffer interface.
The ATM buffer 696 is required because the SAR data bus 699 is not compatible with the packet buffer bus, and interfacing it directly with packet buffer 508 would waste packet buffer bandwidth. ATM processor 694 runs ATM driver code which handles all the interactions with SAR section 506. ATM processor 694 configures the SAR chips within SAR section 506 and sets up receive and transmit buffers for the SAR chips. On receiving frames from the ATM port(s), ATM processor 694 programs the packet buffer interface logic to initiate a transfer to packet buffer 508. On receiving frames from packet buffer 508, ATM processor 694 puts them in a transmit queue of the destination ATM port 615. The ATM processor code resident in ATM memory 696 is upgradeable to handle new types of uplink modules.
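The ATM driver flow run by ATM processor 694 can be sketched, for illustration, as three entry points; the SAR and bus-control calls are hypothetical placeholders, not the register interface of any particular SAR device.

```c
#include <stdint.h>

typedef struct { uint32_t atm_buf_addr; uint32_t len; uint8_t port; } atm_frame_t;

/* Placeholders for the SAR and bus-control interfaces (not part of the disclosure). */
void sar_configure_and_alloc_buffers(void);
void bus_ctl_start_transfer_to_packet_buffer(const atm_frame_t *f);
void sar_enqueue_for_transmit(uint8_t atm_port, const atm_frame_t *f);

void atm_driver_init(void)
{
    sar_configure_and_alloc_buffers();  /* set up the SAR chips and Rx/Tx buffers in ATM memory 696 */
}

void atm_on_frame_from_atm_port(const atm_frame_t *f)
{
    /* a reassembled frame sits in ATM buffer 696; move it to packet buffer 508 */
    bus_ctl_start_transfer_to_packet_buffer(f);
}

void atm_on_frame_from_packet_buffer(const atm_frame_t *f)
{
    /* a frame destined for an ATM uplink: queue it on the destination port */
    sar_enqueue_for_transmit(f->port, f);
}
```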
ATM ports 615 support OC-3, T1/E1 and T3/E3 interfaces. In an exemplary embodiment, Adaptec's SAR chip is used in all these cases. For T1/E1 and T3/E3 the same PCB layout may be used, with some components being selectively loaded.
As illustrated in Figure 4, back-plane interface section 618 includes, for example, a 2 Gbps (64-bit, 33 MHz) back-plane bus which is used for data transfer between switch modules. According to one embodiment, this bus can be upgraded to 4 Gbps (66 MHz) if required. A distributed arbitration scheme (present on each switch module) is used, which provides round-robin access priority to all the switch modules. Back-plane interface section 618 comprises arbitration logic, a 64-bit bi-directional FIFO and interfaces with DMA engine 504.
The transfer over the back-plane essentially involves data movement from the packet buffer of one switch module to the packet buffer of another switch module. This implies that, if the transfer were to be made in a synchronous manner, access to both packet buffers would have to be available to the back-plane logic at the same time. With the use of a FIFO, the transfer can instead be made in an asynchronous and simple manner. DMA engine 504 on the source switch module fills up the FIFO with data without waiting for the destination switch module to be ready. Over the back-plane section 618, the data in the FIFO is sent to the FIFO on the destination module after arbitrating for the back-plane bus. From there, DMA engine 504 on the destination switch module transfers the data to the packet buffer as and when it becomes available.
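For illustration, the asynchronous FIFO-based transfer can be sketched as below; the helper functions are hypothetical stand-ins for the FIFO and arbitration logic.

```c
#include <stddef.h>
#include <stdint.h>

/* Placeholders for the FIFO and arbitration logic (not part of the disclosure). */
void fifo_push_burst(int module, const uint64_t *words, size_t n);   /* source side */
void backplane_arbitrate_and_move(int src_module, int dst_module);   /* 64-bit bus  */
void fifo_drain_to_packet_buffer(int module);                        /* destination */

void backplane_send(int src_module, int dst_module,
                    const uint64_t *words, size_t n)
{
    fifo_push_burst(src_module, words, n);   /* no handshake with the destination yet   */
    backplane_arbitrate_and_move(src_module, dst_module);
    fifo_drain_to_packet_buffer(dst_module); /* drained as and when the buffer is free  */
}
```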
Redundant, internal power supplies 520 provide suitable power for both the Telco and non-Telco environments using, respectively, a -48V input and a universal AC input of 110-240V. In Figure 3, only one power supply 520 is shown for convenience. In the non-Telco environment, AC-DC supply modules are used, each having connectors feeding in 110-240 VAC. In the Telco environment, the AC power supply is replaced by two DC-DC supply modules, each having connectors feeding in -48V DC.
The Ethernet switch 260 as detailed above works in conjunction with the end-user workstation's software to quickly deliver multimedia information while ensuring an end-to-end negotiated quality of service that is free from delay-inducing congestion. The end-station executes shim software. The shim comprises a protocol combination, or other suitable combination of protocols, that allows the implementation of CIF technology to bring native ATM services to desktops that are equipped with legacy Ethernet or Token Ring NICs by encapsulating cells into frames. CIF can also be viewed as the inverse of ATM LAN Emulation (LANE). LANE provides a way for legacy LAN media access controller-layer protocols like Ethernet and Token Ring, and all higher-layer protocols and applications, to work transparently across an ATM network. LANE retains all Ethernet and Token Ring drivers and adapters; no modifications need to be made to Ethernet or Token Ring end stations. CIF, in contrast, emulates ATM services over frame-based LANs. CIF uses software at the workstation, without requiring the procurement of a new NIC, to support quality of service scheduling and ABR/ER flow control.
To achieve end-to-end quality of service, the shim resides as a layer in the end station to provide encapsulation of cells within Ethernet frames at the desktop for transport to the data network. The shim supports multiple queues, a scheduler (not shown), the ER flow control, and header adjustment. The shim comprises an ATM Adaptation Layer (AAL), which is the standards layer that allows multiple applications to have data converted to and from the ATM cell; the AAL is the protocol that translates higher-layer services into the size and format of an ATM cell. The CIF shim layer also includes a traffic management (TM) component that sets forth the congestion control requirements. The TM component (not shown) can be implemented as TM 4.0. The ATM Forum has developed a complete 4.0 protocol suite that includes UNI signaling 4.0, which allows signaling of bandwidth and delay requirements for QoS, and TM 4.0, which specifies explicit rate flow control and QoS functions. The CIF shim layer also includes a frame segmentation and reassembly (SAR) sublayer (not shown), which converts protocol data units (PDUs) into appropriate lengths and formats them to fit the payload of an ATM cell. At the destination end station, the SAR extracts the payloads from the cells and converts them back into PDUs which can be used by applications higher up the protocol stack. The shim adds the CIF header to packets before they are transmitted, and removes the header when they are received. The shim manages the message queues by queuing outgoing data into multiple queues for QoS management. The shim also processes the RM cells for explicit rate flow control using the ABR flow control and allows ATM signaling software to run both native ATM applications as well as standard IP applications.
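The send side of the shim can be sketched, for illustration, as a per-VC queue served by a simple priority scheduler that prepends a CIF header before handing the frame to the legacy NIC driver; the queue layout, header size, and helper names are assumptions, not the CIF 1.0 encoding.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define CIF_HDR_LEN 4                  /* assumed CIF header size, for illustration */
#define MAX_VCS     64

typedef struct {
    uint8_t        qos_class;          /* 1 (highest) .. 5 ("best effort")          */
    const uint8_t *pending;            /* queued outgoing data for this VC          */
    size_t         pending_len;
} vc_queue_t;

static vc_queue_t vcs[MAX_VCS];

void nic_driver_send(const uint8_t *frame, size_t len);  /* legacy NIC driver (placeholder) */

/* Serve the non-empty queue with the best QoS class and emit one CIF frame. */
void shim_scheduler_tick(uint8_t *frame_buf, size_t buf_len)
{
    vc_queue_t *best = NULL;
    for (int i = 0; i < MAX_VCS; i++)
        if (vcs[i].pending_len && (!best || vcs[i].qos_class < best->qos_class))
            best = &vcs[i];
    if (!best || buf_len <= CIF_HDR_LEN)
        return;

    size_t n = best->pending_len;
    if (n > buf_len - CIF_HDR_LEN)
        n = buf_len - CIF_HDR_LEN;

    memset(frame_buf, 0, CIF_HDR_LEN); /* header adjustment: prepend the CIF header */
    frame_buf[0] = best->qos_class;    /* illustrative header contents only         */
    memcpy(frame_buf + CIF_HDR_LEN, best->pending, n);
    nic_driver_send(frame_buf, CIF_HDR_LEN + n);

    best->pending     += n;            /* dequeue what was sent                     */
    best->pending_len -= n;
}
```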
The end station further comprises a device driver and a Network Device Interface Specification (NDIS) layer 609 located above the CIF shim layer 611. The end station 714 includes an Internet Protocol (IP) layer 607b which supports classical IP, LANE and MPOA for the interworking of dissimilar computers across a network. IP layer 607b is a connectionless protocol that operates at the network layer (layer 3) of the OSI model. Winsock 2.0 603 is the application program interface (API) layer, which enables developers to take advantage of ATM's QoS and traffic management features. Application layer 601 can accommodate traditional as well as native ATM applications. Native ATM applications can be readily created with the Winsock 2.0 API 603.
The shim arrangement guarantees that the services negotiated by the native ATM applications for the VCs are not arbitrarily disrupted by the traffic generated by the legacy applications. Forcing both the ATM and the legacy protocol traffic to go through the CIF shim allows the shim to manage the transmission of all traffic according to the QoS specified for each traffic stream. To support the migration of legacy applications, the CIF AD forwards CIF traffic from the conventional LAN onto the ATM infrastructure for delivery to an ATM-attached end station or to another CIF AD. The CIF ES is also required to run LANE, MPOA (Multiprotocol Over ATM), or Classical IP protocols. Network data from a legacy application is first handled by the legacy protocols (e.g., TCP/IP), and then turned into ATM traffic by LANE, MPOA, or Classical IP. The CIF ES function encapsulates the individual cells into CIF frames before the data is finally transmitted on the wire to the CIF AD.
Figure 5 illustrates a potential application of the present invention. A variety of LAN technologies exist, but the large majority of LANs conform to the IEEE 802.3 standard, which defines Ethernet. Various types of Ethernet systems exist, such as switched Ethernet, Fast Ethernet, and Gigabit Ethernet. The end-station 710 is equipped with an Ethernet NIC 712 residing in a personal computer with a host processor 714. Ethernet NIC 714 is connected to a high-speed digital subscriber line (DSL) modem 720, which interfaces with a telephone line 722 via a CP (customer premise) POTS (plain old telephone service) splitter 721. The telephone line is a twisted pair copper wire, which the conventional customer premises telephone 724 uses to connect with a telephone central office 740. A telephone central office or end office is shown in Figure 5 as a communications facility 740; however, any communication facility can be used (e.g., a wire closet in a separate building). High-speed communication to remote users depends largely on the method of access to the networking infrastructure. Most users cannot bear the cost of leasing the expensive outside lines that are needed to provide high-speed communication to the Internet or to their offices. The disclosed embodiment overcomes this dilemma by employing a high-speed, low-cost subscriber interface that takes advantage of the legacy outside cable plant, such as standard twisted copper pair wiring and coaxial cables.
As illustrated in Figure 5, one embodiment utilizes digital subscriber line (DSL) technology to deliver the high bandwidth that remote users demand. Because traditional copper cabling is used, the remote users do not have to upgrade their current physical connection - their POTS line is sufficient. Because the outside plant need not be revamped, telephone companies (Telcos) can readily implement DSL services. The DSL modem 720 acts as the network access device to the central office. A DSL multiplexer 752 provides termination of the DSL modem connection within communications facility 740. DSL technology is categorized by its downstream and upstream bandwidths, and the present invention could be applied to any of the various forms of DSL technology. One commonly employed variety, Rate Adaptive DSL or RADSL, involves a rate negotiation between the customer premise DSL modem 720 and the Telco CO modem located within DSL MUX 752, which takes into account distance and line quality issues, yielding the maximum available rate for the line conditions encountered. RADSL supports Asymmetric DSL or ADSL, with a maximum downstream rate of 7.62 Mbps and a maximum upstream rate of 1.1 Mbps, which is ideal for very high speed Internet access and video-on-demand applications. ADSL services can be delivered up to 18,000 feet from the central office over a single copper twisted pair. RADSL also supports Symmetric DSL or SDSL, with a maximum bidirectional rate of about 1.1 Mbps, which is ideal for very high quality videoconferencing and remote LAN access. Another type of DSL technology is known as high-bit-rate digital subscriber line (HDSL), which provides a symmetric channel, delivering T1 rates (1.544 Mbps) in both directions. HDSL has a distance limitation of about 12,000 feet without repeaters. Telcos have traditionally used HDSL to provide local access to T1 services, and HDSL is already widely deployed within the Telco market as a low-cost T-1 replacement. VDSL, or Very high bit-rate DSL, requires a fiber-to-the-curb local loop infrastructure and offers asymmetric speeds up to 52 Mbps. Other flavors of DSL (sometimes generically denoted xDSL) are characterized by whether the service is asymmetric or symmetric and by the bandwidth allocations for the upstream and downstream transmissions.
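The DSL variants and nominal rates quoted above can be summarized in a small lookup table, as a hypothetical provisioning tool might hold them (rates in Mbps, taken directly from the figures above).

```c
/* Nominal DSL rates quoted above (rates in Mbps; -1 = not specified above). */
typedef struct {
    const char *variant;
    double      down_mbps;
    double      up_mbps;
    const char *note;
} dsl_profile_t;

static const dsl_profile_t dsl_profiles[] = {
    { "ADSL (via RADSL)", 7.62,  1.1,   "asymmetric; up to 18,000 ft on one copper pair"        },
    { "SDSL (via RADSL)", 1.1,   1.1,   "symmetric; videoconferencing, remote LAN access"       },
    { "HDSL",             1.544, 1.544, "symmetric T1 rates; about 12,000 ft without repeaters" },
    { "VDSL",             52.0,  -1.0,  "asymmetric; requires fiber-to-the-curb local loop"     },
};
```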
The central office 740 comprises a plain old telephone service (POTS) splitter 742, which receives the information transmitted across the twisted pair line 722 and "splits" the low frequencies, which carry voice signals, from the high frequencies, which carry data signals. Essentially, the POTS splitter is a passband filter, whereby the low frequency information is carried by a voice line 744 to a voice switch 746 and ultimately to a public switched telephone network (PSTN) 748. The voice line 744, voice switch 746 and PSTN 748 are each conventional, and are therefore not explained further so as not to detract from the focus of the disclosure of the present invention. The data information, which is modulated using high frequency signals, is transmitted over a twisted pair cable 750 to a POTS splitter 742. The POTS splitter 742 then passes the high frequency signals to a DSL multiplexer (DSL MUX) 752. The DSL MUX serves as the DSL modem termination point for numerous end users with DSL modems. The DSL MUX 752 aggregates all the DSL traffic and passes the multimedia information to the Ethernet switch 260. The traffic can be of any data type, including multimedia graphics, video, image, audio, and text. Various embodiments of the DSL MUX 752 can be employed, ranging from 74 line stackable modules through the traditional high-density chassis-based approach. Various line codes can be supported within the DSL modems, including Carrierless Amplitude Phase (CAP) modulation, Discrete Multi-Tone (DMT) modulation, Quadrature Amplitude Modulation (QAM), as well as others. Ethernet switch 260 is primarily an edge device that is connected to an ATM network 770 on which a conventional multimedia server (not shown) may be linked. The ATM network 770 thus represents a fast and efficient delivery system for the multimedia applications to which the end user desires access. The Ethernet switch 260 communicates with the CO DSL MUX 752 relative to traffic information, in order to minimize congestion. Traditionally, end user access to an ATM network has been through a router. Since the end-station 710 houses an Ethernet NIC 714, connection to ATM network 770 proves difficult without the system of the present invention, which allows information residing on an ATM network to be transferred to an Ethernet end-station while still retaining all the multimedia benefits of ATM, including QoS and ABR/ER flow control. An advantage associated with a DSL implementation is that the personal computer is constantly connected, much like a typical Ethernet LAN connection. That is, communication sessions are not initiated through a dial-up procedure.
While this invention has been described in connection with what is presently considered to be the most practical and preferred embodiments, it is to be understood that the invention is not limited to the disclosed embodiments, but, on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.

Claims

WHAT IS CLAIMED IS:
1. An Ethernet-type networking device for communicating with a cell-based network comprising: a direct memory access (DMA) engine coupled to a plurality of Ethernet-type ports and an internal bus for controlling full-duplex data transfer of Ethernet-type frames, the Ethernet-type frames being of a cell-encapsulated type or a non-cell-encapsulated type; a receive processor, coupled to the DMA engine, for encapsulating and decapsulating the Ethernet-type frames, the receive processor supporting cut-through and store-and-forward switching; a transmit processor, coupled to the DMA engine, for performing traffic scheduling according to priority of the Ethernet-type frame and processing of data that are to be transmitted via one of the Ethernet-type ports; a cell processor, coupled to the internal bus for processing cells and for controlling segmentation and reassembly of the cells, the cell processor executing a cell driver software in support of providing cell-based quality of service; at least one cell interface, coupled to the cell processor and the internal bus for receiving the cells; a communication processor, coupled to the internal bus for controlling the receive and transmit processors, and supporting cell-based signaling; and a descriptor processor, coupled to the DMA engine, for buffering a plurality of descriptors; wherein the processors communicate through a plurality of mailboxes associated with each processor, the receive processor further encapsulates/decapsulates the received cells.
2. The Ethernet-type networking device as in claim 1, wherein a DSL multiplexer is connected to one of the Ethernet-type ports.
3. The Ethernet-type networking device as in claim 1, wherein the cells are ATM cells.
4. The Ethernet-type networking device in claim 1, wherein the cell driver software supports available bit rate (ABR) with explicit rate flow control and resource management (RM) cell priority.
5. The Ethernet-type networking device in claim 1, wherein the Ethernet-type ports comprise 10/100 Mbps ports.
6. The Ethernet-type networking device in claim 1, wherein the cell interface comprises an optical carrier (OC)-3c port.
7. The Ethernet-type networking device in claim 1, wherein the cell interface comprises an optical carrier (OC)-12 port.
8. The Ethernet-type networking device in claim 1, wherein the receive, transmit, cell, communication, and descriptor processors are reduced instruction set chip (RISC) processors.
9. A method for providing quality of service to an Ethernet-type environment, comprising: receiving a plurality of Ethernet-type frames via an Ethernet-type port; receiving a plurality of cells via a cell port; encapsulating at least one cell into one of the plurality of Ethernet-type frames; and supporting ATM quality of service and ATM traffic management capabilities.
10. The method in claim 9, further comprising communicating with a DSL multiplexer for transporting the Ethernet-type frames to a remote location.
11. The method in claim 9, wherein the step of supporting further comprises using available bit rate (ABR) with explicit rate flow control.
12. The method in claim 11, wherein the available bit rate (ABR) with explicit rate flow control supports resource management (RM) cell priority.
13. The method in claim 11, wherein the processing step utilizes cells-in-frames technology.
PCT/US2000/029350 1999-10-25 2000-10-25 Ethernet edge switch for cell-based networks WO2001031969A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU12296/01A AU1229601A (en) 1999-10-25 2000-10-25 Ethernet edge switch for cell-based networks

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US16142099P 1999-10-25 1999-10-25
US60/161,420 1999-10-25
US54741900A 2000-04-11 2000-04-11
US09/547,419 2000-04-11

Publications (2)

Publication Number Publication Date
WO2001031969A1 true WO2001031969A1 (en) 2001-05-03
WO2001031969A9 WO2001031969A9 (en) 2002-01-24

Family

ID=26857814

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2000/029350 WO2001031969A1 (en) 1999-10-25 2000-10-25 Ethernet edge switch for cell-based networks

Country Status (2)

Country Link
AU (1) AU1229601A (en)
WO (1) WO2001031969A1 (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002095607A1 (en) 2001-05-18 2002-11-28 Riverstone Networks, Inc. Method and system for connecting virtual circuits across an ethernet switch
WO2003063425A1 (en) * 2002-01-18 2003-07-31 Telefonaktiebolaget Lm Ericsson (Publ.) Adaptive ethernet switch system and method
WO2003075499A1 (en) * 2002-03-01 2003-09-12 Infineon Technologies Ag Atm-port-module with integrated ethernet switch interface
DE10242321B4 (en) * 2002-03-01 2005-08-18 Infineon Technologies Ag ATM connection module with integrated Ethernet switch interface
EP1715618A1 (en) * 2004-02-25 2006-10-25 Huawei Technologies Co., Ltd. A networking equipment of broadband accessing and method thereof
CN100414939C (en) * 2004-07-26 2008-08-27 华为技术有限公司 Conversion circuit and method between ATM data and data in frame format, and transmission exchange system
EP3001618A1 (en) * 2014-09-29 2016-03-30 F5 Networks, Inc Method and apparatus for multiple DMA channel based network quality of service
EP4123971A1 (en) * 2021-07-20 2023-01-25 Nokia Solutions and Networks Oy Processing data in an ethernet protocol stack

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5946313A (en) * 1997-03-20 1999-08-31 Northern Telecom Limited Mechanism for multiplexing ATM AAL5 virtual circuits over ethernet
US5963543A (en) * 1993-10-20 1999-10-05 Lsi Logic Corporation Error detection and correction apparatus for an asynchronous transfer mode (ATM) network device

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5963543A (en) * 1993-10-20 1999-10-05 Lsi Logic Corporation Error detection and correction apparatus for an asynchronous transfer mode (ATM) network device
US5946313A (en) * 1997-03-20 1999-08-31 Northern Telecom Limited Mechanism for multiplexing ATM AAL5 virtual circuits over ethernet

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ARCO J M ET AL: "Carrying ATM cells over Ethernet", PROCEEDINGS 25TH EUROMICRO CONFERENCE. INFORMATICS: THEORY AND PRACTICE FOR THE NEW MILLENNIUM, PROCEEDINGS OF EUROMICRO WORKSHOP, MILAN, ITALY, 8-10 SEPT. 1999, 1999, Los Alamitos, CA, USA, IEEE Comput. Soc, USA, pages 342 - 349 vol.2, XP002163127, ISBN: 0-7695-0321-7 *
JESSUP T: "DSL: THE CORPORATE CONNECTION", DATA COMMUNICATIONS,US,MCGRAW HILL. NEW YORK, vol. 27, no. 2, 1 February 1998 (1998-02-01), pages 103 - 104,106,108, XP000731801, ISSN: 0363-6399 *
SHORE M ET AL: "Cells in frames: ATM over legacy networks", 1998 1ST IEEE INTERNATIONAL CONFERENCE ON ATM. ICATM'98, PROCEEDINGS OF ICATM'98: IEEE INTERNATIONAL CONFERENCE ON ATM, COLMAR, FRANCE, 22-24 JUNE 1998, 1998, New York, NY, USA, IEEE, USA, pages 418 - 422, XP002163126, ISBN: 0-7803-4982-2 *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1393192A4 (en) * 2001-05-18 2008-04-16 Riverstone Networks Inc Method and system for connecting virtual circuits across an ethernet switch
EP1393192A1 (en) * 2001-05-18 2004-03-03 Riverstone Networks, Inc. Method and system for connecting virtual circuits across an ethernet switch
WO2002095607A1 (en) 2001-05-18 2002-11-28 Riverstone Networks, Inc. Method and system for connecting virtual circuits across an ethernet switch
WO2003063425A1 (en) * 2002-01-18 2003-07-31 Telefonaktiebolaget Lm Ericsson (Publ.) Adaptive ethernet switch system and method
US7529250B2 (en) 2002-01-18 2009-05-05 Telefonaktiebolaget L M Ericsson (Publ) Adaptive ethernet switch system and method
CN100375462C (en) * 2002-01-18 2008-03-12 艾利森电话股份有限公司 Adaptive Ethernet switch system and method
DE10242321B4 (en) * 2002-03-01 2005-08-18 Infineon Technologies Ag ATM connection module with integrated Ethernet switch interface
US7369568B2 (en) 2002-03-01 2008-05-06 Infineon Technologies Ag ATM-port with integrated ethernet switch interface
WO2003075499A1 (en) * 2002-03-01 2003-09-12 Infineon Technologies Ag Atm-port-module with integrated ethernet switch interface
EP1715618A1 (en) * 2004-02-25 2006-10-25 Huawei Technologies Co., Ltd. A networking equipment of broadband accessing and method thereof
EP1715618A4 (en) * 2004-02-25 2007-03-14 Huawei Tech Co Ltd A networking equipment of broadband accessing and method thereof
CN100344126C (en) * 2004-02-25 2007-10-17 华为技术有限公司 Equipment and method for configuring network of wide band access
CN100414939C (en) * 2004-07-26 2008-08-27 华为技术有限公司 Conversion circuit and method between ATM data and data in frame format, and transmission exchange system
EP3001618A1 (en) * 2014-09-29 2016-03-30 F5 Networks, Inc Method and apparatus for multiple DMA channel based network quality of service
EP4123971A1 (en) * 2021-07-20 2023-01-25 Nokia Solutions and Networks Oy Processing data in an ethernet protocol stack

Also Published As

Publication number Publication date
WO2001031969A9 (en) 2002-01-24
AU1229601A (en) 2001-05-08

Similar Documents

Publication Publication Date Title
US6477595B1 (en) Scalable DSL access multiplexer with high reliability
US6404861B1 (en) DSL modem with management capability
US6990108B2 (en) ATM system architecture for the convergence of data, voice and video
US6961340B2 (en) AAL2 receiver for filtering signaling/management packets in an ATM system
WO2006100610A1 (en) System-level communication link bonding apparatus and methods
US7050546B1 (en) System and method for providing POTS services in DSL environment in event of failures
US20020154629A1 (en) Integrated PMP-radio and DSL multiplexer and method for using the same
US7203187B1 (en) Messaging services for digital subscriber loop
US20020057700A1 (en) Systems and methods for connecting frame relay devices via an atm network using a frame relay proxy signaling agent
US8295303B2 (en) System and method for transmission of frame relay communications over a digital subscriber line equipped with asynchronous transfer mode components
US7508761B2 (en) Method, communication arrangement, and communication device for transmitting message cells via a packet-oriented communication network
US6931012B2 (en) ATM processor for switching in an ATM system
WO2001031969A1 (en) Ethernet edge switch for cell-based networks
EP1248424B1 (en) AAL2 transmitter for voice-packets and signaling management-packets interleaving on an ATM connection
MXPA02003528A (en) System and method for providing pots services in dsl environment in event of failures.
KR100805174B1 (en) Messaging services for a digital subscriber loop
US6915360B2 (en) Cell buffering system with priority cache in an ATM system
WO2001031970A1 (en) A communication system for transporting multimedia information over high-speed links using an ethernet type network interface
US7092384B1 (en) System and method for providing voice and/or data services
TW516332B (en) A communication system for transporting multimedia information over high-speed links using an Ethernet type network interface
Ojesanmi Asynchronous Transfer Mode (ATM) Network.
Lambrou Asynchronous transfer mode and future

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
WD Withdrawal of designations after international publication

Free format text: US

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
AK Designated states

Kind code of ref document: C2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: C2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

COP Corrected version of pamphlet

Free format text: PAGES 1-22, DESCRIPTION, REPLACED BY NEW PAGES 1-22; PAGES 23-25, CLAIMS, REPLACED BY NEW PAGES 23-25; PAGES 1/7-7/7, DRAWINGS, REPLACED BY NEW PAGES 1/7-7/7; DUE TO LATE TRANSMITTAL BY THE RECEIVING OFFICE

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase