WO2002100073A2 - Concurrent switching of synchronous and asynchronous traffic - Google Patents

Concurrent switching of synchronous and asynchronous traffic

Info

Publication number
WO2002100073A2
WO2002100073A2 (PCT/US2002/017515)
Authority
WO
WIPO (PCT)
Prior art keywords
line unit
traffic
network element
data
line
Prior art date
Application number
PCT/US2002/017515
Other languages
French (fr)
Other versions
WO2002100073A3 (en)
Inventor
Jason Dove
Brian Semple
Mike Nelson
Ying Zhang
James W. Jones
Andre Tanguay
Original Assignee
Calix Networks, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US09/874,352 (now US6798784B2)
Priority claimed from US09/874,402 (now US7035294B2)
Application filed by Calix Networks, Inc.
Priority to AU2002310279A1
Publication of WO2002100073A2
Publication of WO2002100073A3

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00 Packet switching elements
    • H04L49/40 Constructional details, e.g. power supply, mechanical construction or backplane
    • H04L49/405 Physical details, e.g. power supply, mechanical construction or backplane of ATM switches
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00 Data switching networks
    • H04L12/54 Store-and-forward switching systems
    • H04L12/56 Packet switching systems
    • H04L12/5601 Transfer mode dependent, e.g. ATM
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00 Data switching networks
    • H04L12/54 Store-and-forward switching systems
    • H04L12/56 Packet switching systems
    • H04L12/5691 Access to open networks; Ingress point selection, e.g. ISP selection
    • H04L12/5692 Selection among different networks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00 Data switching networks
    • H04L12/64 Hybrid switching systems
    • H04L12/6402 Hybrid switching fabrics
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M11/00 Telephonic communication systems specially adapted for combination with other electrical systems
    • H04M11/06 Simultaneous speech and data transmission, e.g. telegraphic transmission over the same conductors
    • H04M11/062 Simultaneous speech and data transmission, e.g. telegraphic transmission over the same conductors using different frequency bands for speech and other data
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M3/00 Automatic or semi-automatic exchanges
    • H04M3/005 Interface circuits for subscriber lines
    • H04M3/007 Access interface units for simultaneous transmission of speech and data, e.g. digital subscriber line [DSL] access interface units
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04Q SELECTING
    • H04Q11/00 Selecting arrangements for multiplex systems
    • H04Q11/04 Selecting arrangements for multiplex systems for time-division multiplexing
    • H04Q11/0421 Circuit arrangements therefor
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00 Data switching networks
    • H04L12/54 Store-and-forward switching systems
    • H04L12/56 Packet switching systems
    • H04L12/5601 Transfer mode dependent, e.g. ATM
    • H04L2012/5614 User Network Interface
    • H04L2012/5615 Network termination, e.g. NT1, NT2, PBX
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00 Data switching networks
    • H04L12/54 Store-and-forward switching systems
    • H04L12/56 Packet switching systems
    • H04L12/5601 Transfer mode dependent, e.g. ATM
    • H04L2012/5672 Multiplexing, e.g. coding, scrambling
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04Q SELECTING
    • H04Q2213/00 Indexing scheme relating to selecting arrangements in general and for multiplex systems
    • H04Q2213/13003 Constructional details of switching devices
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04Q SELECTING
    • H04Q2213/00 Indexing scheme relating to selecting arrangements in general and for multiplex systems
    • H04Q2213/1329 Asynchronous transfer mode, ATM
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04Q SELECTING
    • H04Q2213/00 Indexing scheme relating to selecting arrangements in general and for multiplex systems
    • H04Q2213/13292 Time division multiplexing, TDM
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04Q SELECTING
    • H04Q2213/00 Indexing scheme relating to selecting arrangements in general and for multiplex systems
    • H04Q2213/13299 Bus
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04Q SELECTING
    • H04Q2213/00 Indexing scheme relating to selecting arrangements in general and for multiplex systems
    • H04Q2213/1334 Configuration within the switch
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04Q SELECTING
    • H04Q2213/00 Indexing scheme relating to selecting arrangements in general and for multiplex systems
    • H04Q2213/13341 Connections within the switch
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04Q SELECTING
    • H04Q2213/00 Indexing scheme relating to selecting arrangements in general and for multiplex systems
    • H04Q2213/13361 Synchronous systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04Q SELECTING
    • H04Q2213/00 Indexing scheme relating to selecting arrangements in general and for multiplex systems
    • H04Q2213/13362 Asynchronous systems

Definitions

  • central offices that process telephone calls between subscribers typically use switches called Class 5 switches, such as the 5ESS available from Lucent.
  • a telephone instrument may be directly connected to such a Class 5 switch as illustrated in FIG. 1 if the telephone instrument is located within an 18 kilofoot radius. Beyond the 18 kilofoot radius, support for such telephone instruments that use a copper twisted pair may be provided through a digital loop carrier (DLC), which has two portions: a central office terminal and a remote terminal.
  • the central office terminal is normally located within the central office and communicates with the remote terminal using a digital signal over metallic (such as copper) or optical link (also called "digital line").
  • the central office terminal of the digital loop carrier is coupled to the Class 5 switch in the central office and the coupling may conform to an aggregation interface, such as GR303 (a standard defined by Telcordia).
  • the remote terminal in turn is connected to a number of telephone instruments.
  • a remote terminal may also provide a high-speed trunk, such as a T1 that may be needed by a business, and/or be coupled via modems to personal computers to support data traffic.
  • the DLC remote terminal may be implemented by a digital multiplexer that combines a number of subscriber channels into a single high speed digital signal, and the DLC central office terminal by a de-multiplexer. Because a digital line cannot carry signals as far as a corresponding analog line, the digital line often requires a number of digital repeaters to boost signal level.
  • a typical digital line of a DLC carries from 24 to 3000 POTS circuits. Note that a DLC central office terminal may be eliminated, e.g. as in the case of an Integrated Digital Loop Carrier System, wherein the digital line is directly connected to the Class 5 switch.
  • the Class 5 switch may be associated with a portion of a telephone number, e.g. the portion 252 in a telephone number 408-252-1735. All telephones that are serviced by a single Class 5 switch are normally assigned a telephone number that includes a preset prefix, e.g. 252.
  • the Class 5 switch typically forms connections between telephones within its own service area, each of which starts with the preset prefix, e.g. 252.
  • when a telephone instrument within its service area places a call to a number different from the numbers starting with the preset prefix, the Class 5 switch connects the telephone instrument to another switch, which may be of a different class, such as a Class IV switch, commonly referred to as a hub switch.
  • the hub switch is typically coupled to a number of Class 5 switches through a ring of add/drop multiplexers (ADMs).
  • ADMs add/drop multiplexers
  • each central office may have a Class 5 switch co-located with and coupled to an add/drop multiplexer, and in addition the hub switch is also co-located with and coupled to an add/drop multiplexer. All of the add/drop multiplexers are connected to one another in a ring topology.
  • Such a ring topology typically contains two optical fiber connections between each pair of add/drop multiplexers, wherein one of the connections is redundant, and used primarily in case of failure.
  • the just-described ring of add/drop multiplexers that connects a number of central office switches to a hub switch is typically referred to as forming the "interoffice" or "transport" portion of the public telephone network.
  • the hub switch is typically connected to a number of other hub switches by another portion of the network commonly referred to as "core".
  • central offices typically contain additional equipment called a DSLAM which provides a digital subscriber line (DSL) connection to the business.
  • DSLAM digital subscriber line access multiplexer
  • the DSLAM may only service businesses that are within 18 kilofeet, e.g. because of the limitations of a copper twisted pair connection.
  • Such DSLAMs are typically connected inside the central office to an add/drop multiplexer so that data traffic can be routed to an Internet Service Provider (ISP).
  • ISP Internet Service Provider
  • a remote terminal of a digital loop carrier can be used to provide an IDSL service, which is based on the use of an ISDN link to the central office, via the central office terminal of the DLC.
  • a network element can be configured for connection to any portion of a communication network: access, transport and core.
  • a single network element can be configured to couple subscriber equipment directly to the core portion of the network, thereby to bypass the transport portion of the network.
  • a network element can be configured to include a line unit that supports subscriber equipment (also called “subscriber line unit”), and also to include a line unit to support a link to core of the communication network (also called “core line unit”).
  • the subscriber line unit and core line unit are both installed in a single chassis, and each unit can be installed in any of a number of slots of the chassis.
  • such a network element may support traditional circuit-switched telephony services while simultaneously delivering packet-based services.
  • the same network element acts as a circuit-switched network element that scales across an entire range from the subscriber to the core, such as digital loop carrier (DLC), add-drop multiplexer (ADM) and digital cross-connect (DCC), and also as a packet-based network element that scales across the entire range from the subscriber to the core, such as digital subscriber loop access multiplexer (DSLAM), an Ethernet or asynchronous transfer mode (ATM) switch, or an edge router.
  • DLC digital loop carrier
  • ADM add-drop multiplexer
  • DCC digital cross-connect
  • DSLAM digital subscriber loop access multiplexer
  • ATM asynchronous transfer mode
  • Certain embodiments of such a network element employ a common switch fabric for handling both synchronous and asynchronous traffic over a single point to point bus between a line unit and a switch unit, and also provide multi-class service over the entire range of a communication network.
  • FIG. 1 illustrates, in a block diagram, a prior art communication network.
  • FIGs. 2-4 illustrate, in block diagrams, various embodiments of a communication network that uses a network element in accordance with the invention.
  • FIG. 5 illustrates, in a block diagram, a network element of one embodiment populated with one or more subscriber line units containing circuitry to support subscriber equipment and one or more core line units containing circuitry to support communication with the core of a communication network.
  • FIGs. 5A-5D illustrate, in timing diagrams, various portions of a signal on a backplane bus in the network element of FIG. 5.
  • FIG. 6 illustrates, in a block diagram, three network elements of the type illustrated in FIG. 5 located in a host node in a central office and coupled thereto three network elements in a flex node in a remote terminal in accordance with the invention.
  • FIG. 7 illustrates, in a high-level block diagram, a line unit that supports plain old telephone service (POTS) and another line unit that supports twelve DS1 ports both coupled via a switch fabric to line units that support OC12 signals in one embodiment.
  • POTS plain old telephone service
  • FIG. 8 illustrates, in an intermediate-level block diagram, a system of the type shown in FIG. 7 with the input and output interfaces shown explicitly.
  • FIG. 9 illustrates, in a block diagram, the flow of the different types of traffic (synchronous and asynchronous) through a common cross-connect (also called "switch fabric").
  • FIG. 10 illustrates, in a block diagram, a single electronic component that implements the cross-connect of FIG. 9.
  • a network element 11 in accordance with the invention may be directly coupled to any kind of subscriber equipment (such as a telephone instrument 12, or a modem in a personal computer 13), when equipped with appropriate line units that support the subscriber equipment.
  • a network element 14 may also be coupled (either through a central office terminal of a DLC, or through another network element, or even directly) to a switch of any class (such as a central office switch 15 or a hub switch 16), when equipped with an appropriate line unit.
  • network element 14 may also be coupled directly to a router 17 (FIG. 3), again when equipped with an appropriate line unit.
  • the network element 14 may be further coupled to other network elements 18, 19 and 20 in a ring topology to provide protection for voice traffic, e.g. support UPSR or BLSR ring functions defined by the SONET standard. Network element 14 may also be configured to protect data traffic.
  • network elements of the type described herein when equipped with the appropriate line units, can be used in the transport portion of the public telephone network and also in the core portion.
  • the network element as described herein may be coupled by an interexchange carrier to network trunk facilities to provide interoffice call delivery.
  • when used in the core portion, such a network element may be provided with line units that are coupled to a long distance network, and may support dense wave division multiplexing (DWDM).
  • DWDM dense wave division multiplexing
  • a single network element 30 (FIG.5) of the type described herein may be equipped with a line unit 31 (which may also be called “subscriber line unit”) containing circuitry to support subscriber equipment, such as plain old telephone service (POTS) on copper twisted pair, and another line unit 32 (which may also be called “core line unit”) containing circuitry to support the core portion of a communication network, e.g. at OC-
  • the same network element 30 may also contain another line unit 33 (which may also be called “central office line unit”) containing circuitry to support an aggregation interface (such as GR303 or TR008) to a central office switch (such as the 5ESS available from Lucent), e.g. at DS1 rate.
  • the same network element 30 may also contain yet another line unit 34 (which may also be called “router line unit”) containing circuitry to support a data interface (such as 1 Gigabit Ethernet or 10 Gigabit Ethernet).
  • any number of different kinds of line units may be used in a network element as described herein.
  • one implementation supports the following analog, digital, ethernet, and optical line units: POTS (24 ports), DSL (24 ports), combination of POTS and DSL, DS1 (12 ports), DS3 (8 ports), 10BaseT, 100BaseT, 1 Gigabit Ethernet, 10 Gigabit Ethernet, ESCON (Enterprise Systems Connection), FICON (Fiber Connector), OC3 (4 ports), OC12 (2 ports), OC48 (single port), Fiber to Business (12 OC-1 ports coming in and one OC12 port going out).
  • This implementation also supports a number of different kinds of these line units, e.g. a POTS card may provide just basic telephone service, or universal voice grade service, coin service, transmission only service (2 wire or 4 wire), private line automatic ring-down service (PLAR), or foreign exchange service.
  • the network element may be configured with one or more cards that interface with a Class 5 switch or with a router: e.g. DS1 card with GR303 aggregation, or PLAR, or transmission only, or OC3 (packet), or OC12 (packet), or OC48 (packet).
  • Each line unit 31-34 described above is coupled by one or more buses 41-44 (FIG. 5) to the switch unit 35, physically located in a high-speed area of a backplane of chassis 39.
  • the one or more buses 41-44 (also called “backplane buses”) may be of any kind, synchronous or asynchronous, and may be either parallel or serial. Such buses operate at 3.1 Gbps in one example.
  • Information is transmitted over such buses of the backplane in discrete units called "frames," and a number of frames are grouped into a superframe, e.g. in a manner similar to SONET.
  • each line unit may be synchronized in time by the cross-connect.
  • each of the line units transfers data to and from the cross-connect one word at a time, each word at a synchronized time interval.
  • an example of such transmission is illustrated in FIG. 5D, wherein each word is a byte, STS and TDM/MC are TDM data, and UNICAST is packet data.
  • Each of backplane buses 41-44 may be based on a 125 microsecond Synchronous Optical Network (SONET) frame and a 6 millisecond superframe.
  • FIG. 5A shows one specific example of channel arrangement on a 125 microsecond frame forming a portion of a signal on a backplane bus.
  • the frame is based on 60 channels in this example, the equivalent of 60 SONET Traffic Stream-1 (STS-1) channels, or 3.1104 Gbps.
  • STS-1 SONET Traffic Stream-1
  • the frame includes a sequence of 60 byte intervals or channels (0-59) and this sequence repeats 810 times in every frame. Each byte within an interval represents one STS-1 worth of bandwidth (810 bytes/frame). If an interval is assigned to packet traffic (which may be Synchronous or Asynchronous traffic), the interval carries fixed length packet (FLP) traffic.
  • FLP fixed length packet
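The rate implied by this channel structure can be checked directly. Below is a minimal sketch (Python), using only the figures quoted in this description; it confirms that 60 channels of 810 bytes per 125 microsecond frame equal 3.1104 Gbps, and that this matches the one 40-bit word per 77.76 MHz clock cycle stated later in the description.

```python
# Check of the frame arithmetic quoted above (constants from the text).
CHANNELS = 60            # STS-1 equivalent channels per frame
BYTES_PER_CHANNEL = 810  # bytes per channel per frame
FRAME_SECONDS = 125e-6   # SONET frame period

bits_per_frame = CHANNELS * BYTES_PER_CHANNEL * 8
print(bits_per_frame / FRAME_SECONDS)  # 3110400000.0, i.e. 3.1104 Gbps

# The same rate expressed as one 40-bit word per 77.76 MHz clock cycle
# (see the backplane MAC description below):
print(40 * 77.76e6)                    # 3110400000.0
```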
  • the just-described 60 channels may be assigned ahead of time for either TDM traffic or for packet traffic.
  • for TDM traffic, the assignment of each channel to/from a line unit is static, e.g. doesn't change over a period of seconds, because it is provisioned by software.
  • for packet traffic, the assignment of each channel to/from a line unit is dynamic, e.g. may change multiple times even in a single frame, because an arbiter in hardware performs the assignment.
  • the type of traffic (e.g. TDM (sync) or packet (async)) carried in a channel can be changed by software, without loss of data, as follows.
  • a new channel assignment is created in a memory, in an off-line manner, for both line units and for the switch unit (which contains a cross- connect). For example, a channel that was assigned to carry time-division-multiplexed (TDM) data may be now assigned to carry packet data.
  • TDM time-division-multiplexed
  • the software provides the new configuration to the switch unit and to the line units, and informs the switch unit to implement the new configuration.
  • the switch unit synchronously distributes a reconfiguration signal to the line units (synchronized with the frame boundary of the backplane bus).
  • a switchover from offline to online memories to implement the change is performed sequentially by all circuitry in the network element, from each traffic input port to that traffic's output port (e.g.
  • the source line unit may generate a request to use the newly-allocated packet channel(s), and the switch unit may respond with a grant, at which time the newly-allocated packet channel(s) are used to actually transmit packet traffic.
  • the just-described change in the type of traffic carried by a channel can be performed in increments of one channel because all channels are carried in a single frame on a single serial bus.
  • This architecture provides an advantage in that the granularity of the change is an order of magnitude finer than a change that is possible if two serial buses were used (to connect a single line unit to a switch unit), with one bus carrying TDM data and the other bus carrying packet data.
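The hitless channel-reallocation sequence described in the preceding bullets can be summarized as follows. This is an illustrative sketch only: the class, field and function names are hypothetical, since the patent describes a hardware sequence, not an API.

```python
# Sketch of the off-line/on-line channel-map swap described above.
class Unit:
    """A line unit or the switch unit, holding two channel maps."""
    def __init__(self, channel_map):
        self.online = channel_map         # map in use: channel -> "TDM" | "packet"
        self.offline = dict(channel_map)  # staging copy, edited by software

def reprovision(switch, line_units, channel, new_type):
    # 1. Software builds the new assignment off-line, in every unit's memory.
    for u in [switch] + line_units:
        u.offline[channel] = new_type
    # 2. The switch unit distributes a reconfiguration signal, synchronized
    #    with the frame boundary of the backplane bus.
    # 3. Each unit swaps its off-line and on-line maps at that boundary,
    #    sequentially from each traffic input port to its output port,
    #    so no data is lost.
    for u in [switch] + line_units:
        u.online, u.offline = u.offline, dict(u.online)

switch = Unit({ch: "TDM" for ch in range(1, 60)})
lus = [Unit(dict(switch.online)) for _ in range(3)]
reprovision(switch, lus, channel=7, new_type="packet")
print(switch.online[7], lus[0].online[7])  # packet packet
```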
  • Interval 0 in the 125 microsecond frame carries 5 bytes of overhead in every frame.
  • the 5-byte overhead is used in one specific example for framing, superframe sync, active/protect state and other "low level" communications.
  • a TDM aligner takes into account the overhead bytes when scheduling traffic, as the 5 bytes of overhead leave only 805 bytes in the channel to transport payload.
  • a frame sync pattern appears at the start of each 125 microsecond frame.
  • each backplane bus transports traffic between the line units and the switch fabric of the chassis. Traffic is transported over copper traces in the cross-connect and line units, and also in the point-to-point backplane buses.
  • SONET ESF is a 3 millisecond signaling superframe format.
  • five types of traffic are transported over the backplane bus: overhead, STS, synchronous (sync) TDM packets, sync packets, and asynchronous (async) packets.
  • Sync TDM packets (strictly scheduled) have priority over other sync packet types.
  • FIG. 5C illustrates how fixed length packets are distributed across a data transfer frame, with a remainder at the end of the frame.
  • a frame sync pulse delineates the start of a 125 microsecond frame.
  • the 125-microsecond frame sync pulse refers to frame sync signals that are present within the cross-connect. However, no 125-microsecond or 6-millisecond frame sync signal is present on a backplane bus.
  • the GA1 and GA2 framing patterns within the 3.1104 Gbps data stream identify the start of a new 125-microsecond frame.
  • the GK1 backplane overhead byte identifies the 6-millisecond superframe by carrying the current frame number from 0 to 47.
  • for every STS-1 interval (a.k.a. "STS channel") assigned to synchronous traffic, 810 bytes (805 bytes for STS Interval 0) are made available to carry synchronous fixed length packets. 810 bytes divided by 64 bytes per packet yields 12.65625 packet slots per channel per frame. Fixed length packets must start on packet slot boundaries, and any fractional packet slot at the end of a frame is not used to carry synchronous packet traffic.
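The per-channel packet-slot arithmetic quoted above is easy to reproduce; the sketch below uses only the figures stated in the text (how fractional slots combine across multiple channels to reach the frame-wide packet limits mentioned below is not spelled out here).

```python
# Packet slots per STS-1 channel per 125 microsecond frame.
PACKET_BYTES = 64  # fixed length packet size

def packet_slots(channel_bytes):
    # Packets must start on slot boundaries, so the fractional slot at
    # the end of a frame carries no synchronous packet traffic.
    return channel_bytes // PACKET_BYTES, channel_bytes / PACKET_BYTES

print(packet_slots(810))  # (12, 12.65625)  -- a normal STS-1 interval
print(packet_slots(805))  # (12, 12.578125) -- STS Interval 0 (5 bytes overhead)
```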
  • the bandwidth of backplane bus 100 is 3.1104 gigabits per second. This equates to one 40-bit word every cycle of the 77.76 MHz clock at which the backplane MAC interface operates.
  • the backplane MAC interface includes a serializer/deserializer which converts the 3.1104 Gbps stream to/from discrete overhead signals, an 11-bit arrival/grant bus (to an arbiter in the cross-connect), a 40-bit sync packet/STS bus (to the synchronous cross-connect), and a 64-bit async packet bus (to the asynchronous cross-connect).
  • a circuit in a scheduler for inserting packets into the frame compensates for the number of channels allocated to TDM.
  • the packet scheduler circuit positions (strictly scheduled) sync TDM packets in specific positions within the 125 microsecond frame based on the frame position field (FPF) in the VCI.
  • the packet scheduler circuit also inserts loosely scheduled sync packets in remaining sync packet slots.
  • a maximum limit for the number of packets that may be sent in any one frame is set at 758 packets for loosely scheduled packets and 512 packets for strictly scheduled sync TDM packets. This limit can be reached when all STS-1 intervals are assigned to synchronous traffic (i.e., if fewer intervals are assigned, fewer packets can be transported).
  • FIG. 5A illustrates an example of channel allocation in which sync packets (TDM/Multicast), async packets (Unicast) and STS channels are transported simultaneously within one frame.
  • a frame sync pulse marks the start of the 125 microsecond frame.
  • the 60 channels of each 125 microsecond frame are divided among these three groups wherein:
  • Each strictly scheduled TDM packet is associated with a Frame Position Field (FPF), which is placed in the lowest 10 bits of the VCI field of the packet.
  • the FPF is only needed for strictly scheduled TDM sync packets.
  • the FPF refers to a specific packet slot position within a particular frame.
  • Each strictly scheduled TDM packet within a frame has a unique FPF number.
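Since the FPF is defined as the lowest 10 bits of the packet's VCI field, placing and recovering it is simple bit masking. A minimal sketch; the helper names (and the sample VCI value) are ours, not the patent's:

```python
FPF_MASK = (1 << 10) - 1  # lowest 10 bits of the VCI carry the FPF

def set_fpf(vci, fpf):
    assert 0 <= fpf <= FPF_MASK  # unique slot position within the frame
    return (vci & ~FPF_MASK) | fpf

def get_fpf(vci):
    return vci & FPF_MASK

vci = set_fpf(0xABC00, 511)    # strictly scheduled packet at slot 511
print(hex(vci), get_fpf(vci))  # 0xabdff 511
```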
  • the backplane bus transports data in frames and a single bus can carry the equivalent of 60 STS-1 channels, or 3.1104 Gbps. A frame on this bus is always 125 microseconds.
  • Channels on this bus may be allocated as follows: (1) async packets (e.g., Unicast); (2) sync packets (TDM strictly/loosely scheduled, Multicast, SAR, TOH); (3) STS channels (i.e., not packets but 810 bytes of data); and (4) unassigned.
  • async packets e.g., Unicast
  • sync packets TDM strictly/loosely scheduled, Multicast, SAR, TOH
  • STS channels i.e., not packets but 810 bytes of data
  • unassigned When a channel is unassigned, the bus sends an idle pattern '0x55'.
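The four allocation types and the idle fill can be pictured as a per-channel map. A small sketch with illustrative type names; only the '0x55' idle pattern and the 810-byte channel size are quoted from the text:

```python
from enum import Enum

class Alloc(Enum):
    ASYNC_PACKET = 1  # e.g. Unicast
    SYNC_PACKET = 2   # TDM strictly/loosely scheduled, Multicast, SAR, TOH
    STS = 3           # 810 bytes of channelized data
    UNASSIGNED = 4

IDLE = 0x55  # pattern sent on unassigned channels

def channel_payload(alloc, data=b""):
    if alloc is Alloc.UNASSIGNED:
        return bytes([IDLE]) * 810  # the bus sends the idle pattern
    return data

print(channel_payload(Alloc.UNASSIGNED)[:4])  # b'UUUU' (0x55 repeated)
```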
  • Each channel provides 810 bytes every 125 microsecond frame.
  • a TDM packet, as outlined above and below, is 64 bytes long.
  • the following formula calculates the portion of the frame that is occupied by sync packets (TDM or other), where N is the number of channels allocated to TDM traffic and 810 is the number of bytes per channel: sync packet bytes per frame = N × 810, less 5 bytes when Channel 0 (i.e., the overhead channel) is allocated to TDM, because TDM uses 5 bytes of Channel 0 for overhead, leaving only 805 of its 810 bytes available for sync TDM packets.
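Expressed as code, that formula reads as follows (a sketch; the function name is ours):

```python
def sync_bytes_per_frame(n_channels, includes_channel_0):
    """Bytes available for sync packets when N channels carry TDM."""
    total = n_channels * 810
    if includes_channel_0:
        total -= 5  # overhead leaves only 805 of Channel 0's 810 bytes
    return total

print(sync_bytes_per_frame(10, False))  # 8100
print(sync_bytes_per_frame(60, True))   # 48595
```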
  • Packet traffic may be dynamically scheduled in a frame on a backplane bus by a cross-connect of the type described above.
  • the cross-connect performs a multi-stage WRR (weighted round robin) arbitration function to choose which of the line unit packets is to be transferred on the backplane bus in the next available time slot (which is determined by a Routing Map field in a signal from the cross-connect to the line unit).
  • WRR weighted round robin
  • the cross-connect identifies which packet should be driven over the upstream backplane bus.
  • the packet grant signal appears at the line unit's transmit, or downstream, interface.
  • the line unit caches grants, so that the next upstream packet may not be the last packet granted.
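The patent's arbiter is a multi-stage WRR in hardware with grant caching; the basic weighted round robin principle it builds on can nevertheless be sketched in a few lines. The weights and queue contents below are invented for illustration.

```python
from collections import deque

def wrr_schedule(queues, weights, rounds):
    """Textbook WRR: per round, each line unit may send up to its weight."""
    grants = []
    for _ in range(rounds):
        for lu, w in weights.items():
            for _ in range(w):
                if queues[lu]:
                    grants.append((lu, queues[lu].popleft()))
    return grants

queues = {"POTS": deque(["p1", "p2"]), "OC12": deque(["o1", "o2", "o3", "o4"])}
print(wrr_schedule(queues, {"POTS": 1, "OC12": 3}, rounds=2))
# [('POTS', 'p1'), ('OC12', 'o1'), ('OC12', 'o2'), ('OC12', 'o3'),
#  ('POTS', 'p2'), ('OC12', 'o4')]
```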
  • each port of the cross-connect is coupled to a line unit by a single serial bus, and the circuitry in each line unit is capable of generating both time-division-multiplexed (TDM) data and packet data in a time interleaved fashion for transmission on the serial bus.
  • TDM time-division-multiplexed
  • the packet data is transmitted in a number of time slots in a frame on the serial bus, and occupancy of the time slots is dynamically allocated by the cross-connect, among line units that transfer packet data.
  • frames of the type described above, on a backplane bus are grouped into a 6 millisecond superframe.
  • the physical layer for backplane buses can be based on, for example, IEEE 802.3z.
  • Such buses may be coupled via a connector, e.g. a 5-row by 10 VHDM connector in each LU slot of chassis 39, and an 8-row by 60 VHDM connector in the RAP slots.
  • the above-described network element 30 also contains a cross-connect to transfer traffic among various line units.
  • traffic to and from subscriber equipment (also called "subscriber traffic"), such as a telephone instrument or a private branch exchange (PBX), may be switched by the cross-connect between subscriber line unit 31 and core line unit 32
  • PBX private branch exchange
  • core line unit 32 carries the subscriber traffic onward, thereby passing the subscriber traffic to and from the core of the communication network, e.g. to support a long distance telephone call.
  • the transfer of traffic between line units and the cross-connect changes with time, so that at a different moment in time, traffic to/from subscriber line unit 31 may be switched by such a cross-connect to/from router line unit 34, e.g. if the traffic originates from and has as its destination a modem (which modem may be coupled to a different port of line unit 32 from the port to which the above-described telephone instrument is coupled).
  • a network element of the type described above implements in its line units a number of functions that were performed by a number of discrete products in the prior art. Therefore, a network element of the type described herein eliminates the need for a communication service provider to deploy such discrete products. Specifically, one embodiment of the network element eliminates the need for digital loop carriers (DLCs), DSLAMs, add/drop multiplexers (of both types: access and transport). Instead, a communication service provider can simply use a single chassis of the network element, and install whichever subscriber line unit is needed to support the type of services required by its customers. If necessary, such a single chassis may also be configured to contain a core line unit to connect to the core network.
  • DLCs digital loop carriers
  • DSLAMs digital subscriber line access multiplexers
  • the number of chassis in a single network element may be incrementally increased (e.g. up to a total of 5 chassis which fit in a single cabinet). Therefore, a network element of the type described herein can be used to provide services at the network edge, e.g. to small businesses and at the same time to connect Class 5 switches to long-distance trunk facilities, and/or interexchange transport facilities. Therefore, a network element of the type described herein may be used in any portion of the entire communication network, to provide any service.
  • each of line units 31-34 is installed in a chassis 39 in which is also installed a unit 35 (which may also be called "switch unit" or resource arbitration processor, abbreviated "RAP") that contains the above-described cross-connect.
  • unit 35 which may also be called “switch unit” or resource arbitration processor abbreviated "RAP"
  • RAP resource arbitration processor
  • Each of units 31-35 may be implemented in a modular manner, e.g. as cards that can be inserted into and removed from chassis 39. There may be one or more additional cards that act as standby in case of a failure of a portion of the circuitry in such cards. However, in other implementations, it is not necessary for the units to be modular, or to be implemented in cards.
  • switch unit 35 may be built into a chassis in an alternative implementation.
  • switch unit 35 and a line unit are implemented in a single card, such as a common control card.
  • line unit circuitry located on the common control card (i.e. the RAP card) may contain circuitry that supports any traffic rate, such as OC3, OC12 and OC48.
  • chassis 39 is 19 inches wide and contains 23 slots, whereas in another implementation chassis 39 is 23 inches wide, and provides 29 slots.
  • Width W (FIG. 5) is selected so as to allow chassis 39 to be installed in a rack commonly found in a central office.
  • one slot in the center is reserved for a card that contains an interface unit (to interface with humans), and two slots on either side of the center slot are reserved for two copies of the above-described common control card, wherein one copy is used as a standby for the other copy.
  • the usage of slots in one embodiment is described in the following table:
  • Switching of the two kinds of traffic (voice and data) as described above, by the cross-connect in switch unit 35, is performed in two separate circuits, namely a synchronous cross-connect and an asynchronous cross-connect, that are implemented in a single electronic component (such as an application specific integrated circuit, "ASIC") of the type illustrated in FIG. 10 and described below.
  • the synchronous cross-connect (which transports TDM traffic) and the asynchronous cross-connect (which transports packet traffic) operate simultaneously, i.e. one cross-connect switches traffic to/from one set of input-output ports while at the same time the other cross-connect switches traffic to/from another set of input-output ports. Traffic to/from all input-output ports is multiplexed through these two cross-connects, which operate concurrently to switch synchronous and asynchronous traffic between the ports.
  • a network element of the type described above may be used to groom a broad range of traffic classes, including ATM, Frame Relay, IP, STS and TDM traffic.
  • any combination of the foregoing traffic types can be switched between a core network and/or PSTN and business users and/or residential users.
  • a communication network of this embodiment includes two network elements that are hereinafter referred to as a Host Node (HN) that typically resides at a central office (CO) and a Flex Node (FN) that resides at a remote terminal to which individual users have access.
  • HN Host Node
  • FN Flex Node
  • although the network elements are used as Host Nodes and Flex Nodes in this embodiment, in other embodiments such network elements may be used as other components of a communication network, such as add-drop multiplexers.
  • Host Node 110 and Flex Node 112 contain many of the same components, e.g. each node has at least a first shelf 114a, 116a, respectively.
  • Shelves 114a, 116a have a number of slots for receiving and coupling circuit boards (118, 120 respectively) to a common back-plane (not shown) by which the circuit boards of each shelf communicate with one another.
  • Shelf 114a of HN 110 has two slots for receiving an active and a redundant Routing and Arbitration Processor board (RAP(A) 124a and RAP(B) 124b respectively).
  • Shelf 116a of FN 112 has two slots for receiving an active and redundant Routing and Arbitration Processor board (RAP(A) 122a and RAP(B) 122b respectively).
  • the RAP (for both nodes 110, 112) performs the primary functions of providing an interface for up to 32 full-duplex data ports over which data traffic is received or transmitted, routing the data traffic between the ports, and conditioning the data traffic for routing.
  • RAP Routing and Arbitration Processor board
  • AMP administration and maintenance processor board
  • the AMP 115 provides test access, alarm status, application hosting, and an Ethernet connection for Network Element Management.
  • Shelves 114a and 116a of nodes 110, 112 also have at least twenty slots to receive circuit boards that form line units 118, 120 respectively.
  • line units may be implemented as circuit boards that are specifically designed to transmit and receive particular forms of traffic.
  • some line units have ports and circuitry to interface with POTS terminals.
  • the same line units may have other ports designed to interface with DS-1, DS-3, DSL, STS or optical standards such as OC-3, OC-12 and OC-48 for example.
  • all ports of certain line units may be of the same kind, e.g. a single line unit may have only POTS ports.
  • All of the line units 118, 120 of a given shelf 114a, 116a provide communication between some number of ports to the outside world, and one or more backplane buses of the type described above.
  • the backplane buses are full duplex serial buses coupling the line units (LUs) to the active RAP(A) and inactive RAP(B) of the shelf through the backplane (not shown).
  • each slot of a given shelf has four such serial buses by which each line unit plugged therein is coupled to the RAP(A) of that shelf and four such serial buses by which the line unit plugged therein is coupled to the RAP(B) of that shelf.
  • a circuit board slot of shelf 114a of HN 110 is used for a two port OC-12 circuit board 118(A), 118(B) that provides dual SONET OC-12 interfaces 113a, 113b when RAP(A) 124a is active, and standby interfaces 113c, 113d if RAP(B) becomes active in the event RAP(A) 124a fails.
  • the SONET interfaces 113a-d can be used to provide connectivity to an Edge/Core Network 111.
  • the line unit 118(A)(B) is coupled through the backplane to RAP(A) 124a through two active serial buses (and to RAP(B) 124b as well through one inactive bus for backup) operating at 3.11 Gbps.
  • One embodiment of line unit 118 can provide STS, ATM and/or PPP transport modes.
  • An enhancement to allow the transport of both point-to-point protocol (PPP) and ATM can be provided via the Pre-emptive ATM over PPP method.
  • the HN 110 is used as either a transport or access multiplexer.
  • Line unit 118 can also be used for communication between the HN 110 and FN 112.
  • FIG. 6 illustrates that another slot of shelf 114a of HN 110 is used for a line unit 119(A), 119(B) that provides for communication between HN 110 and RAP(A) 122a (or RAP(B) 122b) of FN 112.
  • line unit 119(A)(B) can have up to four OC-3 ports, thereby providing multiple SONET OC-3 interfaces that can be used to communicate between the HN 110 and FN 112 nodes.
  • such line units may have 16 optical ports, each of which may be operated at OC-3 or OC-12 rates.
  • HN 110 is typically used as an access multiplexer.
  • Fiber egress to line unit 119 is via four SC connectors.
  • One embodiment of line unit 119 provides STS, ATM and/or PPP transport modes.
  • An enhancement to allow the transport of both PPP and ATM can be provided via the Pre-emptive ATM over PPP method.
  • Another enhancement that can be provided is for a line unit to transport Ethernet or storage area network (SAN) traffic (such as ESCON or Fiber Channel) via Generic Framing Protocol (GFP).
  • SAN storage area network
  • GFP Generic Framing Protocol
  • the circuit board slots of shelves 114a, 116a of nodes 110, 112 can be populated with optical line units as described above or with additional embodiments of line units 118, 120 that provide standard interfaces to several typical forms of data and/or voice traffic sourced by business and residential users.
  • line unit 120 is a POTS circuit board that supports up to 24 POTS interfaces.
  • the POTS interfaces support both loop and ground start as well as provisionable characteristic loop impedance and return loss.
  • the bandwidth requirements for this board are low and thus only one active and one standby full duplex serial bus running at 3.11 Gbps are required to couple this line unit to the RAP(A) 122a and RAP(B) 122b respectively.
  • a line unit that may be installed in a chassis of the type described herein may implement any type of telephony interface well known in the art (i.e. legacy), such as EBS to support analog voice and digital telemetry, digital data service (DDS) of 64 kbps, ISDN, and pay phone interface.
  • EBS EBS to support analog voice and digital telemetry
  • DDS digital data service
  • ADSL Asymmetric Digital Subscriber Line
  • ADSL is a modem technology that converts existing twisted-pair telephone lines into access paths for multimedia and high-speed data communications. ADSL can transmit up to 6 Mbps to a subscriber, and as much as 832 Kbps or more in both directions (full duplex).
  • the line coding technique used is discrete multi-tone (DMT) and may also function in a G.Lite mode to conserve power.
  • DMT discrete multi-tone
  • One embodiment of a circuit board does not include POTS splitters, which must be deployed in a sub-system outside of the shelf 116a, although POTS splitters may be used in other embodiments.
  • One embodiment of the 24 port ADSL line unit is constructed with quad port framers. Because the bandwidth requirements for this line unit are typically low, once again only one active and one standby bus running at 3.1 Gbps are required to couple the line unit to the RAP(A) 122a and RAP(B) 122b of shelf 116a.
  • other forms of DSL, such as HDSL, SDSL and VDSL, may be implemented in other line units, depending on the embodiment.
  • Another line unit that can be employed with the system of FIG. 6 is a twelve port POTS/DSL combo-board that supports 12 POTS and 12 Full Rate ADSL interfaces.
  • the combo-board is the melding of half of a POTS and half of an ADSL line unit as discussed above.
  • the POTS interfaces support both loop and ground start as well as provisionable characteristic loop impedance and return loss.
  • the ADSL line-coding technique used is DMT and may also function in a G.Lite mode to conserve power.
  • quad port ADSL framers may be employed to condition the ADSL traffic. Because the bandwidth requirements for this line unit are typically low, only one active and one standby serial bus running at 3.1 Gbps are required to interface the line unit with the RAP(A) 122a and RAP(B) 122b respectively.
  • Another line unit that can be employed within the system illustrated in FIG. 6 is a twelve port DS-1 board that supports up to 12 DS-1 interfaces.
  • An embodiment of the DS-1 line unit supports DSX-1 interfaces.
  • the card supports twelve dry T-1 interfaces. Switching current sources are included to support twelve powered T-1 interfaces. Again, because the bandwidth requirements for this line unit are low, only one active and one standby bus running at 3.1 Gbps are required to connect the line unit to the RAP(A) and RAP(B) respectively.
  • data traffic flows over a Utopia 2 interface. Control and provisioning information is communicated over a PCI bus. Control and provisioning information for T1 framers / line interfaces is communicated over a serial interface.
  • An edge stream processor provides an ATM adaptation layer (AAL) 1/2 adaptation function for all 12 framer interfaces.
  • the ESP terminates structured DS0 traffic for TDM cross-connect switching, or time stamps asynchronous DS1 traffic for Hi-Cap transport.
  • the ESP processor is also capable of terminating PPP framed DS1 traffic for either Layer 2 tunneling (L2TP) or Layer 3 routing.
  • Another line unit 118, 120 that can be used in the system of FIG. 6 is an eight port DS-3 board.
  • each interface may be provisioned to terminate either an ATM UNI, PPP or Channelized Hi-Capacity DS3 service.
  • a routing stream processor supports either ATM or PPP by queuing and scheduling DS-3 traffic for packet transport through the RAP.
  • a DS3_Mapper supports the Hi-Cap DS3 by mapping DS-3 traffic into an STS1 channel for low latency transport through the RAP.
  • Another line unit 118, 120 that can be used in conjunction with the system of FIG. 6 is a single port STS1 board.
  • on the STS-1 board, a mode-selectable single STS1 or DS-3 interface is supported.
  • Yet another line unit 118, 120 that can be used in conjunction with the system of FIG. 6 is a single port OC48 Interface board.
  • the single port OC48 line unit provides a single SONET OC-48 interface to provide connectivity to the Edge/Core Network.
  • the HN 110 is used primarily as a transport level add/drop multiplexer.
  • fiber egress is via two SC connectors (or two LC connectors in another embodiment).
  • SONET framers provide STS, ATM and/or PPP transport modes.
  • An enhancement to allow the transport of both PPP and ATM is provided via a Pre- emptive ATM over PPP method.
  • other such line units may have four OC48 ports on a single board, or a single OC192 port.
  • Still one more line unit 118, 120 that can be used in conjunction with the system shown in FIG. 6 is a twelve port Fiber to the Business (FTTB) board.
  • the twelve port FTTB assembly provides a single OC-12 SONET interface out and twelve OC-1 SONET interfaces in.
  • FTTB Fiber to the Business
  • a unique integrated splitter arrangement allows sharing of the single laser diode over twelve business subscribers. Twelve discrete PIN diode receivers will be provided to allow a single interface per each of the twelve business subscribers. This method allows simple and inexpensive fault isolation and efficient bandwidth management amongst the entire pool of business subscribers.
  • the interface provided to the subscriber is a single fiber with a single lambda down and a separate lambda up. This arrangement reduces fiber bulk and cost.
  • the dual Lambda arrangement allows a simpler splitter implementation to be realized in a "silica on silicon" waveguide device.
  • the FTTB line unit is coupled to RAP(A) 124a through one active serial backplane bus (and to RAP(B) 124b as well through one inactive backplane bus for back-up) operating at 3.11 Gbps.
  • the network element is designed to permit the largest number of customer applications to be supported within a single shelf HN and single shelf FN. While the system of FIG. 6 is optimized towards a single shelf per node configuration, expansion to a multi-shelf per node configuration is also supported in a deterministic and modular fashion. Typically co-located shelves are connected in a Uni-directional Path Switched Ring (UPSR) arrangement via the RAP mounted optics as illustrated in FIG. 6 by the addition of shelves 114b, and 114c to HN 110. This provides a simple, incremental means of local expansion. Local expansion can also be implemented in a point-to-point fashion, however, at the expense of slots and cost.
  • UPSR Uni-directional Path Switched Ring
  • a single HN 110 may host many FNs 112 in a variety of topologies. FNs 112 may be subtended from an HN 110 in point-to-point, linear chain, branch and continue, UPSR or bi-directional line switched ring (BLSR) arrangements. An extremely large network of over 20,000 subscribers may be constructed from the HN/FN hierarchy as illustrated in FIG. 6.
  • Line units 118, 120 are split out into an input side 130a and output side 130b for clarity in FIG. 7 (which is also shown without the AMP).
  • Those of skill in the art will recognize that the input and output paths of each line unit 134a(1-n), 134b(1-n) respectively, will typically reside on one circuit board and may even share some of the same circuits.
  • Each output path 134b(1-n) produces an output 136b(1-n) in the form of cells, samples and packets.
  • the line cards can be of a lower bandwidth nature (e.g. line unit 134a1/134b1), such as the twenty-four port POTS unit, the twenty-four port DSL unit, and the twelve port POTS/DSL combo unit as previously described.
  • others are of a high bandwidth nature (e.g. line unit 134an/134bn), such as the twelve port DS1/T1 unit, the eight port DS3 unit, and the STS-1, OC-3, OC-12 and OC-48 units, all of which require an ESP/RSP 142a, 142b to perform pre-processing of the traffic.
  • Line unit input paths 134a(1-n) perform the function of interfacing with the physical layer of a data source to receive data traffic over inputs 136a(1-n) in the form of cells, samples or packets, depending upon the source type, and then grooming the traffic through its bus interface circuitry 140a(1-n) (called "GAP", which is an abbreviation for GigaPoint; GigaPoint stands for a point-to-point backplane bus operating at a rate in the gigabit/second range) so that it can be switched by the matrix 137 of switch 133 to any one or more of the line unit output paths 134b(1-n).
  • the line unit output paths 134b(l-n) perform the function of taking the routed traffic and converting it back to the physical layer format expected by the receiver to which it is coupled.
  • the interfacing function is represented by input path circuits Phy 138a(1-n) and output path circuits Phy 138b(1-n).
  • the traffic for higher bandwidth traffic sources often requires an additional layer of circuitry 142a, 142b for grooming the data traffic, such as an RSP, an ESP and/or quad framers.
  • the input path GAP 140a(1-n) for each line unit input path 134a(1-n) transmits groomed and serialized data traffic over one or more high-speed serial backplane buses 132a(1-n) to the switch 133, where the serialized data is converted to parallel data for switching.
  • a transmit portion of output path GAP 140b(1-n) residing within switch 133 serializes switched data traffic and transmits it over high-speed serial backplane buses 132b(1-n) to the receive portion of each GAP residing in the line unit output paths. From there, the data traffic is adapted back to the physical layer protocol spoken by the destination for the outgoing traffic.
  • the input path GAP 140a(1-n) of each line unit provides data to switch 133 as asynchronous packet traffic, TDM and multicast synchronous packet traffic, and channelized STS data traffic.
  • the output path GAP 140b(1-n) of each line unit receives data in one of the foregoing forms from the switch 133, and adapts it to the requisite physical layer of the traffic destination.
  • the switch 133 includes the active RAP(A) 144 and back-up RAP(B) 146. Each RAP includes a switch unit (called GRX, which is an abbreviation for GigaPoint Routing Cross-connect) 139.
  • GRX 139 further includes an arbiter 135 and a matrix 137 by which the conditioned data traffic arriving over the input side of any of the serial buses is switched to the appropriate output side of one or more of the serial backplane buses coupled to the destination for that conditioned traffic.
  • the two RAPs can be both used simultaneously to transport unprotected traffic e.g. Ethernet traffic, to load-share the RAPs and provide additional bandwidth through the switch fabric.
  • In FIG. 8, there are twenty-four I/O ports for GRX 139 that are coupled to the backplane. Twenty of the I/O ports couple line units 134(1-n) to the GRX 139 through full-duplex serial buses 132(1-20). One port 132(21) couples RAP(A) 144 to the outside world through input 136(21). Another port that is not shown couples RAP(B) (also not shown) to the outside world and the last port (not shown) couples the AMP (not shown) of shelf 116a to the outside world.
  • the line units 134(1-n) are shown with both their input and output paths.
  • FIG. 8 One possible combination of specific line units is illustrated in FIG. 8 to illustrate the versatility of the system.
  • Twenty-four port POTS unit 134(1) interfaces to the GRX 139 over serial bus GP_1 132(1), and has twenty-four telephone interface inputs 136(1) coupled to five quad codecs 138(1), which in turn interface with GAP 140(1).
  • OC3 units 134(11) and 134(n) are each coupled to GRX 139 over two serial buses GP_11 132(11), GP_12 132(12) and GP_19 132(19), GP_20 132(20) respectively.
  • Each unit provides a total bandwidth equivalent to OC-12, handling bi-directional OC-3 traffic over OC-3 I/Os 136(11) and 136(n) respectively.
  • the line units 134(11), 134(n) interface with the OC-3 traffic by way of OC-12 framer/phy 138(11), 138(n) respectively and data is transferred to and from the GRX 139 by way of GAP 140(11), 140(n) respectively.
  • any combination of the line units described herein, or any that are designed to interface with and condition data traffic to be transmitted and received over a backplane bus by way of a GAP in the form of asynchronous packet traffic, TDM and multicast synchronous packet traffic, and channelized STS data traffic can be employed in shelf 116a of FIG. 8.
  • the present invention is able to serve a broad range of network access functions because it handles both synchronous and asynchronous classes of traffic over a common fabric of backplane buses.
  • these types of traffic are typically handled separately over separate bus systems.
  • such a separate-bus solution, though it makes handling the traffic simpler, is not desirable because it does not provide the advantages of flexibility and lower cost.
  • Line unit 118, 120 consists of GAP 140a,b (both transmit and receive paths).
  • the GAP 140 interconnects traffic from the serial (backplane) bus 132a,b (transmit and receive) to physical interfaces that make up part of the line units such as OC-48, POTS etc.
  • the GAP 140 receives and transmits TDM and packet base traffic over the backplane bus 132.
  • the GAP 140 also transmits local queue status over the backplane bus 132.
  • the GAP receives control and arbitration information over the backplane bus 132, and maps POTS Codec traffic into an internal packet format.
  • the GAP supports VoQ (virtual output queuing) with 2 classes of service toward the backplane bus.
  • the GAP supports AAL-5 by way of a Hardware SAR (Segmentation and Reassembly) engine, termination of SONET transport overhead bytes and implements a time slot interchange (TSI) for DS0 traffic.
  • TSI time slot interchange
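A time slot interchange rearranges DS0 bytes between incoming and outgoing timeslots according to a provisioned connection map. A minimal sketch of the principle (the map and data below are made up; the patent does not specify this implementation):

```python
def tsi(frame, connection_map):
    """frame: one DS0 byte per timeslot; connection_map[out] = in slot."""
    return [frame[connection_map[out]] for out in range(len(connection_map))]

incoming = [0x10, 0x20, 0x30, 0x40]  # four DS0 timeslots
cmap = [2, 0, 3, 1]                  # provisioned interchange
print([hex(b) for b in tsi(incoming, cmap)])
# ['0x30', '0x10', '0x40', '0x20']
```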
  • the GAP includes a Transmit/Receive GP (GigaPoint) MAC (Media Access Controller) that handles transmitting and receiving both STS and packet traffic through the serial high speed backplane bus through SerDes (Gigabit Ethernet serializer/deserializer) 152.
  • SerDes Gigabit Ethernet serializer/deserializer
  • the mixture of packet and STS traffic is simply squeezed down and transmitted over a high speed link to transport the traffic between the GAP, across the backplane of the system, and the common switch fabric represented by GRX 139.
  • Another transmit/receive SerDes 154 resides on the GRX 139 that is interfaced with another receive/transmit MAC 156.
  • the receive GP MAC 156a accepts combined asynchronous and synchronous traffic from a line unit through the SerDes core and distributes it to the packet crosspoint 158 and synchronous crosspoint 160 respectively.
  • STS bytes and synchronous packets (TDM and multicast) are driven to the synchronous crosspoint 160 over a 40-bit parallel bus.
  • Unicast packets are sent over a 64-bit FIFO interface to the packet crosspoint 158.
  • An eight-bit packet arrival word is extracted by each receive MAC and driven to the arbiter 135. The arrival word is sent with an arrival strobe, as well as downstream backplane bus and grant backpressure signals.
  • the GRX transmit GP MAC 156b receives data bound for the serial bus to line unit 118, 120 over three buses: the 64-bit asynchronous packet bus 164, the 40-bit synchronous bus 166 and the 8-bit arbiter bus 162.
  • Asynchronous bus data is read from the packet crosspoint's output FIFO.
  • synchronous data (STS, TDM and multicast packets) is received from the synchronous cross-connect at timeslots relative to superframe sync and in accordance with bandwidth allocation of the particular link by way of a channel map configuration.
  • Packet grant information is transported from the arbiter 135 to the transmit GP MAC in a manner similar to that of packet arrival information.
  • a block diagram of the GRX ASIC is depicted in FIG. 10.
  • the receive GP MAC modules 156a(1)-a(n) interface with the SerDes receiver 154(1).
  • the receive MACs extract packet arrival and backpressure fields from the packet headers and pass the information to the arbiter 135 and transmit GP MACs 156b(1)-b(n) respectively.
  • the receive GP MAC modules also provide data layer decoding by splitting STS, TDM, multicast packet, and unicast packet traffic. In one embodiment, only unicast packet traffic is routed to the Packet Crosspoint 158 and the other traffic types are routed to the Synchronous Crosspoint 160.
  • Loosely scheduled TDM and multicast traffic could be routed through the packet cross-connect 158, but it is actually more convenient to route these types of packet traffic through the synchronous cross-connect 160 as well.
  • the transmit GP MAC 156 b(l)-b(n) modules combine the various traffic types output by the crosspoints 158, 160 and output them to the SerDes Transmitters 154(l)-n.
  • the transmit MACs insert packet grants and backpressure fields into the packet headers.
  • the Packet Crosspoint 158 snoops on the Packet Grant 302 and Packet Arrival 300 interfaces in support of the grant audit mechanism.
  • traffic carried over the serial links between the GAP 140 and the GRX 139 is classified in to three primary groups. As shown in FIG. 5A, sixty channels 310 are pre-allocated for STS 312, TDM/Multicast 316, or Unicast traffic314. Fixed TDM FLP (fixed length packet) slots are then defined within the channels allocated to TDM. Each TDM FLP slot is 64 bytes long and remains at a fixed location with respect to the 125us frame sync until the TDM pipe is re-sized. TDM traffic shares its bandwidth with Multicast traffic, which means that software needs to take into account the bandwidth requirements of Multicast when provisioning TDM.
  • TDM FLP fixed length packet
  • Each backplane bus can support the transport of STS traffic in designated timeslots in a 125us frame window. Traffic in channels sharing the same channel designator can be merged as it is passed through the 24: 1 masked muxes.
  • This function allows a VTl .5 cross-connect function to be implemented by aligning VTl .5s within the appropriate STS channel(s).
  • Additional mapping formats, such as VT2 and VT6 functions can also use such a merging function of the 24:1 muxes to effect similar cross- connects.
  • a network element of the type described herein does not provide call processing functions
  • call processing may be supported by a network element of the type described herein.


Abstract

A network element (14) can be configured for connection to any portion of a communication network: access, transport and core (16). Moreover, a single network element can be configured to couple subscriber equipment directly to the core portion of the network, thereby permitting the subscriber to bypass the transport portion of the network. Specifically, such a network element can be configured to include a line unit that supports subscriber equipment (also called a 'subscriber line unit'), and also to include a line unit to support a link to the core of the communication network (also called a 'core line unit'). The subscriber line unit and core line unit are both installed in a single chassis, and each unit can be installed in any of a number of slots in the chassis. Moreover, when configured with appropriate line units, such a network element may support traditional circuit-switched telephony services while simultaneously delivering packet-based voice or data services. The network element (14) provides multi-class service over the entire range of the network because it employs a common switch fabric for handling both synchronous and asynchronous traffic over a common bus.

Description

CONCURRENT SWITCHING OF SYNCHRONOUS AND ASYNCHRONOUS TRAFFIC
Jason Dove
Brian Semple
Mike Nelson
Ying Zhang
James W. Jones
Andre Tanguay
BACKGROUND
Traditionally, central offices that process telephone calls between subscribers use switches called Class 5 switches, such as the 5ESS available from Lucent. A telephone instrument may be directly connected to such a Class 5 switch as illustrated in FIG. 1 if the telephone instrument is located within an 18 kilofoot radius. Beyond the 18 kilofoot radius, support for such telephone instruments that use a copper twisted pair may be provided through a digital loop carrier (DLC) which has two portions: a central office terminal and a remote terminal. The central office terminal is normally located within the central office and communicates with the remote terminal using a digital signal over a metallic (such as copper) or optical link (also called "digital line"). The central office terminal of the digital loop carrier is coupled to the Class 5 switch in the central office and the coupling may conform to an aggregation interface, such as GR303 (a standard defined by Telcordia). The remote terminal in turn is connected to a number of telephone instruments. Depending on the hardware installed within the remote terminal, such a remote terminal may also provide a high-speed trunk, such as T1, that may be needed by a business and/or be coupled via modems to personal computers to support data traffic. The DLC remote terminal may be implemented by a digital multiplexer that combines a number of subscriber channels into a single high speed digital signal, and the DLC central office terminal by a de-multiplexer. Because a digital line cannot carry signals as far as a corresponding analog line, the digital line often requires a number of digital repeaters to boost signal level. A typical digital line of a DLC carries from 24 to 3000 POTS circuits. Note that a DLC central office terminal may be eliminated, e.g. as in the case of an Integrated Digital Loop Carrier System, wherein the digital line is directly connected to the Class 5 switch.
All of the above-described equipment up to the Class 5 switch in the central office is traditionally referred to as forming the "access" portion of a public switched telephone network (PSTN). The Class 5 switch may be associated with a portion of a telephone number, e.g. the portion 252 in a telephone number 408-252-1735. All telephones that are serviced by a single Class 5 switch are normally assigned a telephone number that includes a preset prefix, e.g. 252. The Class 5 switch typically forms connections between telephones within its own service area, each of which starts with the preset prefix, e.g. 252. When a telephone instrument within its service area places a call to a number different from the numbers starting with the preset prefix, the Class 5 switch connects the telephone instrument to another switch, which may be of a different class, such as a Class IV switch, commonly referred to as a hub switch.
The hub switch is typically coupled to a number of Class 5 switches through a ring of add/drop multiplexers (ADMs). For example, each central office may have a Class 5 switch co-located with and coupled to an add/drop multiplexer, and in addition the hub switch is also co-located with and coupled to an add/drop multiplexer. All of the add/drop multiplexers are connected to one another in a ring topology. Such a ring topology typically contains two optical fiber connections between each pair of add/drop multiplexers, wherein one of the connections is redundant, and used primarily in case of failure. The just-described ring of add/drop multiplexers that connects a number of central office switches to a hub switch is typically referred to as forming the "interoffice" or "transport" portion of the public telephone network. The hub switch is typically connected to a number of other hub switches by another portion of the network commonly referred to as "core". To support data traffic, for example, to provide Internet access to a business, central offices typically contain additional equipment called a DSLAM which provides a digital subscriber line (DSL) connection to the business. The DSLAM may only service businesses that are within 18 kilofeet, e.g. because of the limitations of a copper twisted pair connection. Such DSLAMs are typically connected inside the central office to an add/drop multiplexer so that data traffic can be routed to an Internet Service Provider (ISP). For businesses located outside an 18 kilofoot radius of a central office, a remote terminal of a digital loop carrier can be used to provide an IDSL service, which is based on the use of an ISDN link to the central office, via the central office terminal of the DLC.
The development of DSLAM, IDLC and IDSL applications was the result of the need for access to the Class 5 switch by remote businesses and subscribers, particularly as development occurred farther from the Class 5 switches. Recently, larger businesses have bypassed this copper remote access to the transport layer of networks using large fiber optic trunks with large bandwidth capabilities. This new access has been called Metro Access. Smaller businesses would also benefit from this access, but so far most applications are too expensive to provide this direct access to small enterprise and subscribers. Thus, a network access solution that provides the bandwidth of fiber access in place of the typical copper remote access functions, and that is cost-competitive with the legacy technology, would be highly desirable. It would also be highly desirable if that same solution could perform cost-effectively at the remote access level and yet, through simple substitution of line card units to accommodate different types of traffic, be deployed to interface with the core itself.
SUMMARY
In accordance with the invention, a network element can be configured for connection to any portion of a communication network: access, transport and core.
Moreover, a single network element can be configured to couple subscriber equipment directly to the core portion of the network, thereby to bypass the transport portion of the network. Specifically, such a network element can be configured to include a line unit that supports subscriber equipment (also called "subscriber line unit"), and also to include a line unit to support a link to the core of the communication network (also called "core line unit"). The subscriber line unit and core line unit are both installed in a single chassis, and each unit can be installed in any of a number of slots of the chassis. Moreover, when configured with appropriate line units, such a network element may support traditional circuit-switched telephony services while simultaneously delivering packet-based services. In one example, the same network element acts as a circuit-switched network element that scales across an entire range from the subscriber to the core, such as digital loop carrier (DLC), add-drop multiplexer (ADM) and digital cross-connect (DCC), and also as a packet-based network element that also scales across the entire range from the subscriber to the core, such as digital subscriber loop access multiplexer (DSLAM), an Ethernet or asynchronous transfer mode (ATM) switch, or edge router.
Certain embodiments of such a network element employ a common switch fabric for handling both synchronous and asynchronous traffic over a single point-to-point bus between a line unit and a switch unit, and also provide multi-class service over the entire range of a communication network.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 illustrates, in a block diagram, a prior art communication network.
FIGs. 2-4 illustrate, in block diagrams, various embodiments of a communication network that uses a network element in accordance with the invention.
FIG. 5 illustrates, in a block diagram, a network element of one embodiment populated with one or more subscriber line units containing circuitry to support subscriber equipment and one or more core line units containing circuitry to support communication with the core of a communication network.
FIGs. 5A-5D illustrate, in timing diagrams, various portions of a signal on a backplane bus in the network element of FIG. 5.
FIG. 6 illustrates, in a block diagram, three network elements of the type illustrated in FIG. 5 located in a host node in a central office and coupled to three network elements in a flex node in a remote terminal in accordance with the invention.
FIG. 7 illustrates, in a high-level block diagram, a line unit that supports plain old telephone service (POTS) and another line unit that supports twelve DS1 ports, both coupled via a switch fabric to line units that support OC12 signals in one embodiment.
FIG. 8 illustrates, in an intermediate-level block diagram, a system of the type shown in FIG. 7 with the input and output interfaces shown explicitly.
FIG. 9 illustrates, in a block diagram, the flow of the different types of traffic (synchronous and asynchronous) through a common cross-connect (also called "switch fabric").
FIG. 10 illustrates, in a block diagram, a single electronic component that implements the cross-connect of FIG. 9.
DETAILED DESCRIPTION
A network element 11 (FIG.2) in accordance with the invention may be directly coupled to any kind of subscriber equipment (such as a telephone instrument 12, or a modem in a personal computer 13), when equipped with appropriate line units that support the subscriber equipment. Moreover, such a network element 14 (FIG.3) may also be coupled (either through a central office terminal of a DLC, or through another network element, or even directly) to a switch of any class (such as a central office switch 15 or a hub switch 16), when equipped with an appropriate line unit. In addition, network element 14 may also be coupled directly to a router 17 (FIG.3), again when equipped with an appropriate line unit. The network element 14 may be further coupled to other network elements 18, 19 and 20 in a ring topology to provide protection for voice traffic, e.g. support UPSR or BLSR ring functions defined by the SONET standard. Network element 14 may also be configured to protect data traffic.
Also, network elements of the type described herein, when equipped with the appropriate line units, can be used in the transport portion of the public telephone network and also in the core portion. For example, the network element as described herein may be coupled by an interexchange carrier to network trunk facilities to provide interoffice call delivery. When used in the core portion, such a network element may be provided with line units that are coupled to a long distance network, and may support dense wave division multiplexing (DWDM). Therefore, depending on the requirement, a single network element 30 (FIG.5) of the type described herein may be equipped with a line unit 31 (which may also be called "subscriber line unit") containing circuitry to support subscriber equipment, such as plain old telephone service (POTS) on copper twisted pair, and another line unit 32 (which may also be called "core line unit") containing circuitry to support the core portion of a communication network, e.g. at OC-
48 or OC-192 rates.
The same network element 30 may also contain another line unit 33 (which may also be called "central office line unit") containing circuitry to support an aggregation interface (such as GR303 or TR008) to a central office switch (such as the 5ESS available from Lucent), e.g. at DS1 rate. The same network element 30 may also contain yet another line unit 34 (which may also be called "router line unit") containing circuitry to support a data interface (such as 1 Gigabit Ethernet or 10 Gigabit Ethernet).
Although just four different kinds of line units 31-34 have been discussed above, any number of different kinds of line units may be used in a network element as described herein. For example, one implementation supports the following analog, digital, ethernet, and optical line units: POTS (24 ports), DSL (24 ports), combination of POTS and DSL, DS1 (12 ports), DS3 (8 ports), 10BaseT, 100BaseT, 1 Gigabit Ethernet, 10 Gigabit Ethernet, ESCON (Enterprise Systems Connection), FICON (Fiber Connector), OC3 (4 ports), OC12 (2 ports), OC48 (single port), Fiber to Business (12 OC-1 ports coming in and one OC12 port going out). This implementation also supports a number of different kinds of these line units, e.g. a POTS card may provide just basic telephone service, or provide universal voice grade service, or coin service, or transmission only service (2 wire or 4 wire), or private line automatic ring-down service (PLAR), or foreign exchange service. Furthermore, the network element may be configured with one or more cards that interface with a Class 5 switch or with a router: e.g. DS1 card with GR303 aggregation, or PLAR, or transmission only, or OC3 (packet), or OC12 (packet), or OC48 (packet). Each line unit 31-34 described above is coupled by one or more buses 41-44 (FIG. 5) to the switch unit 35, physically located in a high-speed area of a backplane of chassis 39. Depending on the embodiment, the one or more buses 41-44 (also called "backplane buses") may be of any kind, synchronous or asynchronous, and may be either parallel or serial. Such buses operate at 3.1 Gbps in one example. Information is transmitted over such buses of the backplane in discrete units called "frames," and a number of frames are grouped into a superframe, e.g. in a manner similar to SONET. The information on such backplane buses may be generated by circuitry (labeled GAP in FIG. 5), in each line unit (labeled LU in FIG. 5), in a format which transfers both time-division-multiplexed (TDM) and packet data in a time interleaved fashion over a common bus (e.g. interleaving at byte boundaries, although interleaving at boundaries of words of other sizes can also be done in other examples). In some embodiments, each line unit is synchronized in time by the cross-connect. In such embodiments, each of the line units transfers data to and from the cross-connect one word at a time, each word at a synchronized time interval. An example of such transmission is illustrated in FIG. 5D, wherein each word is a byte, STS and TDM/MC is TDM data and UNICAST is packet data.
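The byte-interleaved transfer just described can be pictured with a short sketch. This is a minimal illustration under stated assumptions, not the hardware's behavior: the 60-channel, 810-column frame geometry comes from the text, while the generator-based channel sources are an invention for clarity.

```python
# Minimal sketch of byte-interleaved TDM/packet transfer on a backplane bus.
# Each of the 60 channels contributes one byte per column; the 60-byte
# sequence repeats 810 times per 125 microsecond frame (values from the text).
# The generator interface is illustrative, not the hardware's real API.
from itertools import cycle

def build_frame(channel_sources, columns=810):
    """Interleave one byte from each channel source, column by column."""
    frame = bytearray()
    for _ in range(columns):
        for src in channel_sources:
            frame.append(next(src))
    return bytes(frame)

# Example: channel 0 carries TDM bytes, the other 59 carry packet bytes.
sources = [cycle(b"\xAA")] + [cycle(b"\x55") for _ in range(59)]
frame = build_frame(sources)
assert len(frame) == 60 * 810  # 48,600 bytes per 125 microsecond frame
```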
Each of backplane buses 41-44 may be based on a 125 microsecond Synchronous Optical Network (SONET) frame and a 6 millisecond superframe. FIG. 5A shows one specific example of channel arrangement on a 125 microsecond frame forming a portion of a signal on a backplane bus. The frame is based on 60 channels in this example, the equivalent of 60 SONET Synchronous Transport Signal-1 (STS-1) channels, or 3.1104 Gbps. The frame includes a sequence of 60 byte intervals or channels (0-59) and this sequence repeats 810 times in every frame. Each byte within an interval represents one STS-1 worth of bandwidth (810 bytes/frame). If an interval is assigned to packet traffic (which may be Synchronous or Asynchronous traffic), the interval carries fixed length packet (FLP) traffic.
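The stated line rate follows directly from this frame geometry; a quick check using only figures from the text:

```python
# Check of the frame arithmetic above: 60 channels x 810 bytes per channel
# per 125 microsecond frame should equal the stated 3.1104 Gbps line rate.
CHANNELS = 60
BYTES_PER_CHANNEL = 810
FRAMES_PER_SECOND = 8000          # 1 s / 125 microseconds

bits_per_frame = CHANNELS * BYTES_PER_CHANNEL * 8
rate_bps = bits_per_frame * FRAMES_PER_SECOND
print(rate_bps / 1e9)             # 3.1104 (Gbps)
```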
The just-described 60 channels may be assigned ahead of time for either TDM traffic or for packet traffic. When assigned for TDM traffic, the assignment of each channel to/from a line unit is static, e.g. doesn't change over a period of seconds, because it is provisioned by software. When assigned for packet traffic, the assignment of each channel to/from a line unit is dynamic, e.g. may change multiple times even in a single frame, because an arbiter in hardware performs the assignment. The type of traffic (e.g. TDM (synch) or packet (asynch)) carried in a channel can be changed by software, without loss of data, as follows. A new channel assignment is created in a memory, in an off-line manner, for both line units and for the switch unit (which contains a cross-connect). For example, a channel that was assigned to carry time-division-multiplexed (TDM) data may now be assigned to carry packet data. Then the software provides the new configuration to the switch unit and to the line units, and informs the switch unit to implement the new configuration. In response, the switch unit synchronously distributes a reconfiguration signal to the line units (synchronized with the frame boundary of the backplane bus). A switchover from offline to online memories to implement the change is performed sequentially by all circuitry in the network element, from each traffic input port to that traffic's output port (e.g. in the following sequence: source line unit, switch unit, destination line unit). Thereafter, if packet traffic requires additional bandwidth, the source line unit may generate a request to use the newly-allocated packet channel(s), and the switch unit may respond with a grant, at which time the newly-allocated packet channel(s) are used to actually transmit packet traffic.
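The offline/online switchover sequence can be sketched as follows. This is a hedged illustration: the staged-map data structure and all names are hypothetical, and only the ordering (stage everywhere, then swap from the traffic's input port toward its output port) is taken from the text.

```python
# Hypothetical sketch of hitless channel reprovisioning, per the sequence
# described above. Names are illustrative; only the ordering is from the text.
class Unit:
    def __init__(self, channel_map):
        self.online = channel_map    # configuration currently in use
        self.offline = None          # staged configuration

    def stage(self, new_map):
        self.offline = new_map       # written off-line; traffic unaffected

    def swap(self):
        self.online, self.offline = self.offline, self.online

def reprovision(source_lu, switch_unit, dest_lu, new_map):
    # 1. Software creates the new assignment off-line in every unit.
    for unit in (source_lu, switch_unit, dest_lu):
        unit.stage(new_map)
    # 2. The switch unit distributes a frame-synchronized reconfiguration
    #    signal; units then switch over sequentially from the traffic's
    #    input port to its output port so no data is lost.
    for unit in (source_lu, switch_unit, dest_lu):
        unit.swap()

lu_a, rap, lu_b = (Unit({"ch5": "TDM"}) for _ in range(3))
reprovision(lu_a, rap, lu_b, {"ch5": "PACKET"})
assert lu_a.online == {"ch5": "PACKET"}
```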
The just-described change in the type of traffic carried by a channel can be performed in increments of one channel because all channels are carried in a single frame on a single serial bus. This architecture provides an advantage in that the granularity of the change is an order of magnitude finer than a change that is possible if two serial buses were used (to connect a single line unit to a switch unit), with one bus carrying TDM data and the other bus carrying packet data.
Interval 0 in the 125 microsecond frame carries 5 bytes of overhead in every frame. The 5-byte overhead is used in one specific example for framing, superframe sync, active/protect state and other "low level" communications. When the channel is provisioned for TDM/Multicast or sync packet traffic, a TDM aligner takes into account the overhead bytes when scheduling traffic, as the 5 bytes of overhead leave only 805 bytes in the channel to transport payload. A frame sync pattern appears at the start of each 125 microsecond frame. In one example of a chassis illustrated in FIG. 5, there are up to twenty line unit slots with one backplane bus per slot (i.e., twenty (20) backplane buses plus one to local PHY and another to a cross-connect (that includes a redundant resource arbitration processor (RAP)) for a total of twenty-two backplane buses). Each backplane bus transports traffic between the line units and the switch fabric of the chassis. Traffic is transported over copper traces in the cross-connect and line units, and also in the point-to-point backplane buses. As noted above, such a backplane bus is based on a 6 ms superframe that is not SONET compliant. SONET ESF is a 3 millisecond signaling superframe format. Five types of traffic are transported over the backplane bus: overhead, STS, synchronous (sync) TDM packets, sync packets, and asynchronous (async) packets. Sync TDM packets (strictly scheduled) have priority over other sync packet types.
FIG. 5C illustrates how fixed length packets are distributed across a data transfer frame, with a remainder at the end of the frame. There are N packet slots per 125 microsecond frame. A frame sync pulse delineates the start of a 125 microsecond frame. The 125-microsecond frame sync pulse, as described and illustrated herein, refers to frame sync signals that are present within the cross-connect. However, no 125-microsecond or 6-millisecond frame sync signal is present on a backplane bus. The GA1 and GA2 framing patterns within the 3.1104 Gbps data stream identify the start of a new 125-microsecond frame. The GK1 backplane overhead byte identifies the 6-millisecond superframe by carrying the current frame number from 0 to 47. For every STS-1 interval (a.k.a. "STS channel") assigned to synchronous traffic,
810 bytes (805 bytes for STS Interval 0) are made available to carry synchronous fixed length packets. 810 bytes divided by 64 bytes per packet yields 12.65625 packet slots per channel per frame. Fixed length packets must start on packet slot boundaries, and any fractional packet slot at the end of a frame is not used to carry synchronous packet traffic. As stated above, the bandwidth of backplane bus 100 is 3.1104 gigabits per second. This equates to one 40-bit word every 77.76 MHz clock cycle, which is the speed at which the backplane MAC interface operates. The backplane MAC interface includes a serializer/deserializer which converts the 3.1104 Gbps stream to/from discrete overhead signals, an 11-bit arrival/grant bus (to an arbiter in the cross-connect), a 40-bit sync packet/STS bus (to the synchronous cross-connect), and a 64-bit async packet bus (to the asynchronous cross-connect).
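The stated 77.76 MHz word rate follows directly from the line rate and the 40-bit word width:

```python
# Word-clock check: 3.1104 Gbps carried as 40-bit words.
LINE_RATE_BPS = 3.1104e9
WORD_BITS = 40
print(LINE_RATE_BPS / WORD_BITS / 1e6)  # 77.76 (MHz), as stated above
```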
A circuit in a scheduler for inserting packets into the frame (also called "TDM aligner") compensates for the number of channels allocated to TDM. The packet scheduler circuit positions (strictly scheduled) sync TDM packets in specific positions within the 125 microsecond frame based on a frame position field (FPF) in the VCI. The packet scheduler circuit also inserts loosely scheduled sync packets in the remaining sync packet slots. At the 3.1104 gigabits per second bandwidth for a backplane bus, a maximum limit for the number of packets that may be sent in any one frame is set at 758 packets for loosely scheduled packets and 512 packets for strictly scheduled sync TDM packets. This limit can be reached when all STS-1 intervals are assigned to synchronous traffic (i.e., if fewer intervals are assigned, fewer packets can be transported).
After the maximum number of packet slots have gone by on any single frame of a backplane bus, no more synchronous fixed length packets are sent until the next frame. This means that there is no benefit to configuring more than 41 of the 60 channels for TDM packets, because that yields 518 packet slots. As stated above, there are 12.65625 packet slots per channel per frame, so 512 packets divided by 12.65625 packet slots per channel per frame results in 40.45 channels, which is rounded up to 41 channels. After the first 512 packet slots have gone by on the backplane bus, no more TDM FLPs are sent until the next 125 microsecond frame. However, multicast and other sync packets may use all 60 channels. Software determines the number of channels allocated to each type of traffic.
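The 41-channel ceiling can be verified with the numbers given above; a quick sketch:

```python
import math

# Packet-slot arithmetic from the text: 810 bytes per channel per frame,
# 64-byte fixed length packets, and a 512-packet limit for strictly
# scheduled sync TDM packets.
SLOTS_PER_CHANNEL = 810 / 64             # 12.65625
MAX_STRICT_TDM_PACKETS = 512

channels = math.ceil(MAX_STRICT_TDM_PACKETS / SLOTS_PER_CHANNEL)
print(channels)                                   # 41 (40.45 rounded up)
print(math.floor(channels * SLOTS_PER_CHANNEL))   # 518 packet slots
```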
FIG. 5A illustrates an example of channel allocation in which sync packets (TDM/Multicast), async packets (Unicast) and STS channels are transported simultaneously within one frame. A frame sync pulse marks the start of the 125 microsecond frame. The 60 channels of each 125 microsecond frame are divided among these three groups wherein:
X + Y + Z = 60
where:
X = number of channels allocated to TDM/Multicast
Y = number of channels allocated to Unicast
Z = number of channels allocated to STS
Each strictly scheduled TDM packet is associated with a Frame Position Field (FPF) which is placed in the lowest 10 bits of the VCI field of the packet. The FPF is only needed for strictly scheduled TDM sync packets. The FPF refers to a specific packet slot position within a particular frame. Each strictly scheduled TDM packet within a frame has a unique FPF number. The backplane bus transports data in frames and a single bus can carry the equivalent of 60 STS-1 channels, or 3.1104 Gbps. A frame on this bus is always 125 microseconds. Channels on this bus may be allocated as follows: (1) async packets (e.g., Unicast); (2) sync packets (TDM strictly/loosely scheduled, Multicast, SAR, TOH); (3) STS channels (i.e., not packets but 810 bytes of data); and (4) unassigned. When a channel is unassigned, the bus sends an idle pattern '0x55'.
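Since the FPF occupies the lowest 10 bits of the VCI, inserting and extracting it is simple masking. A sketch under that one stated assumption; everything else about the VCI layout is left untouched, and the helper names are hypothetical:

```python
# The Frame Position Field (FPF) occupies the lowest 10 bits of the VCI,
# per the text; the remainder of the VCI is preserved as-is.
FPF_MASK = 0x3FF  # 10 bits -> positions 0..1023

def set_fpf(vci: int, fpf: int) -> int:
    assert 0 <= fpf <= FPF_MASK, "FPF must fit in 10 bits"
    return (vci & ~FPF_MASK) | fpf

def get_fpf(vci: int) -> int:
    return vci & FPF_MASK

vci = set_fpf(0x12C00, 151)
assert get_fpf(vci) == 151
```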
Each channel provides 810 bytes every 125 microsecond frame. For example, a TDM packet, as outlined above and below, is 64 bytes long. The following formula calculates the portion of the frame that is occupied with sync packets (TDM or other), where N is the number of channels allocated to TDM traffic and 810 is the number of bytes per channel.
Frame Size = (N × 810)/64 packet positions
Fractions of packets are ignored. For example, if 12 channels are allocated to TDM, the frame size is:
Frame Size = (12 × 810)/64 = 151.875
Frame Size = 151 packet positions
From the perspective of the backplane bus, there are 151 packet positions for the TDM packets in a 125 microsecond frame. These positions are used as a reference at the cross-connect for cross-connecting and merging TDM packets. From a hardware perspective, there are no limitations as to which channels are allocated to TDM.
However, as Channel 0 (i.e., the overhead channel) uses 5 bytes for overhead, if TDM is allocated in Channel 0, then only 805 of the 810 bytes are available for sync TDM packets.
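The frame-size computation above, including the Channel 0 overhead case, reduces to a few lines; a sketch:

```python
# Frame Size = (N x 810) / 64, fractions ignored (from the text). If the
# allocation includes Channel 0, 5 of its 810 bytes are overhead.
def tdm_packet_positions(n_channels: int, includes_channel_0: bool = False) -> int:
    payload_bytes = n_channels * 810
    if includes_channel_0:
        payload_bytes -= 5           # Channel 0 carries 5 overhead bytes
    return payload_bytes // 64       # fractional packet slots are unusable

print(tdm_packet_positions(12))      # 151, matching the worked example above
```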
Packet traffic may be dynamically scheduled in a frame on a backplane bus by a cross-connect of the type described above. In one embodiment, the cross-connect performs a multi-stage WRR (weighted round robin) arbitration function to choose which of the line unit packets is to be transferred on the backplane bus in the next available time slot (which is determined by a Routing Map field in a signal from the cross-connect to the line unit). When a packet first arrives at a line unit (from the outside world), the arrival event is signaled by the line unit to the cross-connect (within a header in a channel allocated for packet transfer) to request bandwidth for sending the packet through the switch fabric. In response, the cross-connect identifies which packet should be driven over the upstream backplane bus. The packet grant signal appears at the line unit's transmit (downstream) interface. In one embodiment, the line unit caches grants, so that the next upstream packet may not be the last packet granted.
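A weighted round robin discipline of the kind the arbiter uses can be sketched compactly. This is a generic single-stage WRR for illustration only; the multi-stage structure, the actual weights, and the arrival/grant/backpressure signaling described above are not modeled.

```python
from collections import deque

# Generic weighted round robin, for illustration only: the cross-connect's
# arbiter is multi-stage and also handles arrivals, grants and grant
# caching, none of which is modeled here.
def wrr_schedule(queues, weights):
    """Visit each queue up to its weight per round, skipping empty queues."""
    while any(queues):
        for q, w in zip(queues, weights):
            for _ in range(w):
                if not q:
                    break
                yield q.popleft()

ports = [deque(["p0-a", "p0-b", "p0-c"]), deque(["p1-a", "p1-b"])]
print(list(wrr_schedule(ports, [2, 1])))
# ['p0-a', 'p0-b', 'p1-a', 'p0-c', 'p1-b']
```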
As noted elsewhere herein, each port of the cross-connect is coupled to a line unit by a single serial bus, and the circuitry in each line unit is capable of generating both time-division-multiplexed (TDM) data and packet data in a time interleaved fashion for transmission on the serial bus. Moreover, the packet data is transmitted in a number of time slots in a frame on the serial bus, and occupancy of the time slots is dynamically allocated by the cross-connect, among line units that transfer packet data.
In one example, frames of the type described above, on a backplane bus, are grouped into a 6 millisecond superframe. The physical layer for backplane buses can be based on, for example, IEEE 802.3z. Such buses may be coupled via a connector, e.g. a 5-row by 10 VHDM connector in each LU slot of chassis 39, and an 8-row by 60 VHDM connector in the RAP slots.
In addition to containing line units (which can be of different kinds or all of the same kind), the above-described network element 30 also contains a cross-connect to transfer traffic among various line units. For example, traffic to and from subscriber equipment (also called "subscriber traffic"), such as a telephone instrument or a private branch exchange (PBX), that is carried by a subscriber line unit 31 is switched by such a cross-connect to core line unit 32. Therefore, core line unit 32 carries the subscriber traffic sequentially, thereby to pass the subscriber traffic to and from the core of the communication network, e.g. to support a long distance telephone call. The transfer of traffic between line units and the cross-connect changes with time, so that at a different moment in time, traffic to/from subscriber line unit 31 may be switched by such a cross-connect to/from router line unit 34, e.g. if the traffic originates from and has as its destination a modem (which modem may be coupled to a different port of line unit 31 from the port to which the above-described telephone instrument is coupled).
A network element of the type described above implements in its line units a number of functions that were performed by a number of discrete products in the prior art. Therefore, a network element of the type described herein eliminates the need for a communication service provider to deploy such discrete products. Specifically, one embodiment of the network element eliminates the need for digital loop carriers (DLCs), DSLAMs, add/drop multiplexers (of both types: access and transport). Instead, a communication service provider can simply use a single chassis of the network element, and install whichever subscriber line unit is needed to support the type of services required by its customers. If necessary, such a single chassis may also be configured to contain a core line unit to connect to the core network. Also, if necessary, the number of chassis in a single network element may be incrementally increased (e.g. up to a total of 5 chassis which fit in a single cabinet). Therefore, a network element of the type described herein can be used to provide services at the network edge, e.g. to small businesses and at the same time to connect Class 5 switches to long-distance trunk facilities, and/or interexchange transport facilities. Therefore, a network element of the type described herein may be used in any portion of the entire communication network, to provide any service.
In one embodiment, each of line units 31-34 (FIG.5) are installed in a chassis 39 in which is also installed a unit 35 (which may also be called "switch unit" or resource arbitration processor abbreviated "RAP") that contains the above-described cross- connect. Each of units 31-35 may be implemented in a modular manner, e.g. as cards that can be inserted into and removed from chassis 39. There may be one or more additional cards that act as standby in case of a failure of a portion of the circuitry in such cards. However, in other implementations, it is not necessary for the units to be modular, or to be implemented in cards. For example, switch unit 35 may be built into a chassis in an alternative implementation. In one specific implementation, switch unit 35 and a line unit are implemented in a single card, such as a common control card. Line unit circuitry located on the common control card (i.e. RAP card) may contain circuitry that supports any traffic rate, such as OC3, OC12 and OC48.
In one implementation, chassis 39 is 19 inches wide and contains 23 slots, whereas in another implementation chassis 39 is 23 inches wide, and provides 29 slots. Width W (FIG.5) is selected so as to allow chassis 39 to be installed in a rack commonly found in a central office. Regardless of the number of slots present, in one embodiment one slot in the center is reserved for a card that contains an interface unit (to interface with humans), and two slots on either side of the center slot are reserved for two copies of the above-described common control card, wherein one copy is used as a standby for the other copy. The usage of slots in one embodiment is described in the following table:
[Table of slot usage omitted: it appears only as image imgf000015_0001 in the source document.]
Switching of the two kinds of traffic (voice and data) as described above, by the cross-connect in switch unit 35, is performed in two separate circuits, namely a synchronous cross-connect and an asynchronous cross-connect, that are implemented in a single electronic component (such as an application specific integrated circuit "ASIC") of the type illustrated in FIG. 10 and described below. The synchronous cross-connect (which transports TDM traffic) and the asynchronous cross-connect (which transports packet traffic) operate simultaneously, i.e. one cross-connect switches traffic to/from one set of input-output ports while at the same time the other cross-connect switches traffic to/from another set of input-output ports. Traffic to/from all input-output ports is multiplexed through these two cross-connects, which operate concurrently to switch synchronous and asynchronous traffic between the ports.
A network element of the type described above may be used to groom a broad range of traffic classes, including ATM, Frame Relay, IP, STS and TDM traffic. In one embodiment, any combination of the foregoing traffic types can be switched between a core network and/or PSTN and business users and/or residential users. A communication network of this embodiment includes two network elements that are hereinafter referred to as a Host Node (HN) that typically resides at a central office (CO) and a Flex Node (FN) that resides at a remote terminal to which individual users have access. Although in this particular embodiment (as described in detail next) the network elements are used as Host Nodes and Flex Nodes, in other embodiments such network elements may be used as other components of a communication network, such as add-drop multiplexers.
Host Node 110 and Flex Node 112 (see FIG. 6) of this embodiment contain many of the same components, e.g. each node has at least a first shelf 114a, 116a, respectively. Shelves 114a, 116a have a number of slots for receiving and coupling circuit boards (118, 120 respectively) to a common back-plane (not shown) by which the circuit boards of each shelf communicate with one another. Shelf 114a of HN 110 has two slots for receiving an active and a redundant Routing and Arbitration Processor board (RAP(A) 124a and RAP(B) 124b respectively). Shelf 116a of FN 112 has two slots for receiving an active and redundant Routing and Arbitration Processor board (RAP(A) 122a and RAP(B) 122b respectively). The RAP (for both nodes 110, 112) performs the primary functions of providing an interface for up to 32 full-duplex data ports over which data traffic is received or transmitted, routing the data traffic between the ports, and conditioning the data traffic for routing. Typically, only one of the RAP boards is active, while the other remains idle unless pressed into action as a result of a failure of the first RAP board. Each shelf 114a and 116a of nodes HN 110 and FN 112 also has one slot for an administration and maintenance processor board (AMP) 115. The AMP 115 provides test access, alarm status, application hosting, and an Ethernet connection for network element management. Shelves 114a and 116a of nodes 110, 112 also have at least twenty slots to receive circuit boards that form line units 118, 120 respectively. As noted above, such line units may be implemented as circuit boards that are specifically designed to transmit and receive particular forms of traffic. For example, some line units have ports and circuitry to interface with POTS terminals. The same line units may have other ports designed to interface with DS-1, DS-3, DSL, STS or optical standards such as OC-3, OC-12 and OC-48 for example. Moreover, all ports of certain line units may be of the same kind, e.g. a single line unit may have only POTS ports.
All of the line units 118, 120 of a given shelf 114a, 116a provide communication between some number of ports to the outside world, and one or more backplane buses of the type described above. In one implementation, the backplane buses are full duplex serial buses coupling the line units (LUs) to the active RAP(A) and inactive RAP(B) of the shelf through the backplane (not shown). In one embodiment, each slot of a given shelf has four such serial buses by which each line unit plugged therein is coupled to the RAP(A) of that shelf and four such serial buses by which the line unit plugged therein is coupled to the RAP(B) of that shelf.
In a typical embodiment of the system of FIG. 6, a circuit board slot of shelf 114a of HN 110 is used for a two port OC-12 circuit board 118(A), 118(B) that provides dual SONET OC-12 interfaces 113a, 113b when RAP(A) 124a is active, and standby interfaces 113c, 113d if RAP(B) becomes active in the event RAP(A) 124a fails. The SONET interfaces 113a-d can be used to provide connectivity to an Edge/Core Network 111. The line unit 118(A)(B) is coupled through the backplane to RAP(A) 124a through two active serial buses (and to RAP(B) 124b as well through one inactive bus for backup) operating at 3.11 Gbps. One embodiment of line unit 118 can provide STS, ATM and/or PPP transport modes. An enhancement to allow the transport of both point-to-point protocol (PPP) and ATM can be provided via the Pre-emptive ATM over PPP method. In this application the HN 110 is used as either a transport or access multiplexer. Line unit 118 can also be used for communication between the HN 110 and FN 112.
FIG. 6 illustrates that another slot of shelf 114a of HN 110 is used for a line unit 119(A), 119(B) that provides for communication between HN 110 and RAP(A) 122a (or RAP(B) 122b) of FN 112. One embodiment of line unit 119(A)(B) can have up to four OC-3 ports, thereby providing multiple SONET OC-3 interfaces that can be used to communicate between the HN 110 and FN 112 nodes. In other embodiments, such line units may have 16 optical ports, each of which may be operated at OC-3 or OC-12 rates.
In this application the HN 110 is typically used as an access multiplexer. Fiber egress to line unit 119 is via four SC connectors. One embodiment of line unit 119 provides STS,
ATM and/or PPP transport modes. An enhancement to allow the transport of both PPP and ATM can be provided via the Pre-emptive ATM over PPP method. Another enhancement that can be provided is for a line unit to transport Ethernet or storage area network (SAN) traffic (such as ESCON or Fiber Channel) via Generic Framing Protocol (GFP).
The circuit board slots of shelves 114a, 116a of nodes 110, 112 can be populated with optical line units as described above or with additional embodiments of line units 118, 120 that provide standard interfaces to several typical forms of data and/or voice traffic sourced by business and residential users. For example, another embodiment of a line unit 120 is a POTS circuit board that supports up to 24 POTS interfaces. The POTS interfaces support both loop and ground start as well as provisionable characteristic loop impedance and return loss. The bandwidth requirements for this board are low and thus only one active and one standby full duplex serial bus running at 3.11 Gbps are required to couple this line unit to the RAP(A) 122a and RAP(B) 122b respectively.
Depending on the embodiment, a line unit that may be installed in a chassis of the type described herein may implement any type of telephony interface well known in the art (i.e. legacy), such as EBS to support analog voice and digital telemetry, digital data service (DDS) of 64 kbps, ISDN, and pay phone interface.
Another Line Unit that can be used in the system is a 24 port Asymmetric Digital Subscriber Line (ADSL) board that supports 24 Full Rate ADSL interfaces. ADSL is a modem technology that converts existing twisted-pair telephone lines into access paths for multimedia and high-speed data communications. ADSL can transmit up to 6 Mbps to a subscriber, and as much as 832 Kbps or more in both directions (full duplex). The line coding technique used is discrete multi-tone (DMT) and may also function in a G.Lite mode to conserve power. One embodiment of a circuit board does not include POTS splitters, which must be deployed in a sub-system outside of the shelf 116a, although POTS splitters may be used in other embodiments. One embodiment of the 24 port ADSL line unit is constructed with quad port framers. Because the bandwidth requirements for this line unit are typically low, once again only one active and one standby bus running at 3.1 Gbps are required to couple the line unit to the RAP(A) 122a and RAP(B) 122b of shelf 116a. One or more variants of DSL such as HDSL, SDSL,
VDSL may be implemented in other line units, depending on the embodiment.
Another line unit that can be employed with the system of FIG. 6 is a twelve port POTS/DSL combo-board that supports 12 POTS and 12 Full Rate ADSL interfaces. The combo-board is the melding of half of a POTS and half of an ADSL line unit as discussed above. The POTS interfaces support both loop and ground start as well as provisionable characteristic loop impedance and return loss. The ADSL line-coding technique used is DMT and may also function in a G.Lite mode to conserve power. In one embodiment, quad port ADSL framers may be employed to condition the ADSL traffic. Because the bandwidth requirements for this line unit are typically low, only one active and one standby serial bus running at 3.1 Gbps are required to interface the line unit with the RAP(A) 122a and RAP(B) 122b respectively.
Another line unit that can be employed within the system illustrated in FIG. 6 is a twelve port DS-1 board that supports up to 12 DS-1 interfaces. An embodiment of the DS-1 line unit supports DSX-1 interfaces. Optionally, the card supports twelve dry T-1 interfaces. Switching current sources are included to support twelve powered T-1 interfaces. Again, because the bandwidth requirements for this line unit are low, only one active and one standby bus running at 3.1 Gbps are required to connect the line unit to the RAP(A) and RAP(B) respectively. For this line unit, data traffic flows over a Utopia 2 interface. Control and provisioning information is communicated over a PCI bus. Control and provisioning information for T1 framers/line interfaces is communicated over a serial interface. An edge stream processor (ESP) provides an ATM adaptation layer (AAL) 1/2 adaptation function for all 12 framer interfaces. The ESP terminates structured DS0 traffic for TDM cross-connect switching, or time stamps asynchronous DS1 traffic for Hi-Cap transport. The ESP processor is also capable of terminating PPP framed DS1 traffic for either Layer 2 tunneling (L2TP) or Layer 3 routing. Another line unit 118, 120 that can be used in the system of FIG. 6 is an eight port
DS3 board that supports up to 8 DS-3 interfaces in one example. In other examples, other numbers of ports, such as 6 or 12 ports, may be present in such a DS3 board and in the corresponding DS3 interface. In one embodiment of this line unit, each interface may be provisioned to terminate either an ATM UNI (user-network interface), PPP or Channelized Hi-Capacity DS3 service. A routing stream processor supports either ATM or PPP by queuing and scheduling DS-3 traffic for packet transport through the RAP. A DS3_Mapper supports the Hi-Cap DS3 by mapping DS-3 traffic into an STS1 channel for low latency transport through the RAP.
Another line unit 118, 120 that can be used in conjunction with the system of FIG.
6 is a single port STS1 board. In one embodiment of the STS-1 board, a mode selectable single STS1 or DS-3 interface is supported.
Yet another line unit 118, 120 that can be used in conjunction with the system of FIG. 6 is a single port OC48 Interface board. The single port OC48 line unit provides a single SONET OC-48 interface to provide connectivity to the Edge/Core Network. In this application the HN 110 is used primarily as a transport level add/drop multiplexer. In a preferred embodiment, fiber egress is via two SC connectors (or two LC connectors in another embodiment). SONET framers provide STS, ATM and/or PPP transport modes. An enhancement to allow the transport of both PPP and ATM is provided via a Pre-emptive ATM over PPP method. Depending on the embodiment, other such line units may have four OC48 ports on a single board, or a single OC192 port.
Still one more line unit 118, 120 that can be used in conjunction with the system shown in FIG. 6 is a twelve port Fiber to the Business (FTTB) board. The twelve port FTTB assembly provides a single OC-12 SONET interface out and twelve OC-1 SONET interfaces in. In one embodiment of this line unit, a unique integrated splitter arrangement allows sharing of the single laser diode over twelve business subscribers. Twelve discrete PIN diode receivers will be provided to allow a single interface per each of the twelve business subscribers. This method allows simple and inexpensive fault isolation and efficient bandwidth management amongst the entire pool of business subscribers. The interface provided to the subscriber is a single fiber with a single lambda down and a separate lambda up. This arrangement reduces fiber bulk and cost. The dual Lambda arrangement allows a simpler splitter implementation to be realized in a "silica on silicon" waveguide device. The FTTB line unit is coupled to RAP(A) 124a through one active serial backplane bus (and to RAP(B) 124b as well through one inactive backplane bus for back-up) operating at 3.11 Gbps.
Thus, it can be seen from the foregoing discussion that the network element is designed to permit the largest number of customer applications to be supported within a single shelf HN and single shelf FN. While the system of FIG. 6 is optimized towards a single shelf per node configuration, expansion to a multi-shelf per node configuration is also supported in a deterministic and modular fashion. Typically, co-located shelves are connected in a Uni-directional Path Switched Ring (UPSR) arrangement via the RAP mounted optics, as illustrated in FIG. 6 by the addition of shelves 114b and 114c to HN 110. This provides a simple, incremental means of local expansion. Local expansion can also be implemented in a point-to-point fashion, however, at the expense of slots and cost. A single HN 110 may host many FNs 112 in a variety of topologies. FNs 112 may be subtended from an HN 110 in point-to-point, linear chain, branch and continue, UPSR or bi-directional line switched ring (BLSR) arrangement. An extremely large network of over 20,000 subscribers may be constructed from the HN/FN hierarchy as illustrated in FIG. 6.
Line units 118, 120 (FIG. 6) are split out into an input side 130a and output side 130b for clarity in FIG. 7 (which is also shown without the AMP). Those of skill in the art will recognize that the input and output paths of each line unit, 134a(1-n) and 134b(1-n) respectively, will typically reside on one circuit board and may even share some of the same circuits. Each output path 134b(1-n) produces an output 136b(1-n) in the form of cells, samples and packets. The line cards can be of a lower bandwidth nature (e.g. line unit 134a(1)/134b(1)), such as the twenty-four port POTS unit, the twenty-four port DSL unit, and the twelve port POTS/DSL combo unit as previously described. Some of the line units are high bandwidth (e.g. line unit 134a(n)/134b(n)), such as the twelve port DS1/T1 unit, the eight port DS3 unit, and the STS-1, OC-3, OC-12 and OC-48 units, all of which require an ESP/RSP 142a, 142b to perform pre-processing of the traffic.
Line unit input paths 134a(1-n) perform the function of interfacing with the physical layer of a data source to receive data traffic over inputs 136a(1-n) in the form of cells, samples or packets, depending upon the source type, and then grooming the traffic through its bus interface circuitry (called "GAP" which is an abbreviation for GigaPoint
Access Processor, wherein "GigaPoint" stands for a point-to-point backplane bus operating at a rate in the gigabit/second range) 140a(1-n) so that it can be switched by the matrix 137 of switch 133 to any one or more of the line unit output paths 134b(1-n). The line unit output paths 134b(1-n) perform the function of taking the routed traffic and converting it back to the physical layer format expected by the receiver to which it is coupled. The interfacing function is represented by input path circuits Phy 138a(1-n) and
Phy output path circuits 138b(1-n). The traffic for higher bandwidth traffic sources often requires an additional layer of circuitry 142a, 142b for grooming the data traffic, such as an RSP, an ESP and/or quad framers.
The input path GAP 140a(1-n) for each line unit input path 134a(1-n) transmits groomed and serialized data traffic over one or more high-speed serial backplane buses 132a(1-n) to the switch 133, where the serialized data is converted to parallel data for switching. With respect to the output path 134b(1-n) of each line unit, a transmit portion of output path GAP 140b(1-n) residing within switch 133 (not shown) serializes switched data traffic and transmits it over high-speed serial backplane buses 132b(1-n) to the receive portion of each GAP residing in the line unit output paths. From there, the data traffic is adapted back to the physical layer protocol spoken by the destination for the outgoing traffic. The input path GAP 140a(1-n) of each line unit provides data to switch 133 as asynchronous packet traffic, TDM and multicast synchronous packet traffic, and channelized STS data traffic. Likewise, the output path GAP 140b(1-n) of each line unit receives data in one of the foregoing forms from the switch 133, and adapts it to the requisite physical layer of the traffic destination.
The switch 133 includes the active RAP(A) 144 and back-up RAP(B) 146. Each
RAP includes a switch unit (called GRX which is an abbreviation for GigaPoint Routing Cross-connect) 139. GRX 139 further includes an arbiter 135 and a matrix 137 by which the conditioned data traffic arriving over the input side of any of the serial buses is switched to the appropriate output side of one or more of the serial backplane buses coupled to the destination for that conditioned traffic. Depending on the type of traffic being carried, the two RAPs can be both used simultaneously to transport unprotected traffic e.g. Ethernet traffic, to load-share the RAPs and provide additional bandwidth through the switch fabric.
In one embodiment (FIG. 8), there are twenty-four I/O ports for GRX 139 that are coupled to the backplane. Twenty of the I/O ports couple line units 134(1-n) to the GRX 139 through full-duplex serial buses 132(1-20). One port 132(21) couples RAP(A) 144 to the outside world through input 136(21). Another port that is not shown couples RAP(B) (also not shown) to the outside world and the last port (not shown) couples the AMP (not shown) of shelf 116a to the outside world. The line units 134(1-n) are shown with both their input and output paths.
One possible combination of specific line units is shown in FIG. 8 to illustrate the versatility of the system. Twenty-four port POTS unit 134(1) interfaces to the GRX 139 over serial bus GP_1 132(1), and has twenty-four telephone interface inputs 136(1) coupled to five quad codecs 138(1), which in turn interface with GAP 140(1). A twelve port DS1 unit 134(10) having twelve DS1 interfaces coupled to three quad T1 framers 138(10), the outputs of which are processed by ESP 142(10), sends and receives conditioned traffic through GAP 140(10) to GRX 139 over serial bus GP_10 132(10). Four port OC3 units 134(11) and 134(n) are each coupled to GRX 139 over two serial buses GP_11 132(11), GP_12 132(12) and GP_19 132(19), GP_20 132(20) respectively. Each unit provides a total bandwidth equivalent to OC-12, handling bi-directional OC-3 traffic over OC-3 I/Os 136(11) and 136(n) respectively. The line units 134(11), 134(n) interface with the OC-3 traffic by way of OC-12 framer/phy 138(11), 138(n) respectively and data is transferred to and from the GRX 139 by way of GAP 140(11), 140(n) respectively.
Those of skill in the art will recognize that any combination of the line units described herein, or any that are designed to interface with and condition data traffic to be transmitted and received over a backplane bus by way of a GAP in the form of asynchronous packet traffic, TDM and multicast synchronous packet traffic, and channelized STS data traffic can be employed in shelf 116a of FIG. 8.
The present invention is able to serve a broad range of network access functions because it handles both synchronous and asynchronous classes of traffic over a common fabric of backplane buses. In prior art multi-class solutions, these types of traffic are typically handled separately over separate bus systems. Such an approach, though it makes handling the traffic simpler, is not desirable because it does not provide the flexibility and cost advantages of a common fabric.
With reference to FIG. 9, a conceptual block diagram of the present invention is depicted that illustrates the flow of the different types of traffic through the common cross-connect (also called "switch fabric"). Line unit 118, 120 consists of GAP 134a,b (both transmit and receive paths). As previously discussed, the GAP 140 interconnects traffic from the serial (backplane) bus 132a,b (transmit and receive) to physical interfaces that make up part of the line units, such as OC-48, POTS, etc. The GAP 140 receives and transmits TDM and packet-based traffic over the backplane bus 132. The GAP 140 also transmits local queue status over the backplane bus 132. It receives control and arbitration information over the backplane bus 132, and maps POTS codec traffic into an internal packet format. The GAP supports VoQ (virtual output queuing) with 2 classes of service toward the backplane bus. The GAP supports AAL-5 by way of a hardware SAR (Segmentation and Reassembly) engine, terminates SONET transport overhead bytes, and implements a time slot interchange (TSI) for DS0 traffic.
The GAP includes a Transmit/Receive GP (GigaPoint) MAC (Media Access Controller) that handles communicating both STS and packet traffic through the serial high speed backplane bus via a SerDes (Gigabit Ethernet serializer/deserializer) 152. The mixture of packet and STS traffic is multiplexed down and transmitted over a high speed link to transport the traffic between the GAP, across the backplane of the system, and the common switch fabric represented by GRX 139. Another transmit/receive SerDes 154 resides on the GRX 139 and is interfaced with another receive/transmit MAC 156.
In the GRX 139, the receive GP MAC 156a accepts combined asynchronous and synchronous traffic from a line unit through the SerDes core and distributes it to the packet crosspoint 158 and the synchronous crosspoint 160, respectively. STS bytes and synchronous packets (TDM and multicast) are driven to the synchronous crosspoint 160 over a 40-bit parallel bus. Unicast packets are sent over a 64-bit FIFO interface to the packet crosspoint 158. An eight-bit packet arrival word is extracted by each receive MAC and driven to the arbiter 135. The arrival word is sent with an arrival strobe, as well as downstream backplane bus and grant backpressure signals.
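As an illustration of the header-field extraction performed by the receive MAC, the following sketch pulls a traffic-type field and an eight-bit arrival word out of a received header word and steers the traffic accordingly. The bit positions and the two-bit type encoding are assumptions made for the example; the actual GigaPoint header layout is not reproduced here.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical header layout, for illustration only: a 2-bit traffic-type
 * field in bits 63..62 and the 8-bit packet arrival word in bits 7..0. */
enum { TT_STS = 0, TT_TDM = 1, TT_MCAST = 2, TT_UNICAST = 3 };

static unsigned traffic_type(uint64_t hdr) { return (unsigned)(hdr >> 62); }
static uint8_t  arrival_word(uint64_t hdr) { return (uint8_t)(hdr & 0xffu); }

/* Steer a received word the way the receive GP MAC does: unicast packets
 * to the packet crosspoint, everything else to the synchronous crosspoint. */
static const char *destination(uint64_t hdr)
{
    return traffic_type(hdr) == TT_UNICAST
               ? "packet crosspoint (64-bit FIFO)"
               : "synchronous crosspoint (40-bit bus)";
}

int main(void)
{
    uint64_t hdr = ((uint64_t)TT_UNICAST << 62) | 0x2a; /* arrival word 0x2a */
    printf("type=%u arrival=0x%02x -> %s\n",
           traffic_type(hdr), arrival_word(hdr), destination(hdr));
    return 0;
}
```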
The GRX transmit GP MAC 156b receives data bound for the serial bus to line unit 118, 120 over three buses: the 64-bit asynchronous packet bus 164, the 40-bit synchronous bus 166 and the 8-bit arbiter bus 162. Asynchronous bus data is read from the packet crosspoint's output FIFO. Synchronous data (STS, TDM and multicast packets) is received from the synchronous cross-connect at timeslots relative to superframe sync and in accordance with the bandwidth allocation of the particular link by way of a channel map configuration. Packet grant information is transported from the arbiter 135 to the transmit GP MAC in a manner similar to that of packet arrival information.
A block diagram of the GRX ASIC is depicted in FIG. 10. In the receive path, the receive GP MAC modules 156a(1)-a(n) interface with the SerDes receiver 154(1). The receive MACs extract packet arrival and backpressure fields from the packet headers and pass the information to the arbiter 135 and transmit GP MACs 156b(1)-b(n) respectively. The receive GP MAC modules also provide data layer decoding by splitting STS, TDM, multicast packet, and unicast packet traffic. In one embodiment, only unicast packet traffic is routed to the Packet Crosspoint 158 and the other traffic types are routed to the Synchronous Crosspoint 160. Loosely scheduled TDM and multicast traffic could be routed through the packet cross-connect 158, but it is more convenient to route these types of packet traffic through the synchronous cross-connect 160 as well. In the transmit path, the transmit GP MAC modules 156b(1)-b(n) combine the various traffic types output by the crosspoints 158, 160 and output them to the SerDes transmitters 154(1)-(n). Also, the transmit MACs insert packet grant and backpressure fields into the packet headers. The Packet Crosspoint 158 snoops on the Packet Grant 302 and Packet Arrival 300 interfaces in support of the grant audit mechanism.
In one embodiment, traffic carried over the serial links between the GAP 140 and the GRX 139 is classified into three primary groups. As shown in FIG. 5A, sixty channels 310 are pre-allocated for STS 312, TDM/Multicast 316, or Unicast 314 traffic. Fixed TDM FLP (fixed length packet) slots are then defined within the channels allocated to TDM. Each TDM FLP slot is 64 bytes long and remains at a fixed location with respect to the 125 µs frame sync until the TDM pipe is resized. TDM traffic shares its bandwidth with Multicast traffic, which means that software must take the bandwidth requirements of Multicast into account when provisioning TDM.
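A provisioning check of the kind software must perform can be sketched as follows; the per-channel FLP capacity and the helper's interface are assumptions made for illustration, but the constraint it encodes — TDM and multicast budgeted together against their shared channels — is the one described above.

```c
#include <stdio.h>

#define TOTAL_CHANNELS 60   /* channels pre-allocated per link, per FIG. 5A */

/* Illustrative provisioning check. Because TDM shares its bandwidth with
 * multicast, software must budget both against the same channel allocation.
 * flp_per_channel is an assumed per-channel FLP capacity, not a real
 * parameter of the system. */
static int provision_ok(int sts_channels, int tdm_mcast_channels,
                        int unicast_channels,
                        int tdm_flps_needed, int mcast_flps_needed,
                        int flp_per_channel)
{
    if (sts_channels + tdm_mcast_channels + unicast_channels > TOTAL_CHANNELS)
        return 0;   /* allocation exceeds the link's channel budget */
    /* TDM and multicast FLPs must fit together in the shared channels. */
    return tdm_flps_needed + mcast_flps_needed
           <= tdm_mcast_channels * flp_per_channel;
}

int main(void)
{
    /* Example split: 30 STS channels, 20 shared TDM/multicast, 10 unicast. */
    printf("fits: %d\n", provision_ok(30, 20, 10, 15, 4, 1)); /* prints 1 */
    printf("fits: %d\n", provision_ok(30, 20, 10, 19, 4, 1)); /* prints 0 */
    return 0;
}
```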
Each backplane bus can support the transport of STS traffic in designated timeslots in a 125 µs frame window. Traffic in channels sharing the same channel designator can be merged as it is passed through the 24:1 masked muxes. This function allows a VT1.5 cross-connect to be implemented by aligning VT1.5s within the appropriate STS channel(s). Additional mapping formats, such as VT2 and VT6, can also use the merging function of the 24:1 muxes to effect similar cross-connects.
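One way to picture the merging behavior of a 24:1 masked mux is the following sketch, in which inputs enabled by a mask are combined into one output byte. The OR merge operator and the idle-inputs-drive-zero convention are assumptions made for illustration, since the text does not specify the hardware's merge rule.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_INPUTS 24

/* One output timeslot of a 24:1 masked mux: inputs whose mask bit is set
 * are merged (here by OR); idle inputs are assumed to drive zero. */
static uint8_t masked_mux(const uint8_t in[NUM_INPUTS], uint32_t mask)
{
    uint8_t out = 0;
    for (int i = 0; i < NUM_INPUTS; i++)
        if (mask & (1u << i))
            out |= in[i];
    return out;
}

int main(void)
{
    uint8_t in[NUM_INPUTS] = { 0 };
    in[3] = 0xa5;                       /* a VT1.5 byte present on input 3 */
    /* Enable only input 3 for this timeslot; other timeslots would use
     * different masks to align VT1.5s within the appropriate STS channel. */
    printf("merged byte: 0x%02x\n", masked_mux(in, 1u << 3));
    return 0;
}
```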
Thus, those of skill in the art will recognize that combining and routing both asynchronous and synchronous traffic across a common fabric reduces system implementation size and makes the invention easily configurable for numerous legacy network applications, as well as for combining such applications into one system. Moreover, the flexibility in provisioning bandwidth among the traffic types makes the invention configurable for serving applications from the subscriber edge of a network up to the edge of the core of a network.
Numerous modifications and adaptations of the embodiments described herein will be apparent to the skilled artisan in view of the disclosure.
While in one embodiment, a network element of the type described herein does not provide call processing functions, in another embodiment call processing may be supported by a network element of the type described herein.
Numerous such modifications and adaptations are encompassed by the attached claims.

Claims

1. A network element for a communication network, the network element comprising:
a first line unit comprising circuitry to support subscriber equipment;
a second line unit comprising circuitry to support a link to core of the communication network; and
a chassis comprising a plurality of slots, each of the first line unit and the second line unit being installed in any one of the slots.
2. The network element of Claim 1 further comprising:
a switch unit having a first port coupled to the first line unit and a second port coupled to the second line unit, the switch unit being installed in said chassis;
wherein each port of the switch unit is coupled to a slot by a single serial bus, and the circuitry in each line unit is capable of generating both time-division-multiplexed (TDM) data and packet data in a time interleaved fashion for transmission on the single serial bus.
3. The network element of Claim 2 wherein:
the packet data is transmitted in a plurality of time slots in a frame on the serial bus; and
occupancy of the time slots is dynamically allocated among line units that transfer packet data.
4. The network element of Claim 1 wherein:
the first line unit carries traffic from and to the subscriber equipment; and
the second line unit carries said traffic sequentially to pass said traffic to and from said core of the communication network.
5. The network element of Claim 1 further comprising:
a switch unit having a first port coupled to the first line unit and a second port coupled to the second line unit, the switch unit being installed in said chassis, the switch unit comprising a cross-connect.
6. The network element of Claim 5 wherein:
the cross-connect includes a synchronous cross-connect and an asynchronous cross-connect.
7. The network element of Claim 6 wherein:
at one time the synchronous cross-connect transfers traffic between the first line unit and the second line unit; and
at another time the asynchronous cross-connect transfers traffic between the first line unit and the second line unit.
8. A method of transferring traffic between ports of a network element, the method comprising:
transferring statically allocated time-division-multiplexed (TDM) data on a serial bus within the network element; and
transferring dynamically allocated packet data on said serial bus; wherein the acts of transferring are performed in a time-interleaved manner.
9. The method of Claim 8 wherein the serial bus (hereinafter "first serial bus") is coupled to a port of a switch fabric in the network element, and the switch fabric is coupled to a second serial bus, and the method further comprises:
transferring dynamically allocated packet data on the second serial bus in a time-interleaved manner with statically allocated time-division-multiplexed (TDM) data on the second serial bus.
10. The method of Claim 8 wherein:
said time-interleaving is done at a word boundary.
11. The method of Claim 8 further comprising:
directly transferring data to core of a communication network; and
directly transferring data to a subscriber.
12. The method of Claim 11 wherein:
a portion of the data being transferred to the core and to the subscriber is TDM data and another portion of the data is packet data.
13. The method of Claim 8 wherein the packet data is transmitted in a plurality of time slots in a frame on the serial bus, and the method further comprises:
dynamically allocating occupancy of the time slots among a plurality of line units that transfer packet data.
14. The method of Claim 13 further comprising:
synchronizing each of the line units; and
each of the line units transferring data one word at a time, each word at a synchronized time interval.
15. The method of Claim 8 wherein the data is transferred in channels in a frame, the method further comprising:
reallocating at least one channel statically allocated for carrying time-division-multiplexed (TDM) data to now carry dynamically allocated packet data; and
generating a request for use of said at least one channel.
PCT/US2002/017515 2001-06-04 2002-06-03 Concurrent switching of synchronous and asynchronous traffic WO2002100073A2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2002310279A AU2002310279A1 (en) 2001-06-04 2002-06-03 Concurrent switching of synchronous and asynchronous traffic

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US09/874,402 2001-06-04
US09/874,352 2001-06-04
US09/874,352 US6798784B2 (en) 2001-06-04 2001-06-04 Concurrent switching of synchronous and asynchronous traffic
US09/874,402 US7035294B2 (en) 2001-06-04 2001-06-04 Backplane bus

Publications (2)

Publication Number Publication Date
WO2002100073A2 true WO2002100073A2 (en) 2002-12-12
WO2002100073A3 WO2002100073A3 (en) 2003-02-20

Family

ID=27128336

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2002/017515 WO2002100073A2 (en) 2001-06-04 2002-06-03 Concurrent switching of synchronous and asynchronous traffic

Country Status (2)

Country Link
AU (1) AU2002310279A1 (en)
WO (1) WO2002100073A2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11757612B2 (en) 2021-10-29 2023-09-12 Hewlett Packard Enterprise Development Lp Communicating management traffic between baseboard management controllers and network interface controllers

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6181694B1 (en) * 1998-04-03 2001-01-30 Vertical Networks, Inc. Systems and methods for multiple mode voice and data communciations using intelligently bridged TDM and packet buses
US6219354B1 (en) * 1998-12-30 2001-04-17 Qwest Communications International Inc. VDSL cabinet designs and configurations


Also Published As

Publication number Publication date
WO2002100073A3 (en) 2003-02-20
AU2002310279A1 (en) 2002-12-16

Similar Documents

Publication Publication Date Title
US6798784B2 (en) Concurrent switching of synchronous and asynchronous traffic
US7035294B2 (en) Backplane bus
JP3667337B2 (en) ATM exchange system
US6229822B1 (en) Communications system for receiving and transmitting data cells
US6822960B1 (en) Asynchronous transfer mode (ATM) switch and method
US6621828B1 (en) Fused switch core and method for a telecommunications node
US7317725B2 (en) System and method for implementing combined packetized TDM streams and TDM cross connect functions
US7130276B2 (en) Hybrid time division multiplexing and data transport
US6760327B1 (en) Rate adjustable backplane and method for a telecommunications node
US6944153B1 (en) Time slot interchanger (TSI) and method for a telecommunications node
EP0978181A1 (en) Transmission of atm cells
US6628657B1 (en) Method and system for transporting synchronous and asynchronous traffic on a bus of a telecommunications node
US7428208B2 (en) Multi-service telecommunication switch
US6920156B1 (en) Method and system for transporting synchronous and asynchronous traffic on a synchronous bus of a telecommunications node
CN100433707C (en) Method for switching ATM, TDM and packet data through a single communications switch
US6778529B1 (en) Synchronous switch and method for a telecommunications node
US6804229B2 (en) Multiple node network architecture
JP3828859B2 (en) M-DSLAM system
US6788703B2 (en) DS0 on ATM, mapping and handling
US6885661B1 (en) Private branch exchange built using an ATM Network
US6778538B2 (en) Virtual junctors
WO2002100073A2 (en) Concurrent switching of synchronous and asynchronous traffic
EP2259508B1 (en) Network element for switching time division multiplex signals using cell switch matrix having reduced cell loss probability
US6768736B1 (en) Using an ATM switch to grow the capacity of a switching stage
KR100230186B1 (en) The host digital terminal using dual cell bus

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ OM PH PL PT RO RU SD SE SG SI SK SL TJ TM TN TR TT TZ UA UG US UZ VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
AK Designated states

Kind code of ref document: A3

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ OM PH PL PT RO RU SD SE SG SI SK SL TJ TM TN TR TT TZ UA UG US UZ VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A3

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP

WWW Wipo information: withdrawn in national office

Country of ref document: JP