EP1290819A2 - System and process for return channel spectrum manager - Google Patents

System and process for return channel spectrum manager

Info

Publication number
EP1290819A2
Authority
EP
European Patent Office
Prior art keywords
packet
data
chassis
bus
backplane
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP01933285A
Other languages
German (de)
French (fr)
Inventor
Paul E. Nikolich
J. David Unger
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bigband Networks Bas Inc
Original Assignee
ADC Broadband Access Systems Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ADC Broadband Access Systems Inc filed Critical ADC Broadband Access Systems Inc
Publication of EP1290819A2 publication Critical patent/EP1290819A2/en
Withdrawn legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04HBROADCAST COMMUNICATION
    • H04H60/00Arrangements for broadcast applications with a direct linking to broadcast information or broadcast space-time; Broadcast-related systems
    • H04H60/76Arrangements characterised by transmission systems other than for broadcast, e.g. the Internet
    • H04H60/81Arrangements characterised by transmission systems other than for broadcast, e.g. the Internet characterised by the transmission system itself
    • H04H60/93Wired transmission systems
    • H04H60/96CATV systems
    • H04H60/97CATV systems using uplink of the CATV systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04JMULTIPLEX COMMUNICATION
    • H04J1/00Frequency-division multiplex systems
    • H04J1/02Details
    • H04J1/12Arrangements for reducing cross-talk between channels
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L1/00Arrangements for detecting or preventing errors in the information received
    • H04L1/0001Systems modifying transmission characteristics according to link quality, e.g. power backoff
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/16Analogue secrecy systems; Analogue subscription systems
    • H04N7/173Analogue secrecy systems; Analogue subscription systems with two-way working, e.g. subscriber sending a programme selection signal
    • H04N7/17309Transmission or handling of upstream communications

Definitions

  • This invention relates generally to networking data processing systems and, more particularly, to a broadband network environment, for example, one using a SONET backbone and Hybrid Fiber-Coax(ial cable) ("HFC") to connect users to the backbone.
  • HFC Hybrid Fiber-Coax
  • An emerging hardware/software standard for the HFC environment is DOCSIS (Data Over Cable Service Interface Specification) from CableLabs.
  • the head-end server is connected to a SONET ring via a multiplexer drop on the ring (see Fig. 1) .
  • These multiplexers currently cost some $50,000 in addition to the head-end server, and scaling up service to a community may require new multiplexers and servers.
  • the failure of a component on the head-end server can take an entire "downstream" (from the head-end to the end-user) sub-network out of communication with the world.
  • Figure 2 shows a system having a reverse path monitoring system, an Ethernet switch, a router, modulators and upconverters, a provisioning system, telephony parts, and a plurality of CMTS's (cable modem termination systems).
  • This type of system typically has multiple vendors for its multiple systems, has different management systems, a large footprint, high power requirements and high operating costs.
  • a typical broadband cable network for delivery of voice and data is shown in Figure 3.
  • Two OC-12 port interface servers are each connected to one of two backbone routers which are in turn networked to two switches.
  • the switches are networked to CMTS head-end routers.
  • the CMTS head-end routers are connected to a plurality of optical nodes.
  • the switches are also connected to a plurality of telephone trunk gateways which are in turn connected to the public switched telephone network (PSTN).
  • PSTN public switched telephone network
  • this type of network also typically has multiple vendors for its multiple systems, has different management systems, a large footprint, high power requirements and high operating costs. In order to facilitate an effective integrated solution, it is desirable to have an integrated diagnostic system.
  • the upstream signals from the cable modems are, for a variety of reasons, noisy. Some of the noise can be predicted and some can be avoided. When the signal is too noisy, the CMTS cannot operate. Therefore, the upstream signal needs to be monitored and the upstream data frequency ranges need to be adjusted accordingly.
  • the problems of providing an integrated spectrum analyzer for delivery of voice and data in a compact area for an integrated switch are solved by the present invention of a return channel spectrum manager.
  • the scanning receiver spectrum analyzer system is a system for monitoring the spectrum of RF energy on the upstream cable plant for the purpose of operating a group of cable modems in frequency bands that are the most interference-free.
  • the spectrum analyzer surveys the entire upstream spectrum (typically 5 to 42 MHz) to determine the noise level at each frequency.
  • Fig. 1 shows a prior art network on a SONET ring
  • Fig. 2 shows a prior art data delivery system
  • Fig. 3 shows a prior art data delivery network
  • Fig. 4 is a block diagram of a chassis according to principles of the invention
  • Fig. 5 shows an integrated cable infrastructure having the chassis of Fig. 4;
  • Fig. 6 is a block diagram of the application cards, the backplane and a portion of the interconnections between them in the chassis of Fig. 4;
  • Fig. 7 is a schematic diagram of the backplane interconnections, including the switching mesh;
  • Fig. 8 is a block diagram of two exemplary slots showing differential pair connections between the slots
  • Fig. 9 is a block diagram of the MCC chip in an application module according to principles of the present invention.
  • Fig. 10 is a diagram of a packet tag
  • Fig. 11 is a block diagram of a generic switch packet header
  • Fig. 12 is a flow chart of data transmission through the backplane
  • Fig. 13 is a block diagram of an incoming ICL packet
  • Fig. 14 is a block diagram of a header for the ICL packet of Fig. 13;
  • Fig. 15 shows example mapping tables mapping channels to backplane slots according to principles of the present invention
  • Fig. 16 is a block diagram of a bus arbitration application module connected in the backplane of the present invention.
  • Fig. 17 is a state diagram of bus arbitration in the application module of Fig. 16;
  • Fig. 18 is a block diagram of the chassis of Fig. 4 showing a subset of RF signal lines in the backplane according to principles of the invention
  • Fig. 19 is a block diagram of a CMTS application module and an embedded cable modem connected through the backplane according to principles of the invention
  • Fig. 20 is a block diagram of the spectrum analyzer according to principles of the invention.
  • Fig. 21 is a graph showing ingress characterization.
  • FIG. 4 shows a chassis 200 operating according to principles of the present invention.
  • the chassis 200 integrates a plurality of network applications into a single switch system.
  • the invention is a fully-meshed OSI Layer 3/4 IP-switch with high performance packet forwarding, filtering and QoS/CoS (Quality of Service/Class of Service) capabilities using low-level embedded software controlled by a cluster manager in a chassis controller.
  • QoS/CoS Quality of Service/Class of Service
  • the chassis 200 has fourteen (14) slots for modules. Twelve of those fourteen slots hold application modules 205, and two slots hold chassis controller modules 210. Each application module has an on-board DC-DC converter and is "hot-pluggable" into the chassis.
  • the chassis controller modules 210 are for redundant system clock/bus arbitration. Examples of applications that may be integrated in the chassis are a CMTS module 215, an Ethernet module 220, a SONET module 225, and a telephony application 230. Another application may be an interchassis link (ICL) port 235 through which the chassis may be linked to another chassis.
  • ICL interchassis link
  • Fig. 5 shows an integrated cable infrastructure 260 having the chassis 200 of Fig. 4.
  • the chassis 200 is part of a regional hub 262 (also called the "head-end") for voice and data delivery.
  • the hub 262 includes a video controller application 264, a video server 266, Web/cache servers 268, an operation support system (OSS) 270, a combiner 271 and the chassis 200.
  • the chassis 200 acts as an IP access switch.
  • the chassis 200 is connected to a SONET ring 272, outside the hub 262, having a connection to the Internet 274, and a connection to the Public Switched Telephone Network (PSTN) 276.
  • PSTN Public Switched Telephone Network
  • the chassis 200 and the video-controller application 264 are attached to the combiner 271.
  • the combiner 271 is connected by an HFC link 278 to cable customers and provides IP voice, data, video and fax services. At least 2000 cable customers may be linked to the head-end by the HFC link 278.
  • the chassis 200 can support a plurality of HFC links and also a plurality of chassises may be networked together (as described below) to support thousands of cable customers.
  • FIG. 6 shows application modules connected to a backplane 420 of the chassis 200 of Figure 4.
  • the backplane is implemented as a 24-layer printed wiring board and includes 144 pairs of unidirectional differential-pair connections, each pair directly connecting input and output terminals of each of a maximum of twelve application modules with output and input terminals of each other module and itself.
  • Each application module interfaces with the backplane through a Mesh Communication Chip (MCC) 424 through these terminals.
  • MCC Mesh Communication Chip
  • Each application module is also connected to a chassis management bus 432 which provides the modules with a connection to the chassis controllers 428, 430.
  • Each MCC 424 has twelve (12) serial link interfaces that run to the backplane 420. Eleven of the serial links on each application module are for connecting the application module to every other application module in the chassis. One link is for connecting the module with itself, i.e., a loop-back.
  • the backplane is fully meshed, meaning that every application module has a direct link to every other application module in the chassis through the serial links. Only a portion of the connections is shown in Figure 6 as an example.
  • the backplane mesh is shown in Figure 7.
  • the 12 serial-link channels of the MCC are numbered 0 to 11. This number is referred to as the channel ID, or CID.
  • the slots on the backplane are also numbered from 0 to 11 (slot ID, or SID).
  • the chassis system does not require, however, that a channel 0 be wired to a slot 0 on the backplane.
  • a serial link may be connected to any slot.
  • the slot IDs are dynamically configured depending on system topology. This provides freedom in the backplane wiring layout which might otherwise require layers in addition to the twenty-four layers in the present backplane.
  • the application module reads the slot ID of the slot into which it is inserted.
  • the application module sends that slot ID out its serial lines in an idle stream in between data transmissions.
  • the application module also includes the slot ID in each data transmission.
  • Figure 15 shows examples of mapping tables of channels in cards to backplane slots. Each card stores a portion of the table, that is, the table row concerning the particular card. The table row is stored in the MCC.
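  • A minimal C sketch of this per-card mapping row appears below; the sentinel value and the function names are assumptions, since the text describes the mechanism (slot IDs learned from the idle stream and from data transmissions) but not an implementation.

```c
#include <stdint.h>
#include <string.h>

#define NUM_CHANNELS 12
#define SID_UNKNOWN  0xFF   /* assumed sentinel for an unidentified peer */

/* The one table row a card keeps in its MCC: for each local channel
 * (CID), the backplane slot (SID) announced on that serial link. */
static uint8_t cid_to_sid[NUM_CHANNELS];
static uint8_t sid_to_cid[NUM_CHANNELS];

void mapping_init(void)
{
    memset(cid_to_sid, SID_UNKNOWN, sizeof cid_to_sid);
    memset(sid_to_cid, SID_UNKNOWN, sizeof sid_to_cid);
}

/* Called when a slot ID is observed on a channel, either from the idle
 * stream between transmissions or from a data transmission itself. */
void learn_peer(uint8_t cid, uint8_t sid)
{
    cid_to_sid[cid] = sid;
    sid_to_cid[sid] = cid;
}

/* Transparent slot-to-channel mapping: software addresses slots and the
 * MCC resolves which physical serial link to use. */
uint8_t channel_for_slot(uint8_t sid)
{
    return sid_to_cid[sid];
}
```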
  • Figure 16 shows a management bus arbitration application module connected to the backplane.
  • the backplane contains two separate management buses for failure protection.
  • Each application module in the chassis, including the two chassis controllers as well as the twelve application modules, can use either or both management buses.
  • the management bus is used for low-speed data transfer within the chassis and generally carries control, statistical, and configuration information, and data from the chassis controller modules to the application modules in the chassis.
  • the implementation of the management bus consists of a four bit data path, a transmit clock, a transmit enable signal, a collision control signal, and a four bit arbitration bus.
  • the bus controller has a 10/100 MAC device, a receive FIFO, bus transceiver logics, and a programmable logic device ("PLD").
  • the data path on the management bus is a four-bit MII (Media Independent Interface) standard interface for 10/100 Ethernet MACs.
  • the bus mimics the operation of a standard 100 Mbit ethernet bus interface so that the MAC functionality can be exploited.
  • the programmable logic device contains a state machine that performs bus arbitration.
  • Figure 17 shows the state diagram for the state machine in the programmable logic device for the management bus.
  • the arbitration lines determine which module has control of the bus by using open collector logic.
  • the pull-ups for the arbitration bus reside on the chassis controller modules. Each slot places its slot ID on the arbitration lines to request the bus.
  • During transmission of the preamble of data to be transmitted, if the arbitration is corrupted, the bus controller assumes that another slot had concurrently requested the bus, and the state machine within the PLD aborts the transfer operation by forcing a collision signal active for both the bus and the local MAC device. As other modules detect the collision signal on the bus active, the collision line on each local MAC is forced to the collision state, which allows the back-off algorithm within the MAC to determine the next transmission time. If a collision is not detected, the data is latched into the receive FIFO of each module, and the TX_Enable signal is used to qualify data from the bus.
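  • The C fragment below sketches this arbitration cycle as a small state machine; the helper functions and the four-state decomposition are assumptions drawn from the description of Figure 17 rather than the actual PLD design.

```c
#include <stdint.h>

enum arb_state { ARB_IDLE, ARB_PREAMBLE, ARB_TRANSMIT, ARB_COLLISION };

/* Hypothetical bus-access helpers. With open-collector arbitration
 * lines, reading back a value different from the one we drove means
 * another slot is requesting the bus concurrently. */
extern void    drive_arb_lines(uint8_t slot_id);
extern uint8_t read_arb_lines(void);
extern void    force_collision(void);   /* asserted to bus and local MAC */

enum arb_state arb_step(enum arb_state st, uint8_t my_slot_id)
{
    switch (st) {
    case ARB_IDLE:
        drive_arb_lines(my_slot_id);     /* place slot ID to request bus */
        return ARB_PREAMBLE;
    case ARB_PREAMBLE:
        /* the PLD dwells four clocks here (timing not modeled) */
        if (read_arb_lines() != my_slot_id) {
            force_collision();           /* concurrent request detected */
            return ARB_COLLISION;
        }
        return ARB_TRANSMIT;
    case ARB_COLLISION:
        /* dwell four clocks so other modules synchronize; the MAC's
         * back-off algorithm then schedules the retry */
        return ARB_IDLE;
    case ARB_TRANSMIT:
    default:
        return ARB_IDLE;                 /* release after the frame */
    }
}
```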
  • Fig. 7 shows the internal backplane architecture of the current embodiment of the switch of the invention that was shown in exemplary fashion in Fig. 6.
  • One feature is the full-mesh interconnection between slots shown in the region 505. Slots are shown by vertical lines in Figure 7. This is implemented using 144 pairs of differential pairs embedded in the backplane as shown in Figure 8. Each slot thus has a full-duplex serial path to every other slot in the system.
  • there are n(n-1) non-looped-back links in the system, that is, 132 links, doubled for the duplex pair configuration for a total of 264 differential pairs (or, further doubled, 528 wires) in the backplane to create the backplane mesh in the present embodiment of the invention.
  • Each differential pair is able to support data throughput of more than 1 gigabit per second.
  • the clock signal is embedded in the serial signaling, obviating the need for separate pairs (quads) for clock distribution. Because the data paths are independent, different pairs of cards in the chassis may be switching (ATM) cells and others switching (IP) packets.
  • each slot is capable of transmitting on all 11 of its serial links at once, a feature useful for broadcasting. All the slots transmitting on all their serial lines achieve a peak bandwidth of 132 gigabits per second. Sustained bandwidth depends on system configuration.
  • the mesh provides a fully redundant connection between the application cards in the backplane. One connection can fail without affecting the ability of the cards to communicate. Routing tables are stored in the chassis controllers. If, for example, the connection between application module 1 and application module 2 failed, the routing tables are updated. The routing tables are updated when the application modules report to the chassis controllers that no data is being received on a particular serial link. Data addressed to application module 2 coming in through application module 1 is routed to another application module, for instance application module 3, which would then forward the data to application module 2.
  • the bus-connected backplane region 525 includes three bus systems.
  • the management/control bus 530 is provided for out- of-band communication of signaling, control, and management information.
  • a redundant backup for a management bus failure will be the mesh interconnect fabric 505.
  • the management bus provides 32-bit 10-20 MHz transfers, operating as a packet bus.
  • Arbitration is centralized on the system clock module 102 (clock A). Any slot-to-any-slot communication is allowed, with broadcast and multicast also supported.
  • the bus drivers are integrated on the System Bus FPGA/ASIC.
  • a TDM (Time Division Multiplexing) Fabric 535 is also provided for telephony applications.
  • Alternative approaches include the use of a DS0 fabric, using 32 TDM highways (sixteen full-duplex, 2048 FDX timeslots, or approximately 3 T3s) using the H.110 standard, or a SONET ATM (Asynchronous Transfer Mode) fabric.
  • Miscellaneous static signals may also be distributed in the bus-connected backplane region 540. Slot ID, clock failure and management bus arbitration failure may be signaled.
  • a star interconnect region 545 provides independent clock distribution from redundant clocks 102 and 103.
  • the static signals on backplane bus 540 tell the system modules which system clock and bus arbitration slot is active.
  • Two clock distribution networks are supported: a reference clock from which other clocks are synthesized, and a TDM bus clock, depending on the TDM bus architecture chosen. Both clocks are synchronized to an internal Stratum 3/4 oscillator or an externally provided BITS (Building Integrated Timing Supply).
  • Fig. 8 shows a first connection point on a first MCC on a first module, MCC A 350, and a second connection point on a second MCC on a second module, MCC B 352, and connections 354, 355, 356, 357 between them.
  • the connections run through a backplane mesh 360 according to the present invention.
  • each point on a module has four connections between it and every other point due to the backplane mesh.
  • the differential transmission line impedance and length are controlled to ensure signal integrity and high speed operation.
  • Fig. 9 is a block diagram of the MCC chip.
  • An F-bus interface 805 connects the MCC 300 to the FIFO bus (F-bus) .
  • Twelve transmit FIFOs 810 and twelve receive FIFOs 815 are connected to the F-bus interface 805.
  • Each transmit FIFO has a data compressor (12 data compressors in all, 820)
  • each receive FIFO has a data expander (12 data expanders in all, 825) .
  • Twelve serializer/deserializers 830 serve the data compressors 820 and data expanders 825, one compressor and one expander for each. A channel in the MCC is defined as a serial link together with its encoding/decoding logic, transmit queue and receive queue.
  • the serial lines running from the channels connect to the backplane mesh. All the channels can transmit data at the same time.
  • a current implementation of the invention uses a Mesh Communication Chip to interconnect up to thirteen F-buses in a full mesh using serial link technology.
  • Each MCC has two F-bus interfaces and twelve serial link interfaces.
  • the MCC transmits and receives packets on the F-buses in programmable size increments from 64 bytes to entire packets. It contains twelve virtual transmit processors (VTPs) which take packets from the F-bus and send them out the serial links, allowing twelve outgoing packets simultaneously.
  • VTPs virtual transmit processors
  • the VTPs read the MCC tag on the front of the packet and dynamically bind themselves to the destination slot(s) indicated in the header.
  • the packet transmit path is from the PHY/MAC to the processor, then from the processor to the MCC and out the mesh.
  • the processor does Layer 3 and Layer 4 look-ups in the FIPP (Fast IP Processor) to determine the packet's destination and Quality of Service (QoS), modifies the header as necessary, and prepends the MCC tag to the packet before sending it to the MCC.
  • QoS Quality of Service
  • the packet receive path is from the mesh to the MCC and on to the processor, then from the processor to the MAC/PHY and out the channel.
  • the processor strips off the MCC tag before sending the packet on to the MAC.
  • a first data flow control mechanism in the present invention takes advantage of the duplex pair configuration of the connections in the backplane and connections to the modules .
  • the MCCs have a predetermined fullness threshold for the FIFOs. If a receive FIFO fills to the predetermined threshold, a code is transmitted over the transmit channel of the duplex pair to stop sending data.
  • the codes are designed to DC-balance the signals on the transmission lines and to enable the detection of errors.
  • the codes in the present implementation of the invention are 16B/20B codes; however, other codes may be used within the scope of the present invention.
  • the MCC sends an I1 or I2 code with the XOFF bit set to turn off the data flow. This message is included in the data stream transmitted on the transmit channel. If the FIFO falls below the predetermined threshold, the MCC clears the stop message by sending an I1 or I2 code with the XOFF bit cleared.
  • the efficient flow control prevents low depth FIFOs from overrunning, thereby allowing small FIFOs in ASICs, for example, 512 bytes, to be used. This reduces microchip costs in the system.
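  • As an illustration, the sketch below models this threshold-based XOFF/XON decision in C; the names, the 384-byte threshold, and the stand-in for code transmission are assumptions rather than details from the patent.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define FIFO_DEPTH      512   /* small ASIC receive FIFO, per the text */
#define XOFF_THRESHOLD  384   /* hypothetical fullness threshold */

struct rx_fifo {
    uint16_t fill;        /* bytes currently queued */
    bool     xoff_sent;   /* stop code currently asserted */
};

/* Stand-in for transmitting an I1/I2 idle code word on the transmit
 * channel of the duplex pair, with the XOFF bit set or cleared. */
static void send_idle_code(bool xoff_bit)
{
    printf("idle code sent, XOFF=%d\n", xoff_bit);
}

/* Called whenever the receive FIFO fill level changes. */
void update_flow_control(struct rx_fifo *f)
{
    if (!f->xoff_sent && f->fill >= XOFF_THRESHOLD) {
        send_idle_code(true);    /* ask the far end to stop sending */
        f->xoff_sent = true;
    } else if (f->xoff_sent && f->fill < XOFF_THRESHOLD) {
        send_idle_code(false);   /* clear the stop message */
        f->xoff_sent = false;
    }
}
```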
  • Fig. 10 shows a packet tag, also called the MCC tag.
  • the MCC tag is a 32-bit tag used to route a packet through the backplane mesh. The tag is added to the front of the packet by the slot processor before sending it to the MCC.
  • the tag has four fields: a destination mask field, a priority field, a keep field, and a reserved field.
  • the destination mask field is the field holding the mask of slots in the current chassis to which the packet is destined, which may or may not be the final destination in the system.
  • the MCC uses the destination mask to determine which transmit queue(s) the packet is destined for.
  • the MCC uses the priority and keep fields to determine which packets to discard in an over-committed slot.
  • the reserved field is unused in the current embodiment of the invention.
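  • The sketch below gives one possible C bit-field view of this tag; the patent fixes only the four fields and the 32-bit total, so the individual widths shown are illustrative assumptions.

```c
#include <stdint.h>

/* One illustrative packing of the 32-bit MCC tag (widths assumed:
 * 12 destination-slot bits, 3 priority bits, 1 keep bit, 16 reserved). */
struct mcc_tag {
    uint32_t dest_mask : 12;  /* one bit per destination slot, 0..11 */
    uint32_t priority  : 3;   /* used to choose discards when over-committed */
    uint32_t keep      : 1;   /* set = do not drop under congestion */
    uint32_t reserved  : 16;  /* unused in the current embodiment */
};

/* Example: a single multicast tag addressing slots 2 and 7. */
struct mcc_tag example_tag(void)
{
    struct mcc_tag t = {
        .dest_mask = (1u << 2) | (1u << 7),
        .priority  = 5,
        .keep      = 1,
        .reserved  = 0,
    };
    return t;
}
```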
  • the MCC has two independent transmit mode selectors, slot-to-channel mapping and virtual transmit mode. In slot-to-channel mapping, the MCC transparently maps SIDs to CIDs and software does not have to keep track of the mapping.
  • In virtual transmit mode, the MCC handles multicast packets semi-transparently.
  • the MCC takes a single F-bus stream and directs it to multiple channels.
  • the transmit ports in the MCC address virtual transmit processors (VTPs) rather than slots.
  • the F-bus interface directs the packet to the selected virtual transmit processor.
  • the VTP saves the Destination Mask field from the MCC tag and forwards the packet data (including the MCC tag) to the set of transmit queues indicated in the Destination Mask. All subsequent 64-byte "chunks" of the packet are sent by the slot processor using the same port ID, and so are directed to the same VTP.
  • the VTP forwards chunks of the packet to the set of transmit queues indicated in the Destination Mask field saved from the MCC tag.
  • When a chunk arrives with the EOP bit set, the VTP clears its destination mask. If the next chunk addressed to that port is not the start of a new packet (i.e., with the SOP bit set), the VTP does not forward the chunk to any queue.
  • the destination mask of the MCC tag enables efficient multicast transmission of packets through "latching."
  • the destination mask includes code for all designated destination slots. So, if a packet is meant for all twelve slots, only one packet need be sent.
  • the tag is delivered to all destinations encoded in the mask. If only a fraction of the slots are to receive the packet, only those slots are encoded into the destination mask.
  • the MCC maintains a set of "channel busy" bits which it uses to prevent multiple VTPs from sending packets to the same CID simultaneously.
  • This conflict prevention mechanism is not intended to assist the slot processor in management of busy channels, but rather to prevent complete corruption of packets in the event that the slot processor accidentally sends two packets to the same slot simultaneously.
  • When the VTPs get a new packet, they compare the destination CID mask with the channel busy bits. If any channel is busy, it is removed from the destination mask and an error is recorded for that CID. The VTP then sets the busy bits for all remaining destination channels and transmits the packet. When the VTP sees EOP on the F-bus for the packet, it clears the channel busy bits for its destination CIDs.
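  • A minimal C sketch of this busy-bit bookkeeping follows; the function and variable names are hypothetical.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_CHANNELS 12

static uint16_t channel_busy;        /* one "channel busy" bit per CID */

/* Hypothetical stand-in for the per-CID error record the text mentions. */
static void record_cid_error(int cid)
{
    fprintf(stderr, "busy-channel conflict on CID %d\n", cid);
}

/* Called when a VTP binds to a new packet: busy channels are removed
 * from the destination mask, an error is recorded for each, and the
 * remaining channels are claimed for the duration of the packet. */
uint16_t vtp_start_packet(uint16_t dest_mask)
{
    uint16_t conflicts = dest_mask & channel_busy;
    for (int cid = 0; cid < NUM_CHANNELS; cid++)
        if (conflicts & (1u << cid))
            record_cid_error(cid);
    dest_mask &= ~channel_busy;      /* transmit only on free channels */
    channel_busy |= dest_mask;       /* set busy bits for the rest */
    return dest_mask;
}

/* Called when the VTP sees EOP on the F-bus for this packet. */
void vtp_end_packet(uint16_t dest_mask)
{
    channel_busy &= ~dest_mask;      /* clear our channels' busy bits */
}
```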
  • the F-bus interface performs the I/O functions between the MCC and the remaining portion of the application module.
  • the application module adds a 32-bit packet tag (MCC tag), shown in Figure 10, to each data packet to be routed through the mesh.
  • MCC tag 32-bit packet tag
  • the data received or transmitted on the F-bus is up to 64 bits wide.
  • the F-bus interface adds 4 status bits to the transmit data to make a 68-bit data segment.
  • the F-bus interface drops the 68-bit data segment into the appropriate transmit FIFO as determined from the packet tag.
  • the data from a transmit FIFO is transferred to the associated data compressor where the 68-bit data segment is reduced to 10-bit segments.
  • the data is then passed to the associated serializer where the data is further reduced to a serial stream.
  • the serial stream is sent out the serial link to the backplane.
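  • The C fragment below sketches how such 68-bit segments might be assembled and steered to the transmit FIFOs named in the MCC tag; the helper name and the meaning of the four status bits are assumptions.

```c
#include <stdint.h>

#define NUM_CHANNELS 12

/* A 68-bit transmit unit: up to 64 data bits plus the 4 status bits
 * appended by the F-bus interface (their meaning is not enumerated in
 * the text; packet-boundary flags are a plausible use, assumed here). */
struct fbus_segment {
    uint64_t data;
    uint8_t  status;   /* low 4 bits used */
};

/* Hypothetical hook into a per-channel transmit FIFO; the data
 * compressor and serializer stages described above sit behind it. */
void tx_fifo_push(int cid, struct fbus_segment seg);

/* Drop one segment into every transmit FIFO selected by the MCC tag's
 * destination mask. */
void fbus_dispatch(uint16_t dest_mask, uint64_t data, uint8_t status)
{
    struct fbus_segment seg = { .data = data, .status = status & 0xF };
    for (int cid = 0; cid < NUM_CHANNELS; cid++)
        if (dest_mask & (1u << cid))
            tx_fifo_push(cid, seg);
}
```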
  • a Fast IP Processor (FIPP) is provided with 32/64 Mbytes of high-speed synchronous SDRAM, 8 Mbytes of high-speed synchronous SRAM, and boot flash.
  • the FIPP has a 32-bit PCI bus and a 64-bit FIFO bus (F-bus) .
  • the FIPP transfers packet data to and from all F-bus-connected devices. It provides IP forwarding in both unicast and multicast mode.
  • Routing tables are received over the management bus from the chassis route server.
  • the FIPP also provides higher layer functions such as filtering, and CoS/QoS.
  • Each line card has a clock subsystem that produces all the clocks necessary for each card. This will lock to the reference clock provided by the System Clock and Management Bus Arbitration Card.
  • Each card has hot-plug, power-on reset circuitry, and Sanity Timer functions. All cards have on-board DC-to-DC converters to go from the -48V rail in the backplane to whatever voltages are required for the application. Some cards (such as the CMTS card) likely will have two separate and isolated supplies to maximize the performance of the analog portions of the card.
  • Fig. 11 shows a generic switch header for the integrated switch.
  • the header is used to route data packets through the system.
  • the final destination may be either intra-chassis or inter-chassis.
  • the header type field indicates the header type used to route the packet through the network having one or more chassis systems.
  • the header type field is used to decode the header and provide information needed for packet forwarding.
  • the header type field may be used to indicate that the Destination Fabric Interface Address has logical ports.
  • the header type field is also used to indicate whether the packet is to be broadcast or unicast.
  • the header type field is used to indicate the relevant fields in the header.
  • the keep field indicates whether a packet can be dropped due to congestion.
  • the fragment field indicates packet fragmentation and whether the packet consists of two frames.
  • the priority field is used to indicate packet priority.
  • the encap type field is a one bit field that indicates whether further layer 2 processing is needed before the packet is forwarded. If the bit is set, L2 is present. If the bit is not set, L2 is not present.
  • the Mcast type field is a one bit field that indicates whether the packet is a broadcast or multicast packet. It may or may not be used depending on the circumstances.
  • the Dest FIA (Fabric Interface Address) type field indicates whether the destination FIA is in short form (i.e., <chassis/slot/port>) or in long form (i.e., <chassis/slot/port/logical port>).
  • the Src FIA type field is a one bit field that indicates whether the source FIA is in short form (i.e., <chassis/slot/port>) or in long form (i.e., <chassis/slot/port/logical port>).
  • This field may or may not be used depending on the circumstances. This field may be combined with the header type field.
  • the data type field is an x-bit field used for application to application communication using the switch layer.
  • the field identifies the packet destination.
  • the forwarding info field is an x-bit field that holds the Forwarding Table Revision and a forwarding information next hop field, a switch next hop that identifies which port the packet is to go out, along with the forward_table_entry key/id.
  • the Dest FIA field is an x-bit field that indicates the final destination of the packet. It contains chassis/slot/port and sometimes logical port information. A chassis value of 0 (zero) indicates the chassis holding the Master Agent. A port value of 0 (zero) indicates the receiver of the packet is an application module. The logical port may be used to indicate which stack/entity in the card is to receive the packet. All edge ports and ICL ports are therefore "1"-based.
  • the Src FIA field is an x-bit field that indicates the source of the packet. It is used by the route server to identify the source of incoming packets.
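  • One way to picture this header is as a C structure, as sketched below; the "x-bit" widths are not pinned down in the text, so the sizes chosen here are purely illustrative.

```c
#include <stdint.h>

/* Illustrative C view of the generic switch (BAS) header; field sizes
 * are assumptions, not the actual wire layout. */
struct fia {                   /* Fabric Interface Address */
    uint8_t chassis;           /* 0 = chassis holding the Master Agent */
    uint8_t slot;
    uint8_t port;              /* 0 = the receiver is an application module */
    uint8_t logical_port;      /* meaningful only in long-form addresses */
};

struct bas_header {
    uint8_t  header_type;      /* selects which fields below are relevant */
    uint8_t  priority;         /* packet priority */
    unsigned keep         : 1; /* clear = droppable under congestion */
    unsigned fragment     : 1; /* set = packet consists of two frames */
    unsigned encap        : 1; /* set = further L2 processing needed */
    unsigned mcast        : 1; /* set = broadcast/multicast */
    unsigned dst_fia_long : 1; /* long form <chassis/slot/port/logical port> */
    unsigned src_fia_long : 1;
    uint16_t data_type;        /* application-to-application demux key */
    uint32_t fwd_info;         /* forwarding table revision + next-hop key/id */
    struct fia dst;            /* final destination of the packet */
    struct fia src;            /* source, used by the route server */
};
```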
  • Figure 12 is a flow chart of the general packet forwarding process .
  • the module examines the BAS header, if one is present, to determine if the packet was addressed for the chassis to which the module is attached. If not, the application module looks up the destination chassis in the routing table and forwards the packet to the correct chassis. If the packet was addressed for the chassis, the application module examines the header to determine whether the packet was addressed to the module (or slot) . If not, the application module looks up the destination slot in the mapping table and forwards the packet to the correct application module. If the packet was addressed to the application module, the application module compares the forwarding table ID in the header to the local forwarding table revision. If there is a match, the module uses the pointer in the header to forward the packet on to its next destination.
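  • The sketch below restates this decision chain in C, passing the relevant header fields in directly so the fragment stands alone; the lookup helpers, the packing of the forwarding info field (revision in the high half, next-hop key in the low half), and the fallback taken on a stale table revision are assumptions.

```c
#include <stdint.h>

/* Hypothetical environment for the sketch. */
extern uint8_t  my_chassis, my_slot;
extern uint16_t local_table_revision;
void send_to_chassis(int chassis_route, void *pkt);
void send_to_slot(int slot, void *pkt);
void send_next_hop(uint32_t next_hop_key, void *pkt);
void full_lookup(void *pkt);             /* fallback behavior is assumed */
int  route_lookup_chassis(uint8_t chassis);
int  map_lookup_slot(uint8_t slot);

void forward_packet(uint8_t dst_chassis, uint8_t dst_slot,
                    uint32_t fwd_info, void *pkt)
{
    if (dst_chassis != my_chassis) {
        /* wrong chassis: look up the destination chassis and relay */
        send_to_chassis(route_lookup_chassis(dst_chassis), pkt);
    } else if (dst_slot != my_slot) {
        /* right chassis, different slot: consult the mapping table */
        send_to_slot(map_lookup_slot(dst_slot), pkt);
    } else if ((uint16_t)(fwd_info >> 16) == local_table_revision) {
        /* revision matches: use the next-hop pointer from the header */
        send_next_hop(fwd_info & 0xFFFFu, pkt);
    } else {
        full_lookup(pkt);                /* stale revision (assumed path) */
    }
}
```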
  • Figure 13 is a diagram of an incoming ICL packet.
  • the packet has a BAS header, an encap field that may be either set or not (L2 or NULL), an IP field, and a data field for the data.
  • Figure 14 is a diagram of a header for the packet of Figure 13.
  • the header type may be 1 or 2.
  • a header type of 1 indicates an FIA field that is of the format chassis/slot/port both for destination and source.
  • a header type of 2 indicates an FIA field of the format chassis/slot/port/logical port for both destination and source.
  • the keep field is not used.
  • the priority field is not used.
  • the fragment field is not used.
  • the next hop field is not used.
  • the Encap field is 0 or 1.
  • the Mcast field is not used.
  • the DST FIA type may be 0 or 1.
  • the SRC FIA type may be 0 or 1.
  • the BAS TTL field is not used.
  • the forward info field is used.
  • And the DST and SRC FIA fields are used.
  • the cable modem termination system comprises an asymmetrical communication system with two major components: a head end component and a client end component.
  • the head end component, known as the cable modem termination system (CMTS), transmits a signal to a cable modem (CM) that is substantially different from the signal it receives from the CM.
  • CMTS cable modem termination system
  • the CMTS transmits a 5 megabaud, 64/256 QAM signal in the 54 to 850 MHz band, and the CM transmits a 160 kilobaud to 2.56 megabaud, QPSK/16 QAM signal in the 5 to 42 MHz band. Because the transmit signal is different from the receive signal, it is impossible to perform physical layer loopback for self-diagnostic testing.
  • FIG 18 shows the chassis 200 of Figure 4 with RF signal lines 906 running through the backplane 420 from the CMTS's 215, 900, 902, 904 to a chassis controller module 908.
  • RF signal lines run from each of the application slots to each of the chassis controllers 908, 910.
  • the figure is simplified for clarity.
  • Each of the chassis controllers 908, 910 has an embedded cable modem system 912, 914 capable of receiving a signal from one of the CMTS's and generating a return signal to the sending CMTS.
  • the embedded cable modem also contains diagnostic functions.
  • the signal lines 906 in the backplane carry the radio frequency (RF) signals from the CMTS to the embedded CM and back again. This type of RF signal routing on a backplane requires a high degree of care in the implementation of the backplane to ensure the RF signals are not contaminated by any of the digital signals on the backplane.
  • RF radio frequency
  • FIG 19 shows a CMTS application module 215 connected through the backplane 420 to an embedded cable modem 912.
  • the CMTS application module 215 has a downstream modulator 920 and four upstream modulators 922, 924, 926, 928.
  • a packet processing and backplane interface 930 sends and receives data at the backplane 420.
  • the data is processed at the CMTS MAC layer 932.
  • the embedded cable modem 912 receives the downstream signal from the CMTS application module 215 at a first 12-position relay 936 (having one position for each application module).
  • the embedded cable modem 912 sends return signals through a second 12-position relay 937 to the CMTS application module 215.
  • the embedded cable modem 912 also receives CMTS signals at the second 12-position relay 937 for diagnostic purposes.
  • the downstream RF signal goes through a first variable attenuator 938 and then a first summer 940 before being demodulated at a downstream demodulator 942.
  • the signal is processed at the cable modem MAC layer 944. From the MAC layer 944, the signal may be processed at a CM packet processing and backplane interface 946 and then packets may be transmitted to the backplane 420. The signal could also be sent through an upstream burst modulator 948 that modulates the return signal.
  • the return signal is then sent through a second summer 950, followed by a second variable attenuator 952, and then through the second 12-position relay 937, where it is returned through an RF signal line to the CMTS application module 215.
  • the signal can be directed to the system interface 955 where, in the preferred embodiment of the invention, the diagnostic software resides.
  • the diagnostic software may reside in the chassis controller outside the embedded cable modem.
  • the diagnostic software may reside on an application module in the chassis, such as the CMTS module.
  • the diagnostic software may reside at the central network manager or some other remote, off-chassis location.
  • the embedded cable modem 912 has a calibrated broadband noise source 954 that generates a noise signal that can be added to the downstream signal at the first summer 940 and to the upstream signal at the second summer 950.
  • the calibrated noise source 954 is used to generate test signals for carrier to noise diagnostics.
  • outgoing packets flow from the CMTS backplane interface 930, to the CMTS MAC 932, to the CMTS downstream modulator 920 where they are modulated onto a carrier in the range of 54 to 850 MHz.
  • the resulting signal is then routed to the external hybrid fiber coax (HFC) where it is sent to the downstream CMs, including the embedded cable modem.
  • Upstream packets are modulated and transmitted from the CMs and the embedded cable modem in the range of 5-42 MHz onto the HFC cabling into one of the upstream ports of the CMTS, where the signal is routed into one of the upstream demodulators.
  • the embedded cable modem is able to perform diagnostics on both the downstream and upstream signal as well as generate test signals to the CMTS.
  • the embedded cable modem may reside on each CMTS application module.
  • the embedded cable modem may itself be an application module.
  • Fig. 19 shows a scanning receiver spectrum analyzer system 960 in the embedded cable modem 912.
  • the integrated spectrum analyzer is a system for monitoring the spectrum of RF energy on the upstream cable plant for the purpose of operating a group of cable modems in frequency bands that are the most interference-free.
  • the spectrum analyzer surveys the entire upstream spectrum (typically 5 - 42 MHz) to determine the noise level at each frequency, then weighs this data against current operating conditions (e.g., bit error rates) to determine the best possible alternative channel frequency for modems on an upstream node, and then waits for a trigger condition to occur which indicates a need for one or more of the upstream demodulators and their corresponding user modems to change frequency on the upstream node.
  • a node is equivalent to a physical wire or fiber, onto which signals from multiple user modems are aggregated and sent to a port of the CMTS.
  • Fig. 20 is a detailed block diagram of the spectrum analyzer system of Fig. 19.
  • the CMTS application module 215 has four A/D converters 970, 972, 974, 976 in addition to four receiver/demodulators 922, 924, 926, 928.
  • the receiver/demodulators are in actuality multiplexed to the A/D converters.
  • the figure is simplified for clarity.
  • the spectrum analyzer system 960 taps the signals of each upstream channel between the A/D converters and the receiver/demodulators.
  • the spectrum analyzer system 960 has a digital switch or multiplexer 986, a spectrum analyzer 992, a bit error rate (BER) meter 994, a peak detector 996, a power detector 998, an AM demodulator 1000, a frequency subset demodulator 978, and an autocorrelation processor 980.
  • the spectrum analyzer 992 has a tuner (adjustable band pass filter) 988 and a signal detector 990 to detect amplitude.
  • the system has a graphical output to a graphical user interface 1002 and an audible output to an audible user interface 1004 which allow a user to observe the noise spectrum.
  • the system further includes an alarm output 1006 which can be set to trigger if noise levels exceed predetermined levels for predetermined durations .
  • the frequency subset demodulator 978 performs phase, frequency, SSB, amplitude, or other demodulation processes in selected subsets of the upstream spectrum for the purpose of allowing a user to listen through the audible user interface 1004 to the noise to help determine its origin.
  • the autocorrelation processor 980 processes samples of the RF spectrum, and provides an indication of the location of modulated, non-random interference, thus assisting the user in determining the source of unwanted signals.
  • the AM demodulator is configured to determine the level of AC hum on any portion of the RF spectrum present on a node.
  • the spectrum analyzer system taps off the upstream digital signals in the CMTS. The four channels are switched by the digital switch.
  • the spectrum analyzer tunes to a range in the spectrum for analysis .
  • the BER meter holds a threshold value for noise in a signal. If the noise is above the threshold, the BER meter counts it, resulting in a ratio of good symbols to bad symbols in the signal. Burst noise is detected in this way because the swept filter analyzer remains at a range for a period of time so that the range may be adequately analyzed.
  • the spectrum analyzer system monitors the signal to noise ratio and bit error rate on a current channel. When the noise ratio and bit error rate exceed a configurable threshold, the system obtains a new channel frequency from the channel selection method, and instructs the modems to move to that frequency. The method of selecting a new channel frequency involves weighing a variety of inputs. The spectrum analyzer system then determines which frequency statistically offers the best chance of improving the bit error rate performance of the modems in the current upstream node.
  • the factors examined in selecting a new channel frequency are: the instantaneous noise level at each frequency, the peak power accumulated over time at each frequency, the average noise level at each frequency, the estimated bit error rate which a signal would see if it were operating at a selected frequency, the histogram of the data, which is the amount of time that the noise level at each frequency exceeds a group of predetermined levels, forbidden frequency bands, which are frequencies where cable modems are prohibited from operating as configured by the user, and past histogram data, which can, for example, be used to find frequencies that have high noise only during certain parts of the day.
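  • A compact C sketch of such a weighted selection follows; the scan resolution, the weights, and the linear cost model are illustrative assumptions, since the text lists the factors but not how they are combined.

```c
#include <stdbool.h>

#define NUM_FREQS 370   /* e.g. 5-42 MHz scanned in 100 kHz steps (assumed) */

/* Per-frequency statistics gathered by the scanning receiver. */
struct freq_stats {
    double inst_noise;     /* instantaneous noise level */
    double peak_power;     /* peak power accumulated over time */
    double avg_noise;      /* average noise level */
    double est_ber;        /* estimated BER if modems operated here */
    double hist_penalty;   /* time spent above predetermined noise levels,
                              including past (time-of-day) histogram data */
    bool   forbidden;      /* user-configured forbidden band */
};

/* Return the candidate frequency index with the lowest weighted cost;
 * the weights below are tuning parameters, not values from the patent. */
int select_upstream_channel(const struct freq_stats s[NUM_FREQS])
{
    int best = -1;
    double best_cost = 1e300;
    for (int i = 0; i < NUM_FREQS; i++) {
        if (s[i].forbidden)
            continue;            /* modems may not operate here */
        double cost = 1.0 * s[i].inst_noise
                    + 0.5 * s[i].peak_power
                    + 1.0 * s[i].avg_noise
                    + 4.0 * s[i].est_ber
                    + 2.0 * s[i].hist_penalty;
        if (cost < best_cost) {
            best_cost = cost;
            best = i;
        }
    }
    return best;   /* statistically best alternative channel */
}
```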
  • Figure 21 shows a graph of ingress characterization.
  • the spectrum analyzer system can display the spectrum analyzer results graphically on the graphical user interface.
  • Two types of ingress can be isolated: burst noise and frequency domain ingress.
  • Burst noise can be characterized by burst width, amplitude, and repetition rate.
  • Frequency domain ingress can be characterized by bandwidth of modulation, amplitude, and time duration. These parameters can be correlated to bad packet rate vs. time of day. This information is useful for deciding which frequencies are the most interference-free. This information is also useful for providing graphical data about worst interference sources vs. time of day and worst interference sources vs. packet loss.
  • the spectrum analyzer system was described as part of the embedded cable modem on the chassis controller.
  • the spectrum analyzer system may be on a CMTS application module, or an application module by itself, or an application module plugged into the chassis controller without the embedded modem.

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Selective Calling Equipment (AREA)

Abstract

The scanning receiver spectrum analyzer system is a system for monitoring the spectrum of RF energy on the upstream cable plant for the purpose of operating a group of cable modems in frequency bands that are the most interference-free. The spectrum analyzer surveys the entire upstream spectrum (typically 5 - 42 MHz) to determine the noise level at each frequency, then weighs this data against current operating conditions (e.g., bit error rates) to determine the best possible alternative channel frequency for modems on an upstream node, and then waits for a trigger condition to occur which indicates a need for the modems to change frequency.

Description

SYSTEM AND PROCESS FOR RETURN CHANNEL SPECTRUM MANAGER
FIELD OF THE INVENTION
This invention relates generally to networking data processing systems and, more particularly, to a broadband network environment, for example, one using a SONET backbone and Hybrid Fiber-Coax(ial cable) ("HFC") to connect users to the backbone. An emerging hardware/software standard for the HFC environment is DOCSIS (Data Over Cable Service Interface Specification) from CableLabs.
BACKGROUND OF THE INVENTION
In a current configuration of wide-area delivery of data to HFC systems (each connected to 200 households/clients), the head-end server is connected to a SONET ring via a multiplexer drop on the ring (see Fig. 1). These multiplexers currently cost some $50,000 in addition to the head-end server, and scaling up service to a community may require new multiplexers and servers.
The failure of a component on the head-end server can take an entire "downstream" (from the head-end to the end- user) sub-network out of communication with the world.
Attempts have been made to integrate systems in order to reduce costs and to ease system management. A current integrated data delivery system is shown in Figure 2. Figure 2 shows a system having a reverse path monitoring system, an Ethernet switch, a router, modulators and upconverters, a provisioning system, telephony parts, and a plurality of CMTS's (cable modem termination systems). This type of system typically has multiple vendors for its multiple systems, has different management systems, a large footprint, high power requirements and high operating costs.
A typical broadband cable network for delivery of voice and data is shown in Figure 3. Two OC-12 port interface servers are each connected to one of two backbone routers which are in turn networked to two switches. The switches are networked to CMTS head-end routers. The CMTS head-end routers are connected to a plurality of optical nodes. The switches are also connected to a plurality of telephone trunk gateways which are in turn connected to the public switched telephone network (PSTN). As with the "integrated" system shown in Figure 2, this type of network also typically has multiple vendors for its multiple systems, has different management systems, a large footprint, high power requirements and high operating costs. In order to facilitate an effective integrated solution, it is desirable to have an integrated diagnostic system. The upstream signals from the cable modems are, for a variety of reasons, noisy. Some of the noise can be predicted and some can be avoided. When the signal is too noisy, the CMTS cannot operate. Therefore, the upstream signal needs to be monitored and the upstream data frequency ranges need to be adjusted accordingly.
It is desirable to have an integrated solution to reduce the size of the system, its power needs and its costs, as well as to give the data delivery system greater consistency.
It is an object of the present invention to provide a system and process for electrical interconnect for broadband delivery of high-quality voice, data, and video services.
It is another object of the present invention to provide a system and process for a cable access platform having high network reliability with the ability to reliably support lifeline telephony services and the ability to supply tiered voice and data services.
It is another object of the present invention to provide a system and process for a secure, scalable network switch.
SUMMARY OF THE INVENTION
The problems of providing an integrated spectrum analyzer for delivery of voice and data in a compact area for an integrated switch are solved by the present invention of a return channel spectrum manager. The scanning receiver spectrum analyzer system is a system for monitoring the spectrum of RF energy on the upstream cable plant for the purpose of operating a group of cable modems in frequency bands that are the most interference-free. The spectrum analyzer surveys the entire upstream spectrum (typically 5 - 42 MHz) to determine the noise level at each frequency, then weighs this data against current operating conditions (e.g., bit error rates) to determine the best possible alternative channel frequency for modems on an upstream node, and then waits for a trigger condition to occur which indicates a need for the modems to change frequency.
The present invention together with the above and other advantages may best be understood from the following detailed description of the embodiments of the invention illustrated in the drawings, wherein:
BRIEF DESCRIPTION OF THE DRAWINGS Fig. 1 shows a prior art network on a SONET ring; Fig. 2 shows a prior art data delivery system; Fig. 3 shows a prior art data delivery network; Fig. 4 is a block diagram of a chassis according to principles of the invention; Fig. 5 shows an integrated cable infrastructure having the chassis of Fig. 4;
Fig. 6 is a block diagram of the application cards, the backplane and a portion of the interconnections between them in the chassis of Fig. 4; Fig. 7 is a schematic diagram of the backplane interconnections, including the switching mesh;
Fig. 8 is a block diagram of two exemplary slots showing differential pair connections between the slots;
Fig. 9 is a block diagram of the MCC chip in an application module according to principles of the present invention;
Fig. 10 is a diagram of a packet tag; Fig. 11 is a block diagram of a generic switch packet header;
Fig. 12 is a flow chart of data transmission through the backplane; Fig. 13 is a block diagram of an incoming ICL packet;
Fig. 14 is a block diagram of a header for the ICL packet of Fig. 13;
Fig. 15 shows example mapping tables mapping channels to backplane slots according to principles of the present invention;
Fig. 16 is a block diagram of a bus arbitration application module connected in the backplane of the present invention;
Fig. 17 is a state diagram of bus arbitration in the application module of Fig. 16;
Fig. 18 is a block diagram of the chassis of Fig. 4 showing a subset of RF signal lines in the backplane according to principles of the invention;
Fig. 19 is a block diagram of a CMTS application module and an embedded cable modem connected through the backplane according to principles of the invention;
Fig. 20 is a block diagram of the spectrum analyzer according to principles of the invention; and,
Fig. 21 is a graph showing ingress characterization.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
Figure 4 shows a chassis 200 operating according to principles of the present invention. The chassis 200 integrates a plurality of network applications into a single switch system. The invention is a fully-meshed OSI Layer 3/4 IP-switch with high performance packet forwarding, filtering and QoS/CoS (Quality of Service/Class of Service) capabilities using low-level embedded software controlled by a cluster manager in a chassis controller. Higher-level software resides in the cluster manager, including router server functions (RIPv1, RIPv2, OSPF, etc.), network management (SNMP V1/V2), security, DHCP, LDAP, and remote access software (VPNs, PPTP, L2TP, and PPP), and can be readily modified or upgraded. In the present embodiment of the invention, the chassis 200 has fourteen (14) slots for modules. Twelve of those fourteen slots hold application modules 205, and two slots hold chassis controller modules 210. Each application module has an on-board DC-DC converter and is "hot-pluggable" into the chassis. The chassis controller modules 210 are for redundant system clock/bus arbitration. Examples of applications that may be integrated in the chassis are a CMTS module 215, an Ethernet module 220, a SONET module 225, and a telephony application 230. Another application may be an interchassis link (ICL) port 235 through which the chassis may be linked to another chassis.
Fig. 5 shows an integrated cable infrastructure 260 having the chassis 200 of Fig. 4. The chassis 200 is part of a regional hub 262 (also called the "head-end") for voice and data delivery. The hub 262 includes a video controller application 264, a video server 266, Web/cache servers 268, an operation support system (OSS) 270, a combiner 271 and the chassis 200. The chassis 200 acts as an IP access switch. The chassis 200 is connected to a SONET ring 272, outside the hub 262, having a connection to the Internet 274, and a connection to the Public Switched Telephone Network (PSTN) 276. The chassis 200 and the video-controller application 264 are attached to the combiner 271. The combiner 271 is connected by an HFC link 278 to cable customers and provides IP voice, data, video and fax services. At least 2000 cable customers may be linked to the head-end by the HFC link 278. The chassis 200 can support a plurality of HFC links, and a plurality of chassises may also be networked together (as described below) to support thousands of cable customers.
By convention today, there is one wide-band channel for transmission (downloading) to users (which may be desktop computers, facsimile machines or telephone sets) and four much narrower channels for uploading. This is processed by the HFC cards with duplexing at an O/E node. The local HFC cable system or loop may be a coaxial cable distribution network with a drop to a cable modem. Figure 6 shows application modules connected to a backplane 420 of the chassis 200 of Figure 4. In the present embodiment of the invention, the backplane is implemented as a 24-layer printed wiring board and includes 144 pairs of unidirectional differential-pair connections, each pair directly connecting input and output terminals of each of a maximum of twelve application modules with output and input terminals of each other module and itself. Each application module interfaces with the backplane through a Mesh Communication Chip (MCC) 424 through these terminals. Each application module is also connected to a chassis management bus 432 which provides the modules with a connection to the chassis controllers 428, 430. Each MCC 424 has twelve (12) serial link interfaces that run to the backplane 420. Eleven of the serial links on each application module are for connecting the application module to every other application module in the chassis. One link is for connecting the module with itself, i.e., a loop-back. The backplane is fully meshed, meaning that every application module has a direct link to every other application module in the chassis through the serial links. Only a portion of the connections is shown in Figure 6 as an example. The backplane mesh is shown in Figure 7.
The 12 serial-link channels of the MCC are numbered 0 to 11. This number is referred to as the channel ID, or CID. The slots on the backplane are also numbered from 0 to 11 (slot ID, or SID). The chassis system does not require, however, that a channel 0 be wired to a slot 0 on the backplane. A serial link may be connected to any slot. The slot IDs are dynamically configured depending on system topology. This provides freedom in the backplane wiring layout which might otherwise require layers in addition to the twenty-four layers in the present backplane. The application module reads the slot ID of the slot into which it is inserted. The application module sends that slot ID out its serial lines in an idle stream in between data transmissions. The application module also includes the slot ID in each data transmission.
Figure 15 shows examples of mapping tables of channels in cards to backplane slots. Each card stores a portion of the table, that is, the table row concerning the particular card. The table row is stored in the MCC.
Figure 16 shows a management bus arbitration application module connected to the backplane. The backplane contains two separate management buses for failure protection. Each application module in the chassis, including the two chassis controllers as well as the twelve application modules, can use either or both management buses.
The management bus is used for low-speed data transfer within the chassis and generally carries control, statistical, and configuration information, and data from the chassis controller modules to the application modules in the chassis.
The implementation of the management bus consists of a four bit data path, a transmit clock, a transmit enable signal, a collision control signal, and a four bit arbitration bus. As seen in Figure 16, the bus controller has a 10/100 MAC device, a receive FIFO, bus transceiver logics, and a programmable logic device ("PLD").
The data path on the management bus is a four-bit MII (Media Independent Interface) standard interface for 10/100 Ethernet MACs. The bus mimics the operation of a standard 100 Mbit Ethernet bus interface so that the MAC functionality can be exploited. The programmable logic device contains a state machine that performs bus arbitration. Figure 17 shows the state diagram for the state machine in the programmable logic device for the management bus. The arbitration lines determine which module has control of the bus by using open-collector logic. The pull-ups for the arbitration bus reside on the chassis controller modules. Each slot places its slot ID on the arbitration lines to request the bus. During transmission of the preamble of data to be transmitted, if the arbitration is corrupted, the bus controller assumes that another slot has concurrently requested the bus, and the state machine within the PLD aborts the transfer operation by forcing a collision signal active for both the bus and the local MAC device. As other modules detect the collision signal active on the bus, the collision line on each local MAC is forced to the collision state, which allows the back-off algorithm within the MAC to determine the next transmission time. If a collision is not detected, the data is latched into the receive FIFO of each module, and the TX_Enable signal is used to qualify data from the bus. The state machine waits four clock cycles during the preamble of the transmit state, and four clock cycles during the collision state, to allow the other modules to synchronize to the state of the bus.

Backplane Architecture

Fig. 7 shows the internal backplane architecture of the current embodiment of the switch of the invention that was shown in exemplary fashion in Fig. 6. One feature is the full-mesh interconnection between slots shown in the region 505. Slots are shown by vertical lines in Figure 7. The mesh is implemented using 144 pairs of differential pairs embedded in the backplane as shown in Figure 8. Each slot thus has a full-duplex serial path to every other slot in the system. There are n(n-1) non-looped-back links in the system, that is, 132 links, doubled for the duplex pair configuration for a total of 264 differential pairs (or, further doubled, 528 wires) in the backplane to create the backplane mesh in the present embodiment of the invention. Each differential pair is able to support data throughput of more than 1 gigabit per second. In the implementation of the current invention, the clock signal is embedded in the serial signaling, obviating the need for separate pairs (quads) for clock distribution. Because the data paths are independent, some pairs of cards in the chassis may be switching (ATM) cells while others switch (IP) packets. Also, each slot is capable of transmitting on all 11 of its serial links at once, a feature useful for broadcasting. All the slots transmitting on all their serial lines achieve a peak bandwidth of 132 gigabits per second. Sustained bandwidth depends on system configuration. The mesh provides a fully redundant connection between the application cards in the backplane. One connection can fail without affecting the ability of the cards to communicate. Routing tables are stored in the chassis controllers. The routing tables are updated when the application modules report to the chassis controllers that no data is being received on a particular serial link. If, for example, the connection between application module 1 and application module 2 fails, data addressed to application module 2 coming in through application module 1 is routed to another application module, for instance application module 3, which then forwards the data to application module 2, as in the sketch below.
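As an illustration of this rerouting, the following minimal sketch relays traffic for an unreachable slot through an intermediate module. The selection rule and names are assumptions; the real routing tables live on the chassis controllers.

```python
# Hedged sketch of mesh rerouting around a failed backplane link.

def next_hop(src: int, dst: int, failed_links: set) -> int:
    """Slot to send to next; `dst` itself if the direct link is up."""
    if frozenset((src, dst)) not in failed_links:
        return dst
    for relay in range(12):  # twelve application slots, 0..11
        if relay in (src, dst):
            continue
        if (frozenset((src, relay)) not in failed_links
                and frozenset((relay, dst)) not in failed_links):
            return relay  # any surviving module works, e.g. module 3
    raise RuntimeError("destination unreachable")

failed = {frozenset((1, 2))}   # link between modules 1 and 2 is down
print(next_hop(1, 2, failed))  # relays via another module instead
```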
The bus-connected backplane region 525 includes three bus systems. The management/control bus 530 is provided for out-of-band communication of signaling, control, and management information. A redundant backup for a management bus failure is the mesh interconnect fabric 505. In the current implementation, the management bus provides 32-bit 10-20 MHz transfers, operating as a packet bus. Arbitration is centralized on the system clock module 102 (clock A). Any-slot-to-any-slot communication is allowed, with broadcast and multicast also supported. The bus drivers are integrated on the System Bus FPGA/ASIC.
A TDM (Time Division Multiplexing) fabric 535 is also provided for telephony applications. Alternative approaches include the use of a DS0 fabric with 32 TDM highways (sixteen full-duplex, 2048 FDX timeslots, or approximately 3 T3s) using the H.110 standard, or a SONET ATM (Asynchronous Transfer Mode) fabric.
Miscellaneous static signals may also be distributed in the bus-connected backplane region 540. Slot ID, clock failure and management bus arbitration failure may be signaled.
A star interconnect region 545 provides independent clock distribution from redundant clocks 102 and 103. The static signals on backplane bus 540 tell the system modules which system clock and bus arbitration slot is active. Two clock distribution networks are supported: a reference clock from which other clocks are synthesized, and a TDM bus clock, depending on the TDM bus architecture chosen. Both clocks are synchronized to an internal Stratum 3/4 oscillator or an externally provided BITS (Building Integrated Timing Supply).
Fig. 8 shows a first connection point on a first MCC on a first module, MCC A 350, and a second connection point on a second MCC on a second module, MCC B 352, and connections 354, 355, 356, 357 between them. The connections run through a backplane mesh 360 according to the present invention. There are transmit 362, 364 and receive 366, 368 channels at each MCC 350, 352, and each channel has a positive and a negative connection. In all, each point on a module has four connections between it and every other point due to the backplane mesh. The differential transmission line impedance and length are controlled to ensure signal integrity and high-speed operation.

Fig. 9 is a block diagram of the MCC chip. An F-bus interface 805 connects the MCC 300 to the FIFO bus (F-bus). Twelve transmit FIFOs 810 and twelve receive FIFOs 815 are connected to the F-bus interface 805. Each transmit FIFO has a data compressor (12 data compressors in all, 820), and each receive FIFO has a data expander (12 data expanders in all, 825). Twelve serializer/deserializers 830 serve the data compressors 820 and data expanders 825, one compressor and one expander for each. A channel in the MCC is defined as a serial link together with its encoding/decoding logic, transmit queue and receive queue. The serial lines running from the channels connect to the backplane mesh. All the channels can transmit data at the same time. A current implementation of the invention uses a Mesh Communication Chip to interconnect up to thirteen F-buses in a full mesh using serial link technology. Each MCC has two F-bus interfaces and twelve serial link interfaces. The MCC transmits and receives packets on the F-buses in programmable size increments from 64 bytes to entire packets. It contains twelve virtual transmit processors (VTPs) which take packets from the F-bus and send them out the serial links, allowing twelve outgoing packets simultaneously. The VTPs read the MCC tag on the front of the packet and dynamically bind themselves to the destination slot(s) indicated in the header. The card/slot-specific processor, card/slot-specific MAC/PHY pair (Ethernet, SONET, HFC, etc.) and an MCC communicate on a bi-directional F-bus (or multiple unidirectional F-buses). The packet transmit path is from the PHY/MAC to the processor, then from the processor to the MCC and out the mesh. The processor does Layer 3 and Layer 4 look-ups in the FIPP to determine the packet's destination and Quality of Service (QoS), modifies the header as necessary, and prepends the MCC tag to the packet before sending it to the MCC.
The packet receive path is from the mesh to the MCC and on to the processor, then from the processor to the MAC/PHY and out the channel. The processor strips off the MCC tag before sending the packet on to the MAC.

A first data flow control mechanism in the present invention takes advantage of the duplex pair configuration of the connections in the backplane and connections to the modules. The MCCs have a predetermined fullness threshold for the FIFOs. If a receive FIFO fills to the predetermined threshold, a code is transmitted over the transmit channel of the duplex pair to stop sending data. The codes are designed to DC-balance the signals on the transmission lines and to enable the detection of errors. The codes in the present implementation of the invention are 16B/20B codes; however, other codes may be used within the scope of the present invention. The MCC sends an I1 or I2 idle code with the XOFF bit set to turn off the data flow. This message is included in the data stream transmitted on the transmit channel. If the FIFO falls below the predetermined threshold, the MCC clears the stop message by sending an I1 or I2 code with the XOFF bit cleared. The efficient flow control prevents low-depth FIFOs from overrunning, thereby allowing small FIFOs in ASICs, for example 512 bytes, to be used. This reduces microchip costs in the system. (A sketch of this flow control appears after this discussion.)

Fig. 10 shows a packet tag, also called the MCC tag. The MCC tag is a 32-bit tag used to route a packet through the backplane mesh. The tag is added to the front of the packet by the slot processor before sending it to the MCC. The tag has four fields: a destination mask field, a priority field, a keep field, and a reserved field. The destination mask field holds the mask of slots in the current chassis to which the packet is destined, which may or may not be the final destination in the system. For a transmit packet, the MCC uses the destination mask to determine which transmit queue(s) the packet is destined for. For a receive packet, the MCC uses the priority and keep fields to determine which packets to discard in an over-committed slot. The reserved field is unused in the current embodiment of the invention. The MCC has two independent transmit mode selectors, slot-to-channel mapping and virtual transmit mode. In slot-to-channel mapping, the MCC transparently maps SIDs to CIDs and software does not have to keep track of the mapping. In virtual transmit mode, the MCC handles multicast packets semi-transparently. The MCC takes a single F-bus stream and directs it to multiple channels. The transmit ports in the MCC address virtual transmit processors (VTPs) rather than slots. The F-bus interface directs the packet to the selected virtual transmit processor. The VTP saves the Destination Mask field from the MCC tag and forwards the packet data (including the MCC tag) to the set of transmit queues indicated in the Destination Mask. All subsequent 64-byte "chunks" of the packet are sent by the slot processor using the same port ID, and so are directed to the same VTP. The VTP forwards chunks of the packet to the set of transmit queues indicated in the Destination Mask field saved from the MCC tag. When a chunk arrives with the EOP bit set, the VTP clears its destination mask. If the next chunk addressed to that port is not the start of a new packet (i.e., with the SOP bit set), the VTP does not forward the chunk to any queue.
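A minimal sketch of the XON/XOFF behavior follows. The 512-byte depth comes from the text; the specific threshold, the names, and the modeling of the I1/I2 idle codes as a boolean callback are illustrative assumptions.

```python
# Hedged sketch of per-channel flow control: when the receive FIFO
# crosses a fullness threshold, an idle code with XOFF set goes out the
# paired transmit channel; when it drains below, XOFF is cleared.

FIFO_DEPTH = 512          # small ASIC FIFO, per the description
XOFF_THRESHOLD = 384      # illustrative threshold; not given in the text

class ReceiveChannel:
    def __init__(self):
        self.fifo = []
        self.xoff_sent = False

    def on_data(self, segment, send_idle_code):
        self.fifo.append(segment)
        if len(self.fifo) >= XOFF_THRESHOLD and not self.xoff_sent:
            send_idle_code(xoff=True)    # I1/I2 code with XOFF bit set
            self.xoff_sent = True

    def drain(self, send_idle_code):
        if self.fifo:
            self.fifo.pop(0)
        if len(self.fifo) < XOFF_THRESHOLD and self.xoff_sent:
            send_idle_code(xoff=False)   # idle code with XOFF bit cleared
            self.xoff_sent = False

ch = ReceiveChannel()
for i in range(XOFF_THRESHOLD):          # fill to the threshold
    ch.on_data(i, lambda xoff: print("XOFF" if xoff else "XON"))
ch.drain(lambda xoff: print("XOFF" if xoff else "XON"))  # drops below it
```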
The destination mask of the MCC tag enables efficient multicast transmission of packets through "latching." The destination mask includes a code for all designated destination slots. So, if a packet is meant for all twelve slots, only one packet need be sent. The tag is delivered to all destinations encoded in the mask. If only a fraction of the slots are to receive the packet, only those slots are encoded into the destination mask. The sketch below illustrates the idea.
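To make the latching concrete, here is a hedged sketch of packing and reading a 32-bit tag. The text gives the four fields but not their bit widths, so the 12-bit destination mask (one bit per slot) and the other widths below are assumptions.

```python
# Hedged sketch of an MCC tag: destination mask, priority, keep, reserved.
# Field widths are assumed for illustration, not taken from the patent.

DEST_BITS, PRIO_BITS, KEEP_BITS = 12, 3, 1

def pack_tag(dest_slots, priority=0, keep=False):
    """Build a 32-bit tag addressing every slot in `dest_slots` at once."""
    mask = 0
    for slot in dest_slots:
        mask |= 1 << slot                 # one bit per destination slot
    tag = mask
    tag |= (priority & ((1 << PRIO_BITS) - 1)) << DEST_BITS
    tag |= (1 << (DEST_BITS + PRIO_BITS)) if keep else 0
    return tag                            # upper bits stay reserved/zero

def destinations(tag):
    return [s for s in range(DEST_BITS) if tag & (1 << s)]

tag = pack_tag({0, 4, 11}, priority=5, keep=True)  # one packet, three slots
print(destinations(tag))                           # -> [0, 4, 11]
```

The twelve mask bits mirror the twelve chassis slots, which is what lets a single transmitted packet fan out to every destination encoded in the mask.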
The MCC maintains a set of "channel busy" bits which it uses to prevent multiple VTPs from sending packets to the same CID simultaneously. This conflict prevention mechanism is not intended to assist the slot processor in management of busy channels, but rather to prevent complete corruption of packets in the event that the slot processor accidentally sends two packets to the same slot simultaneously. When the VTPs get a new packet, they compare the destination CID mask with the channel busy bits. If any channel is busy, it is removed from the destination mask and an error is recorded for that CID. The VTP then sets all the busy bits for all remaining destination channels and transmits the packet. When the VTP sees EOP on the F-bus for the packet, it clears the channel busy bits for its destination CIDs.
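The busy-bit guard can be sketched as follows; error recording is reduced to a per-channel counter, and all names are illustrative.

```python
# Hedged sketch of the channel-busy guard: a VTP drops any destination
# whose channel is already transmitting, records an error, then claims
# the remaining channels for the duration of the packet.

class BusyGuard:
    def __init__(self, num_channels=12):
        self.busy = [False] * num_channels
        self.errors = [0] * num_channels

    def start_packet(self, dest_cids):
        granted = set()
        for cid in dest_cids:
            if self.busy[cid]:
                self.errors[cid] += 1     # collision with another VTP
            else:
                granted.add(cid)
        for cid in granted:
            self.busy[cid] = True         # claim the surviving channels
        return granted

    def end_packet(self, granted):        # called when EOP is seen
        for cid in granted:
            self.busy[cid] = False

guard = BusyGuard()
first = guard.start_packet({2, 5})
second = guard.start_packet({5, 7})      # channel 5 busy: only 7 granted
print(sorted(second), guard.errors[5])   # -> [7] 1
```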
The F-bus interface performs the I/O functions between the MCC and the remaining portion of the application module. The application module adds a 32-bit packet tag (MCC tag), shown in Figure 10, to each data packet to be routed through the mesh.
The data received or transmitted on the F-bus is up to 64 bits wide. In data transmission, the F-bus interface adds 4 status bits to the transmit data to make a 68-bit data segment. The F-bus interface drops the 68-bit data segment into the appropriate transmit FIFO as determined from the packet tag. The data from a transmit FIFO is transferred to the associated data compressor, where the 68-bit data segment is reduced to 10-bit segments. The data is then passed to the associated serializer, where it is further reduced to a serial stream. The serial stream is sent out the serial link to the backplane.
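As an illustration of the 64-to-68-bit widening, the sketch below prepends status bits to an F-bus word. Which conditions the four status bits encode is not enumerated in the text, so the SOP/EOP assignment here is an assumption.

```python
# Hedged sketch: widen a 64-bit F-bus word into a 68-bit segment by
# prepending 4 status bits, as the F-bus interface is described as doing.

def make_segment(data64: int, sop: bool, eop: bool) -> int:
    assert 0 <= data64 < (1 << 64)
    status = (sop << 0) | (eop << 1)      # 2 of the 4 status bits used here
    return (status << 64) | data64        # 68-bit segment

seg = make_segment(0xDEADBEEF, sop=True, eop=False)
print(seg >> 64, hex(seg & ((1 << 64) - 1)))  # -> 1 0xdeadbeef
```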
Data arriving from the backplane comes through a serial link to the associated channel. The serializer/deserializer for that channel expands the serial stream to 10-bit data segments, and the associated data expander expands the data to a 68-bit data segment, which is passed on to the related FIFO and then from the FIFO to the F-bus interface. A Fast IP Processor (FIPP) is provided with 32/64 Mbytes of high-speed synchronous SDRAM, 8 Mbytes of high-speed synchronous SRAM, and boot flash. The FIPP has a 32-bit PCI bus and a 64-bit FIFO bus (F-bus). The FIPP transfers packet data to and from all F-bus-connected devices. It provides IP forwarding in both unicast and multicast mode. Routing tables are received over the management bus from the chassis route server. The FIPP also provides higher-layer functions such as filtering and CoS/QoS. Each line card has a clock subsystem that produces all the clocks necessary for the card; this subsystem locks to the reference clock provided by the System Clock and Management Bus Arbitration Card.
Each card has hot-plug, power-on reset circuitry, and Sanity Timer functions. All cards have on-board DC-to-DC converters to go from the -48V rail in the backplane to whatever voltages are required for the application. Some cards (such as the CMTS card) likely will have two separate and isolated supplies to maximize the performance of the analog portions of the card.
Fig. 11 shows a generic switch header for the integrated switch. The header is used to route data packets through the system. The final destination may be either intra-chassis or inter-chassis. The header type field indicates the header type used to route the packet through the network having one or more chassis systems. Generally, the header type field is used to decode the header and provide information needed for packet forwarding. Specifically, the header type field may be used to indicate that the Destination Fabric Interface Address has logical ports. The header type field is also used to indicate whether the packet is to be broadcast or unicast, and to indicate the relevant fields in the header. The keep field indicates whether a packet can be dropped due to congestion.
The fragment field indicates packet fragmentation and whether the packet consists of two frames. The priority field is used to indicate packet priority. The encap type field is a one-bit field that indicates whether further layer 2 processing is needed before the packet is forwarded. If the bit is set, L2 is present; if the bit is not set, L2 is not present.
The Mcast type field is a one-bit field that indicates whether the packet is a broadcast or multicast packet. It may or may not be used depending on the circumstances.
The Dest FIA (Fabric Interface Address) type field indicates whether the destination FIA is in short form (i.e., <chassis/slot/port>) or in long form (i.e., <chassis/slot/port/logical port>). This field may or may not be used depending on the circumstances, and it may be combined with the header type field. The Src FIA type field is a one-bit field that indicates whether the source FIA is in short form (i.e., <chassis/slot/port>) or in long form (i.e., <chassis/slot/port/logical port>). This field may or may not be used depending on the circumstances, and it may be combined with the header type field.
The data type field is an x-bit field used for application to application communication using the switch layer. The field identifies the packet destination.
The forwarding info field is an x-bit field that holds the Forwarding Table Revision and a forwarding-information next hop field (a switch next hop) identifying which port the packet is to go out, along with the forward_table_entry key/id.
The Dest FIA field is an x-bit field that indicates the final destination of the packet. It contains chassis/slot/port and sometimes logical port information. A chassis value of 0 (zero) indicates the chassis holding the Master Agent. A port value of 0 (zero) indicates that the receiver of the packet is an application module. The logical port may be used to indicate which stack/entity in the card is to receive the packet. All edge ports and ICL ports are therefore "1"-based. The Src FIA field is an x-bit field that indicates the source of the packet. It is used by the route server to identify the source of incoming packets.
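Taken together, the fields above can be pictured as the following record. This is an illustrative sketch only: the text leaves several widths as "x-bit", so plain Python types and invented names stand in for them.

```python
# Hedged sketch of the generic switch (BAS) header as a record.

from dataclasses import dataclass
from typing import Optional

@dataclass
class FabricInterfaceAddress:
    chassis: int            # 0 = chassis holding the Master Agent
    slot: int
    port: int               # 0 = receiver is an application module
    logical_port: Optional[int] = None   # present only in long form

@dataclass
class SwitchHeader:
    header_type: int        # how to decode the rest of the header
    keep: bool              # may the packet be dropped under congestion?
    fragment: bool          # packet split across two frames?
    priority: int
    encap_type: bool        # True -> further L2 processing needed
    mcast_type: bool        # broadcast/multicast indicator
    dest_fia_long: bool     # destination FIA in long form?
    src_fia_long: bool      # source FIA in long form?
    data_type: int          # application-to-application demultiplexing
    forwarding_info: int    # forwarding table revision / next-hop key
    dest_fia: FabricInterfaceAddress
    src_fia: FabricInterfaceAddress
```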
Figure 12 is a flow chart of the general packet forwarding process, sketched in code below. When a packet is received at one of the application modules of the switch, the module examines the BAS header, if one is present, to determine whether the packet is addressed to the chassis to which the module is attached. If not, the application module looks up the destination chassis in the routing table and forwards the packet to the correct chassis. If the packet is addressed to the chassis, the application module examines the header to determine whether the packet is addressed to the module (or slot). If not, the application module looks up the destination slot in the mapping table and forwards the packet to the correct application module. If the packet is addressed to the application module, the module compares the forwarding table ID in the header to the local forwarding table revision. If there is a match, the module uses the pointer in the header to forward the packet on to its next destination.
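The decision walk of Figure 12 can be sketched directly. The lookup and send helpers are placeholders for the routing, mapping, and forwarding tables described above; their names are invented for illustration.

```python
# Hedged sketch of the Figure 12 decision walk, pairing with the
# SwitchHeader sketch above.

def forward_packet(hdr, pkt, env):
    """env bundles this module's identity, tables, and send functions."""
    if hdr.dest_fia.chassis != env.my_chassis:
        # Wrong chassis: route toward the destination chassis.
        return env.route_to_chassis(pkt, hdr.dest_fia.chassis)
    if hdr.dest_fia.slot != env.my_slot:
        # Right chassis, wrong slot: consult the mapping table.
        return env.route_to_slot(pkt, hdr.dest_fia.slot)
    if hdr.forwarding_info == env.local_table_rev:
        # Forwarding table ID matches: the header's pointer is valid.
        return env.send_via_pointer(pkt)
    # Stale table revision: fall back to a full lookup.
    return env.full_lookup(pkt)
```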
Unicast Traffic Received from an ICL Port

Figure 13 is a diagram of an incoming ICL packet. The packet has a BAS header, an encap field that may be either set or not (L2 or NULL), an IP field, and a data field for the data.
Figure 14 is a diagram of a header for the packet of Figure 13. The header type may be 1 or 2. A header type of 1 indicates an FIA field of the format chassis/slot/port for both destination and source. A header type of 2 indicates an FIA field of the format chassis/slot/port/logical port for both destination and source. The keep field is not used. The priority field is not used. The fragment field is not used. The next hop field is not used. The Encap field is zero or one. The Mcast field is not used. The DST FIA type may be 0 or 1. The SRC FIA type may be 0 or 1. The BAS TTL field is not used. The forward info field is used, and the DST and SRC FIA fields are used.

Embedded Cable Modem
The cable modem termination system comprises an asymmetrical communication system with two major components: a head end component and a client end component. The head end component, known as the cable modem termination system (CMTS), transmits a signal to a cable modem (CM) that is substantially different from the signal it receives from the CM. The CMTS transmits a 5-megabaud, 64/256 QAM signal in the 54 to 850 MHz band, and the CM transmits a 160-kilobaud to 2.56-megabaud, QPSK/16 QAM signal in the 5 to 42 MHz band. Because the transmit signal is different from the receive signal, it is impossible to perform physical layer loopback for self-diagnostic testing.
Figure 18 shows the chassis 200 of Figure 4 with RF signal lines 906 running through the backplane 420 from the CMTSs 215, 900, 902, 904 to a chassis controller module 908. In actuality, RF signal lines run from each of the application slots to each of the chassis controllers 908, 910; the figure is simplified for clarity. Each of the chassis controllers 908, 910 has an embedded cable modem system 912, 914 capable of receiving a signal from one of the CMTSs and generating a return signal to the sending CMTS. The embedded cable modem also contains diagnostic functions. The signal lines 906 in the backplane carry the radio frequency (RF) signals from the CMTS to the embedded CM and back again. This type of RF signal routing on a backplane requires a high degree of care in the implementation of the backplane to ensure the RF signals are not contaminated by any of the digital signals on the backplane.
Figure 19 shows a CMTS application module 215 connected through the backplane 420 to an embedded cable modem 912. The CMTS application module 215 has a downstream modulator 920 and four upstream demodulators 922, 924, 926, 928. A packet processing and backplane interface 930 sends and receives data at the backplane 420. The data is processed at the CMTS MAC layer 932. The embedded cable modem 912 receives the downstream signal from the CMTS application module 215 at a first 12-position relay 936 (having one position for each application module). The embedded cable modem 912 sends return signals through a second 12-position relay 937 to the CMTS application module 215. The embedded cable modem 912 also receives CMTS signals at the second 12-position relay 937 for diagnostic purposes.
In the cable modem, the downstream RF signal goes through a first variable attenuator 938 and then a first summer 940 before being demodulated at a downstream demodulator 942. The signal is processed at the cable modem MAC layer 944. From the MAC layer 944, the signal may be processed at a CM packet processing and backplane interface 946, and packets may then be transmitted to the backplane 420. The signal could also be sent through an upstream burst modulator 948 that modulates the return signal. The return signal is then sent through a second summer 950, followed by a second variable attenuator 952, and then through the second 12-position relay 937, where it is returned through an RF signal line to the CMTS application module 215. From the CM packet processing and backplane interface 946, the signal can be directed to the system interface 955 where, in the preferred embodiment of the invention, the diagnostic software resides. In a first alternative embodiment of the invention, the diagnostic software may reside in the chassis controller outside the embedded cable modem. In a second alternative embodiment, the diagnostic software may reside on an application module in the chassis, such as the CMTS module. In a third alternative embodiment, the diagnostic software may reside at the central network manager or some other remote, off-chassis location.
The embedded cable modem 912 has a calibrated broadband noise source 954 that generates a noise signal that can be added to the downstream signal at the first summer 940 and to the upstream signal at the second summer 950. The calibrated noise source 954 is used to generate test signals for carrier-to-noise diagnostics, as in the sketch below.
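One way a calibrated source supports carrier-to-noise diagnostics is by comparing measured channel power against the known injected noise power. The following arithmetic sketch is illustrative only; the patent does not give formulas.

```python
# Hedged sketch of a carrier-to-noise (C/N) estimate using a calibrated
# noise source: dB bookkeeping only, not the patent's method.

import math

def db(ratio: float) -> float:
    return 10.0 * math.log10(ratio)

def carrier_to_noise_db(p_signal_plus_noise_mw: float,
                        p_injected_noise_mw: float) -> float:
    """C/N in dB given total measured power and the calibrated noise power."""
    p_carrier = p_signal_plus_noise_mw - p_injected_noise_mw
    return db(p_carrier / p_injected_noise_mw)

# Example: 1.05 mW measured with 0.05 mW of calibrated noise injected.
print(round(carrier_to_noise_db(1.05, 0.05), 1))   # -> 13.0 (dB)
```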
In operation, outgoing packets flow from the CMTS backplane interface 930, to the CMTS MAC 932, to the CMTS downstream modulator 920, where they are modulated onto a carrier in the range of 54 to 850 MHz. The resulting signal is then routed to the external hybrid fiber coax (HFC) where it is sent to the downstream CMs, including the embedded cable modem. Upstream packets are modulated and transmitted from the CMs and the embedded cable modem in the range of 5-42 MHz onto the HFC cabling into one of the upstream ports of the CMTS, where the signal is routed into one of the upstream demodulators. The embedded cable modem is able to perform diagnostics on both the downstream and upstream signals as well as generate test signals to the CMTS.
In alternative embodiments of the invention, the embedded cable modem may reside on each CMTS application module. Alternatively, the embedded cable modem may itself be an application module.
Return Spectrum Analyzer
Fig. 19 shows a scanning receiver spectrum analyzer system 960 in the embedded cable modem 912. The integrated spectrum analyzer is a system for monitoring the spectrum of RF energy on the upstream cable plant for the purpose of operating a group of cable modems in the frequency bands that are the most interference-free. The spectrum analyzer surveys the entire upstream spectrum (typically 5-42 MHz) to determine the noise level at each frequency, weighs this data against current operating conditions (e.g., bit error rates) to determine the best possible alternative channel frequency for modems on an upstream node, and then waits for a trigger condition indicating a need for one or more of the upstream demodulators and their corresponding user modems to change frequency on the upstream node. A node is equivalent to a physical wire or fiber onto which signals from multiple user modems are aggregated and sent to a port of the CMTS.
Fig. 20 is a detailed block diagram of the spectrum analyzer system of Fig. 19. The CMTS application module 215 has four A/D converters 970, 972, 974, 976 in addition to four receiver/demodulators 922, 924, 926, 928. The receiver/demodulators are in actuality multiplexed to the A/D converters; the figure is simplified for clarity. The spectrum analyzer system 960 taps the signals of each upstream channel between the A/D converters and the receiver/demodulators. The spectrum analyzer system 960 has a digital switch or multiplexer 986, a spectrum analyzer 992, a bit error rate (BER) meter 994, a peak detector 996, a power detector 998, an AM demodulator 1000, a frequency subset demodulator 978, and an autocorrelation processor 980. The spectrum analyzer 992 has a tuner (adjustable band pass filter) 988 and a signal detector 990 to detect amplitude. The system has a graphical output to a graphical user interface 1002 and an audible output to an audible user interface 1004, which allow a user to observe the noise spectrum. The system further includes an alarm output 1006 which can be set to trigger if noise levels exceed predetermined levels for predetermined durations.
The frequency subset demodulator 978 performs phase, frequency, SSB, amplitude, or other demodulation processes in selected subsets of the upstream spectrum for the purpose of allowing a user to listen, through the audible user interface 1004, to the noise to help determine its origin. The autocorrelation processor 980 processes samples of the RF spectrum and provides an indication of the location of modulated, non-random interference, thus assisting the user in determining the source of unwanted signals. The AM demodulator is configured to determine the level of AC hum on any portion of the RF spectrum present on a node.
The spectrum analyzer system taps off the upstream digital signals in the CMTS. The four channels are switched by the digital switch. The spectrum analyzer tunes to a range in the spectrum for analysis. The BER meter holds a threshold value for noise in a signal. If the noise is above the threshold, the BER meter counts it, producing a ratio of good symbols to bad symbols in the signal. Burst noise is detected in this way because the swept-filter analyzer remains at a range for a period of time so that the range may be adequately analyzed.
The spectrum analyzer system monitors the signal-to-noise ratio and bit error rate on a current channel. When the noise ratio and bit error rate exceed a configurable threshold, the system obtains a new channel frequency from the channel selection method and instructs the modems to move to that frequency. The method of selecting a new channel frequency involves weighing a variety of inputs; the spectrum analyzer system determines which frequency statistically offers the best chance of improving the bit error rate performance of the modems in the current upstream node. The factors examined in selecting a new channel frequency are: the instantaneous noise level at each frequency; the peak power accumulated over time at each frequency; the average noise level at each frequency; the estimated bit error rate which a signal would see if it were operating at a selected frequency; the histogram of the data, which is the amount of time that the noise level at each frequency exceeds a group of predetermined levels; forbidden frequency bands, which are frequencies where cable modems are prohibited from operating, as configured by the user; and past histogram data, which can, for example, be used to find frequencies that have high noise only during certain parts of the day. A sketch of one way such factors might be combined follows.
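This hedged sketch scores candidate frequencies from the factors above with invented weights and skips forbidden bands; it is not the patent's selection method, only an illustration of combining such inputs.

```python
# Hedged sketch of upstream channel selection: score each candidate
# frequency and pick the lowest-scoring (least noisy) allowed frequency.
# Weights and units are illustrative assumptions.

def pick_frequency(candidates, forbidden, stats, weights=None):
    """
    candidates: iterable of frequencies in Hz
    forbidden:  set of frequencies cable modems must not use
    stats:      freq -> dict with 'instant_noise', 'peak_power',
                'avg_noise', 'est_ber', 'hist_exceed'
    """
    w = weights or {'instant_noise': 1.0, 'peak_power': 0.5,
                    'avg_noise': 1.0, 'est_ber': 2.0, 'hist_exceed': 0.5}
    best, best_score = None, float('inf')
    for f in candidates:
        if f in forbidden:                 # user-configured forbidden band
            continue
        s = stats[f]
        score = sum(w[k] * s[k] for k in w)
        if score < best_score:
            best, best_score = f, score
    return best

stats = {
    20_000_000: dict(instant_noise=3, peak_power=2, avg_noise=3,
                     est_ber=1, hist_exceed=0),
    28_000_000: dict(instant_noise=1, peak_power=1, avg_noise=1,
                     est_ber=0, hist_exceed=0),
}
print(pick_frequency(stats.keys(), {20_000_000}, stats))  # -> 28000000
```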
Figure 21 shows a graph of ingress characterization. The spectrum analyzer system can display the spectrum analyzer results graphically on the graphical user interface. Two types of ingress can be isolated: burst noise and frequency domain ingress. Burst noise can be characterized by burst width, amplitude, and repetition rate. Frequency domain ingress can be characterized by bandwidth of modulation, amplitude, and time duration. These parameters can be correlated to bad packet rate vs. time of day. This information is useful for deciding which frequencies are the most interference-free. It is also useful for providing graphical data about worst interference sources vs. time of day and worst interference sources vs. packet loss. In the present embodiment, the spectrum analyzer system has been described as part of the embedded cable modem on the chassis controller. In alternative embodiments, the spectrum analyzer system may be on a CMTS application module, on an application module by itself, or on an application module plugged into the chassis controller without the embedded modem.
It is to be understood that the above-described embodiments are simply illustrative of the principles of the invention. Various other modifications and changes may be made by those skilled in the art which will embody the principles of the invention and fall within the spirit and scope thereof.

Claims

What is claimed is:
1. A method for optimizing upstream transmission frequencies of multiple physical cable modem channels, comprising the steps of:
a) sampling the upstream transmission signals of the channels prior to demodulation;
b) analyzing said sampling for noise above a threshold;
c) monitoring noise at alternative frequencies for each channel; and
d) adjusting the transmission frequency of a channel for which noise exceeds said threshold to a less noisy frequency as indicated by said monitoring;
where at least steps (a) through (c) are performed without human intervention.
2. Apparatus for optimizing upstream transmission frequencies of multiple physical cable modem channels, comprising:
a) interconnections adapted for connecting to the upstream transmission signals of the channels prior to demodulation;
b) a multiplexer selectably connecting said interconnections to an adjustable band pass filter;
c) an analytical apparatus connected to the output of said band pass filter, comprising:
i) a noise threshold detector providing indication of exceeding of a predetermined threshold; and
ii) a signal amplitude recorder providing indication of signal quality; and
d) an interconnection adapted for connecting said indications of exceeding said predetermined threshold and of signal quality to logic for adjusting the transmission frequency of the channels.
EP01933285A 2000-05-10 2001-05-10 System and process for return channel spectrum manager Withdrawn EP1290819A2 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US568270 1990-08-15
US56827000A 2000-05-10 2000-05-10
PCT/US2001/015148 WO2001086856A2 (en) 2000-05-10 2001-05-10 System and process for return channel spectrum manager

Publications (1)

Publication Number Publication Date
EP1290819A2 (en)

Family

ID=24270617

Family Applications (1)

Application Number Title Priority Date Filing Date
EP01933285A Withdrawn EP1290819A2 (en) 2000-05-10 2001-05-10 System and process for return channel spectrum manager

Country Status (6)

Country Link
EP (1) EP1290819A2 (en)
CN (1) CN1204708C (en)
AU (1) AU2001259722A1 (en)
CA (1) CA2408496A1 (en)
HK (1) HK1058869A1 (en)
WO (1) WO2001086856A2 (en)

Also Published As

Publication number Publication date
WO2001086856A3 (en) 2002-04-11
CN1442004A (en) 2003-09-10
WO2001086856A2 (en) 2001-11-15
CN1204708C (en) 2005-06-01
CA2408496A1 (en) 2001-11-15
AU2001259722A1 (en) 2001-11-20
HK1058869A1 (en) 2004-06-04

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20021209

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR

AX Request for extension of the european patent

Extension state: AL LT LV MK RO SI

RIN1 Information on inventor provided before grant (corrected)

Inventor name: UNGER, J., DAVID

Inventor name: NIKOLICH, PAUL E.

17Q First examination report despatched

Effective date: 20030812

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20051201