GB2394849A - Multi-channel replicating device for broadband optical signals, and systems including such devices


Info

Publication number
GB2394849A
Authority
GB
United Kingdom
Prior art keywords
chassis
signals
optical
probe
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB0400378A
Other versions
GB0400378D0 (en)
Inventor
Douglas John Carson
William Ross MacIsaac
Alastair Reynolds
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Agilent Technologies Inc
Original Assignee
Agilent Technologies Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Agilent Technologies Inc filed Critical Agilent Technologies Inc
Publication of GB0400378D0
Publication of GB2394849A

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 43/00 Arrangements for monitoring or testing data switching networks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04J MULTIPLEX COMMUNICATION
    • H04J 3/00 Time-division multiplex systems
    • H04J 3/02 Details
    • H04J 3/14 Monitoring arrangements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04Q SELECTING
    • H04Q 3/00 Selecting arrangements
    • H04Q 3/0016 Arrangements providing connection between exchanges
    • H04Q 3/0062 Provisions for network management
    • H04Q 3/0087 Network testing or monitoring arrangements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 43/00 Arrangements for monitoring or testing data switching networks
    • H04L 43/10 Active monitoring, e.g. heartbeat, ping or trace-route
    • H04L 43/106 Active monitoring, e.g. heartbeat, ping or trace-route using time related information in packets, e.g. by adding timestamps
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 43/00 Arrangements for monitoring or testing data switching networks
    • H04L 43/12 Network monitoring probes

Abstract

The device includes a plurality (e.g. four, eight or sixteen) of replicators 175, e.g. in a rack-mountable chassis, each having a plurality of outputs 178 providing multiple identical signals from each input signal. Each replicator 175 has means 177 for converting the optical signal to an electrical signal, which is replicated and converted back by respective means 179 to an optical signal. The replicators 175 do not apply digital processing to the signals. Each replicator 175 has a standby selector (multiplexer) 180 receiving the electrical signals from the means 177 to select a desired one to be replicated at an additional output 182. The signals can be distributed for multiple applications, duplication for reliability, load sharing or a combination of these. Specifically, the device can receive tap signals from an optical splitting device for monitoring a bearer in a broadband telecommunications network, to provide signals to a plurality of network monitoring units.

Description

MULTI-CHANNEL REPLICATING DEVICE FOR BROADBAND OPTICAL SIGNALS, AND SYSTEMS INCLUDING SUCH DEVICES

INTRODUCTION
The invention relates to telecommunications networks, and in particular to apparatus and systems for monitoring traffic in broadband networks.
In telecommunication networks, network element connectivity can be achieved using optical fibre bearers to carry data and voice traffic.
Data traffic on public telecommunication networks is expected to exceed voice traffic, with Internet Protocol (IP) emerging as one data networking standard, in conjunction with Asynchronous Transfer Mode (ATM) systems. Voice over IP is also becoming an important application for many Internet service providers, with IP switches connecting IP networks to the public telephony network (PSTN). IP can be carried over a Sonet transport layer, either with or without ATM. In order to inter-operate with the PSTN, IP switches are also capable of inter-working with SS7, the common signalling system for telecommunications networks, as defined by the International Telecommunications Union (ITU) standard for the exchange of signalling messages over a common signalling network.
Different protocols are used to set up calls according to network type and supported services. The signalling traffic carries messages to set up calls between the necessary network nodes. In response to the SS7 messages, an appropriate link through the transport network is established, to carry the actual data and voice traffic (the payload data) for the duration of each call. Traditional SS7 links are time division multiplexed, so that the same physical bearer may be carrying the signalling and the payload data. The SS7 network is effectively an example of an "out of band" signalling network, because the signalling is readily separated from the payload. For ATM and IP networks, however, the signalling and payload data are statistically multiplexed on the same bearer. In the case of statistical multiplexing the receiver has
to examine each message/cell to decide if it is carrying signalling or payload data.
One protocol similar to SS7 used in such IP networks is known as Gateway Control Protocol (GCP).
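To illustrate the burden this places on a monitor, the following sketch (Python, offered purely as an illustrative aid and not part of the original disclosure; the Cell structure and message contents are invented, though VPI=0/VCI=5 is a conventional ATM signalling channel) shows the kind of per-cell test that statistical multiplexing forces on a receiver:

    from dataclasses import dataclass

    SIGNALLING_CHANNELS = {(0, 5)}   # e.g. VPI=0/VCI=5 carries UNI signalling

    @dataclass
    class Cell:
        vpi: int
        vci: int
        payload: bytes

    def is_signalling(cell: Cell) -> bool:
        """Every cell must be examined individually to separate signalling from payload."""
        return (cell.vpi, cell.vci) in SIGNALLING_CHANNELS

    cells = [Cell(0, 5, b"SETUP"), Cell(1, 42, b"voice samples")]
    for c in cells:
        kind = "signalling" if is_signalling(c) else "payload"
        print(f"VPI={c.vpi} VCI={c.vci}: {kind}")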
The monitoring of networks and their traffic is a fundamental requirement of any system. The "health" of the network must be monitored, to predict, detect and even anticipate failures, overloads and so forth. Monitoring is also crucial to billing of usage charges, both to end users and between service providers. The reliability (percentage availability) of monitoring equipment is a prime concern for service providers and users, and many applications such as billing require "high availability" monitoring systems, such that outages, due to breakdown or maintenance, must be made extremely rare.
A widely used monitoring system for SS7 signalling networks is acceSS7 from Hewlett-Packard. An instrument extracts all the SS7 packetised signals at Signalling Transfer Points (STPs), which are packet switches, analogous to IP routers, that route messages between end points in SS7 networks. The need can be seen for similar monitoring systems able to cope with combined IP/PSTN networks, especially at gateways where the two protocols meet. A problem arises, however, in the quantity of data that needs to be processed for the monitoring of IP traffic. In Internet Protocol networks, there is no out of band signalling network separate from the data traffic itself. Rather, routing information is embedded in the packet headers of the data transport network itself, and the full data stream has to be processed by the monitoring equipment to extract the necessary information as to network health, billing etc. Moreover, IP communication is not based on allocating each "call" a link of fixed bandwidth for the duration of the call: rather, bandwidth is allocated by packets on demand, in a link shared with any number of other data streams.
Accordingly, there is a need for a new kind of monitoring equipment capable of grabbing the vast volume of data flowing in the IP network bearers, and of processing it fast enough to extract and analyse the routing and other information crucial to the monitoring function. The requirements of extreme reliability mentioned above apply equally in the new environment.
Networks such as these may be monitored using instruments (generally referred to as probes) by making a passive optical connection to the optical fibre bearer using an optical splitter. However, this approach cannot be considered without due attention to the optical power budget of the bearer, as the optical splitters are lossy devices. In addition to this, it may be desirable to monitor the same bearer many times, or to monitor the same bearer twice as part of a backup strategy for redundancy purposes.
With available instrumentation, this implies a multiplication of the losses, and also disruption to the bearers as each new splitter is installed. Issues of upgrading the transmitter and/or receiver arise as losses mount up.
The inventors have analysed acceSS7 network monitoring systems (unpublished at the present filing date). This shows that the reasons for lack of availability of the system can be broken down into three broad categories: unplanned outages, such as software defects; planned outages, such as software and hardware upgrades; and hardware failures. Further analysis shows that the majority of operational hours lost are caused by planned and unplanned maintenance, while hardware failures have a relatively minor effect. Accordingly, increasing the redundancy of disk drives, power supplies and the like, although psychologically comforting, can do relatively little to improve system availability. The greatest scope for reducing operational hours lost, and hence increasing availability, is in the category of planned outages.
In order to implement a reliable monitoring system it would therefore be advantageous to have an architecture with redundancy, allowing for spare probe units, that is tolerant of both probe failure and probe reconfiguration, and provides software redundancy. Monitoring equipment designed for this purpose does not currently exist. Service providers may therefore use stand-alone protocol analysers, which are tools really intended for the network commissioning stage. These usually terminate the fibre bearer, in place of the product being installed, or they plug into a specific test port on the product under test. Specific test software is then needed for each product.
Manufacturers have alternatively built diagnostic capability into the network equipment itself, but each perceives the problems differently, leading to a lack of uniformity, and actual monitoring problems, as opposed to perceived problems, may not be addressed.
The inventors have recognized that, particularly because passive optical splitters have extremely high reliability, a probe architecture which provides for replication and redundancy in the monitoring system after the splitter would allow all the desired functionality and reliability to be achieved, without multiple physical taps in the network bearer, and hence without excessive power loss and degradation in the system being monitored.
According to one aspect of this invention there is provided a multi-channel replicating device for broadband optical signals, the device comprising one or more modules having:
    • a first plurality of input connectors for receiving broadband optical signals;
    • a larger plurality of output connectors for broadband optical signals;
    • means for replicating each received broadband optical signal to a plurality of said output connectors without digital processing.
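The claimed structure can be pictured as a simple fan-out mapping. A minimal model follows (illustrative only, not part of the original disclosure; the class and parameter names are invented, and the real device performs the replication in optoelectronic hardware, without digital processing):

    class Replicator:
        """Model of the claimed device: N inputs, each fanned out to several outputs."""
        def __init__(self, num_inputs: int, fan_out: int):
            self.num_inputs = num_inputs
            self.fan_out = fan_out      # outputs per input (the "larger plurality")

        def replicate(self, signals):
            assert len(signals) == self.num_inputs
            # The signal content is copied untouched: no parsing, filtering or
            # reframing, mirroring the "without digital processing" limitation.
            return [[s] * self.fan_out for s in signals]

    device = Replicator(num_inputs=4, fan_out=4)
    outputs = device.replicate(["bearer-1", "bearer-2", "bearer-3", "bearer-4"])
    print(outputs[0])   # four identical replicas of bearer-1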
Such a device allows multiple monitoring applications to be performed on a network signal with only one optical tap being inserted in the physical bearer of the operating network. Redundancy in the monitoring equipment can be provided, also with the single bearer tap. Changes in the configuration of the monitoring equipment can be implemented without disturbing the bearer operation, or even the other monitoring applications.
The replicating means may in particular include components for optical to electrical conversion and back to optical again.
The replicating device may further comprise one or more additional optical outputs, and a selector device for selecting which of the input signals is replicated at said additional output. This selection can be useful in particular in response to fault situations and planned outages within the network monitoring equipment.
The invention further provides a telecommunications network monitoring system comprising:
    • an optical splitting device, providing a tap signal for monitoring signals carried by a bearer in a broadband telecommunications network;
    • a plurality of network monitoring units, each for receiving and analysing signals from a broadband optical bearer; and
    • a signal replicating device according to the one aspect set forth above, the signal replicating device being connected so as to receive said optical tap signal, and to provide replicas of said optical tap signal to inputs of two or more of said network monitoring units.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments of the invention will now be described, by way of example only, with reference to the accompanying drawings, in which:

Figure 1 shows a model of a typical ATM network.

Figure 2 shows a data collection and packet processing apparatus connected to a physical telecommunications network via a LAN/WAN interconnect.

Figure 3 shows the basic functional architecture of a novel network probe apparatus, as featured in Figure 2.

Figure 4 shows a simple network monitoring system which can be implemented using apparatus of the type shown in Figure 3.

Figure 5 shows another application of the apparatus of Figure 4 giving 3+1 redundancy.

Figure 6 shows a larger redundant network monitoring system including a backup apparatus.

Figure 7 shows an example of a modified probe apparatus permitting a "daisy chain" configuration to provide extra redundancy and/or processing power.

Figure 8 shows an example of daisy chaining the probe chassis of Figure 7 giving 8+1 redundancy.

Figure 9 shows a further application of the probe apparatus giving added processing power per bearer.

Figure 10 shows a second means of increasing processing power by linking more than one chassis together.

Figure 11 shows a signal replicating device (referred to as a Broadband Bridging Isolator (BBI)) for use in a network monitoring system.

Figure 12 shows a typical configuration of a network monitoring system using the BBI of Figure 11 and several probe apparatuses.

Figures 13A and 13B illustrate a process of upgrading the processing power of a network monitoring system without interrupting operation.

Figure 14 is a functional schematic diagram of a generalised network probe apparatus showing the functional relationships between the major modules of the apparatus.

Figure 15A shows the general physical layout of modules in a specific network probe apparatus implemented in a novel chassis and backplane.

Figure 15B is a front view of the chassis and backplane of Figure 15A with all modules removed, showing the general layout of connectors and interconnections in the backplane.

Figure 15C is a rear view of the chassis and backplane of Figure 15A with all modules removed, showing the general layout of connectors and interconnections in the backplane, and showing in cut-away form the location of a power supply module.

Figure 16 shows in block schematic form the interconnections between modules in the apparatus of Figures 15A-C.

Figure 17 is a block diagram showing in more detail a cross-point switch module in the apparatus of Figure 16, and its interconnections with other modules.

Figure 18 is a block diagram showing in more detail a packet processor module in the apparatus of Figure 16, and its interconnections with other modules.

Figure 19 is a block diagram showing in more detail a combined LAN and chassis management card in the apparatus of Figure 16.
DETAILED DESCRIPTION OF THE EMBODIMENTS
Background
Figure 1 shows a model of a telecommunication network 10 based on asynchronous transfer mode (ATM) bearers. Possible monitoring points on various bearers in the network are shown at 20 and elsewhere. Each bearer is generally an optical fibre carrying packetised data, with both routing information and data "payload" travelling in the same data stream. Here "bearer" is used to mean the physical media that carries the data, and is distinct from a "link", which in this context is defined to mean a logical stream of data. Many links of data may be multiplexed onto a single bearer.
These definitions are provided for consistency in the present description, however, and should not be taken to imply any limitation of the applicability of the techniques disclosed, or on the scope of the invention defined in the appended claims. Those skilled in the art will sometimes use the term "channel" to refer to a link (as defined), or "channel" may be used to refer to one of a number of virtual channels being carried over one link, which comprises the logical connection between two subscribers, or between a subscriber and a service provider. Note that such "channels" within the larger telecommunications network should not be confused with the monitoring channels within the network probe apparatus of the embodiments to be described hereinafter.

The payload may comprise voice traffic and/or other data. Different protocols may be catered for, with examples showing connections to Frame Relay Gateway, ATM and DSLAM equipment being illustrated. User-Network traffic 22 and Network-Network traffic 24 are shown here as dashed lines and solid lines respectively.
In Figure 2 various elements 25-60 of a data collection and packet processing system distributed at different sites are provided for monitoring bearers L1-L8 etc. of a telecommunications network. The bearers in the examples herein operate in pairs L1, L2 etc. for bi-directional traffic, but this is not universal, nor is it essential to the invention. Each pair is conveniently monitored by a separate probe unit 25, by means of optical splitters S1, S2 etc. inserted in the physical bearers. For example, one probe unit 25, which monitors bearers L1 and L2, is connected to a local area network (LAN) 60, along with other units at the same site. The probe unit 25 on an ATM/IP network must examine a vast quantity of data, and can be programmed to filter the data by Virtual Channel (VC) as a means of reducing the onboard processing load.
Filtering by IP address can be used to the same effect in the case of IP over SDH and other such optical networks. Similar techniques can be used for other protocols. Site processors 40 collate and aggregate the large quantity of information gathered by the probe units, and pass the results via a Wide Area Network (WAN) 30 to a central site 65. Here this information may be used for network planning and operations. It may alternatively be used for billing according to the volume of monitored traffic per subscriber or service provider, or other applications.
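A sketch of the kind of pre-filtering described above, with hypothetical watch-lists (the real probes filter in dedicated hardware; the addresses below are drawn from documentation ranges), might look like this:

    WATCHED_VCS = {(1, 42), (1, 43)}      # (VPI, VCI) pairs of interest
    WATCHED_IPS = {"192.0.2.17"}          # example/documentation address

    def keep_atm_cell(vpi: int, vci: int) -> bool:
        """Keep only cells on Virtual Channels the application cares about."""
        return (vpi, vci) in WATCHED_VCS

    def keep_ip_packet(src: str, dst: str) -> bool:
        """Keep only packets to or from watched IP addresses."""
        return src in WATCHED_IPS or dst in WATCHED_IPS

    print(keep_atm_cell(1, 42), keep_ip_packet("192.0.2.17", "198.51.100.9"))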
The tenn "probe unit" is used herein refer to a functionally selfcontained sub-system 20 designed to carry out the required analysis for a bearer, or for a pair or larger group of bearers. Each probe unit may include separate modules to carry out such operations as filtering the packets of interest and then interpreting the actual packet or other data analysis. 25 In accordance with current trends, it is assumed in this description that the links to be
monitored carry Internet Protocol (IP) traffic over passive optical networks (PONs) comprising optical fibre bearers. Connection to such a network can only really be achieved through use of passive optical splitters S1, S2 etc. Passive splitters have advantages such as high reliability, comparatively small dimensions, various 30 connection configurations and the fact that no power or element management resources are required. An optical splitter in such a situation works by paring off a percentage of the optical power in a bearer to a test port, the percentage being variable according to hardware specifications.
A number of issues are raised when insertion of such a device is considered. For example, there should be sufficient bearer receiver power margins remaining at both the test device and the through port to the rest of the network. It becomes necessary to consider the most economic method of monitoring the bearer in the presence of a reduced test port power budget, while limiting the optical power needed by the monitoring probe, and whether the network would have to be re-configured as a result of inserting the device.
Consequently, inserting a power splitter to monitor a network frequently requires an increase in launch power. This entails upgrading the transmit laser assembly and installing an optical attenuator where needed to reduce optical power into the through path to normal levels. Such an upgrade would ideally only be performed once.
For these reasons it is not desirable to probe, for example, an ATM network more than once on any given bearer. Nevertheless, it would be desirable to have the ability for multiple probing devices to be connected to the same bearer, that is, to have multiple outputs from the optical interface. The different probes may be monitoring different parameters. In addition, however, any network monitoring system must offer a high degree of availability, and multiple probes are desirable in the interests of redundancy.
The probe apparatuses and ancillary equipment described below allow the implementation of such a network monitoring system which can be maintained and expanded with simple procedures, with minimal disruption to the network itself and to the monitoring applications.
Network Probe System - General Architecture

Figure 3 shows the basic functional architecture of a multi-channel optical fibre telecommunications probe apparatus 50, combining several individual probe units into a more flexible system than has hitherto been available. The network monitoring apparatus shown receives N bearer signals 70, such as may be available from N optical fibre splitters. These enter a cross-point switch 80 capable of routing each signal to any of M individual and independently-replaceable probe units 90. Each probe unit corresponds in functionality broadly with the unit 25 shown in Figure 2. An additional external output 85 from the cross-point switch 80 is routed to an external connector.
This brings important benefits, as will be described below.
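Functionally, the cross-point switch reduces to a small table mapping each output channel to one of the inputs, with fan-out permitted. A minimal sketch follows (names invented and offered only as an illustrative aid; the actual switch is an electrical hardware matrix):

    class CrossPointSwitch:
        """Model of switch 80: any input may feed any monitoring channel."""
        def __init__(self, num_inputs: int, num_outputs: int):
            self.num_inputs = num_inputs
            self.num_outputs = num_outputs
            self.routing = {}                    # output channel -> input index

        def connect(self, input_idx: int, output_ch: int):
            assert 0 <= input_idx < self.num_inputs
            assert 0 <= output_ch < self.num_outputs
            self.routing[output_ch] = input_idx  # inputs may fan out to many outputs

    switch = CrossPointSwitch(num_inputs=8, num_outputs=10)
    switch.connect(0, 0)   # bearer EXT1 -> monitoring channel CH1
    switch.connect(0, 8)   # the same bearer simultaneously to an external output (85)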
The cross-point switch 80 and interconnections shown in Figure 3 may be implemented using different technologies, for example using passive optical or optoelectronic cross-points. High speed networks, for example OC48, require electrical path lengths as short as possible. An optical switch would therefore be desirable, deferring as much as possible the conversion to electrical. However, the optical switching technology is not yet fully mature. Therefore the present proposal is to have an electrical implementation for the cross-point switch 80, the signals being converted from optical to electrical at the point of entry into the probe apparatus 50.
The scale of an optoelectronic installation will be limited by the complexities of the cross-point switch and the size of the probe unit. The choice of interconnect technology (for example between electrical and optical) is generally dependent on the signal bandwidth-distance product. For example, in the case of high bandwidth/speed standards such as OC3, OC12 or OC48, inter-rack connections may be best implemented using optical technology.
Detailed implementation of the probe apparatus in a specific embodiment will be described in more detail with reference to Figures 14 to 19. As part of this, a novel chassis arrangement for multi-channel processing products is described, with reference to Figures 15A-15C, which may find application in fields beyond telecommunications monitoring. First, however, applications of the multi-channel probe architecture will be described, with reference to Figures 4 to 13.
Figure 4 shows a simple monitoring application which can be implemented using apparatus of the type shown in Figure 3. The cross-point switch 80 is integrated into the probe chassis 100 together with up to four independently operating probe units. In this implementation of the architecture, each probe unit (90 in Figure 3) is formed by a packet processor module 150 and a single board computer (SBC) 160. There are provided two packet processors 150 in each probe unit 90. Each packet processor can receive and process the signal of one half-duplex bearer. The SBC 160 in each probe unit has the capacity to analyse and report the data collected by the two packet processors. Other modules included in the chassis provide LAN interconnections for onward reporting of results, probe management, power supply, and cooling (not shown in Figures 4 to 13B).
In this application example a single, fully loaded chassis 100 is used, with no redundancy, to monitor eight single (four duplex) bearers connected at 140 to the external optical inputs of the apparatus (inputs 70-1 to 70-N in Figure 3). The cross-point switch external outputs 85 are shown but not used in this configuration. Applications of these outputs are explained, for example, in the description of Figures 6, 8, 10 and 12 below.
Figure 5 shows an alternative application of the apparatus giving 3+1 redundancy.
Here three duplex bearer signals are applied to external inputs 140 of the probe apparatus chassis 100, while the fourth pair of inputs 142 is unused. Within the chassis there are thus three primary probe units 90 plus a fourth, spare probe unit 120. The cross-point switch 80 can be used to switch any of the other bearers to this spare probe unit in the event of a failure in another probe unit. Already, by integrating the cross-point switch and several probe units in a single chassis, scaleable packet processor redundancy down to 1:1 is achieved without the overhead associated with an external cross-point switch. Since electrical failures cause only a very minor proportion of outages, redundancy within the chassis is valuable, with the added bonus that complex wiring outside the chassis may be avoided. One or more processors within each chassis can be spare at a given time, and switched instantaneously and/or remotely if one of the other probe units becomes inoperative.
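The failover itself amounts to a single routing change. A hedged sketch, reusing the CrossPointSwitch model from the earlier sketch (the function name and channel numbering are invented):

    def fail_over(switch, failed_ch: int, spare_ch: int):
        """Re-route the bearer watched by a failed probe unit to the spare unit."""
        bearer = switch.routing.pop(failed_ch)   # input the failed channel was fed
        switch.connect(bearer, spare_ch)         # spare probe unit 120 takes over

    switch = CrossPointSwitch(num_inputs=8, num_outputs=10)
    for ch in range(6):                          # three duplex bearers on CH1-CH6
        switch.connect(ch, ch)
    fail_over(switch, failed_ch=0, spare_ch=6)   # spare unit now sees bearer 0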
Figure 6 shows a larger redundant system comprising four primary probe apparatuses (chassis 100-1 to 100-4), and a backup chassis 130 which operates in the event of a failure in one of the primary chassis. In this example of a large redundant system there are 16 duplex bearers being monitored. Each external input pair of the backup chassis is connected to receive a duplex bearer signal from the external optical output 85 of a respective one of the primary apparatuses 100-1 to 100-4. By this arrangement, in the event of a single probe unit failure in one of the primary apparatuses, a spare probe
unit within the backup chassis can take over the out-of-service unit's function.
Assuming all inputs and all of the primary probe units are operational in normal circumstances, we may say that 4:1 redundancy is provided.
Recognising that in this embodiment only one optical interface is connected to the bearer under test, the chassis containing the optical interfaces can if desired have redundant communications and/or power supply units (PSUs) and adopt a "hot swap" strategy to permit rapid replacement of any hardware failures. "Hot swap" in this context means the facility to unplug one module of a probe unit within the apparatus and replace it with another, without interrupting the operation or functionality of the other probe units. Higher levels of protection can be provided on top of this, if desired, as described below with reference to Figures 15A-C and 16.
Figure 7 shows a modified probe apparatus which provides an additional optical input 170 to the cross-point switch 80. In other words, the cross-point switch 80 has inputs for more bearer signals than can be monitored by the probe units within its chassis.
At the same time, with the external optical outputs 85, the cross-point switch 80 has outputs for more signals than can be monitored by the probe units within the chassis.
These additional inputs 170 and outputs 85 can be used to connect a number of probe chassis together in a "daisy chain", to provide extra redundancy and/or processing power. By default, in the present embodiment, copies of the bearer signals received at daisy chain inputs 170 are routed to the external outputs 85. Any other routing can be commanded, however, either from within the apparatus or from outside via the LAN (not shown).
Figure 8 shows an example of daisy chaining the probe chassis to give 8:1 redundancy. The four primary probe chassis 100-1 to 100-4 are connected in pairs (100-1 & 100-2 and 100-3 & 100-4). The external outputs 85 on the first chassis of each pair are connected to the daisy chain inputs 170 on the second chassis. The external outputs 85 of the second chassis are connected to inputs of the backup chassis 130 as before. These connections can carry the signal through to the spare chassis 130 when there has been a failure in a probe unit in either the first or second chassis in each pair. Unlike the arrangement of Figure 6, however, it will be seen that the
backup chassis 130 still has two spare pairs of external inputs. Accordingly, the system could be extended to accommodate a further four chassis (up to sixteen further probe units, and up to thirty-two further bearer signals), with the single backup chassis 130 providing some redundancy for all of them.
For applications that involve processor intensive tasks it may be desirable to increase the processing power available to monitor each bearer. This may be achieved by various different configurations, and the degree of redundancy can be varied at the same time to suit each application.
Figure 9 illustrates how it is possible to increase the processing power available for any given bearer by reconfiguring the probe units. In this configuration only two duplex bearer signals 140 are connected to the chassis 100. Two inputs 142 are unused. Within the cross-point switch 80 each bearer signal is duplicated and routed to two probe units 90. This doubles the processing power available for each of the bearers 140. This may be for different applications (for example routine billing and fraud detection), or for more complex analysis on the same application. Each packet processor (150, Figure 4) and SBC (160) will be programmed according to the application desired. In particular, each packet processor, while receiving and processing all the data carried by an associated bearer, will be programmed to filter the data and to pass on only those packets, cells, or header information which are needed by the SBC for a particular monitoring task. The ability to provide redundancy via the external outputs 85 still remains.
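In the terms of the earlier CrossPointSwitch sketch, this doubling-up is simply two routing entries per bearer signal (an illustrative assumption about channel assignment, not the patent's control interface):

    # Figure 9 configuration: each bearer duplicated to two probe units.
    switch = CrossPointSwitch(num_inputs=8, num_outputs=10)
    for bearer in range(4):                  # two duplex bearers = four signals
        switch.connect(bearer, bearer)       # first probe unit: e.g. routine billing
        switch.connect(bearer, bearer + 4)   # second probe unit: e.g. fraud detection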
Figure 10 illustrates a second method of increasing processing power: connecting more than one chassis together in a daisy chain or similar arrangement. Concerning the "daisy chain" inputs 170, Figure 10 also illustrates how a similar effect can be achieved using the unmodified apparatus (Figure 3), provided the apparatus is not monitoring its full complement of bearer signals. The external inputs 140 can thus be connected to the external outputs 85 of the previous chassis, instead of special inputs 170.
In the configuration of Figure 10, two chassis 100-1 and 100-2 are fully loaded with four probe units 90 each. External signals for all eight probe units are received at 140 from a single duplex bearer. The cross-point switch 80 is used to replicate these signals to every probe unit 90 within the chassis 100-1, and also to the external outputs 85 of the first chassis 100-1. These outputs in turn are connected to one pair of inputs 144 of the second chassis 100-2. Within the second chassis, the same signals are replicated again and applied to all four probe units, and (optionally) to the external outputs 85 of the second chassis 100-2.
Thus, all eight probe units are able to apply their processing power to the same pair of signals, without tapping into the bearer more than once. By adding further chassis in such a daisy chain, the system is scaleable to practically as much processing power as needed.
The examples given are by way of illustration only, showing how, using the chassis architecture described, it is possible to provide the user with the processing power needed and the redundancy to maintain operation of the system in the event of faults and planned outages. It will be appreciated that there are numerous different configurations possible, besides those described.
For example, it is also possible to envisage a bi-directional daisy chain arrangement.
Here, one output 85 of a first chassis might be connected to one input 170 of a second chassis, while the other output 85 is connected to an input 170 of a third chassis. This arrangement can be repeated if desired to form a bi-directional ring of apparatuses, forming a kind of "optical bus".
The probe apparatus described above allows the system designer to achieve N+1 redundancy by using the cross-point switch 80 to internally re-route a bearer to a spare processor, or to another chassis. On the other hand, it will be recognised that some types of failure (e.g. in the chassis power supply) will disrupt operation of all of the processors in the chassis. It is possible to reduce such a risk by providing N+1 PSU redundancy, as described elsewhere herein.
Broadband Bridging Isolator

Figure 11 shows an optional signal replicating device for use in conjunction with the probe apparatus described above, or other monitoring apparatus. This device will be referred to as a Broadband Bridging Isolator (BBI). The Broadband Bridging Isolator can be scaled to different capacities, and can provide additional fault tolerance independently of the probe apparatuses described above. The basic unit comprises a signal replicator 175. For each unit, an (optical) input 176 is converted at 177 to an electrical signal, which is then replicated and converted at 179 etc. to produce a number of identical optical output signals at outputs 178-1 etc.
Also provided within BBI 172 are one or more standby selectors (multiplexers) 180 (one only shown). Each selector 180 receives replicas of the input signals and can select from these a desired one to be replicated at a selector optical output 182. An additional input 186 (shown in broken lines) may be provided which passes to the selector 180 without being replicated, to permit "daisy chain" connection.
In use, BBI 172 takes a single tap input 176 from a bearer being monitored and distributes this to multiple monitoring devices, for example probe apparatuses of the type shown in Figures 3 to 10. For reliability, the standby selector 180 allows any of the input signals to be switched to a standby chassis.
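A minimal model of the BBI follows (illustrative only; the class name and attribute names are invented, and the real unit is built from optoelectronic components that perform no digital processing of the signal content):

    class BBI:
        """Model of the Broadband Bridging Isolator: fan-out plus a standby selector."""
        def __init__(self, num_inputs: int, fan_out: int):
            self.num_inputs = num_inputs
            self.fan_out = fan_out
            self.standby_selection = None        # input chosen by selector 180

        def outputs(self, signals):
            assert len(signals) == self.num_inputs
            replicas = [[s] * self.fan_out for s in signals]   # outputs 178-1 etc.
            standby = (signals[self.standby_selection]
                       if self.standby_selection is not None else None)
            return replicas, standby             # standby replica at output 182

    bbi = BBI(num_inputs=4, fan_out=4)
    bbi.standby_selection = 2    # e.g. switch the third bearer to a standby chassis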
The number of outputs that are duplicated from each input is not critical. A typical implementation may provide four, eight or sixteen replicators 175 in a relatively small rack-mountable chassis, each having (for example) four outputs per input. Although the concepts here are described in terms of optical bearers, the same concepts could be applied to high speed electrical bearers (e.g. E3, DS3 and STM1e).
The reasons for distributing the signal could be multiple applications, duplication for reliability, load sharing, or a combination of all three. It is important that only one tap need be made in the operational bearer. As described in the introductory part of this specification, each optical tap reduces the strength of the optical signal reaching the receiver. In marginal conditions, adding a tap may require boosting the signal on the operational bearer. Network operators do not want to disrupt their operational networks unless they have to. The BBI allows different monitoring apparatuses for different applications to be connected, removed and re-configured without affecting the operational bearer, hence the name "isolator". The BBI can even be used to re-generate the signal by feeding one of the outputs back into the network, so that the BBI becomes part of the operational network.
The number of bearer signals that are switched through the standby selector 180 will depend on the user's requirements - this number corresponds effectively to "N" in the phrase "N+1 redundancy". The number of standby selectors in each BBI is not critical. Adding more means that more bearers can be switched should there be a failure.

The BBI must have high reliability as, when operational in a monitoring environment, it is an essential component in the monitoring of data, providing the only bridging link between the signal bearers and the probe chassis. No digital processing of the bearer signal is performed in the BBI, which can thus be made entirely of the simplest and most reliable optoelectronic components. When technology permits, in terms of cost and reliability, there may be an "all-optical" solution, which avoids conversion to electrical form and back to optical. Presently, however, the state of the art favours the optoelectronic solution detailed here. The BBI can be powered from a redundant power supply to ensure continuous operation. The number of bearers handled on a single card can be kept small, so that in the event of a failure the number of bearers impacted is small. The control of the standby switch can be by an external control processor.

Figure 12 shows a system configuration using BBIs and two separate probe chassis 100-1 and 100-2 implementing separate monitoring applications. The two application chassis may be operated by different departments within the network operator's organization. A third, spare probe chassis 130 is shared in a standby mode. This example uses two BBIs 172 to monitor a duplex bearer pair shown at L1, L2, and other bearers not shown. Splitters S1 and S2 respectively provide tap input signals from L1, L2 to the inputs 176 of the separate BBIs. Each BBI duplicates the signal at its input 176 to two outputs 178, in the manner described above with reference to
Figure 11. For improved fault tolerance, the two four-way BBIs 172 are used to monitor half duplex bearers L1 and L2 separately. In other words, the two halves of the same duplex bearer are handled by different BBIs. Three further duplex bearers (L3-L8, say, not shown in the drawing) are connected to the remaining inputs of the BBIs 172 in a similar fashion.
Using the standby selector 180, any one of the bearers can be switched through to the standby chassis 130 in the event of a failure of a probe unit in one of the main probe chassis 100-1, 100-2. It will be appreciated that, if there is a failure of a complete probe chassis, then only one of the bearers can be switched through to the standby probe. In a larger system with, say, 16 duplex bearers, four main probe chassis and two standby chassis, the bearers distributed by each BBI can be shared around the probe chassis so that each probe chassis processes one bearer from each BBI. Then all four bearers can be switched to the standby probe in the event of a complete chassis failure.
It will be seen that the BBI offers increased resilience for users, particularly when they have multiple departments wanting to look at the same bearers. The size of the BBI used is not critical, and practical considerations will influence the number of inputs and outputs. For example, the BBI could provide inputs for 16 duplex bearers, each being distributed to two or three outputs, with four standby outputs. Where multiple standby circuits are used, each will be capable of being independently switched to any of the inputs.
Figures 13A and 13B illustrate a process of upgrading the processing power of a network monitoring system without interrupting operation, using the facilities of the replicating devices (BBIs 172) and probe chassis described above. Figure 13A shows an example of an "existing" system with one probe chassis 100-1. Four duplex bearer signals are applied to inputs 140 of the chassis. Via the internal cross-point switch 80, each bearer signal is routed to one probe unit 90. With a view to further upgrades and fault tolerance, a broadband bridging isolator (BBI) 172 is included. Each bearer signal is received from a tap in the actual bearer (not shown) at a BBI input 176. The same bearer signal is replicated at BBI outputs 178-1, 178-2 etc. The first set of
outputs 178-1 are connected to the inputs 140 of the probe chassis. The second set of outputs 178-2 are not used in the initial configuration.
Figure 13B shows an expanded system, which includes a second probe chassis 100-2, also loaded with four probe units 90. Consequently there are now provided two probe units per bearer, increasing the processing power available per bearer. It is a simple task to migrate from the original configuration in Figure 13A to the new one shown in Figure 13B:

Step 1 - Install the extra chassis 100-2 with the probe units, establishing the appropriate power supply and LAN communications.

Step 2 - Connect two of the duplicate BBI outputs 178-2 to inputs of the extra chassis 100-2. (All four could be connected for redundancy if desired.)

Step 3 - Configure the new chassis 100-2 and probe units to monitor the two bearer signals in accordance with the desired applications.

Step 4 - Re-configure the original chassis to cease monitoring the corresponding two bearer signals of the first set of outputs 178-1 (188 in Figure 13B). (The processing capacity freed in the original chassis 100-1 can then be assigned to expanded monitoring of the two duplex bearer signals which remain connected to the BBI outputs 178-1.)

Step 5 - Remove the connections 188 no longer being used. (These connections could be left for redundancy if desired.)

In this example the processing power has been doubled from one probe unit per bearer to two probe units per bearer, but it can be seen that such a scheme could easily be extended by connecting further chassis. At no point has the original monitoring capacity been lost, and at no point have the bearers themselves (not shown) been disrupted. Thus, for example, a module of one probe unit can be removed for upgrade while other units continue their own operations. If there is spare capacity, one of the other units can step in to provide the functionality of the unit being replaced. After Step 2, the entire first chassis 100-1 could be removed and replaced while the second chassis 100-2 steps in to perform its functions. Variations on this method are practically infinite, and can also be used for other types of migration, such as when increasing system reliability.
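The essential property of the five steps above, that monitoring coverage is added before any is removed, can be made explicit in a small sketch (the bearer names and chassis identifiers are invented for illustration):

    # Monitoring assignments: bearer -> set of chassis currently monitoring it.
    monitoring = {b: {"chassis-1"} for b in ("L1", "L2", "L3", "L4")}

    # Steps 1-2: bring up chassis-2 and connect the spare BBI outputs.
    # Step 3: chassis-2 starts monitoring two bearers before anything is removed...
    monitoring["L3"].add("chassis-2")
    monitoring["L4"].add("chassis-2")
    # Step 4: ...only then does chassis-1 stop, so coverage is never lost.
    monitoring["L3"].discard("chassis-1")
    monitoring["L4"].discard("chassis-1")
    # Step 5: remove the now-unused cabling (or leave it for redundancy).
    assert all(len(who) >= 1 for who in monitoring.values())   # no gap at any step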
The hardware and methods used in these steps can be arranged to comply with "hot swap" standards as defined earlier. The system of Figures 13A and 13B, and of course any of the systems described above, may further provide automatic sensing of the removal (or failure) of a probe unit (or entire chassis), and automatic re-configuration of switches and re-programming of probe units to resume critical monitoring functions with minimum delay. Preferably, of course, the engineer would instruct the re-programming prior to any planned removal of a probe unit module. A further level of protection, which allows completely uninterrupted operation with minimum staff involvement, is to sense the unlocking of a processing card prior to actual removal, to reconfigure other units to take over the functions of the affected module, and then to signal to the engineer that actual removal is permitted. This will be illustrated further below with reference to Figure 15A.
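A hedged sketch of this interlock, reusing the CrossPointSwitch model from above (the Card type and event handler are invented; the real mechanism involves lever-operated switches, the management module and a front-panel indicator):

    from dataclasses import dataclass

    @dataclass
    class Card:
        channels: list           # monitoring channels served by this probe unit
        led: str = "IN_SERVICE"

    def on_card_unlock(card, switch, spare_ch):
        """Called when a thumb lever 195 is unlocked, before physical removal."""
        for ch in card.channels:
            bearer = switch.routing.pop(ch)      # stop feeding the leaving card
            switch.connect(bearer, spare_ch)     # a spare unit resumes monitoring
        card.led = "OK_TO_REMOVE"                # visual signal: removal permitted

    switch = CrossPointSwitch(num_inputs=8, num_outputs=10)
    switch.connect(3, 2)                         # bearer 3 monitored on channel CH3
    unit = Card(channels=[2])
    on_card_unlock(unit, switch, spare_ch=6)
    print(unit.led)                              # OK_TO_REMOVE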
Multi-channel Probe Apparatus - Functional arrangement

Figure 14 is a functional block schematic diagram of a multi-channel probe apparatus suitable for implementing the systems shown in Figures 4 to 13A and 13B. Like numerals depict like elements. All of the modules shown in Figure 14 and their interconnections are ideally separately replaceable, and housed within a self-contained enclosure of standard rack-mount dimensions. The actual physical configuration of the network probe unit modules in a chassis with special backplane will be described later.

A network interface module 200 provides optical fibre connectors for the incoming bearer signals EXT 1-8 (70-1 to 70-N in Figure 3), and performs optical to electrical conversion. A cross-point switch 80 provides a means of linking these connections to appropriate probe units 90. Each input of a probe unit can be regarded as a separate monitoring channel CH1, CH2 etc. As mentioned previously, each probe unit may in fact accept plural signals for processing simultaneously, and these may or may not be selectable independently, or grouped into larger monitoring channels. Additional optical outputs EXT 9,10 are provided to act as "spare" outputs (corresponding to 85 in Figure 4). In the embodiment, each probe unit 90 controls the cross-point switch 80 to feed its inputs (forming channel CH1, 2, 3 or 4 etc.) with a bearer signal selected from among the incoming signals EXT 1-8. This selection may be pre-programmed in the apparatus, or may be set by remote command over a LAN. Each probe unit (90) is implemented in two parts, which may conveniently be realised as a specialised packet processor 150 and a general purpose single board computer (SBC) 160 module.
There are provided four packet processors 150, each capable of filtering and pre-processing eight half duplex bearer signals at full rate, and four SBCs 160 capable of further processing the results obtained by the packet processors. The packet processors 150 comprise dedicated data processing hardware, while the SBC can be implemented using industry standard processors or other general purpose processing modules. The packet processors 150 are closely coupled by individual peripheral buses to their respective SBCs 160 so as to form self-contained processing systems, each packet processor acting as a peripheral to its "host" SBC. Each packet processor 150 carries out high speed, time-critical cell and packet processing, including data aggregation and filtering. A second level of aggregation is carried out in the SBC 160.
LAN and chassis management modules 230, 235 (which in the implementation described later are combined on a single card) provide central hardware platform management and onward communication of the processing results. For this onward communication, multiple redundant LAN interfaces are provided between every SBC 160 and the LAN management module 230 across the backplane. The LAN management function has four LAN inputs (one from each SBC) and four LAN outputs (for redundancy) to the monitoring LAN network. Multiple connections are provided because different SBC manufacturers use different pin connections on their connectors. For any particular manufacturer there is normally only one connection between the SBC 160 and the LAN management module 230. The dual redundant LAN interfaces are provided for reliability in reporting the filtered and processed data to the next level of aggregation (site processor 40 in Figure 2). This next level can be located remotely. Each outgoing LAN interface is connectable to a completely independent network, LAN A or LAN B, to ensure reporting in case of LAN outages.
In case of dual outages, the apparatus has buffer space for a substantial quantity of reporting data.
The chassis management module 235 oversees monitoring and wiring functions via (for example) an I2C bus using various protocols. Although I2C is normally defined as a shared bus system, each probe unit for reliability has its own I2C connection direct to the management module. The management module can also instruct the cross-point switch to activate the "spare" output (labelled as monitoring channels CH9,10 and optical outputs EXT 9,10) when it detects failure of one of the probe unit modules. This operation can also be carried out under instruction via LAN.
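A sketch of this supervision loop follows (illustrative only; heartbeat_ok stands in for whatever status read the management module actually performs over each dedicated I2C connection, and the channel assignments reuse the CrossPointSwitch model from above):

    import random

    class ProbeUnit:
        def __init__(self, channel: int):
            self.channel = channel
        def heartbeat_ok(self) -> bool:
            # Stand-in for a status read over the unit's dedicated I2C connection.
            return random.random() > 0.01

    def health_sweep(probe_units, switch, spare_output_ch=8):
        """Route the bearer of any failed unit to a spare output (CH9 -> EXT 9)."""
        for unit in probe_units:
            if not unit.heartbeat_ok():
                bearer = switch.routing.get(unit.channel)
                if bearer is not None:
                    switch.connect(bearer, spare_output_ch)

    switch = CrossPointSwitch(num_inputs=8, num_outputs=10)
    units = [ProbeUnit(ch) for ch in range(8)]
    for u in units:
        switch.connect(u.channel, u.channel)
    health_sweep(units, switch)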
The network probe having the architecture described above must be realised in a physical environment capable of fulfilling the functional specifications and other hardware platform considerations, such as the telecommunications environment it is to be deployed in. A novel chassis (or "cardcage") configuration has been developed to meet these requirements within a compact rack-mountable enclosure. The chassis is deployed as a fundamental component of the data collection and processing system.

Multi-channel Probe Apparatus - Physical Implementation

Figures 15A, B and C show how the probe architecture of Figure 14 can be implemented with a novel chassis, in a particularly compact and reliable manner. To support the network probe architecture for this embodiment there is also provided a custom backplane 190. Figure 16 shows which signals are carried by the backplane, and which modules provide the external connections. Similar reference signs are used as in Figure 14, where possible.
Referring to Figure 16 for an overview of the functional architecture, the similarities with the architecture of Figure 14 will be apparent. The network probe apparatus again has eight external optical terminals for signals EXT 1-8 to be monitored. These are received at a network interface module 200. A cross-point switch module 80 receives eight corresponding electrical signals EXT 1'-8' from module 200 through the backplane 190. Switch 80 has ten signal outputs, forming eight monitoring channels CH1-8 plus two external outputs (CH9,10). Four packet processor modules 150-1 to 150-4 receive pairs of these channels CH1,2, CH3,4 etc. respectively.
CH9,10 signals are fed back to the network interface module 200, and reproduced in optical form at external terminals EXT 9,10. All internal connections just mentioned are made through the backplane via transmission lines in the backplane 190. Each packet processor is paired with a respective SBC 160-1 to 160-4 by individual cPCI bus connections in the backplane.
A LAN & Chassis Management module 230 is provided, which is connected to the other modules by I2C buses in the backplane, and by LAN connections. A LAN interface module 270 provides external LAN connections for the onward reporting of 10 processing results. Also provided is a fan assembly 400 for cooling and a power supply (PSU) module 420.
Referring to the views in Figure 15A, chassis 100 carries a backplane 190 and provides support and interconnections for various processing modules.
Conventionally, the processing modules are arranged in slots to the "front" of the backplane, and the space behind the backplane in a telecommunications application is occupied by specialized interconnect. This specialized interconnect may include further removable I/O cards referred to as "transition cards". The power supply and fans are generally located above and/or below the main card space, and the cards (processing modules) are arranged vertically in a vertical airflow. These factors make for a very tall enclosure, and one which is far deeper than the ideals of 300mm or so in the NEBS environment. The present chassis features significant departures from the conventional design, which result in a compact and particularly shallow enclosure.
In the present chassis, the power supply module (PSU) 420 is located in a shallow space behind the backplane 190. The processing modules 150-1, 160-1 etc. at the front of the backplane are, moreover, arranged to lie horizontally, with their long axes parallel to the front panel. The cooling fans 400 are placed to one side of the chassis. Airflow enters the chassis at the front at 410 and flows horizontally over the components to be cooled, before exiting at the rear at 412. This arrangement gives the chassis a high cooling capability while at the same time not extending the size of the chassis beyond the desired dimensions. The outer dimensions and front flange of the housing allow the chassis to be mounted on a standard 19 inch (483mm) equipment rack, with just 5U height. Since the width of the enclosure is fixed by standard rack dimensions, but the height is freely selectable, the horizontal arrangement allows the space occupied by the enclosure to be matched to the number of processor slots required by the application. In the known vertical orientation, a chassis which provides ten slots must be just as high as one which provides twenty slots, and additional height must be allowed for airflow arrangements at top and bottom.
Referring also to Figures 15B and 15C, there are ten card slots labelled F1-F10 on the front side of the backplane 190. There are two shallow slots B1 and B2 to the rear of the backplane 190, back-to-back with F9 and F10 respectively. The front slot dimensions correspond to those of the cPCI standard, which also defines up to five standard electrical connectors referred to generally as J1 to J5, as marked in Figures 15B and 15C. It will be known to the skilled reader that connectors J1 and J2 have 110 pins each, and the functions of these are specified in the cPCI standard (version PICMG 2.0 R2.1, May 1st 1998).
Other connector positions are used differently by different manufacturers.
Eight of the front slots (F1-F8) support the Packet Processor/SBC cards in pairs. The cards are removable using "hot swap" techniques, as previously outlined, using thumb levers 195 to lock/unlock the cards and to signal that a card is to be inserted/removed.
The other two front slots F9 and F10 are used for the cross-point switch 80 and LAN/Management card 230 respectively. Slots F1 to F8 comply with the cPCI standard insofar as connectors J1, J2, J3 and J5 are concerned. Other bus standards such as VME could also be used. The other slots F9 and F10 are unique to this design. All of the cPCI connections are standard, and the connectivity, routing and termination requirements are taken from the cPCI standard specification. Keying requirements are also taken from the cPCI standard. The cPCI bus does not connect all modules, however: it is split into four independent buses CPCI1-4 to form four self-contained host-peripheral processing sub-systems. Failure of any packet processor/SBC combination will not affect the other three probe units.
Each of the cards is hot-swappable and will automatically recover from any reconfiguration. Moreover, by providing switches responsive to operation of the thumb levers 195, prior to physical removal of the card, the system can be warned of impending removal of a module. This warning can be used to trigger automatic re-routing of the affected monitoring channel(s). The engineer replacing the card can be instructed to await a visual signal on the front panel of the card or elsewhere, before completing the removal of the card. This signal can be sent by the LAN/Management module 270, or by a remote controlling site. This scheme allows easy operation for the engineer, without any interruption of the monitoring functions, and without special steps to command the re-routing. Such commands might otherwise require the co-ordination of actions at the local site with staff at a central site, or at best the same engineer might be required to move between the chassis being worked upon and a nearby PC workstation.
As mentioned above, the upper two front slots (F10, F9) hold the LAN & Management module 230 and the cross-point switch 80 respectively. Slot B1 (behind F9) carries a Network Transition card forming network interface module 200, while the LAN interface 270 in slot B2 (behind F10) carries the LAN connectors. All external connections to the apparatus are provided by special transition cards in these rear slots, and routed through the backplane. No cabling needs to reach the rear of the individual probe unit slots directly. No cabling at all is required to the front of the enclosure. This is not only tidy externally of the housing, but leaves a clear volume behind the backplane which can be occupied by the PSU 420, shown cut-away in Figure 15C, yielding a substantial space saving over conventional designs and giving greater ease of maintenance. The rear slot positions B1, B2 are slightly wider, to accommodate the PSU connectors 422.
The J4 position in the backplane is customised to route high-integrity network signals (labelled "RF" in Figure 15B). These are transported on custom connections not within cPCI standards. Figure 15B shows schematically how these connectors transport the bearer signals in monitoring channels CH1 etc. from the cross-point switch 80 in slot F9 to the appropriate packet processors 150-1 etc. in slots F2, F4, F6, F8. The external bearer signals EXT1'-8' in electrical form can be seen passing through the backplane from the cross-point switch 80 (in slot F9) to the network interface module 200 (B1). These high-speed, high-integrity signals are carried via
appropriately designed transmission lines in the printed wiring of the backplane 190.
The variation in transmission delay between channels in the chassis is not significant for the applications envisaged. However, in order to avoid phase errors it is still important to ensure that each half of any differential signal is routed from its source to its destination with essentially equal delay. To ensure this, the delays must be matched to the packet processors for each backplane and cross-point switch combination. It is important to note that these monitoring channels are carried independently on point-to-point connections, rather than through any shared bus such as is provided in the H.110 protocol for computer telephony.
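As a rough worked example (the figures are not from the patent): in FR-4 printed wiring with an effective relative permittivity of about 3.4, signals propagate at roughly c/√3.4 ≈ 1.6 x 10^8 m/s, i.e. about 6 ps of delay per millimetre of track. At 622 Mbit/s one bit period is about 1.6 ns, so a 10 mm length mismatch between the halves of a differential pair (≈ 60 ps) would consume only a few percent of the bit period; matching track lengths to within a few millimetres is nonetheless good practice at these rates.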
The backplane also carries I2C buses (SMB protocol) and the LAN wiring. These are carried to each SBC 160-1 etc. either in the J3 position or the J5 position, depending on the manufacturer of the particular SBC, as described later. The LAN interface module 270 provides the apparatus with two external LAN ports for communications to the next layer of data processing/aggregation, for example a site processor. Connectivity is achieved using two LANs (A and B) at 100BaseT for a card cage.
The LAN I/O can be arranged to provide redundant connection to the external host computer 40. This may be done, for example, by using four internal LAN connections and four external LAN connections routed via different segments of the LAN 60. It is therefore possible to switch any SBC to either of the LAN connections, such that any SBC may be on any one connection or split between connections. This arrangement may be changed dynamically according to circumstances, as in the case of an error occurring, and allows different combinations of load sharing and redundancy. Additionally, this allows the probe processors to communicate with each other without going on the external LAN. However, this level of redundancy in the LAN connection cannot be achieved if the total data from the probe processors exceeds the capacity of any one external LAN connection.
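A minimal sketch of such a dynamic reassignment policy follows, assuming a simplified model in which each SBC sits on LAN A or LAN B and is moved to the surviving LAN on a link failure. The structure and names are illustrative, not taken from the patent.

#include <stdio.h>
#include <stdbool.h>

#define NUM_SBC 4

enum lan { LAN_A, LAN_B };

struct sbc { int id; enum lan assigned; };

static bool lan_up[2] = { true, true };

/* Move any SBC whose current LAN has failed onto the surviving LAN. */
static void reassign_on_failure(struct sbc sbcs[], int n)
{
    for (int i = 0; i < n; i++) {
        if (!lan_up[sbcs[i].assigned]) {
            enum lan other = (sbcs[i].assigned == LAN_A) ? LAN_B : LAN_A;
            if (lan_up[other]) {
                sbcs[i].assigned = other;   /* fail over */
                printf("SBC %d moved to LAN %c\n", sbcs[i].id,
                       other == LAN_A ? 'A' : 'B');
            }
        }
    }
}

int main(void)
{
    struct sbc sbcs[NUM_SBC] = { {1, LAN_A}, {2, LAN_B}, {3, LAN_A}, {4, LAN_B} };
    lan_up[LAN_A] = false;              /* simulate failure of LAN A */
    reassign_on_failure(sbcs, NUM_SBC);
    return 0;
}

In practice the management software would also have to check the aggregate-throughput condition noted above before collapsing all traffic onto one external connection.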
An external timing port (not shown in Figure 16) is additionally provided for accurately time-stamping the data in the packet processor. The signal is derived from any suitable source, for example a GPS receiver giving a 1 pulse per second input. It is also possible to generate this signal using one of the Packet Processor cards, where one Packet Processor becomes a master card and the others can synchronise to it.
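One common way to use such a 1 pulse-per-second reference is to discipline a free-running counter against it. The sketch below is an illustration only: the counter rate and all names are assumptions, not the patent's implementation.

#include <stdint.h>

#define NOMINAL_TICKS_PER_SEC 155520000ULL  /* assumed local counter rate */

struct ts_clock {
    uint64_t last_pps_count;  /* counter value latched at the last PPS edge */
    uint64_t ticks_per_sec;   /* measured rate, refreshed every second */
    uint64_t seconds;         /* whole seconds accumulated since start */
};
/* Initialise as: struct ts_clock c = { 0, NOMINAL_TICKS_PER_SEC, 0 }; */

/* Called on each 1 PPS edge with the hardware-latched counter value;
   the interval between successive edges tracks local oscillator drift. */
void on_pps_edge(struct ts_clock *c, uint64_t latched)
{
    if (c->last_pps_count != 0)
        c->ticks_per_sec = latched - c->last_pps_count;
    c->last_pps_count = latched;
    c->seconds++;
}

/* Converts a counter snapshot (e.g. latched when a cell arrives) into
   nanoseconds since start, for use as that cell's timestamp. */
uint64_t stamp_to_ns(const struct ts_clock *c, uint64_t counter)
{
    uint64_t frac = counter - c->last_pps_count;
    return c->seconds * 1000000000ULL
         + (frac * 1000000000ULL) / c->ticks_per_sec;
}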
The individual modules will now be described in detail, with reference to Figures 17 to 19. This will further clarify the inter-relationships between them, and the role of the backplane 190 and chassis 100.

Cross-Point Switch Module 80

Figure 17 is a block diagram of the cross-point switch 80, and shows also the network line interfaces 300 (RX) and 310 (TX) provided on the network interface module 200.
There are eight optical line receiver interfaces 300 provided within module 200. There are thus eight bearer signals, which are conditioned on the transition card (module 200) and transmitted in electrical form EXT 1'-8' directly through the backplane 190 to the cross-point switch card 80. Ten individually configurable multiplexers (selectors) M are provided, each freely selecting one of the eight inputs. Each monitoring channel (CH1-8), and hence each packet processor 150, can receive any of the eight incoming network signals (EXT1'-8').
The outputs to the packet processors (CH1-CH4) are via the backplane 190 (position J4, Figure 15B as described above) and may follow, amongst others, DS3/OC3/OC12/OC48 electrical standards or utilise a suitable proprietary interface.
Each packet processor module 150 controls its own pair of multiplexers M directly.
The external optical outputs EXT 9,10 are provided via transmit interface 310 of the module 200 for connecting to a spare chassis (as in Figure 8). These outputs can be configured to be any of the eight inputs, using a further pair of multiplexers M which are controlled by the LAN/Management Module 230. In this way, the spare processor or chassis 130 mentioned above can be activated in case of processor failure. In an alternative implementation, the selection of these external output signals CH9 and CH10 can be performed entirely on the network interface module 200, without passing through the backplane or the cross-point switch module 80.
Although functionally each multiplexer M of the cross-point switch is described and shown as being controlled by a respective packet processor 150, in the present embodiment this control is conducted via the LAN & management module 230. Commands or requests for a particular connection can be sent to the LAN & management module from the packet processor (or associated SBC 160) via the LAN connections, or I2C buses, provided in connectors J3 or J5.
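Functionally, the cross-point switch is simply ten independent selectors over the eight conditioned inputs. The following sketch models that behaviour in C; the register array and software interface are invented for illustration, since in the real apparatus selection is performed in hardware under control of the LAN & management module.

#include <assert.h>

#define NUM_INPUTS   8   /* EXT1'-8' */
#define NUM_OUTPUTS 10   /* CH1-8 plus the two spare-chassis outputs */

static int select_reg[NUM_OUTPUTS];  /* written via the management module */

/* Set multiplexer 'ch' (0..9) to pass input 'in' (0..7). */
void xpoint_select(int ch, int in)
{
    assert(ch >= 0 && ch < NUM_OUTPUTS && in >= 0 && in < NUM_INPUTS);
    select_reg[ch] = in;
}

/* One "tick" of the model: copy each selected input sample to its output. */
void xpoint_route(const int inputs[NUM_INPUTS], int outputs[NUM_OUTPUTS])
{
    for (int ch = 0; ch < NUM_OUTPUTS; ch++)
        outputs[ch] = inputs[select_reg[ch]];
}

int main(void)
{
    int in[NUM_INPUTS] = { 10, 11, 12, 13, 14, 15, 16, 17 };
    int out[NUM_OUTPUTS];
    xpoint_select(0, 4);             /* CH1 monitors EXT5' */
    xpoint_route(in, out);
    return out[0] == 14 ? 0 : 1;     /* CH1 now carries the EXT5' sample */
}

Here outputs 0-7 model CH1-8 and outputs 8-9 model the spare-chassis selections CH9 and CH10.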
Packet Processor Module 150

Figure 18 is a block diagram of one of the Packet Processor modules 150 of the apparatus. The main purpose of the packet processor (PP) 150 is to capture data from the network interface. This data is then processed, analysed and filtered before being sent to an SBC via a local cPCI bus. Packet processor 150 complies with the Compact PCI Hot Swap specification PICMG 2.1 R1.0, mentioned above. The Packet Processor 150 here described is designed to work at up to 622 Mbit/s using a SONET/SDH frame structure carrying ATM cells using AAL5 Segmentation And Reassembly (SAR). Other embodiments can be employed using the same architecture, for example to operate at OC48 (2.4 Gbit/s).
The following description makes reference to a single "half" of the two-channel packet processor module 150, and to a single Packet Processor/SBC pair only (single channel). The chassis as described supports four such Packet Processor/SBC pairs, and each packet processor comprises two processing means to handle multiple bearer signals (multiple monitoring channels).
It is possible for the Packet Processor 150 to filter the incoming data. This is essential due to the very high speed of the broadband network interfaces being monitored, such as would be the case for OC-3 and above. The incoming signals are processed by the Packet Processor, this generally taking the form of time-stamping the data and performing filtering based on appropriate fields in the data. Different fields can be chosen accordingly, for example ATM cells by VPI/VCI (VC) number, IP by IP address, or filtering can be based on other, user-defined fields. It is necessary to provide the appropriate means to recover the clock and data from the incoming signal, as the means needed varies depending on the link media and coding schemes used.
In a typical example using ATM, ATM cells are processed by VPI/VCI (VC) number. The Packet Processor is provided with means 320 to recover the clock and data from the incoming signal bit stream. The data is then 'deframed' at a transmission convergence sub-layer 330 to extract the ATM cells. The ATM cells are then time-stamped 340 and then buffered in a First In First Out (FIFO) buffer 350 to smooth the rate of burst-type data. Cells from this FIFO buffer are then passed sequentially to an ATM cell processor 360. The packet processor can store ATM cells to allow it to re-assemble cells into a message - a Protocol Data Unit (PDU). Only when the PDU has been assembled will it be sent to the SBC. Before assembly, the VC of a cell is checked to ascertain what actions should be taken, for example, to discard the cell, assemble a PDU, or pass on the raw cell.
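To make this data path concrete, the sketch below models the timestamp-then-FIFO stage in C: each deframed cell is stamped and queued in a circular buffer, from which the cell processor drains cells sequentially. The sizes, layouts and names are assumptions for illustration only.

#include <stdint.h>
#include <string.h>
#include <stdbool.h>

#define CELL_BYTES 53
#define FIFO_DEPTH 256           /* depth chosen arbitrarily for the sketch */

struct cell {
    uint8_t  payload[CELL_BYTES];
    uint64_t timestamp;          /* applied at stage 340 */
};

struct fifo {
    struct cell slots[FIFO_DEPTH];
    unsigned head, tail;         /* head == tail means empty */
};

bool fifo_push(struct fifo *f, const struct cell *c)
{
    unsigned next = (f->head + 1) % FIFO_DEPTH;
    if (next == f->tail)
        return false;            /* full: burst exceeded FIFO capacity */
    f->slots[f->head] = *c;
    f->head = next;
    return true;
}

bool fifo_pop(struct fifo *f, struct cell *out)
{
    if (f->tail == f->head)
        return false;            /* empty: cell processor waits */
    *out = f->slots[f->tail];
    f->tail = (f->tail + 1) % FIFO_DEPTH;
    return true;
}

/* Stages 340/350: stamp a deframed cell and queue it for the cell
   processor 360, which drains cells sequentially via fifo_pop(). */
bool stamp_and_enqueue(struct fifo *f, const uint8_t bytes[CELL_BYTES],
                       uint64_t now)
{
    struct cell c;
    memcpy(c.payload, bytes, CELL_BYTES);
    c.timestamp = now;
    return fifo_push(f, &c);
}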
Data is transferred into the SBC memory using cPCI DMA transfers to a data buffer 380. This ensures the very high data throughput that may be required if large amounts of data are being stored. The main limitation in the amount of data that is processed will be due to the applications software that processes it. It is therefore the responsibility of the Packet Processor 150 to carry out as much pre-processing of the data as possible, so that only that data which is relevant is passed up into the application domain.
The first function of the Packet Processor 150 is to locate the instructions for processing the VC (virtual channel) to which the cell belongs. To do this it must convert the very large VPI/VCI of the cell into a manageable pointer to its associated processing instructions (the VC # key). This is done using a hashing algorithm in hash generator 390, which in turn uses a VC hash table. Processor 150, having located the instructions, can then process the cell.
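The lookup step can be illustrated with a conventional chained hash table; the hash function, table size and field names below are illustrative choices, not those of the patent's hash generator 390.

#include <stdint.h>
#include <stddef.h>

#define VC_TABLE_SIZE 4096    /* power of two, chosen for the sketch */

struct vc_instr {
    uint32_t vpi_vci;         /* full key, re-checked on lookup */
    enum { VC_DISCARD, VC_ASSEMBLE_PDU, VC_PASS_RAW } action;
    uint32_t cell_count;      /* per-VC status, e.g. running cell count */
    struct vc_instr *next;    /* chain for hash collisions */
};

static struct vc_instr *vc_table[VC_TABLE_SIZE];

static unsigned vc_hash(uint32_t vpi_vci)
{
    vpi_vci *= 2654435761u;                 /* multiplicative mixing step */
    return (vpi_vci >> 16) & (VC_TABLE_SIZE - 1);
}

/* Returns the processing instructions (the VC # key target) for a
   cell's VC, or NULL if the VC is not being monitored. */
struct vc_instr *vc_lookup(uint32_t vpi_vci)
{
    for (struct vc_instr *p = vc_table[vc_hash(vpi_vci)]; p; p = p->next)
        if (p->vpi_vci == vpi_vci)
            return p;
    return NULL;
}

Note that the full VPI/VCI is retained in each entry, so a hash collision cannot cause the wrong VC's instructions to be applied.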
Processing the cell involves updating status information for the particular VC (e.g. cell count) and forwarding the cell and any associated information (e.g. "Protocol Data Unit (PDU) received") to the SBC 160 if required. By reading the status of a
particular VC, the processor can vary its action depending on the current status of that VC (e.g. providing summary information after first cell received). Cell processor 360
also requires certain configurable information which is applicable to all of its processing functions regardless of VC (e.g. buffer sizes), and this 'global' configuration is accessible via a global configuration store.
A time-stamping function 340 can be synchronised to an external GPS time signal or can be adjusted by the SBC 160. The SBC can also configure and monitor the 'deframer' (e.g. set up frame formats and monitor alarms) as well as select the optical inputs (EXT 1-8) to be monitored. Packet Processor 150 provides all of the necessary cPCI interface functions.
Each packet processor board 150-1 etc. is removable without disconnecting power from the chassis. This board will not impact the performance of other boards in the chassis, other than the associated SBC. The microprocessor notifies the presence or absence of the packet processor, and processes any signal loss conditions generated by the Packet Processor.
Single Board Computer (SBC) Modules 160

The SBC module 160 is not shown in detail herein, being a general-purpose processing module, examples including the Motorola CPV5350, FORCE CPCI-730 and SMT NAPA. The SBC 160 is a flexible, programmable device. In this specific embodiment two such devices may exist on one cPCI card, in the form of "piggyback" modules (PMCs). The 100BaseT interfaces, disk memory etc. may also be in the form of PMCs. As already described, communications are via the cPCI bus (J1/J2) on the input side and via the LAN port on the output side; all other connections are via the backplane at the rear, except an RS-232 port provided at the front for diagnostic purposes.
LAN & Chassis Management Module 230

Figure 19 is a block diagram of the combined LAN and chassis management card for the network probe as has been described. Module 230 performs a number of key management functions, although the probe units 150/160 can be commanded independently from a remote location, via the LAN interface. The card firstly provides a means for routing the probe units' SMB and LAN connections, including dual independent LAN switches 500A and 500B to route the LAN connections with redundancy and sufficient bandwidth to the outside world.
On the chassis management side, a Field Programmable Gate Array (FPGA) 510 within this module performs the following functions:
• (520) I2C and SMB communications, with reference to chassis configuration storage registers 530;
• (540) 'magic packet' handling, for resetting the modules remotely in the event that the higher-level network protocols "hang up";
• (550) environmental control and monitoring of fan speed and PSU & CPU temperatures, to ensure optimal operating conditions for the chassis, and preferably also to minimise unnecessary power consumption and fan noise.
A hardware watchdog feature 560 is also included to monitor the activity of all modules and take appropriate action in the event that any of them becomes inactive or unresponsive. This includes the ability to reset modules.
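The patent does not define the 'magic packet' format. Assuming the widely used layout of six 0xFF bytes followed by sixteen repetitions of the target's 6-byte address — an assumption for this sketch only — a software detector might look like the following.

#include <stdint.h>
#include <string.h>
#include <stdbool.h>

/* Returns true if 'frame' contains a magic packet addressed to 'mac':
   a synchronisation stream of six 0xFF bytes followed by sixteen
   repetitions of the 6-byte address (assumed format). */
bool is_magic_packet(const uint8_t *frame, size_t len, const uint8_t mac[6])
{
    static const uint8_t sync[6] = { 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF };
    for (size_t i = 0; i + 6 + 16 * 6 <= len; i++) {
        if (memcmp(frame + i, sync, 6) != 0)
            continue;
        bool match = true;
        for (int r = 0; r < 16 && match; r++)
            match = memcmp(frame + i + 6 + r * 6, mac, 6) == 0;
        if (match)
            return true;   /* caller then forces a hardware reset of the module */
    }
    return false;
}

int main(void)
{
    const uint8_t mac[6] = { 0x00, 0x30, 0xD3, 0x01, 0x02, 0x03 };
    uint8_t frame[200] = { 0 };
    memset(frame + 42, 0xFF, 6);                 /* synchronisation stream */
    for (int r = 0; r < 16; r++)
        memcpy(frame + 48 + r * 6, mac, 6);      /* 16 copies of the address */
    return is_magic_packet(frame, sizeof frame, mac) ? 0 : 1;
}

Because the pattern is matched in the FPGA below the network protocol stack, a module can be reset even when its software has hung.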
Finally, the management module implements, at 580, a "Multivendor Interconnect", whereby differences in the usage of cPCI connector pins (or those of whatever standard is adopted) between a selection of processor vendors can be accommodated.
As mentioned previously, the chassis carries at some locations cPCI processor modules from a choice of selected vendors, but these are coupled via the cPCI bus to special peripheral cards. While such cards are known in principle, and the processor-peripheral bus is fully specified, the apparatus described does not have a conventional interconnect arrangement for the broadband signals, multiple redundant LAN connections and so forth. Even for the same functions, such as the LAN signals and the I2C/SMB protocol for hardware monitoring, different SBC vendors place the relevant signals on different pins of the cPCI connector set; in particular they may be on certain pins in J3 with some vendors, and in various locations in J5 with others. Conventionally, this means the system designer has to restrict the user's choice of SBC modules to those of one vendor, or a group of vendors who have adopted the same pin assignment for LAN and SMB functions, besides the standard assignments for J1 and J2 which are specified for all cPCI products.
To overcome this obstacle a modular Multivendor Interconnect (MVI) solution may be applied. The MVI module 580 is effectively four product-specific configuration cards that individually route the LAN and SMB signals received from each SBC 160-1 etc. to the correct locations on the LAN/Management card. One MVI card exists for each processor. These are carried piggyback on the LAN/Management module 230, and each is accessible from the front panel of the enclosure. The backplane in locations J3 and J5 includes sufficient connectors, pins and interconnections between the modules to satisfy a number of different possible SBC types. Needless to say, when replacing a processor card with one of a different type, the corresponding MVI configuration card needs exchanging also.
An alternative scheme, to switch the card connection automatically based on vendor ID codes read via the backplane, can also be envisaged. In a particular embodiment, for example, the "Geographic Address" pins defined in the cPCI connector specifications may be available for signalling (under control of a start-up program) which type of SBC 160 is in a given slot. The routing of SMB, LAN and other signals can then be switched electronically under control of programs in the LAN & management card 230.
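A sketch of how such start-up selection might be expressed in software follows; the vendor codes, profile table and routing enumeration are entirely invented for illustration.

#include <stdint.h>
#include <stddef.h>

enum signal_route { ROUTE_J3, ROUTE_J5 };

struct vendor_profile {
    uint8_t           geo_code;   /* value read from the geographic-address pins */
    const char       *name;
    enum signal_route lan_route;  /* connector where this vendor places LAN signals */
    enum signal_route smb_route;  /* connector where this vendor places SMB signals */
};

/* Invented example table: real codes and routings would be established
   for the vendors actually supported. */
static const struct vendor_profile profiles[] = {
    { 0x1, "vendor-A", ROUTE_J3, ROUTE_J3 },
    { 0x2, "vendor-B", ROUTE_J5, ROUTE_J3 },
    { 0x3, "vendor-C", ROUTE_J5, ROUTE_J5 },
};

/* Run at start-up for each slot: identify the SBC and return the
   routing profile to apply, or NULL to leave the slot unconfigured
   rather than guess a routing. */
const struct vendor_profile *identify_sbc(uint8_t geo_code)
{
    for (size_t i = 0; i < sizeof profiles / sizeof profiles[0]; i++)
        if (profiles[i].geo_code == geo_code)
            return &profiles[i];
    return NULL;
}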
Conclusion
Those skilled in the art will recognise that the invention in any of its aspects is not limited to the specific embodiments disclosed herein. In particular, unless specified in the claims, the invention is in no way limited to any particular type of processor, type of network to be monitored, protocol, choice of physical interconnect, choice of peripheral bus (cPCI v. VME, parallel v. serial etc.), number of bearers per chassis, number of bearers per monitoring channel, or number of monitoring channels per probe unit.
The fact that independent processor subsystems are arranged in the chassis allows multiple data paths from the telecommunications network to the LAN network, thereby providing inherent redundancy. On the other hand, for other applications such as computer telephony, reliability and availability may not be so critical as in the applications addressed by the present embodiment. For such applications, a similar chassis arrangement but with an H.110 bus in the backplane may be very useful. Similarly, the cPCI bus, I2C bus and/or LAN interconnect may be shared among all the modules.
Each aspect of the invention mentioned above is to be considered as independent, such that the probe functional architecture can be used irrespective of the chassis configuration, and vice versa. On the other hand, the reader will recognise that the specific combination of these features offers a highly desirable instrumentation system, which provides the desired functionality, reliability and availability levels in a compact and scalable architecture.
In the specific embodiments described herein, each probe unit comprising first and second processor modules (the packet processor and SBC respectively) is configured to monitor simplex and duplex bearers. The invention, in any of its aspects, is not limited to such embodiments. In particular, each probe unit may be adapted to process one or more individual bearer signals. In the case of lower-speed protocol signals, the bearer signals can be multiplexed together (for example within the cross-point switch module 80 or network interface module 200) to take full advantage of the internal bandwidth of the architecture.
Attention is directed to co-pending UK patent application no. 99 23 142.5 (publication no. 2 354 905) relating to aspects of the above description as defined in the claims of
that application.

Claims (4)

1. A multi-channel replicating device for broadband optical signals, the device comprising one or more modules having:
• a first plurality of input connectors for receiving broadband optical signals;
• a larger plurality of output connectors for broadband optical signals; and
• means for replicating each received broadband optical signal to a plurality of said output connectors without digital processing.
2. A device as claimed in claim 1, wherein said replicating means includes components for optical to electrical conversion and back to optical again.
3. A device as claimed in claim 1 or claim 2, further comprising one or more additional optical outputs and a selector device for selecting which of the input signals is replicated at said additional output.
4. A telecommunications network monitoring system comprising:
• an optical splitting device, providing a tap signal for monitoring signals carried by a bearer in a broadband telecommunications network;
• a plurality of network monitoring units, each for receiving and analysing signals from a broadband optical bearer; and
• a signal replicating device according to any one of the preceding claims, the signal replicating device being connected so as to receive said optical tap signal, and to provide replicas of said optical tap signal to inputs of two or more of said network monitoring units.
GB0400378A 1999-10-01 1999-10-01 Multi-channel replicating device for broadband optical signals, and systems including such devices Withdrawn GB2394849A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB9923142A GB2354905B (en) 1999-10-01 1999-10-01 Multi-channel network monitoring apparatus,signal replicating device,and systems including such apparatus and devices

Publications (2)

Publication Number Publication Date
GB0400378D0 GB0400378D0 (en) 2004-02-11
GB2394849A true GB2394849A (en) 2004-05-05

Family

ID=10861888

Family Applications (2)

Application Number Title Priority Date Filing Date
GB0400378A Withdrawn GB2394849A (en) 1999-10-01 1999-10-01 Multi-channel replicating device for broadband optical signals, and systems including such devices
GB9923142A Expired - Fee Related GB2354905B (en) 1999-10-01 1999-10-01 Multi-channel network monitoring apparatus,signal replicating device,and systems including such apparatus and devices

Family Applications After (1)

Application Number Title Priority Date Filing Date
GB9923142A Expired - Fee Related GB2354905B (en) 1999-10-01 1999-10-01 Multi-channel network monitoring apparatus,signal replicating device,and systems including such apparatus and devices

Country Status (1)

Country Link
GB (2) GB2394849A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10637121B2 (en) 2017-03-02 2020-04-28 Technetix B.V. Broadband signal tap

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7493438B2 (en) 2001-10-03 2009-02-17 Intel Corporation Apparatus and method for enumeration of processors during hot-plug of a compute node
EP2323300A1 (en) * 2009-11-12 2011-05-18 Intune Networks Limited Switch system and method for the monitoring of virtual optical paths in an Optical Burst Switched (OBS) Communication network

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0080829A2 (en) * 1981-11-26 1983-06-08 Kabushiki Kaisha Toshiba Optical communication system
EP0469382A2 (en) * 1990-08-03 1992-02-05 Siemens Aktiengesellschaft Transmission apparatus for transmitting messages and additional signals
NL1011398C1 (en) * 1999-02-26 1999-04-13 Koninkl Kpn Nv Optical splitter for telecommunications network

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5457729A (en) * 1993-03-15 1995-10-10 Symmetricom, Inc. Communication network signalling system link monitor and test unit
JP2964207B2 (en) * 1993-09-20 1999-10-18 富士通株式会社 Working standby switching control method
GB9405771D0 (en) * 1994-03-23 1994-05-11 Plessey Telecomm Telecommunications system protection scheme

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0080829A2 (en) * 1981-11-26 1983-06-08 Kabushiki Kaisha Toshiba Optical communication system
EP0469382A2 (en) * 1990-08-03 1992-02-05 Siemens Aktiengesellschaft Transmission apparatus for transmitting messages and additional signals
NL1011398C1 (en) * 1999-02-26 1999-04-13 Koninkl Kpn Nv Optical splitter for telecommunications network

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10637121B2 (en) 2017-03-02 2020-04-28 Technetix B.V. Broadband signal tap

Also Published As

Publication number Publication date
GB9923142D0 (en) 1999-12-01
GB2354905A (en) 2001-04-04
GB0400378D0 (en) 2004-02-11
GB2354905B (en) 2004-02-11

Similar Documents

Publication Publication Date Title
US6925052B1 (en) Multi-channel network monitoring apparatus, signal replicating device, and systems including such apparatus and devices, and enclosure for multi-processor equipment
US7406038B1 (en) System and method for expansion of computer network switching system without disruption thereof
US7453870B2 (en) Backplane for switch fabric
US7460482B2 (en) Master-slave communications system and method for a network element
US20030200330A1 (en) System and method for load-sharing computer network switch
US6587470B1 (en) Flexible cross-connect with data plane
US7289436B2 (en) System and method for providing management of fabric links for a network element
US6678268B1 (en) Multi-interface point-to-point switching system (MIPPSS) with rapid fault recovery capability
CA2387550A1 (en) Fibre channel architecture
WO2001086454A2 (en) System having a meshed backplane and process for transferring data therethrough
US7428208B2 (en) Multi-service telecommunication switch
US7209477B2 (en) Multi-subshelf control system and method for a network element
US7535895B2 (en) Selectively switching data between link interfaces and processing engines in a network switch
Banwell et al. Physical design issues for very large ATM switching systems
GB2390927A (en) Multi-processor interconnection and management
GB2394849A (en) Multi-channel replicating device for broadband optical signals, and systems including such devices
CN1988530A (en) Remote control and control redundancy for distributed communication equipment
US7583689B2 (en) Distributed communication equipment architectures and techniques
Cisco Product Overview
Cisco Cisco AccessPath-TS3 Model 531 Product Overview
Cisco Product Overview
Cisco Service Interface (Line) Cards
Cisco Product Overview
Cisco Hardware Description
EP1636926B1 (en) Network switch for link interfaces and processing engines

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)