WO2002073903A1 - Bandwidth reservation reuse in dynamically allocated ring protection and restoration technique - Google Patents

Bandwidth reservation reuse in dynamically allocated ring protection and restoration technique Download PDF

Info

Publication number
WO2002073903A1
WO2002073903A1 PCT/US2002/007388 US0207388W WO02073903A1 WO 2002073903 A1 WO2002073903 A1 WO 2002073903A1 US 0207388 W US0207388 W US 0207388W WO 02073903 A1 WO02073903 A1 WO 02073903A1
Authority
WO
WIPO (PCT)
Prior art keywords
link
node
nodes
traffic
ring
Prior art date
Application number
PCT/US2002/007388
Other languages
French (fr)
Inventor
Derek T. Mayweather
Jason C. Fan
Steven Gemelos
Robert F. Kalman
Original Assignee
Luminous Networks, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Luminous Networks, Inc. filed Critical Luminous Networks, Inc.
Priority to JP2002571657A priority Critical patent/JP2004533142A/en
Priority to EP02721350A priority patent/EP1368937A4/en
Priority to CA002440245A priority patent/CA2440245A1/en
Publication of WO2002073903A1 publication Critical patent/WO2002073903A1/en

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/28Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L12/42Loop networks
    • H04L12/437Ring fault isolation or reconfiguration
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04QSELECTING
    • H04Q11/00Selecting arrangements for multiplex systems
    • H04Q11/0001Selecting arrangements for multiplex systems using optical switching
    • H04Q11/0062Network aspects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04QSELECTING
    • H04Q11/00Selecting arrangements for multiplex systems
    • H04Q11/0001Selecting arrangements for multiplex systems using optical switching
    • H04Q11/0062Network aspects
    • H04Q2011/0073Provisions for forwarding or routing, e.g. lookup tables
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04QSELECTING
    • H04Q11/00Selecting arrangements for multiplex systems
    • H04Q11/0001Selecting arrangements for multiplex systems using optical switching
    • H04Q11/0062Network aspects
    • H04Q2011/0079Operation or maintenance aspects
    • H04Q2011/0081Fault tolerance; Redundancy; Recovery; Reconfigurability
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04QSELECTING
    • H04Q11/00Selecting arrangements for multiplex systems
    • H04Q11/0001Selecting arrangements for multiplex systems using optical switching
    • H04Q11/0062Network aspects
    • H04Q2011/0086Network resource allocation, dimensioning or optimisation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04QSELECTING
    • H04Q11/00Selecting arrangements for multiplex systems
    • H04Q11/0001Selecting arrangements for multiplex systems using optical switching
    • H04Q11/0062Network aspects
    • H04Q2011/009Topology aspects
    • H04Q2011/0092Ring

Definitions

  • This invention relates to communication networks and, in particular, to networks employing rings.
  • a type of service disruption that is of great concern is span outage, which may be due either to facility or equipment failures.
  • Carriers of voice traffic have traditionally designed their networks to be robust in the case of facility outages, e.g. fiber breaks.
  • voice or other protected services must not be disrupted for more than 60 milliseconds by a single facility outage. This includes up to 10 milliseconds for detection of a facility outage, and up to 50 milliseconds for rerouting of traffic.
  • a significant technology for implementing survivable networks meeting the above requirements has been SONET rings.
  • a fundamental characteristic of such rings is that there are one (or more) independent physical links connecting adjacent nodes in the ring.
  • Each link may be unidirectional, e.g. allow traffic to pass in a single direction, or may be bi-directional.
  • a node is defined as a point where traffic can enter or exit the ring.
  • a single span connects two adjacent nodes, where a span consists of all links directly connecting the nodes.
  • a span is typically implemented as either a two fiber or four fiber connection between the two nodes.
  • each link In the two fiber case, each link is bi-directional, with half the traffic in each fiber going in the "clockwise” direction (or direction 0), and the other half going in the "counterclockwise” direction (or direction 1 opposite to direction 0).
  • each link In the four fiber case, each link is unidirectional, with two fibers carrying traffic in direction 0 and two fibers carrying traffic in direction 1. This enables a communication path between any pair of nodes to be maintained on a single direction around the ring when the physical span between any single pair of nodes is lost. In the remainder of this document, references will be made only to direction 0 and direction 1 for generality.
  • UPSR UPSR
  • BLSR bi-directional line-switched rings
  • UPSR UPSR
  • BLSR bi-directional line-switched rings
  • UPSR UPSR
  • BLSR bi-directional line-switched rings
  • Fig. 1 This figure shows an N-node ring made up of nodes (networking devices) numbered from node 0 to node N-l and interconnected by spans.
  • nodes are numbered in ascending order in direction 0 starting from 0 for notational convenience.
  • a link passing traffic from node i to node j is denoted by dij.
  • a span is denoted by sij, which is equivalent to sji.
  • the term span will be used for general discussion.
  • link will be used only when necessary for precision.
  • traffic from node 0 to node 5 is shown taking physical routes (bold arrows) in both direction 0 and direction 1.
  • nodes will be numbered sequentially in an increasing fashion in direction 0 for convenience. Node 0 will be used for examples.
  • a special receiver implements "tail-end switching," in which the receiver selects the data from one of the directions around the ring.
  • the receiver can make this choice based on various performance monitoring (PM) mechanisms supported by SONET.
  • PM performance monitoring
  • This protection mechanism has the advantage that it is very simple, because no ring- level messaging is required to communicate a span break to the nodes on the ring. Rather, the PM facilities built into SONET ensure that a "bad" span does not impact physical connectivity between nodes, since no data whatsoever is lost due to a single span failure.
  • UPSR requires 100% extra capacity (for a single "hubbed” pattern) to 300% extra capacity (for a uniform “meshed” pattern) to as much as (N-l)*100% extra capacity (for an N node ring with a nearest neighbor pattern, such as that shown in Fig. 1) to be set aside for protection.
  • N-l nearest neighbor pattern
  • FIG. 2 A data from any given node to another typically travels in one direction (solid arrows) around the ring. Data communication is shown between nodes 0 and 5.
  • Half the capacity of each ring is reserved to protect against span failures on the other ring.
  • the dashed arrows illustrate a ring that is typically not used for traffic between nodes 0 and 5 except in the case of a span failure or in the case of unusual traffic congestion.
  • Fig. 2B the span between nodes 6 and 7 has experienced a fault. Protection switching is now provided by reversing the direction of the signal from node 0 when it encounters the failed span and using excess ring capacity to route the signal to node 5. This switching, which takes place at the same nodes that detect the fault, is very rapid and is designed to meet the 50 millisecond requirement.
  • BLSR protection requires 100% extra capacity over that which would be required for an unprotected ring, since the equivalent of the bandwidth of one full ring is not used except in the event of a span failure. Unlike UPSR, BLSR requires ring- level signaling between nodes to communicate information on span cuts and proper coordination of nodes to initiate ring protection.
  • a network protection and restoration technique and bandwidth reservation method is described that efficiently utilizes the total bandwidth in the network to overcome the drawbacks of the previously described networks, that is not linked to a specific transport protocol such as SONET, and that is designed to meet the Telcordia 50 millisecond switching requirement.
  • the disclosed network includes two rings, wherein a first ring transmits data in a "clockwise" direction (or direction 0), and the other ring transmits data in a "counterclockwise” direction (or direction 1 opposite to direction 0). Additional rings may also be used. The traffic is removed from the ring by the destination node.
  • a node monitors the status of each link for which it is at the receiving end, e.g. each of its ingress links, to detect a fault.
  • the detection of such a fault causes a highest-priority link status broadcast message to be sent to all nodes.
  • Processing at each node of the information contained in the link status broadcast message results in reconfiguration of a routing table within each node so as to identify the optimum routing of source traffic to the destination node after the fault.
  • all nodes know the status of the network and all independently identify the optimal routing path to each destination node when there is a fault in any of the links.
  • the processing is designed to be extremely efficient to maximize switching speed.
  • an interim step can be used.
  • a node that detects a link fault notifies its neighbor on the other side of that span that a link has failed. Any node that detects an ingress link failure or that receives such a notification wraps inbound traffic headed for that span around onto the other ring. Traffic will be wrapped around only temporarily until the previously described rerouting of traffic is completed.
  • unprotected traffic Since the remaining links will now see more data traffic due to the failed link, traffic designated as "unprotected” traffic is given lower priority and may be dropped or delayed in favor of the "protected” traffic.
  • Specific techniques are described for guaranteeing bandwidth availability for working and single failure traffic configurations, identifying a failed link, communicating the failed link to the other nodes, differentiating between protected and unprotected classes of traffic, and updating the routing tables.
  • the embodiments described transmit packets of data, the invention may be applied to any network transmitting frames, cells, or using any other protocol. Frames and cells are similar to packets in that all contain data and control information pertaining at least to the source and destination for the data.
  • a single frame may contain multiple packets, depending on the protocol.
  • a cell may be fixed-size, depending on the protocol.
  • Fig. 1 illustrates inter-node physical routes taken by traffic from node 0 to node 5 using SONET UPSR, where a failure of spans between any single pair of nodes brings down only one of the two distinct physical routes for the traffic.
  • Fig. 2A illustrates an inter-node physical route taken by traffic from node 0 to node 5 using SONET two-fiber BLSR.
  • Half of the capacity of each ring is reserved for protection, and half is used to carry regular traffic.
  • the ring represented with dashed lines is the ring in which protection capacity is used to reroute traffic due to the span failure shown.
  • Fig. 2B illustrates the bi-directional path taken by traffic from node 0 to node 5 using the SONET BLSR structure of Fig. 2 A when there is a failure in the link between nodes 6 and 7. Traffic is turned around when it encounters a failed link.
  • Fig. 3 illustrates a network in accordance with one embodiment of the present invention and, in particular, illustrates an inter-node physical route taken by traffic from node 0 to node 5.
  • Fig. 4 illustrates the network of Fig. 3 after a failure has occurred on the span between nodes 6 and 7.
  • a failure occurs impacting a link or span on the initial path (e.g., between nodes 0 and 5)
  • the traffic is rerouted at the ingress node to travel in the other direction around the ring to reach the destination node.
  • Fig. 5 illustrates the optional interim state of the network (based on wrapping traffic from one ring to the other) between that shown in Fig. 3 and that shown in Fig.
  • Fig. 6 illustrates pertinent hardware used in a single node.
  • Fig. 7 provides additional detail of the switching card and ring interface card in Fig. 6.
  • Fig. 8 is a flowchart illustrating steps used to identify a change in the status of the network and to re-route traffic through the network.
  • Fig. 9 illustrates additional detail of the shelf controller card shown in Fig. 6.
  • a fast topology communication mechanism to rapidly communicate information about a span break to all nodes in the ring.
  • a fast re-routing/routing table update mechanism to re-route paths impacted by a span break the other direction around the ring.
  • a given packet/flow between two nodes is transmitted in only a single direction around the network (even when there is a span fault) and is removed from the ring by the destination node, as is shown in Fig. 3 where node 0 transmits information to node 5 in only the direction indicated by the thick arrows.
  • a transmission from node 5 to node 0 would only go through nodes 6 and 7 in the opposite direction. This allows for optimized ring capacity utilization since no capacity is set aside for protection.
  • the least-cost physical route is typically used for protected traffic. This is often the shortest-hop physical route. For example, a transmission from node 0 to node 2 would typically be transmitted via node 1.
  • the shortest-hop physical route corresponds to the least-cost route when traffic conditions throughout the network are relatively uniform. If traffic conditions are not uniform, the least-cost physical route from node 0 to node 2 can instead be the long path around the ring.
  • the removal of packets from the ring by the destination node ensures that traffic does not use more capacity than is necessary to deliver it to the destination node, thus enabling increased ring capacity through spatial reuse of capacity.
  • An example of spatial reuse is the following. If 20% of span capacity is used up for traffic flowing from node 0 to node 2 via node 1, then the removal of this traffic from the ring at node 2 means that the 20% of span capacity is now available for any traffic flowing on any of the other spans in the ring (between nodes 2 and 3, nodes 3 and 4, etc.)
  • Fig. 4 shows a span break between nodes 6 and 7.
  • a transmission from node 0 to node 5 must now travel in a clockwise direction on another ring (illustrated by the thick arrows), adding to the traffic on that ring. Because some network capacity is lost in the case of a span outage, a heavily loaded network with no capacity set aside for protection must suffer some kind of performance degradation as a result of such an outage.
  • traffic is classified into a "protected” class and an "unprotected” class
  • network provisioning and control can be implemented such that protected traffic service is unaffected by the span outage.
  • This control is achieved through the use of bandwidth reservation management that processes provisioning requests considering the impact of a protection switch.
  • all of the performance degradation is "absorbed" by the unprotected traffic class via a reduction in average, peak, and burst bandwidth allocated to unprotected traffic on remaining available spans so that there is sufficient network capacity to carry all protected traffic.
  • Traffic within the unprotected class can be further differentiated into various subclasses such that certain subclasses suffer more degradation than do others.
  • the node on the receiving end of each link within the span detects that each individual link has failed. If only a single link is out, then only the loss of that link is reported. Depending on the performance monitoring (PM) features supported by the particular communications protocol stack being employed, this detection may be based on loss of optical (or electrical) signal, bit error rate (BER) degradation, loss of frame, or other indications.
  • PM performance monitoring
  • BER bit error rate
  • Each link outage must then be communicated to the other nodes. This is most efficiently done through a broadcast (store-and-forward) message (packet), though it could also be done through a unicast message from the detecting node to each of the other nodes in the network. This message must at least be sent out on the direction opposite to that leading to the broken span. The message must contain information indicating which link has failed.
  • a node Upon detection of an ingress link fault, a node must transmit a neighbor fault notification message to the node on the other side of the faulty link. This notification is only required if there is a single link failure, as the node using the failed link as an egress link would not be able to detect that it had become faulty. In the event that a full span is broken, the failure to receive these notifications do not affect the following steps.
  • a node Upon detection of an ingress link fault or upon receipt of a neighbor fault notification message, a node must wrap traffic bound for the corresponding egress link on that span onto the other ring. This is shown in Fig. 5. Traffic from node 0 bound for node 5 is wrapped by node 7 onto the opposite ring because the span connecting node 7 to node 6 is broken.
  • Connection Cnew(j, k, 0) has a peak provisioned, or allowable, bandwidth of B.
  • a connection may be provisioned either simplex or full-duplex, where a full-duplex connection consists of both Cnew(j, k, 0) and Cnew (k, j, 1) and accounting would be required for each direction.
  • a given connection Cnew(j, k, 0) can be provisioned as either transporting protected traffic or unprotected traffic.
  • Each link has a maximum traffic capacity of L. To determine if the link is full, all traffic on the link must be summed. The traffic may be broken into different categories. For example, if the bandwidth constraints for the ring are class-based (or other categories), the request must also contain the associated class (category). Also, it is important to note that the provisioned traffic of each type may be weighted, but is nominally one. Further, for bursty traffic, peak bandwidth considerations should be made in the bandwidth accounting. For example, if three classes are supported (EF, AF, and BE), the amount of traffic per class that is allowed on a link can be governed through class-specific over-subscription parameters c EF , c AF , c BE as defined by
  • Traffic matrices are used to determine the traffic provisioned in the ring.
  • the elements of the matrices represent the aggregate bandwidth from a source node to a destination node.
  • the matrix element in row ; and column k represents the aggregate bandwidth from nodey to node k.
  • P is the working traffic matrix for traffic requiring protection.
  • the matrix element P//, k] is the aggregate bandwidth from node/ to node k of protected traffic.
  • U is the working traffic matrix for traffic not requiring protection.
  • the matrix element ⁇ J ⁇ , k] is the aggregate bandwidth from node j to node k of unprotected traffic.
  • B is added/subtracted to/from ⁇ J[j, k] . If a full-duplex wire is provisioned/removed, B is added/subtracted also to/from Vfk,jJ.
  • the traffic flow around the ring is bi-directional. Both clockwise and counterclockwise rings carry traffic. Clockwise and counter-clockwise rings will have its own set of basic traffic matrices. For a class-based category system, for EF traffic in the clockwise direction, there are ⁇ EF and XA E and for the counter-clockwise direction there are V EF and U .
  • ScEF [x] ScEF [x] + PcEF (j mod N, k mod N) ;
  • ScEF [x] ScEF [x] + UcEF (j mod N, k mod N) ;
  • ScAF [x] ScAF [x] + PcAF (j mod N, k mod N) ;
  • ScAF fx] ScAF [x] + UcAF (j mod N, k mod N) ;
  • ScBE fx] ScBE [x] + PcBE (j mod N, k mod N) ;
  • ScBE [x] ScBE [x] + UcBE (j mod N, k mod N) ; ⁇
  • the single failure configurations must be checked.
  • a single link, w is failed, where w is between node w and node w+1 on the clock wise ring.
  • the unprotected crossconnects are provisioned as before, independent of the single failed link.
  • the same span loading algorithm described above is computed. Based upon the result, the reject or accept indication is provided to the higher layer. This is performed for each link in the clockwise and counter-clockwise direction. A failure of node N corresponds to a failure of links between nodes N-l and N+l .
  • This section describes a specific fast mechanism for communicating topology changes to the nodes in a ring network.
  • the mechanism for communicating information about a span or link break or degradation from a node to all other nodes on a ring is as follows.
  • a link status message is sent from each node detecting any link break or degradation on ingress links to the node, e.g. links for which the node is on the receiving end. (Therefore, for a single span break the two nodes on the ends of the span will each send out a link status message reporting on the failure of a single distinct ingress link.)
  • This message may be sent on the ring direction opposite the link break or on both ring directions. For robustness, it is desirable to send the message on both ring directions. In a network that does not wrap messages from one ring direction to the other ring direction, it is required that the message be sent on both ring directions to handle failure scenarios such as that in Fig. 4.
  • the message may also be a broadcast or a unicast message to each node on the ring.
  • broadcast ensures that knowledge of the link break will reach all nodes, even those that are new to the ring and whose presence may not be known to the node sending the message.
  • the mechanism ensures that the propagation time required for the message to reach all nodes on the ring is upper bounded by the time required for a highest priority message to travel the entire circumference of the ring. It is desirable that each mechanism also ensure that messages passing through each node are processed in the fastest possible manner. This minimizes the time for the message to reach all nodes in the ring.
  • the link status message sent out by a node should contain at least the following information: source node address, link identification of the broken or degraded link for which the node is on the receive end, and link status for that link.
  • the link status message can be expanded to contain link identification and status for all links for which the node is on the receive end.
  • the link identification for each link in general, should contain at least the node address of the node on the other end of the link from the source node and the corresponding physical interface identifier of the link's connection to the destination node. The mechanism by which the source node obtains this information is found in the co-pending application entitled "Dual-Mode Virtual Network Addressing," Serial
  • the physical interface identifier is important, for example, in a two-node network where the address of the other node is not enough to resolve which link is actually broken or degraded.
  • Link status should indicate the level of degradation of the link, typically expressed in terms of measured bit error rate on the link (or in the event that the link is broken, a special identifier such as 1).
  • the link status message may optionally contain two values of link status for each link in the event that protection switching is non-revertive.
  • An example of non- revertive switching is illustrated by a link degrading due to, for example, temporary loss of optical power, then coming back up. The loss of optical power would cause other nodes in the network to protection switch. The return of optical power, however, would not cause the nodes to switch back to default routes in the case of non-revertive switching until explicitly commanded by an external management system.
  • the two values of link status for each link therefore, may consist of a status that reflects the latest measured status of the link (previously described) and a status that reflects the worst measured status (or highest link cost) of the link since the last time the value was cleared by an external management system.
  • the link status message can optionally be acknowledged by the other nodes.
  • the source node may choose to re-send the link status message to all expected recipients, or re-send the link status message specifically to expected recipients that did not acknowledge receipt of the message.
  • This section describes a mechanism which allows a node in a ring network to rapidly re-route paths that cross broken links.
  • the following describes a fast source node re-routing mechanism when node 0 is the source node.
  • a preferred direction for traffic from nodes 0 to j is selected based on the direction with the lowest cost.
  • the mechanism for reassigning costs to the path to each destination node for each output direction from node 0 operates with a constant number of operations, irrespective of the current condition of the ring. (The mechanism may be further optimized to always use the minimum possible number of operations, but this will add complexity to the algorithm without significantly increasing overall protection switching speed.)
  • the mechanism for reassigning an output direction to traffic packets destined for a given node based on the path cost minimizes the time required to complete this reassignment.
  • a table is maintained at each node with the columns Destination Node, direction 0 cost, and direction 1 cost.
  • An example is shown as Table 1.
  • the computation of the cost on a direction from node 0 (assuming node 0 as the source) to node j may take into account a variety of factors, including the number of hops from source to destination in that direction, the cumulative normalized bit error rate from source to destination in that direction, and the level of traffic congestion in that direction. Based on these costs, the prefe ⁇ ed output direction for traffic from the source to any destination can be selected directly.
  • the example given below assumes that the costs correspond only to the normalized bit error rate from source to destination in each direction.
  • the cost on a given link is set to 1 if the measured bit e ⁇ or rate is lower than the operational bit error rate threshold. Conveniently, if all links are fully operational, the cumulative cost from node 0 to node j will be equal to the number of hops from node 0 to node j if there is no traffic congestion. Traffic congestion is not taken into account in this example.
  • the prefe ⁇ ed direction is that with the lower cost to reach destination node j. In the event that the costs to reach node j on direction 0 and on direction 1 are equal, then either direction can be selected. (Direction 0 is selected in this example.)
  • the normal operational cost for each physical route (source to destination) is computed from the link status table shown in Table 3.
  • the pseudocode for selection of the prefe ⁇ ed direction is:
  • a default value for this used in SONET is 10 ⁇
  • the link status table (accessed by a CPU at each node) is used to compute the costs in the prefe ⁇ ed direction table above.
  • the link status table's normal operational setting looks like:
  • the cost for each link dij is the normalized bit e ⁇ or rate, where the measured bit e ⁇ or rate on each link is divided by the default operational bit e ⁇ or rate (normally 10E-9 or lower). In the event that the normalized bit e ⁇ or rate is less than 1 for a link, the value entered in the table for that link is 1.
  • the pseudocode for the line "Update direction 0 cost and direction 1 cost" for each node j in the pseudocode for selection of prefe ⁇ ed direction uses the link status table shown in Table 3 as follows:
  • Linkcostsum d ⁇ r I is the sum of link costs all the way around the ring in direction 1, starting at node 0 and ending at node 0. ⁇
  • ⁇ MAX_COST is the largest allowable cost in the prefe ⁇ ed direction table.
  • Linkcost , r 0) l i nk i j is the cost of the link in direction 0 from node i to node j. ⁇ If (Linkcostsum d , r o ⁇ MAX_COST)
  • Linkcostsum dir 0 Linkcostsum d ⁇ r 0 +Linkcost d ⁇ r o, ⁇ ,nkj, o+i) odN; else
  • the update of the link status table is based on the following pseudocode:
  • the linkstatusmessage.status for that link is a very large value.
  • the linkstatusmessage.status for that link is the measured bit e ⁇ or rate on that link divided by the undegraded bit e ⁇ or rate of that link. All undegraded links are assumed to have the same undegraded bit e ⁇ or rate.
  • the link status table may optionally contain two cost columns per direction to handle non-revertive switching scenarios. These would be measured cost (equivalent to the columns cu ⁇ ently shown in Table 3) and non-revertive cost.
  • the non-revertive cost column for each direction contains the highest value of link cost reported since the last time the value was cleared by an external management system. This cost column (instead of the measured cost) would be used for prefe ⁇ ed direction computation in the non-revertive switching scenario.
  • the prefe ⁇ ed direction table may also optionally contain two cost columns per direction, just like the link status table. It may also contain two prefe ⁇ ed direction columns, one based on the measured costs and the other based on the non-revertive costs. Again, the non- revertive cost columns would be used for computations in the non-revertive switching scenario.
  • the costs of the links needed between the source node and destination node are added to determine the total cost.
  • the prefe ⁇ ed direction table for the source node 0 is then:
  • a co ⁇ esponding mapping table of destination node to prefe ⁇ ed direction in packet processors on the data path is modified to match the above table.
  • This section describes a specific fast mechanism for communication of a fault notification from the node on one side of the faulty span to the node on the other side.
  • This mechanism as described previously, is only necessary in the event of a single link failure, since the node using that link as its egress link cannot detect that it is faulty.
  • a neighbor fault notification message is sent from each node detecting any link break or degradation on an ingress link to the node.
  • the message is sent on each egress link that is part of the same span as the faulty ingress link.
  • the notification message can be acknowledged via a transmission on both directions around the ring. If it is not acknowledged, then the transmitting node must send the notification multiple times to ensure that it is received.
  • the message is highest priority to ensure that the time required to receive the message at the destination is minimized.
  • the neighbor fault notification message sent out by a node should contain at least the following information: source node address, link identification of the broken or degraded link for which the node is on the receive end, and link status for that link.
  • the neighbor fault notification message may be equivalent to the link status message broadcast to all nodes that has been previously described.
  • Fig. 9 illustrates one shelf controller card 62 in more detail.
  • the shelf controller 62 obtains status information from the node and interfaces with a network management system.
  • the shelf controller 62 both provisions other cards within the device 20 and obtains status information from the other cards.
  • the shelf controller interfaces with an external network management system and with other types of external management interfaces.
  • the software applications controlling these functions run on the CPU 92.
  • the CPU may be an IBM/Motorola MPC750 microprocessor.
  • a memory 93 represents memories in the node. It should be understood that there may be distributed SSRAM, SDRAM, flash memory and EEPROM to provide the necessary speed and functional requirements of the system.
  • the CPU is connected to a PCI bridge 94 between the CPU and various types of external interfaces.
  • the bridge may be an IBM CPC700 or any other suitable type.
  • Ethernet controllers 96 and 102 are connected to the PCI bus.
  • the controller may be an Intel 21143 or any other suitable type.
  • An Ethernet switch 98 controls the Layer 2 communication between the shelf controller and other cards within the device. This communication is via control lines on the backplane.
  • the layer 2 protocol used for the internal communication is
  • This switch may be a Broadcom BCM5308 Ethernet switch or any other suitable type.
  • the output of the Ethernet switch must pass through the Ethernet Phy block 100 before going on the backplane.
  • the Ethernet Phy may be a Bel Fuse, Inc., S558 or any other suitable type that interfaces directly with the Ethernet switch used.
  • the output of the Ethernet controller 102 must pass through an Ethernet Phy 104 before going out the network management system (NMS) 10/100 BaseT Ethernet port.
  • the Ethernet Phy may be an AMD AM79874 or any other suitable type.
  • Information is delivered between applications running on the shelf controller CPU and applications running on the other cards via well-known mechanisms including remote procedure calls (RPCs) and event-based notification. Reliability is provided via TCP/IP or via UDP/IP with retransmissions.
  • Provisioning of cards and ports via an external management system is via the
  • NMS Ethernet port Using a well-known network management protocol such as the Simple Network Management Protocol (SNMP), the NMS can control a device via the placement of an SNMP agent application on the shelf controller CPU.
  • the SNMP agent interfaces with a shelf manager application.
  • the shelf manager application is primarily responsible for the provisioning on tributary interface cards in 52.
  • Communication from the shelf controller onto the ring is via the switching card CPU. This type of communication is important for sending SNMP messages to remote devices on the ring from an external management system physically connected to the shelf.
  • the bandwidth management that determines whether provisioning is accepted runs on the shelf controller or an extemal workstation.
  • Fig. 6 illustrates the pertinent functional blocks in each node.
  • Node 0 is shown as an example.
  • Each node is connected to adjacent nodes by ring interface cards 30 and 32. These ring interface cards convert the incoming optical signals on fiber optic cables 34 and 36 to electrical digital signals for application to switching card 38.
  • Fig. 7 illustrates one ring interface card 32 in more detail showing the optical transceiver 40.
  • An additional switch in card 32 may be used to switch between two switching cards for added reliability.
  • the optical transceiver may be a Gigabit Ethernet optical transceiver using a 1300 nm laser, commercially available.
  • the serial output of optical transceiver 40 is converted into a parallel group of bits by a serializer/deserializer (SERDES) 42 (Fig. 6).
  • the SERDES 42 converts a series of 10 bits from the optical transceiver 40 to a parallel group of 8 bits using a table.
  • the 10 bit codes selected to co ⁇ espond to 8 bit codes meet balancing criteria on the number of 1 's and O's per code and the maximum number of consecutive l's and O's for improved performance. For example, a large number of sequential logical 1 's creates baseline wander, a shift in the long-term average voltage level used by the receiver as a threshold to differentiate between 1 's and O's.
  • the baseline wander is greatly reduced, thus enabling better AC coupling of the cards to the backplane.
  • the SERDES 42 When the SERDES 42 is receiving serial 10-bit data from the ring interface card 32, the SERDES 42 is able to detect whether there is an e ⁇ or in the 10-bit word if the word does not match one of the words in the table. The SERDES 42 then generates an e ⁇ or signal. The SERDES 42 uses the table to convert the 8-bit code from the switching card 38 into a serial stream of 10 bits for further processing by the ring interface card 32.
  • the SERDES 42 may be a model VSC 7216 by Vitesse or any other suitable type.
  • a media access controller (MAC) 44 counts the number of e ⁇ ors detected by the SERDES 42, and these e ⁇ ors are transmitted to the CPU 46 during an interrupt or pursuant to polling mechanism.
  • the CPU 46 may be a Motorola MPC860DT microprocessor. Later, it will be described what happens when the CPU 46 determines that the link has degraded sufficiently to take action to cause the nodes to re-route traffic to avoid the faulty link.
  • the MAC 44 also removes any control words forwarded by the SERDES and provides OSI layer 2 (data-link) formatting for a particular protocol by structuring a MAC frame.
  • the MAC 44 may a field programmable gate a ⁇ ay.
  • the packet processor 48 associates each of the bits transmitted by the MAC 44 with a packet field, such as the header field or the data field.
  • the packet processor 48 detects the header field of the packet structured by the MAC 44 and may modify information in the header for packets not destined for the node.
  • suitable packet processors 48 include the XPIF-300 Gigabit Bitstream Processor or the EPEF 4-L3C1 Ethernet Port L3 Processor by MMC Networks, whose data sheets are incorporated herein by reference.
  • the packet processor 48 interfaces with an external search machine/memory 47 (a look-up table) that contains routing information to route the data to its intended destination.
  • an external search machine/memory 47 a look-up table
  • the updating of the routing table in memory 47 will be discussed in detail later.
  • a memory 49 in Fig. 6 represents all other memories in the node, although it should be understood that there may be distributed SSRAM, SDRAM, flash memory, and EEPROM to provide the necessary speed and functional requirements of the system
  • the packet processor 48 provides the packet to a port of the switch fabric 50, which then routes the packet to the appropriate port of the switch fabric 50 based on the packet header. If the destination address in the packet header corresponds to the address of node 0 (the node shown in Fig. 6), the switch fabric 50 then routes the packet to the appropriate port of the switch fabric 50 for receipt by the designated node 0 tributary interface card 52 (Fig. 5) (to be discussed in detail later). If the packet header indicates an address other than to node 0, the switch fabric 50 routes the packet through the appropriate ring interface card 30 or 32 (Fig. 5). Control packets are routed to CPU 46. Such switching fabrics and the routing techniques used to determine the path that packets need to take through switch fabrics are well known and need not be described in detail.
  • One suitable packet switch is the MMC Networks model nP5400 Packet Switch Module, whose data sheet is incorporated herein by reference.
  • four such switches are connected in each switching card for faster throughput.
  • the switches provide packet buffering, multicast and broadcast capability, four classes of service priority, and scheduling based on strict priority or weighted fair queuing.
  • a packet processor 54 associated with one or more tributary interface cards receives a packet from switch fabric 50 destined for equipment (e.g., a LAN) associated with tributary interface card 52.
  • Packet processor 54 is bi-directional, as is packet processor 48. Packet processors 54 and 48 may be the same model processors.
  • packet processor 54 detects the direction of the data through packet processor 54 as well as accesses a routing table memory 55 for determining some of the desired header fields and the optimal routing path for packets heading onto the ring, and the desired path through the switch for packets heading onto or off of the ring. This is discussed in more detail later.
  • the packet processor 54 When the packet processor 54 receives a packet from switch fabric 50, it forwards the packet to a media access control (MAC) unit 56, which performs a function similar to that of MAC 44, which then forwards the packet to the SERDES 58 for serializing the data.
  • MAC media access control
  • SERDES 58 is similar to SERDES 42.
  • the output of the SERDES 58 is then applied to a particular tributary interface card, such as tributary interface card 52 in Fig. 5, connected to a backplane 59.
  • the tributary interface card may queue the data and route the data to a particular output port of the tributary interface card 52. Such routing and queuing by the tributary interface cards may be conventional and need not be described in detail.
  • the outputs of the tributary interface cards may be connected electrically, such as via copper cable, to any type of equipment, such as a telephone switch, a router, a LAN, or other equipment.
  • the tributary interface cards may also convert electrical signals to optical signals by the use of optical transceivers, in the event that the external interface is optical.
  • the above-described hardware processes bits at a rate greater than lGbps.
  • Fig. 8 is a flow chart summarizing the actions performed by the network hardware during a span failure or degradation. Since conventional routing techniques and hardware are well known, this discussion will focus on the novel characteristics of the prefe ⁇ ed embodiment.
  • each of the nodes constantly or periodically tests its links with neighboring nodes.
  • the MAC 44 in Fig. 7 counts e ⁇ ors in the data stream (as previously described) and communicates these e ⁇ ors to the CPU 46.
  • the CPU compares the bit e ⁇ or rate to a predetermined threshold to determine whether the link is satisfactory.
  • An optical link failure may also be communicated to the CPU.
  • CPU 46 may monitor ingress links from adjacent devices based on error counting by MAC 44 or based on the detection of a loss of optical power on ingress fiber 36. This detection is performed by a variety of commercially available optical transceivers such as the Lucent NetLight transceiver family.
  • the loss of optical power condition can be reported to CPU 46 via direct signaling over the backplane (such as via I2C lines), leading to an interrupt or low-level event at the CPU.
  • step 2 the CPU 46 determines if there is a change in status of an adjacent link. This change in status may be a fault (bit e ⁇ or rate exceeding threshold) or that a previously faulty link has been repaired. It will be assumed for this example that node 6 sensed a fault in ingress link connecting it to node 7.
  • step 2 If there is no detection of a fault in step 2, no change is made to the network. It is assumed in Fig. 8 that adjacent nodes 6 and 7 both detect faults on ingress links connecting node 6 to node 7. The detection of a fault leads to an interrupt or low-level event (generated by MAC 44) sent through switch fabric 50 to CPU 46 signaling the change in status.
  • interrupt or low-level event generated by MAC 44
  • nodes 6 and 7 attempt to notify each other directly of the ingress link fault detected by each.
  • the notification sent by node 6, for example, is sent on the egress link of node 6 connected to node 7. If the entire span is broken, these notifications clearly do not reach the destination. They are useful only if a single link within a span is broken. This is because a node has no way to detect a fiber break impacting an egress link. Based on this notification, each node can then directly wrap traffic in the fashion shown in Fig. 5.
  • the wrapping of traffic in node 6 is performed through a configuration command from CPU 46 to packet processor 48 connected as shown in Fig. 7 to ring interface card 32 (assuming that links from ring interface card 32 connect to node 7).
  • packet processor 48 After receiving this command, packet processor 48 loops back traffic through the switching fabric and back out ring interface card 30 that it normally would send directly to node 7.
  • Each communication by a node of link status is associated with a session number.
  • a new session number is generated by a node only when it senses a change in the status of a neighboring node. As long as the nodes receive packets with the cu ⁇ ent session number, then the nodes know that there is no change in the network. Both nodes 6 and 7 increment the session number stored at each node upon detection of a fault at each node.
  • both node 6 and node 7 then broadcast a link status message, including the new session number, conveying the location of the fault to all the nodes.
  • step 5 the identity of the fault is then used by the packet processor 54 in each node to update the routing table in memory 55.
  • Routing tables in general are well known and associate a destination address in a header with a particular physical node to which to route the data associated with the header. Each routing table is then configured to minimize the cost from a source node to a destination node. Typically, if the previously optimized path to a destination node would have had to go through the faulty link, that route is then updated to be transmitted through the reverse direction through the ring to avoid the faulty route.
  • the routing table for each of the packet processors 54 in each node would be changed as necessary depending upon the position of the node relative to the faulty link. Details of the routing tables have been previously described.
  • each of the nodes must acknowledge the broadcast with the new session number, and the originating node keeps track of the acknowledgments. After a time limit has been exceeded without receiving all of the acknowledgments, the location of the fault is re-broadcast without incrementing the sequence number. Accordingly, all nodes store the cu ⁇ ent topology of the ring, and all nodes may independently create the optimum routing table entries for the cu ⁇ ent configuration of the ring.
  • step 6 the routing table for each node has been updated and data traffic resumes. Accordingly, data originating from a LAN connected to a tributary interface card 52 (Fig. 5) has appended to it an updated routing header by packet processor 54 for routing the data through switch fabric 50 to the appropriate output port for enabling the data to arrive at its intended destination.
  • the destination may be the same node that originated the data and, thus, the switch fabric 50 would wrap the data back through a tributary interface card in the same node.
  • Any routing techniques may be used since the invention is generally applicable to any protocol and routing techniques.
  • the traffic to be transmitted around the healthy links may exceed the bandwidth of the healthy links. Accordingly, some lower priority traffic may need to be dropped or delayed, as identified in step 7. Generally, the traffic classified as "unprotected” is dropped or delayed as necessary to support the "protected” traffic due to the reduced bandwidth.
  • the packet processor 54 detects the header that identifies the data as unprotected and drops the packet, as required, prior to the packet being applied to the switch fabric 50. Voice traffic is generally protected.
  • switch fabric 50 routes any packet forwarded by packet processor 54 to the appropriate output port for transmission either back into the node or to an adjacent node.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Small-Scale Networks (AREA)

Abstract

The disclosed network includes two rings, wherein a first ring transmits data in a clockwise direction, and the other ring transmits data in a counterclockwise direction. The traffic is removed from the ring by the destination node. During normal operations, data between nodes can flow on either ring. Thus, both rings are fully utilized during normal operations. The nodes periodically test the bit error rate of the links (1) to detect a fault in one of the links (2). The detection of such a fault sends a broadcast signal to all nodes (3, 4) to reconfigure a routing table within the node so as to identify the optimum routing of source traffic to the destination node after the fault (5). Since the available links will now see more data traffic due to the failed link, traffic designated as 'unprotected' traffic is given lower priority and may be dropped or delayed in favor of the 'protected' traffic (7).

Description

BANDWIDTH RESERVATION REUSE IN DYNAMICALLY ALLOCATED RING PROTECTION AND RESTORATION TECHNIQUE
FIELD OF THE INVENTION
This invention relates to communication networks and, in particular, to networks employing rings.
BACKGROUND
As data services become increasingly mission-critical to businesses, service disruptions become increasingly costly. A type of service disruption that is of great concern is span outage, which may be due either to facility or equipment failures. Carriers of voice traffic have traditionally designed their networks to be robust in the case of facility outages, e.g. fiber breaks. As stated in the Telcordia GR-253 and GR- 499 specifications for optical ring networks in the telecommunications infrastructure, voice or other protected services must not be disrupted for more than 60 milliseconds by a single facility outage. This includes up to 10 milliseconds for detection of a facility outage, and up to 50 milliseconds for rerouting of traffic.
A significant technology for implementing survivable networks meeting the above requirements has been SONET rings. A fundamental characteristic of such rings is that there are one (or more) independent physical links connecting adjacent nodes in the ring. Each link may be unidirectional, e.g. allow traffic to pass in a single direction, or may be bi-directional. A node is defined as a point where traffic can enter or exit the ring. A single span connects two adjacent nodes, where a span consists of all links directly connecting the nodes. A span is typically implemented as either a two fiber or four fiber connection between the two nodes. In the two fiber case, each link is bi-directional, with half the traffic in each fiber going in the "clockwise" direction (or direction 0), and the other half going in the "counterclockwise" direction (or direction 1 opposite to direction 0). In the four fiber case, each link is unidirectional, with two fibers carrying traffic in direction 0 and two fibers carrying traffic in direction 1. This enables a communication path between any pair of nodes to be maintained on a single direction around the ring when the physical span between any single pair of nodes is lost. In the remainder of this document, references will be made only to direction 0 and direction 1 for generality.
There are 2 major types of SONET rings: unidirectional path-switched rings
(UPSR) and bi-directional line-switched rings (BLSR). In the case of UPSR, robust ring operation is achieved by sending data in both directions around the ring for all inter-node traffic on the ring. This is shown in Fig. 1. This figure shows an N-node ring made up of nodes (networking devices) numbered from node 0 to node N-l and interconnected by spans. In this document, nodes are numbered in ascending order in direction 0 starting from 0 for notational convenience. A link passing traffic from node i to node j is denoted by dij. A span is denoted by sij, which is equivalent to sji. In this document, the term span will be used for general discussion. The term link will be used only when necessary for precision. In this diagram, traffic from node 0 to node 5 is shown taking physical routes (bold arrows) in both direction 0 and direction 1. (In this document, nodes will be numbered sequentially in an increasing fashion in direction 0 for convenience. Node 0 will be used for examples.) At the receiving end, a special receiver implements "tail-end switching," in which the receiver selects the data from one of the directions around the ring. The receiver can make this choice based on various performance monitoring (PM) mechanisms supported by SONET. This protection mechanism has the advantage that it is very simple, because no ring- level messaging is required to communicate a span break to the nodes on the ring. Rather, the PM facilities built into SONET ensure that a "bad" span does not impact physical connectivity between nodes, since no data whatsoever is lost due to a single span failure.
Unfortunately, there is a high price to be paid for this protection. Depending on the traffic pattern on the ring, UPSR requires 100% extra capacity (for a single "hubbed" pattern) to 300% extra capacity (for a uniform "meshed" pattern) to as much as (N-l)*100% extra capacity (for an N node ring with a nearest neighbor pattern, such as that shown in Fig. 1) to be set aside for protection. In the case of two-fiber BLSR, shown in Fig. 2 A, data from any given node to another typically travels in one direction (solid arrows) around the ring. Data communication is shown between nodes 0 and 5. Half the capacity of each ring is reserved to protect against span failures on the other ring. The dashed arrows illustrate a ring that is typically not used for traffic between nodes 0 and 5 except in the case of a span failure or in the case of unusual traffic congestion.
In Fig. 2B, the span between nodes 6 and 7 has experienced a fault. Protection switching is now provided by reversing the direction of the signal from node 0 when it encounters the failed span and using excess ring capacity to route the signal to node 5. This switching, which takes place at the same nodes that detect the fault, is very rapid and is designed to meet the 50 millisecond requirement.
BLSR protection requires 100% extra capacity over that which would be required for an unprotected ring, since the equivalent of the bandwidth of one full ring is not used except in the event of a span failure. Unlike UPSR, BLSR requires ring- level signaling between nodes to communicate information on span cuts and proper coordination of nodes to initiate ring protection.
Though these SONET ring protection technologies have proven themselves to be robust, they are extremely wasteful of capacity. Additionally, both UPSR and BLSR depend intimately on the capabilities provided by SONET for their operation, and therefore cannot be readily mapped onto non-SONET transport mechanisms.
What is needed is a protection technology where no extra network capacity is consumed during "normal" operation (i.e., when all ring spans are operational), which is less tightly linked to a specific transport protocol, and which is designed to meet the Telcordia 50 millisecond switching requirement.
SUMMARY
A network protection and restoration technique and bandwidth reservation method is described that efficiently utilizes the total bandwidth in the network to overcome the drawbacks of the previously described networks, that is not linked to a specific transport protocol such as SONET, and that is designed to meet the Telcordia 50 millisecond switching requirement. The disclosed network includes two rings, wherein a first ring transmits data in a "clockwise" direction (or direction 0), and the other ring transmits data in a "counterclockwise" direction (or direction 1 opposite to direction 0). Additional rings may also be used. The traffic is removed from the ring by the destination node.
During normal operations (i.e., all spans operational and undegraded), data between nodes flows on the ring that provides the lowest-cost path to the destination node. If traffic usage is uniformly distributed throughout the network, the lowest-cost path is typically the minimum number of hops to the destination node. Thus, both rings are fully utilized during normal operations. Each node determines the lowest- cost path from it to every other node on the ring. To do this, each node must know the network topology.
A node monitors the status of each link for which it is at the receiving end, e.g. each of its ingress links, to detect a fault. The detection of such a fault causes a highest-priority link status broadcast message to be sent to all nodes. Processing at each node of the information contained in the link status broadcast message results in reconfiguration of a routing table within each node so as to identify the optimum routing of source traffic to the destination node after the fault. Hence, all nodes know the status of the network and all independently identify the optimal routing path to each destination node when there is a fault in any of the links. The processing is designed to be extremely efficient to maximize switching speed.
Optionally, if it is desired to further increase the switching speed, an interim step can be used. A node that detects a link fault notifies its neighbor on the other side of that span that a link has failed. Any node that detects an ingress link failure or that receives such a notification wraps inbound traffic headed for that span around onto the other ring. Traffic will be wrapped around only temporarily until the previously described rerouting of traffic is completed.
Since the remaining links will now see more data traffic due to the failed link, traffic designated as "unprotected" traffic is given lower priority and may be dropped or delayed in favor of the "protected" traffic. Specific techniques are described for guaranteeing bandwidth availability for working and single failure traffic configurations, identifying a failed link, communicating the failed link to the other nodes, differentiating between protected and unprotected classes of traffic, and updating the routing tables. Although the embodiments described transmit packets of data, the invention may be applied to any network transmitting frames, cells, or using any other protocol. Frames and cells are similar to packets in that all contain data and control information pertaining at least to the source and destination for the data. A single frame may contain multiple packets, depending on the protocol. A cell may be fixed-size, depending on the protocol.
BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1 illustrates inter-node physical routes taken by traffic from node 0 to node 5 using SONET UPSR, where a failure of spans between any single pair of nodes brings down only one of the two distinct physical routes for the traffic.
Fig. 2A illustrates an inter-node physical route taken by traffic from node 0 to node 5 using SONET two-fiber BLSR. Half of the capacity of each ring is reserved for protection, and half is used to carry regular traffic. The ring represented with dashed lines is the ring in which protection capacity is used to reroute traffic due to the span failure shown.
Fig. 2B illustrates the bi-directional path taken by traffic from node 0 to node 5 using the SONET BLSR structure of Fig. 2 A when there is a failure in the link between nodes 6 and 7. Traffic is turned around when it encounters a failed link.
Fig. 3 illustrates a network in accordance with one embodiment of the present invention and, in particular, illustrates an inter-node physical route taken by traffic from node 0 to node 5.
Fig. 4 illustrates the network of Fig. 3 after a failure has occurred on the span between nodes 6 and 7. When a failure occurs impacting a link or span on the initial path (e.g., between nodes 0 and 5), the traffic is rerouted at the ingress node to travel in the other direction around the ring to reach the destination node. Fig. 5 illustrates the optional interim state of the network (based on wrapping traffic from one ring to the other) between that shown in Fig. 3 and that shown in Fig.
4.
Fig. 6 illustrates pertinent hardware used in a single node.
Fig. 7 provides additional detail of the switching card and ring interface card in Fig. 6.
Fig. 8 is a flowchart illustrating steps used to identify a change in the status of the network and to re-route traffic through the network.
Fig. 9 illustrates additional detail of the shelf controller card shown in Fig. 6.
DETAILED DESCRIPTION OF THE EMBODIMENTS
The purpose of the invention described herein is to achieve fast protection in a ring network while providing for efficient network capacity utilization. Certain aspects of the preferred embodiment are:
a. Transmission of a given packet between two nodes in only one direction around the ring (rather than in both directions as is done in SONET UPSR).
b. Differentiation between "protected" and "unprotected" traffic classes.
c. A fast topology communication mechanism to rapidly communicate information about a span break to all nodes in the ring.
d. A fast re-routing/routing table update mechanism to re-route paths impacted by a span break the other direction around the ring.
e. An optional interim wrapping mechanism that may be used to further increase protection switching speed.
These aspects are described in more detail below. Unidirectional Transmission
A given packet/flow between two nodes is transmitted in only a single direction around the network (even when there is a span fault) and is removed from the ring by the destination node, as is shown in Fig. 3 where node 0 transmits information to node 5 in only the direction indicated by the thick arrows. A transmission from node 5 to node 0 would only go through nodes 6 and 7 in the opposite direction. This allows for optimized ring capacity utilization since no capacity is set aside for protection.
The least-cost physical route is typically used for protected traffic. This is often the shortest-hop physical route. For example, a transmission from node 0 to node 2 would typically be transmitted via node 1. The shortest-hop physical route corresponds to the least-cost route when traffic conditions throughout the network are relatively uniform. If traffic conditions are not uniform, the least-cost physical route from node 0 to node 2 can instead be the long path around the ring.
The removal of packets from the ring by the destination node ensures that traffic does not use more capacity than is necessary to deliver it to the destination node, thus enabling increased ring capacity through spatial reuse of capacity. An example of spatial reuse is the following. If 20% of span capacity is used up for traffic flowing from node 0 to node 2 via node 1, then the removal of this traffic from the ring at node 2 means that the 20% of span capacity is now available for any traffic flowing on any of the other spans in the ring (between nodes 2 and 3, nodes 3 and 4, etc.)
Protected and Unprotected Traffic Classes
In the case of unidirectional transmission described above, the loss of any span in the ring will result in a reduction in network capacity. This follows from the fact that traffic that would flow along a given span during normal operations must share the capacity of other spans in the case of a failure of that span. For example, Fig. 4 shows a span break between nodes 6 and 7. In contrast to Fig. 3, a transmission from node 0 to node 5 must now travel in a clockwise direction on another ring (illustrated by the thick arrows), adding to the traffic on that ring. Because some network capacity is lost in the case of a span outage, a heavily loaded network with no capacity set aside for protection must suffer some kind of performance degradation as a result of such an outage. If traffic is classified into a "protected" class and an "unprotected" class, network provisioning and control can be implemented such that protected traffic service is unaffected by the span outage. This control is achieved through the use of bandwidth reservation management that processes provisioning requests considering the impact of a protection switch. In such a case, all of the performance degradation is "absorbed" by the unprotected traffic class via a reduction in average, peak, and burst bandwidth allocated to unprotected traffic on remaining available spans so that there is sufficient network capacity to carry all protected traffic. Traffic within the unprotected class can be further differentiated into various subclasses such that certain subclasses suffer more degradation than do others.
Fast Topology Communication Mechanism
Due to Telcordia requirements previously mentioned, the loss of a span in a ring must be rapidly sensed and communicated to all nodes in a ring.
In the case of a span outage, the node on the receiving end of each link within the span detects that each individual link has failed. If only a single link is out, then only the loss of that link is reported. Depending on the performance monitoring (PM) features supported by the particular communications protocol stack being employed, this detection may be based on loss of optical (or electrical) signal, bit error rate (BER) degradation, loss of frame, or other indications.
Each link outage must then be communicated to the other nodes. This is most efficiently done through a broadcast (store-and-forward) message (packet), though it could also be done through a unicast message from the detecting node to each of the other nodes in the network. This message must at least be sent out on the direction opposite to that leading to the broken span. The message must contain information indicating which link has failed. Fast Source Node Re-routing Mechanism
When a link outage message is received by a given node, the node must take measures to re-route traffic that normally passed through the link. A possible sequence of actions is:
a. Receive link outage message;
b. Evaluate all possible inter-node physical routes (there are 2*(N-1) of them in an N node ring) to determine which ones are impacted by the loss of the link;
c. Update routing tables to force all impacted traffic to be routed the other way around the ring; and
d. Update capacity allocated to unprotected traffic classes to account for reduced network capacity associated with the link outage. Details of how this capacity allocation is accomplished are not covered in this specification.
Being able to perform the operations above quickly requires that the various tables be properly organized to rapidly allow affected paths to be identified. Additionally, updates must be based either on computationally simple algorithms or on pre-calculated lookup tables.
Optional Interim Wrapping Mechanism
To increase the speed of protection switching, it may be desirable to take direct action at the node(s) detecting the fault, rather than waiting for re-routing to take place at all nodes. A possible sequence of actions is:
a. Upon detection of an ingress link fault, a node must transmit a neighbor fault notification message to the node on the other side of the faulty link. This notification is only required if there is a single link failure, as the node using the failed link as an egress link would not be able to detect that it had become faulty. In the event that a full span is broken, the failure to receive these notifications do not affect the following steps. b. Upon detection of an ingress link fault or upon receipt of a neighbor fault notification message, a node must wrap traffic bound for the corresponding egress link on that span onto the other ring. This is shown in Fig. 5. Traffic from node 0 bound for node 5 is wrapped by node 7 onto the opposite ring because the span connecting node 7 to node 6 is broken.
The above steps are optional and should only be used if increased protection switching speed using this approach is required. This is because wrapping traffic from one ring onto the other uses up significantly more ring capacity than the standard approach described in this document. During the period, albeit short, between the start of wrapping and the completion of rerouting at source nodes, the capacity that must be reserved for protection is as much as that required in two-fiber BLSR.
Specific Algorithms
Bandwidth Reservation for Protected and Unprotected Traffic Provisioning
This section describes the mechanism used to account for provisioned bandwidth on the ring. Define Cnew(j, k, 0) as a new simplex connection from node j to node k on ring 0 (the clockwise ring as shown in Fig. 3). Assume that k>j. If not, the representative node numbering across the ring (for this example) can be re-done so that j=0 and k=k-j. Similarly, Cnew(k, j, 1) would be a new simplex connection from node k to node j on ring 1 (the counter-clockwise ring as shown in Fig. 3). Connection Cnew(j, k, 0) has a peak provisioned, or allowable, bandwidth of B. A connection may be provisioned either simplex or full-duplex, where a full-duplex connection consists of both Cnew(j, k, 0) and Cnew (k, j, 1) and accounting would be required for each direction. A given connection Cnew(j, k, 0) can be provisioned as either transporting protected traffic or unprotected traffic.
Each link has a maximum traffic capacity of L. To determine if the link is full, all traffic on the link must be summed. The traffic may be broken into different categories. For example, if the bandwidth constraints for the ring are class-based (or other categories), the request must also contain the associated class (category). Also, it is important to note that the provisioned traffic of each type may be weighted, but is nominally one. Further, for bursty traffic, peak bandwidth considerations should be made in the bandwidth accounting. For example, if three classes are supported (EF, AF, and BE), the amount of traffic per class that is allowed on a link can be governed through class-specific over-subscription parameters cEF, cAF, cBE as defined by
L ≥ cEF ■ SEF + cAF ■ SAF + cBE ■ SBE where L is the high-speed link data rate and S is the aggregate traffic
Traffic matrices are used to determine the traffic provisioned in the ring. The elements of the matrices represent the aggregate bandwidth from a source node to a destination node. Thus the matrix element in row ; and column k represents the aggregate bandwidth from nodey to node k. There are two basic matrices defined:
P is the working traffic matrix for traffic requiring protection. The matrix element P//, k] is the aggregate bandwidth from node/ to node k of protected traffic. When a new wire is provisioned/removed, with protection, from node/ to node k, with bandwidth B, B is added/subtracted to/from P /, kj. If a full-duplex wire is provisioned/removed, B is added subtracted also to/from Vfk,jJ.
U is the working traffic matrix for traffic not requiring protection. The matrix element \Jβ, k] is the aggregate bandwidth from node j to node k of unprotected traffic. When a new wire is provisioned/removed, without protection, from nodey to node k, with bandwidth B, B is added/subtracted to/from \J[j, k] . If a full-duplex wire is provisioned/removed, B is added/subtracted also to/from Vfk,jJ.
The traffic flow around the ring is bi-directional. Both clockwise and counterclockwise rings carry traffic. Clockwise and counter-clockwise rings will have its own set of basic traffic matrices. For a class-based category system, for EF traffic in the clockwise direction, there are ¥EF and XAE and for the counter-clockwise direction there are VEF and U .
Using the construct above, several checks can be made to determine if the bandwidth is available to support a new connection. These checks include verifying the bandwidth is available to support the working traffic configuration and every possible fault traffic configuration. Using the contstructs above, if Cnew(j, k, 0) is provisioned, B is added to the Pc/)', k] element in the population matrix. Then the following class-based category span loading algorithm is run to verify the bandwidth on each span is available for the working configuration.
for (x=0 to N-l) { //spans 0 to N-l for an N node network// ScEF [x] = 0; //Span X utilization due to EF traffic ScAF [x] = 0; //Span X utilization due to AF traffic ScBE [x] = 0; //Span X utilization due to BE traffic
for (j = (1+x) to (N+x) ) { for (k = (1+x) to j ) {
ScEF [x] = ScEF [x] + PcEF (j mod N, k mod N) ; ScEF [x] = ScEF [x] + UcEF (j mod N, k mod N) ;
ScAF [x] = ScAF [x] + PcAF (j mod N, k mod N) ;
ScAF fx] = ScAF [x] + UcAF (j mod N, k mod N) ;
ScBE fx] = ScBE [x] + PcBE (j mod N, k mod N) ; ScBE [x] = ScBE [x] + UcBE (j mod N, k mod N) ; }
}
Sc [x] = cEF*ScEF [xl + cAF*ScAF [x] + cBE*ScBE [x] ;
//Total Span X Utilization//
if (Sc[x] > L) reject_provisioning_attempt=l;
}
If a rejection indication is not provided to the higher layer, the single failure configurations must be checked. To develop a single failure configuration, one-by one, a single link, w, is failed, where w is between node w and node w+1 on the clock wise ring. The traffic matrices are populated as discussed above; however, traffic that traversed link w is now switched at the source to the other ring. For each provisioned protected crossconnect C(j, k, 0), the matrix is populated as follows: if (k >= j ) { if (w>=k or w<j ) )
Add crossconnect bandwidth to Pc[j,k]; Else Add crossconnect bandwidth to Pcc[j,k];
} else { if (w>=j or w<k) )
Add crossconnect bandwidth to Pcc[j,k]; Else
Add crossconnect bandwidth to Pc[j,k];
For crossconnect C(j,k,l), the matrix is populated as follows:
if (k >= j) { if (w>=j or w<k) )
Add crossconnect bandwidth to Pcc[j,k]; Else Add crossconnect bandwidth to Pc[j,k];
} else { if (w<=j or w>k) )
Add crossconnect bandwidth to Pc[j,k]; Else
Add crossconnect bandwidth to Pcc[j,k];
The unprotected crossconnects are provisioned as before, independent of the single failed link.
Once the single failure traffic configuration is generated as described, the same span loading algorithm described above is computed. Based upon the result, the reject or accept indication is provided to the higher layer. This is performed for each link in the clockwise and counter-clockwise direction. A failure of node N corresponds to a failure of links between nodes N-l and N+l .
Fast Topology Communication Mechanism
This section describes a specific fast mechanism for communicating topology changes to the nodes in a ring network. The mechanism for communicating information about a span or link break or degradation from a node to all other nodes on a ring is as follows.
A link status message is sent from each node detecting any link break or degradation on ingress links to the node, e.g. links for which the node is on the receiving end. (Therefore, for a single span break the two nodes on the ends of the span will each send out a link status message reporting on the failure of a single distinct ingress link.) This message may be sent on the ring direction opposite the link break or on both ring directions. For robustness, it is desirable to send the message on both ring directions. In a network that does not wrap messages from one ring direction to the other ring direction, it is required that the message be sent on both ring directions to handle failure scenarios such as that in Fig. 4. The message may also be a broadcast or a unicast message to each node on the ring. For robustness and for capacity savings, it is desirable to use broadcast. In particular, broadcast ensures that knowledge of the link break will reach all nodes, even those that are new to the ring and whose presence may not be known to the node sending the message. In either case, the mechanism ensures that the propagation time required for the message to reach all nodes on the ring is upper bounded by the time required for a highest priority message to travel the entire circumference of the ring. It is desirable that each mechanism also ensure that messages passing through each node are processed in the fastest possible manner. This minimizes the time for the message to reach all nodes in the ring.
The link status message sent out by a node should contain at least the following information: source node address, link identification of the broken or degraded link for which the node is on the receive end, and link status for that link. For simplicity of implementation, the link status message can be expanded to contain link identification and status for all links for which the node is on the receive end. The link identification for each link, in general, should contain at least the node address of the node on the other end of the link from the source node and the corresponding physical interface identifier of the link's connection to the destination node. The mechanism by which the source node obtains this information is found in the co-pending application entitled "Dual-Mode Virtual Network Addressing," Serial
No. , filed herewith by Jason Fan et al., assigned to the present assignee and incorporated herein by reference. The physical interface identifier is important, for example, in a two-node network where the address of the other node is not enough to resolve which link is actually broken or degraded. Link status should indicate the level of degradation of the link, typically expressed in terms of measured bit error rate on the link (or in the event that the link is broken, a special identifier such as 1).
The link status message may optionally contain two values of link status for each link in the event that protection switching is non-revertive. An example of non- revertive switching is illustrated by a link degrading due to, for example, temporary loss of optical power, then coming back up. The loss of optical power would cause other nodes in the network to protection switch. The return of optical power, however, would not cause the nodes to switch back to default routes in the case of non-revertive switching until explicitly commanded by an external management system. The two values of link status for each link, therefore, may consist of a status that reflects the latest measured status of the link (previously described) and a status that reflects the worst measured status (or highest link cost) of the link since the last time the value was cleared by an external management system.
The link status message can optionally be acknowledged by the other nodes.
In the event that the message is not acknowledged, it must be sent out multiple times to ensure that it is received by all other nodes. In the event that the message requires acknowledgement on receipt, it must be acknowledged by all expected recipient nodes within some time threshold. If not, the source node may choose to re-send the link status message to all expected recipients, or re-send the link status message specifically to expected recipients that did not acknowledge receipt of the message. Fast Source Node Re-routing Mechanism
This section describes a mechanism which allows a node in a ring network to rapidly re-route paths that cross broken links. The following describes a fast source node re-routing mechanism when node 0 is the source node.
For each destination node j, a cost is assigned to each output direction (0 and
1) from node 0 on the ring. A preferred direction for traffic from nodes 0 to j is selected based on the direction with the lowest cost. For simplicity, the mechanism for reassigning costs to the path to each destination node for each output direction from node 0 operates with a constant number of operations, irrespective of the current condition of the ring. (The mechanism may be further optimized to always use the minimum possible number of operations, but this will add complexity to the algorithm without significantly increasing overall protection switching speed.) The mechanism for reassigning an output direction to traffic packets destined for a given node based on the path cost minimizes the time required to complete this reassignment.
A table is maintained at each node with the columns Destination Node, direction 0 cost, and direction 1 cost. An example is shown as Table 1. The computation of the cost on a direction from node 0 (assuming node 0 as the source) to node j may take into account a variety of factors, including the number of hops from source to destination in that direction, the cumulative normalized bit error rate from source to destination in that direction, and the level of traffic congestion in that direction. Based on these costs, the prefeπed output direction for traffic from the source to any destination can be selected directly. The example given below assumes that the costs correspond only to the normalized bit error rate from source to destination in each direction. The cost on a given link is set to 1 if the measured bit eπor rate is lower than the operational bit error rate threshold. Conveniently, if all links are fully operational, the cumulative cost from node 0 to node j will be equal to the number of hops from node 0 to node j if there is no traffic congestion. Traffic congestion is not taken into account in this example.
For a representative ring with a total of 8 nodes (in clockwise order 0, 1, 2, 3, 4, 5, 6, 7), the table's normal operational setting at node 0 is: Table 1. Preferred direction table at node 0
Figure imgf000019_0001
The prefeπed direction is that with the lower cost to reach destination node j. In the event that the costs to reach node j on direction 0 and on direction 1 are equal, then either direction can be selected. (Direction 0 is selected in this example.) The normal operational cost for each physical route (source to destination) is computed from the link status table shown in Table 3.
The pseudocode for selection of the prefeπed direction is:
For j=l to N-l {N is the total number of nodes in the ring}
Update direction 0 cost (dir_0_cost(j)) and direction 1 cost (dir_l_cost(j)) for each destination node j; {expanded later in this section} {HYST FACT is the hysteresis factor to prevent a ping-pong effect due to BER variations in revertive networks. A default value for this used in SONET is 10}
If (dir_0_cost(j) < dir_l_cost(j)/HYST_FACT), dir_prefeπed(j) = 0; Else if (dir_l_cost(j)< dir_0_cost(j)/HYST_FACT), dir_prefeπed(j) = 1 ; Else if dir_prefeπed(j) has a pre-defined value,
{This indicates that dir_prefeπed(j) has been previously set to a prefeπed direction and thus should not change if the above two conditions were not met} dir_prefeπed(j) does not change; Else if dir_prefeπed(j) does not have a pre-defined value, if dir_0_cost(j) < dir_l_cost(j), dir_prefeπed(j) = 0; Else if dir_l_cost(j) < dir_0_cost(j), dir_prefeπed(j) = 1; Else dir_prefeπed(j) = 0; End {else if dir_prefeπed(j) does not have a pre-defined value} End {for loop j}
The link status table (accessed by a CPU at each node) is used to compute the costs in the prefeπed direction table above. The link status table's normal operational setting looks like:
Table 3. Link status table (identical at every node)
Figure imgf000020_0001
The cost for each link dij is the normalized bit eπor rate, where the measured bit eπor rate on each link is divided by the default operational bit eπor rate (normally 10E-9 or lower). In the event that the normalized bit eπor rate is less than 1 for a link, the value entered in the table for that link is 1. The pseudocode for the line "Update direction 0 cost and direction 1 cost" for each node j in the pseudocode for selection of prefeπed direction uses the link status table shown in Table 3 as follows:
{Initialization of Linkcostsum values in each direction. These variables are operated on inside the for loop below to generate dir_0_cost(j)and dir_l_cost(j).}
Linkcostsumchro = 0;
{ Linkcostsumdιr I is the sum of link costs all the way around the ring in direction 1, starting at node 0 and ending at node 0.}
Linkcostsumdιr ι = sum over all links(Linkcostd,r ι); For j=0 to N-l {N is the total number of nodes in the ring}
{MAX_COST is the largest allowable cost in the prefeπed direction table. Linkcost ,r 0) link ij is the cost of the link in direction 0 from node i to node j.} If (Linkcostsum d,ro < MAX_COST)
Linkcostsum dir 0 = Linkcostsum dιr 0+Linkcost dιro,ι,nkj, o+i) odN; else
Linkcostsum d)ro = MAX_COST; dir_0_cost(j) = Linkcostsum dιr 0; If (Linkcostsum dιr i < MAX_COST)
Linkcostsum dιr i = Linkcostsum dιr rLinkcost dιr lt ι,nk ,+1) modN, )', else
Linkcostsum dιr , = MAX_COST; dir_l_cost(j) = Linkcostsum dιr ύ End {for loop j}
The update of the link status table is based on the following pseudocode:
{This version of the pseudocode assumes more than 2 nodes in the ring} If (linkstatusmessage.source = node i) and (linkstatusmessage.neighbor = node j) and (direction = 0)
Linkcost d,r 0, link ι,j = linkstatusmessage.status; else if (linkstatusmessage. source = node i) and (linkstatusmessage.neighbor = node j) and (direction = l)Linkcost ir 1, link j, i = linkstatusmessage.status;
In the event that a link is broken, the linkstatusmessage.status for that link is a very large value. In the event that a link is degraded, the linkstatusmessage.status for that link is the measured bit eπor rate on that link divided by the undegraded bit eπor rate of that link. All undegraded links are assumed to have the same undegraded bit eπor rate.
The link status table may optionally contain two cost columns per direction to handle non-revertive switching scenarios. These would be measured cost (equivalent to the columns cuπently shown in Table 3) and non-revertive cost. The non-revertive cost column for each direction contains the highest value of link cost reported since the last time the value was cleared by an external management system. This cost column (instead of the measured cost) would be used for prefeπed direction computation in the non-revertive switching scenario. The prefeπed direction table may also optionally contain two cost columns per direction, just like the link status table. It may also contain two prefeπed direction columns, one based on the measured costs and the other based on the non-revertive costs. Again, the non- revertive cost columns would be used for computations in the non-revertive switching scenario.
As an example, assume that the clockwise link between node 2 and node 3 is degraded with factor a (where a > HYST FACT), the clockwise link between node 4 and node 5 is broken (factor MAX), the counterclockwise link between node 1 and node 2 is degraded with factor b (where b > HYST_FACT), and the counterclockwise link between node 5 and node 6 is degraded with factor c (where c < a/HYST_FACT). The link status table for this example is shown in Table 5. Table 5. Example of link status table with degraded and broken links
Figure imgf000023_0001
The costs of the links needed between the source node and destination node are added to determine the total cost.
The prefeπed direction table for the source node 0 is then:
Table 7. Example of preferred direction table with degraded and broken links
Figure imgf000023_0002
(In the selection of the prefeπed direction, it is assumed that HYST_FACT = 10.)
Once these prefeπed directions are determined, a coπesponding mapping table of destination node to prefeπed direction in packet processors on the data path is modified to match the above table.
Neighbor Fault Notification in Optional Interim Wrapping Mechanism
This section describes a specific fast mechanism for communication of a fault notification from the node on one side of the faulty span to the node on the other side. This mechanism, as described previously, is only necessary in the event of a single link failure, since the node using that link as its egress link cannot detect that it is faulty.
A neighbor fault notification message is sent from each node detecting any link break or degradation on an ingress link to the node. The message is sent on each egress link that is part of the same span as the faulty ingress link. To ensure that it is received, the notification message can be acknowledged via a transmission on both directions around the ring. If it is not acknowledged, then the transmitting node must send the notification multiple times to ensure that it is received. The message is highest priority to ensure that the time required to receive the message at the destination is minimized.
The neighbor fault notification message sent out by a node should contain at least the following information: source node address, link identification of the broken or degraded link for which the node is on the receive end, and link status for that link. For simplicity of implementation, the neighbor fault notification message may be equivalent to the link status message broadcast to all nodes that has been previously described. Mechanisms to Provide Provisioning and Routing Information to Tributary Interface Caxds
Fig. 9 illustrates one shelf controller card 62 in more detail. The shelf controller 62 obtains status information from the node and interfaces with a network management system. The shelf controller 62 both provisions other cards within the device 20 and obtains status information from the other cards. In addition, the shelf controller interfaces with an external network management system and with other types of external management interfaces. The software applications controlling these functions run on the CPU 92. The CPU may be an IBM/Motorola MPC750 microprocessor.
A memory 93 represents memories in the node. It should be understood that there may be distributed SSRAM, SDRAM, flash memory and EEPROM to provide the necessary speed and functional requirements of the system.
The CPU is connected to a PCI bridge 94 between the CPU and various types of external interfaces. The bridge may be an IBM CPC700 or any other suitable type.
Ethernet controllers 96 and 102 are connected to the PCI bus. The controller may be an Intel 21143 or any other suitable type.
An Ethernet switch 98 controls the Layer 2 communication between the shelf controller and other cards within the device. This communication is via control lines on the backplane. The layer 2 protocol used for the internal communication is
100BaseT switched Ethernet. This switch may be a Broadcom BCM5308 Ethernet switch or any other suitable type.
The output of the Ethernet switch must pass through the Ethernet Phy block 100 before going on the backplane. The Ethernet Phy may be a Bel Fuse, Inc., S558 or any other suitable type that interfaces directly with the Ethernet switch used.
The output of the Ethernet controller 102 must pass through an Ethernet Phy 104 before going out the network management system (NMS) 10/100 BaseT Ethernet port. The Ethernet Phy may be an AMD AM79874 or any other suitable type. Information is delivered between applications running on the shelf controller CPU and applications running on the other cards via well-known mechanisms including remote procedure calls (RPCs) and event-based notification. Reliability is provided via TCP/IP or via UDP/IP with retransmissions.
Provisioning of cards and ports via an external management system is via the
NMS Ethernet port. Using a well-known network management protocol such as the Simple Network Management Protocol (SNMP), the NMS can control a device via the placement of an SNMP agent application on the shelf controller CPU. The SNMP agent interfaces with a shelf manager application. The shelf manager application is primarily responsible for the provisioning on tributary interface cards in 52.
Communication from the shelf controller onto the ring is via the switching card CPU. This type of communication is important for sending SNMP messages to remote devices on the ring from an external management system physically connected to the shelf. The bandwidth management that determines whether provisioning is accepted runs on the shelf controller or an extemal workstation.
DESCRIPTION OF HARDWARE
Fig. 6 illustrates the pertinent functional blocks in each node. Node 0 is shown as an example. Each node is connected to adjacent nodes by ring interface cards 30 and 32. These ring interface cards convert the incoming optical signals on fiber optic cables 34 and 36 to electrical digital signals for application to switching card 38.
Fig. 7 illustrates one ring interface card 32 in more detail showing the optical transceiver 40. An additional switch in card 32 may be used to switch between two switching cards for added reliability. The optical transceiver may be a Gigabit Ethernet optical transceiver using a 1300 nm laser, commercially available.
The serial output of optical transceiver 40 is converted into a parallel group of bits by a serializer/deserializer (SERDES) 42 (Fig. 6). The SERDES 42, in one example, converts a series of 10 bits from the optical transceiver 40 to a parallel group of 8 bits using a table. The 10 bit codes selected to coπespond to 8 bit codes meet balancing criteria on the number of 1 's and O's per code and the maximum number of consecutive l's and O's for improved performance. For example, a large number of sequential logical 1 's creates baseline wander, a shift in the long-term average voltage level used by the receiver as a threshold to differentiate between 1 's and O's. By utilizing a 10-bit word with a balanced number of 1 's and O's on the backplane, the baseline wander is greatly reduced, thus enabling better AC coupling of the cards to the backplane.
When the SERDES 42 is receiving serial 10-bit data from the ring interface card 32, the SERDES 42 is able to detect whether there is an eπor in the 10-bit word if the word does not match one of the words in the table. The SERDES 42 then generates an eπor signal. The SERDES 42 uses the table to convert the 8-bit code from the switching card 38 into a serial stream of 10 bits for further processing by the ring interface card 32. The SERDES 42 may be a model VSC 7216 by Vitesse or any other suitable type.
A media access controller (MAC) 44 counts the number of eπors detected by the SERDES 42, and these eπors are transmitted to the CPU 46 during an interrupt or pursuant to polling mechanism. The CPU 46 may be a Motorola MPC860DT microprocessor. Later, it will be described what happens when the CPU 46 determines that the link has degraded sufficiently to take action to cause the nodes to re-route traffic to avoid the faulty link. The MAC 44 also removes any control words forwarded by the SERDES and provides OSI layer 2 (data-link) formatting for a particular protocol by structuring a MAC frame. MACs are well known and are described in the book "Telecommunication System Engineering" by Roger Freeman, third edition, John Wiley & Sons, Inc., 1996, incorporated herein by reference in its entirety. The MAC 44 may a field programmable gate aπay.
The packet processor 48 associates each of the bits transmitted by the MAC 44 with a packet field, such as the header field or the data field. The packet processor 48 then detects the header field of the packet structured by the MAC 44 and may modify information in the header for packets not destined for the node. Examples of suitable packet processors 48 include the XPIF-300 Gigabit Bitstream Processor or the EPEF 4-L3C1 Ethernet Port L3 Processor by MMC Networks, whose data sheets are incorporated herein by reference.
The packet processor 48 interfaces with an external search machine/memory 47 (a look-up table) that contains routing information to route the data to its intended destination. The updating of the routing table in memory 47 will be discussed in detail later.
A memory 49 in Fig. 6 represents all other memories in the node, although it should be understood that there may be distributed SSRAM, SDRAM, flash memory, and EEPROM to provide the necessary speed and functional requirements of the system
The packet processor 48 provides the packet to a port of the switch fabric 50, which then routes the packet to the appropriate port of the switch fabric 50 based on the packet header. If the destination address in the packet header corresponds to the address of node 0 (the node shown in Fig. 6), the switch fabric 50 then routes the packet to the appropriate port of the switch fabric 50 for receipt by the designated node 0 tributary interface card 52 (Fig. 5) (to be discussed in detail later). If the packet header indicates an address other than to node 0, the switch fabric 50 routes the packet through the appropriate ring interface card 30 or 32 (Fig. 5). Control packets are routed to CPU 46. Such switching fabrics and the routing techniques used to determine the path that packets need to take through switch fabrics are well known and need not be described in detail.
One suitable packet switch is the MMC Networks model nP5400 Packet Switch Module, whose data sheet is incorporated herein by reference. In one embodiment, four such switches are connected in each switching card for faster throughput. The switches provide packet buffering, multicast and broadcast capability, four classes of service priority, and scheduling based on strict priority or weighted fair queuing.
A packet processor 54 associated with one or more tributary interface cards, for example, tributary interface card 52, receives a packet from switch fabric 50 destined for equipment (e.g., a LAN) associated with tributary interface card 52. Packet processor 54 is bi-directional, as is packet processor 48. Packet processors 54 and 48 may be the same model processors. Generally, packet processor 54 detects the direction of the data through packet processor 54 as well as accesses a routing table memory 55 for determining some of the desired header fields and the optimal routing path for packets heading onto the ring, and the desired path through the switch for packets heading onto or off of the ring. This is discussed in more detail later. When the packet processor 54 receives a packet from switch fabric 50, it forwards the packet to a media access control (MAC) unit 56, which performs a function similar to that of MAC 44, which then forwards the packet to the SERDES 58 for serializing the data. SERDES 58 is similar to SERDES 42.
The output of the SERDES 58 is then applied to a particular tributary interface card, such as tributary interface card 52 in Fig. 5, connected to a backplane 59. The tributary interface card may queue the data and route the data to a particular output port of the tributary interface card 52. Such routing and queuing by the tributary interface cards may be conventional and need not be described in detail. The outputs of the tributary interface cards may be connected electrically, such as via copper cable, to any type of equipment, such as a telephone switch, a router, a LAN, or other equipment. The tributary interface cards may also convert electrical signals to optical signals by the use of optical transceivers, in the event that the external interface is optical.
In one embodiment, the above-described hardware processes bits at a rate greater than lGbps.
Functions of Hardware During Span Failure/Degradation
Fig. 8 is a flow chart summarizing the actions performed by the network hardware during a span failure or degradation. Since conventional routing techniques and hardware are well known, this discussion will focus on the novel characteristics of the prefeπed embodiment.
In step 1 of Fig. 8, each of the nodes constantly or periodically tests its links with neighboring nodes. The MAC 44 in Fig. 7 counts eπors in the data stream (as previously described) and communicates these eπors to the CPU 46. The CPU compares the bit eπor rate to a predetermined threshold to determine whether the link is satisfactory. An optical link failure may also be communicated to the CPU. CPU 46 may monitor ingress links from adjacent devices based on error counting by MAC 44 or based on the detection of a loss of optical power on ingress fiber 36. This detection is performed by a variety of commercially available optical transceivers such as the Lucent NetLight transceiver family. The loss of optical power condition can be reported to CPU 46 via direct signaling over the backplane (such as via I2C lines), leading to an interrupt or low-level event at the CPU.
In step 2, the CPU 46 determines if there is a change in status of an adjacent link. This change in status may be a fault (bit eπor rate exceeding threshold) or that a previously faulty link has been repaired. It will be assumed for this example that node 6 sensed a fault in ingress link connecting it to node 7.
If there is no detection of a fault in step 2, no change is made to the network. It is assumed in Fig. 8 that adjacent nodes 6 and 7 both detect faults on ingress links connecting node 6 to node 7. The detection of a fault leads to an interrupt or low-level event (generated by MAC 44) sent through switch fabric 50 to CPU 46 signaling the change in status.
In optional step 3, nodes 6 and 7 attempt to notify each other directly of the ingress link fault detected by each. The notification sent by node 6, for example, is sent on the egress link of node 6 connected to node 7. If the entire span is broken, these notifications clearly do not reach the destination. They are useful only if a single link within a span is broken. This is because a node has no way to detect a fiber break impacting an egress link. Based on this notification, each node can then directly wrap traffic in the fashion shown in Fig. 5. The wrapping of traffic in node 6 is performed through a configuration command from CPU 46 to packet processor 48 connected as shown in Fig. 7 to ring interface card 32 (assuming that links from ring interface card 32 connect to node 7). After receiving this command, packet processor 48 loops back traffic through the switching fabric and back out ring interface card 30 that it normally would send directly to node 7. Each communication by a node of link status is associated with a session number. A new session number is generated by a node only when it senses a change in the status of a neighboring node. As long as the nodes receive packets with the cuπent session number, then the nodes know that there is no change in the network. Both nodes 6 and 7 increment the session number stored at each node upon detection of a fault at each node.
In step 4, both node 6 and node 7 then broadcast a link status message, including the new session number, conveying the location of the fault to all the nodes. Each node, detecting the new session number, forwards the broadcast to its adjacent node.
A further description of the use of the session number in general topology reconfiguration scenarios, of which a link or span failure is one, is found in the co- pending application entitled "Dual-Mode Virtual Network Addressing," by Jason Fan et al., assigned to the present assignee and incorporated herein by reference.
In step 5, the identity of the fault is then used by the packet processor 54 in each node to update the routing table in memory 55. Routing tables in general are well known and associate a destination address in a header with a particular physical node to which to route the data associated with the header. Each routing table is then configured to minimize the cost from a source node to a destination node. Typically, if the previously optimized path to a destination node would have had to go through the faulty link, that route is then updated to be transmitted through the reverse direction through the ring to avoid the faulty route. The routing table for each of the packet processors 54 in each node would be changed as necessary depending upon the position of the node relative to the faulty link. Details of the routing tables have been previously described.
In one embodiment, each of the nodes must acknowledge the broadcast with the new session number, and the originating node keeps track of the acknowledgments. After a time limit has been exceeded without receiving all of the acknowledgments, the location of the fault is re-broadcast without incrementing the sequence number. Accordingly, all nodes store the cuπent topology of the ring, and all nodes may independently create the optimum routing table entries for the cuπent configuration of the ring.
In step 6, the routing table for each node has been updated and data traffic resumes. Accordingly, data originating from a LAN connected to a tributary interface card 52 (Fig. 5) has appended to it an updated routing header by packet processor 54 for routing the data through switch fabric 50 to the appropriate output port for enabling the data to arrive at its intended destination. The destination may be the same node that originated the data and, thus, the switch fabric 50 would wrap the data back through a tributary interface card in the same node. Any routing techniques may be used since the invention is generally applicable to any protocol and routing techniques.
Since some traffic around the ring must be re-routed in order to avoid the faulty link, and the bandwidths of the links are fixed, the traffic to be transmitted around the healthy links may exceed the bandwidth of the healthy links. Accordingly, some lower priority traffic may need to be dropped or delayed, as identified in step 7. Generally, the traffic classified as "unprotected" is dropped or delayed as necessary to support the "protected" traffic due to the reduced bandwidth.
In one embodiment, the packet processor 54 detects the header that identifies the data as unprotected and drops the packet, as required, prior to the packet being applied to the switch fabric 50. Voice traffic is generally protected.
In step 8, switch fabric 50 routes any packet forwarded by packet processor 54 to the appropriate output port for transmission either back into the node or to an adjacent node.
The above description of the hardware used to implement one embodiment of the invention is sufficient for one of ordinary skill in the art to fabricate the invention since the general hardware for packet switching and routing is very well known. One skilled in the art could easily program the MACs, packet processors, CPU 46, and other functional units to cany out the steps describe herein. Firmware or software may be used to implement the steps described herein. While particular embodiments of the present invention have been shown and described, it will be obvious to those skilled in the art that changes and modifications may be made without departing from this invention in its broader aspects and, therefore, the appended claims are to encompass within their scope all such changes and modifications as fall within the true spirit and scope of this invention.

Claims

CLAIMSWhat is claimed is:
1. A method performed by a communications network, said network comprising nodes interconnected by communication links, at least some of said nodes being connected in a ring by said links, said method comprising: accounting for bandwidth based on source steered restoration; reserving bandwidth on a worst-case single failure scenario basis; avoiding redundancy in accounting for reservation protection; applying traffic configuration matrices to determine span loading.
2. A method performed by a communications network, said network comprising nodes interconnected by communication links, at least some of said nodes being connected in a ring by said links, said method comprising: determining whether individual links are operating above a predetermined operational threshold; broadcasting a first link status message identifying one of the individual links that is not operating above the predetermined operational threshold to the nodes; updating a routing table at each of the nodes such that the routing tables specify routes that avoid the individual link identified the first link status message.
3. The method of claim 2, wherein the determining whether individual links are operating above a predetermined threshold comprises comparing a bit eπor rates associated with the individual links to a predetermined threshold bit eπor rate.
4. The method of claim 2, further comprising: determining that the link identified the first link status message is operating above a predetermined operational threshold; broadcasting a second link status message conveying that the link identified the first link status message is operating above a predetermined operational threshold to each of the nodes; updating the routing table at each of the nodes such that the routing tables specify at least some routes that include the individual link identified the first link status message.
5. The method of claim 2, further comprising routing traffic through the network in accordance with the updated routing tables.
6. The method of claim 2, further comprising: determining whether certain traffic is of a first class or of a second class; providing priority access to the network for the first class traffic.
7. The method of claim 2, further comprising transmitting an acknowledge message from each of the nodes that has received the first link status message.
8. The method of claim 7, further comprising: waiting for the expiration of a predetermined time period after the broadcasting the first link status message determining whether at least a predetermined number of the acknowledge messages have been received; re-transmitting the first link status message if fewer than the predetermined number of the acknowledgement messages have been received.
9. The method of claim 7, wherein the first link status message further includes a session identifier.
10. The method of claim 2, further comprising: transmitting a fault notification message to the node at an opposite end of the link that is not operating above the predetermined operational threshold; receiving the fault notification message at the node at the opposite end of the link that is not operating above the predetermined operational threshold; rerouting traffic at the node at the opposite end of the link that is not operating above the predetermined operational threshold in response to receiving the fault notification message.
11. A method performed by a communications network, said network comprising nodes interconnected by communication links, at least some of said nodes being connected in a ring by said links, said method comprising: determining whether individual links are operating above a predetermined operational threshold; broadcasting a first link status message identifying one of the individual links that is not operating above the predetermined operational threshold to the nodes; updating a routing table at each of the nodes such that the routing tables specify routes that avoid the individual link identified the first link status message; routing traffic through the network in accordance with the updated routing tables determining that the link identified the first link status message is operating above a predetermined operational threshold; broadcasting a second link status message conveying that the link identified the first link status message is operating above a predetermined operational threshold to each of the nodes; updating the routing table at each of the nodes such that the routing tables specify at least some routes that include the individual link identified the first link status message.
12. The method of claim 11 , further comprising: determining whether certain traffic is of a first class or of a second class; providing priority access to the network for the first class traffic.
13. The method of claim 11 , further comprising transmitting an acknowledge message from each of the nodes that has received the first link status message.
14. The method of claim 11 , further comprising: waiting for the expiration of a predetermined time period after the broadcasting the first link status message determining whether at least a predetermined number of the acknowledge messages have been received; re-transmitting the first link status message if fewer than the predetermined number of the acknowledgement messages have been received.
15. The method of claim 11 , wherein the first link status message further includes a session identifier.
16. The method of claim 11 , further comprising: transmitting a fault notification message to the node at an opposite end of the link that is not operating above the predetermined operational threshold; receiving the fault notification message at the node at the opposite end of the link that is not operating above the predetermined operational threshold; rerouting traffic at the node at the opposite end of the link that is not operating above the predetermined operational threshold in response to receiving the fault notification message.
PCT/US2002/007388 2001-03-12 2002-03-11 Bandwidth reservation reuse in dynamically allocated ring protection and restoration technique WO2002073903A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2002571657A JP2004533142A (en) 2001-03-12 2002-03-11 Reuse of bandwidth reservation in protection and restoration techniques for dynamically allocated rings
EP02721350A EP1368937A4 (en) 2001-03-12 2002-03-11 Bandwidth reservation reuse in dynamically allocated ring protection and restoration technique
CA002440245A CA2440245A1 (en) 2001-03-12 2002-03-11 Bandwidth reservation reuse in dynamically allocated ring protection and restoration technique

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US09/805,360 2001-03-12
US09/805,360 US20030031126A1 (en) 2001-03-12 2001-03-12 Bandwidth reservation reuse in dynamically allocated ring protection and restoration technique

Publications (1)

Publication Number Publication Date
WO2002073903A1 true WO2002073903A1 (en) 2002-09-19

Family

ID=25191360

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2002/007388 WO2002073903A1 (en) 2001-03-12 2002-03-11 Bandwidth reservation reuse in dynamically allocated ring protection and restoration technique

Country Status (6)

Country Link
US (1) US20030031126A1 (en)
EP (1) EP1368937A4 (en)
JP (1) JP2004533142A (en)
CN (2) CN1606850A (en)
CA (1) CA2440245A1 (en)
WO (1) WO2002073903A1 (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004312672A (en) * 2002-11-18 2004-11-04 Korea Electronics Telecommun Ring selection method for dual ring network
WO2007051374A1 (en) * 2005-10-31 2007-05-10 Huawei Technologies Co., Ltd. A method for guaranteeing classification of service of the packet traffic and the method of rate restriction
WO2007125111A1 (en) * 2006-04-28 2007-11-08 Nokia Siemens Networks Gmbh & Co. Kg A method and system for the protection of ethernet rings
WO2008051665A2 (en) * 2006-10-23 2008-05-02 Dot Hill Systems Corporation Adaptive sas phy configuration
EP2124392A1 (en) * 2008-05-20 2009-11-25 Hangzhou H3C Technologies Co., Ltd. Ring network routing method and ring network node
US7673185B2 (en) 2006-06-08 2010-03-02 Dot Hill Systems Corporation Adaptive SAS PHY configuration
US7746767B2 (en) 2003-08-05 2010-06-29 Telecom Italia S.P.A. Method for providing extra-traffic paths with connection protection in a communication network, related network and computer program product therefor
US8462652B2 (en) 2007-11-13 2013-06-11 Fujitsu Limited Transmission device and switchover processing method

Families Citing this family (46)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7545755B2 (en) * 2000-03-03 2009-06-09 Adtran Inc. Routing switch detecting change in session identifier before reconfiguring routing table
US6950409B1 (en) * 2001-04-30 2005-09-27 Fujitsu Limited Method and system for provisioning non-preemptible unprotected traffic in a bi-directional ring
DE10127286C2 (en) * 2001-06-05 2003-04-24 Fujitsu Siemens Computers Gmbh data ring
US7289513B1 (en) * 2001-06-15 2007-10-30 Cisco Technology, Inc. Switching fabric port mapping in large scale redundant switches
US7054264B2 (en) * 2001-07-24 2006-05-30 Corrigent Systems Ltd. Interconnect and gateway protection in bidirectional ring networks
US7061859B2 (en) 2001-08-30 2006-06-13 Corrigent Systems Ltd. Fast protection in ring topologies
US6973595B2 (en) * 2002-04-05 2005-12-06 International Business Machines Corporation Distributed fault detection for data storage networks
US7454494B1 (en) * 2003-01-07 2008-11-18 Exfo Service Assurance Inc. Apparatus and method for actively analyzing a data packet delivery path
US7522614B1 (en) * 2003-02-28 2009-04-21 3Com Corporation Multi-service access platform for telecommunications and data networks
JP3868939B2 (en) * 2003-08-20 2007-01-17 富士通株式会社 Device for detecting a failure in a communication network
EP1587272A1 (en) * 2004-04-13 2005-10-19 Alcatel Method and apparatus for load distribution in a wireless data network
US20060041715A1 (en) * 2004-05-28 2006-02-23 Chrysos George Z Multiprocessor chip having bidirectional ring interconnect
US7787469B2 (en) 2004-07-12 2010-08-31 Altera Corporation System and method for provisioning a quality of service within a switch fabric
JP4704171B2 (en) * 2005-09-30 2011-06-15 富士通株式会社 COMMUNICATION SYSTEM, TRANSMISSION DEVICE, AND RESERVED BAND SETTING METHOD
US8248916B2 (en) * 2005-12-30 2012-08-21 Telefonaktiebolaget Lm Ericsson (Publ) Recovery methods for restoring service in a distributed radio access network
CN1852211B (en) * 2006-04-11 2010-04-07 华为技术有限公司 Method and apparatus for eliminating ring ID error report message on ring network
US7961817B2 (en) * 2006-09-08 2011-06-14 Lsi Corporation AC coupling circuit integrated with receiver with hybrid stable common-mode voltage generation and baseline wander compensation
JP4890239B2 (en) * 2006-12-27 2012-03-07 富士通株式会社 RPR transmission route designation method and apparatus
WO2008091943A2 (en) * 2007-01-23 2008-07-31 Telchemy, Incorporated Method and system for estimating modem fax performance over packet networks
US7962717B2 (en) * 2007-03-14 2011-06-14 Xmos Limited Message routing scheme
CN101472259B (en) * 2007-12-28 2010-12-08 华为技术有限公司 Method and device for triggering policy control and charging function
JP4488094B2 (en) * 2008-07-28 2010-06-23 ソニー株式会社 Communication node, communication method, and computer program
CN101860484A (en) * 2010-05-24 2010-10-13 中兴通讯股份有限公司 Dynamic adjustment method and network device of switching loop
JP5655696B2 (en) 2011-05-11 2015-01-21 富士通株式会社 Network and its failure relief method
CN102316484B (en) * 2011-09-08 2017-09-29 中兴通讯股份有限公司 Method and system for switching ring network wireless device
US20130083652A1 (en) * 2011-09-29 2013-04-04 Electronics And Telecommunications Research Institute Apparatus and method of shared mesh protection switching
US9007923B2 (en) * 2011-10-31 2015-04-14 Itron, Inc. Quick advertisement of a failure of a network cellular router
CN103684951B (en) * 2012-08-31 2017-06-20 中国移动通信集团公司 A kind of ring network protection method and system
CN103795601B (en) * 2012-11-04 2018-04-10 中国移动通信集团公司 A kind of method and device for realizing looped network Steering protections
US9154408B2 (en) * 2013-02-26 2015-10-06 Dell Products L.P. System and method for traffic polarization during failures
CN105765909A (en) * 2013-06-27 2016-07-13 华为技术有限公司 Link switching method and device
US9929899B2 (en) * 2013-09-20 2018-03-27 Hewlett Packard Enterprises Development LP Snapshot message
KR101631651B1 (en) * 2013-12-04 2016-06-20 주식회사 쏠리드 Optical Repeater of Ring Topology type
US9306775B1 (en) 2014-09-11 2016-04-05 Avago Technologies General Ip (Singapore) Pte. Ltd. Adaptation of gain of baseline wander signal
CN104253762B (en) * 2014-09-22 2018-01-23 广州华多网络科技有限公司 The method and device of concurrent processing
JP6683090B2 (en) 2016-09-26 2020-04-15 株式会社デンソー Relay device
CN108632121A (en) * 2017-03-23 2018-10-09 中兴通讯股份有限公司 A kind of pretection switch method and device for looped network
CN107465966B (en) * 2017-08-31 2020-06-05 中国科学院计算技术研究所 Topology reconstruction control method for optical network
US11239932B2 (en) * 2018-11-14 2022-02-01 Cisco Technology, Inc. Circuit emulation maintaining transport overhead integrity
CN109981454A (en) * 2019-03-29 2019-07-05 中国人民银行清算总中心 The broadcast controlling method and device of dynamic routing broadcasting packet
JP7105728B2 (en) * 2019-05-24 2022-07-25 古河電気工業株式会社 Communication system, communication system control method, and communication device
US11283518B2 (en) * 2019-11-08 2022-03-22 Infinera Corporation Method and apparatus for a restoration network with dynamic activation of pre-deployed network resources
CN111650450B (en) * 2020-04-03 2022-07-15 杭州奥能电源设备有限公司 Identification method based on direct current mutual string identification device
WO2022061783A1 (en) * 2020-09-25 2022-03-31 华为技术有限公司 Routing method and data forwarding system
US12034570B2 (en) 2022-03-14 2024-07-09 T-Mobile Usa, Inc. Multi-element routing system for mobile communications
CN116033585A (en) * 2023-03-24 2023-04-28 深圳开鸿数字产业发展有限公司 Data transmission method, device, communication equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5187706A (en) * 1990-10-30 1993-02-16 At&T Bell Laboratories Dual access rings for communications networks
US6246692B1 (en) * 1998-02-03 2001-06-12 Broadcom Corporation Packet switching fabric using the segmented ring with resource reservation control
US6256292B1 (en) * 1996-07-11 2001-07-03 Nortel Networks Corporation Self-healing line switched ring for ATM traffic
US6301267B1 (en) * 1997-03-13 2001-10-09 Urizen Ltd. Smart switch
US6301254B1 (en) * 1999-03-15 2001-10-09 Tellabs Operations, Inc. Virtual path ring protection method and apparatus

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5130986A (en) * 1990-04-27 1992-07-14 At&T Bell Laboratories High speed transport protocol with two windows
WO1995006988A1 (en) * 1993-09-02 1995-03-09 Telstra Corporation Limited A method of allocating spare capacity to links of a telecommunications network
JPH0795225A (en) * 1993-09-20 1995-04-07 Fujitsu Ltd Bidirectional ring network control system
GB9403223D0 (en) * 1994-02-19 1994-04-13 Plessey Telecomm Telecommunications network including remote channel switching protection apparatus
JP3511763B2 (en) * 1995-11-17 2004-03-29 株式会社日立製作所 ATM network system and connection admission control method
US5793745A (en) * 1996-05-06 1998-08-11 Bell Communications Research, Inc. Bundled protection switching in a wide area network background of the invention
DE19703992A1 (en) * 1997-02-03 1998-08-06 Siemens Ag Method for the equivalent switching of transmission devices in ring architectures for the bidirectional transmission of ATM cells
US6269452B1 (en) * 1998-04-27 2001-07-31 Cisco Technology, Inc. System and method for fault recovery for a two line bi-directional ring network
US6246667B1 (en) * 1998-09-02 2001-06-12 Lucent Technologies Inc. Backwards-compatible failure restoration in bidirectional multiplex section-switched ring transmission systems
US6392992B1 (en) * 1998-11-30 2002-05-21 Nortel Networks Limited Signal degrade oscillation control mechanism
JP2000174815A (en) * 1998-12-09 2000-06-23 Nec Corp Qos protection device
IT1304049B1 (en) * 1998-12-23 2001-03-07 Cit Alcatel METHOD TO OPTIMIZE, IN THE EVENT OF A FAULT, THE AVAILABILITY OF THE LOW PRIORITY CANALIA IN A TRANSOCEANIC FIBER OPTIC RING TYPE MS-SP
US6690644B1 (en) * 1999-02-17 2004-02-10 Zhone Technologies, Inc. Mechanism for 1:1, 1+1, and UPSR path-switched protection switching
US6317426B1 (en) * 1999-06-03 2001-11-13 Fujitsu Network Communications, Inc. Method and apparatus for hybrid protection in a switching network

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5187706A (en) * 1990-10-30 1993-02-16 At&T Bell Laboratories Dual access rings for communications networks
US6256292B1 (en) * 1996-07-11 2001-07-03 Nortel Networks Corporation Self-healing line switched ring for ATM traffic
US6301267B1 (en) * 1997-03-13 2001-10-09 Urizen Ltd. Smart switch
US6246692B1 (en) * 1998-02-03 2001-06-12 Broadcom Corporation Packet switching fabric using the segmented ring with resource reservation control
US6301254B1 (en) * 1999-03-15 2001-10-09 Tellabs Operations, Inc. Virtual path ring protection method and apparatus

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP1368937A4 *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004312672A (en) * 2002-11-18 2004-11-04 Korea Electronics Telecommun Ring selection method for dual ring network
JP4520134B2 (en) * 2002-11-18 2010-08-04 韓國電子通信研究院 Ring selection method for dual ring network
US7746767B2 (en) 2003-08-05 2010-06-29 Telecom Italia S.P.A. Method for providing extra-traffic paths with connection protection in a communication network, related network and computer program product therefor
WO2007051374A1 (en) * 2005-10-31 2007-05-10 Huawei Technologies Co., Ltd. A method for guaranteeing classification of service of the packet traffic and the method of rate restriction
WO2007125111A1 (en) * 2006-04-28 2007-11-08 Nokia Siemens Networks Gmbh & Co. Kg A method and system for the protection of ethernet rings
US7673185B2 (en) 2006-06-08 2010-03-02 Dot Hill Systems Corporation Adaptive SAS PHY configuration
US7536584B2 (en) 2006-06-08 2009-05-19 Dot Hill Systems Corporation Fault-isolating SAS expander
WO2008051665A2 (en) * 2006-10-23 2008-05-02 Dot Hill Systems Corporation Adaptive sas phy configuration
GB2457179A (en) * 2006-10-23 2009-08-12 Dot Hill Systems Corp Adaptive SAS PHY configuration
WO2008051665A3 (en) * 2006-10-23 2008-06-19 Dot Hill Systems Corp Adaptive sas phy configuration
GB2457179B (en) * 2006-10-23 2011-08-17 Dot Hill Systems Corp Adaptive SAS PHY configuration
US8462652B2 (en) 2007-11-13 2013-06-11 Fujitsu Limited Transmission device and switchover processing method
EP2124392A1 (en) * 2008-05-20 2009-11-25 Hangzhou H3C Technologies Co., Ltd. Ring network routing method and ring network node
US8004967B2 (en) 2008-05-20 2011-08-23 Hangzhou H3C Technologies Co., Ltd. Ring network routing method and ring network node

Also Published As

Publication number Publication date
EP1368937A4 (en) 2004-11-10
CN1606850A (en) 2005-04-13
CA2440245A1 (en) 2002-09-19
JP2004533142A (en) 2004-10-28
US20030031126A1 (en) 2003-02-13
EP1368937A1 (en) 2003-12-10
CN101854284A (en) 2010-10-06

Similar Documents

Publication Publication Date Title
US6680912B1 (en) Selecting a routing direction in a communications network using a cost metric
US20030031126A1 (en) Bandwidth reservation reuse in dynamically allocated ring protection and restoration technique
US7929428B2 (en) Switch for dynamically rerouting traffic due to detection of faulty link
US6865149B1 (en) Dynamically allocated ring protection and restoration technique
EP1262042B1 (en) Routing switch for dynamically rerouting traffic due to detection of faulty link
EP1348265B1 (en) Maintaining quality of packet traffic in optical network when a failure of an optical link occurs
US6952396B1 (en) Enhanced dual counter rotating ring network control system
US7778162B2 (en) Multiple service ring of N-ringlet structure based on multiple FE, GE and 10GE
US7486614B2 (en) Implementation method on multi-service flow over RPR and apparatus therefor
US20020112072A1 (en) System and method for fast-rerouting of data in a data communication network
JP4167072B2 (en) Selective protection against ring topology
EP1445919A2 (en) Dual-mode virtual network addressing
WO2000013376A9 (en) Redundant path data communication
US20050002329A1 (en) Method and apparatus for a hybrid variable rate pipe
JP2006005941A (en) Fault protection in each service of packet network, and method and apparatus for restoration
JP2003501880A (en) Relative hierarchical communication network with distributed routing via modular switches with packetized security codes, parity exchange, and priority transmission scheme
US20040202467A1 (en) Protection mechanism for an optical ring
Cisco Chapter 9, Ethernet Operation
Cisco Chapter 9, Ethernet Operation
US7710878B1 (en) Method and system for allocating traffic demands in a ring network
US20030063561A1 (en) Equivalent switching method for transmission devices in mpls networks
Zhong et al. Optical resilient Ethernet rings for high-speed MAN networks

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ OM PH PL PT RO RU SD SE SG SI SK SL TJ TM TN TR TT TZ UA UG UZ VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
WWE Wipo information: entry into national phase

Ref document number: 2002721350

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2440245

Country of ref document: CA

WWE Wipo information: entry into national phase

Ref document number: 2002571657

Country of ref document: JP

WWE Wipo information: entry into national phase

Ref document number: 028085167

Country of ref document: CN

WWP Wipo information: published in national office

Ref document number: 2002721350

Country of ref document: EP

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

WWW Wipo information: withdrawn in national office

Ref document number: 2002721350

Country of ref document: EP