WO2000074310A2 - Method and system for path protection in a communications network - Google Patents

Method and system for path protection in a communications network

Info

Publication number
WO2000074310A2
WO2000074310A2 (PCT/US2000/015457)
Authority
WO
WIPO (PCT)
Prior art keywords
path
protection
link
node
failure
Prior art date
Application number
PCT/US2000/015457
Other languages
French (fr)
Other versions
WO2000074310A3 (en)
Inventor
Pierre A. Humblet
Bruce D. Miller
Raj Shanmugaraj
Steven Sherry
Peter B. Beaulieu
Michael W. Fortuna
Michael C. Yip
William Abraham
Original Assignee
Astral Point Communications, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US09/324,454 external-priority patent/US6992978B1/en
Application filed by Astral Point Communications, Inc. filed Critical Astral Point Communications, Inc.
Priority to AU53235/00A priority Critical patent/AU5323500A/en
Publication of WO2000074310A2 publication Critical patent/WO2000074310A2/en
Publication of WO2000074310A3 publication Critical patent/WO2000074310A3/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/55Prevention, detection or correction of errors
    • H04L49/552Prevention, detection or correction of errors by ensuring the integrity of packets received through redundant connections
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04QSELECTING
    • H04Q11/00Selecting arrangements for multiplex systems
    • H04Q11/04Selecting arrangements for multiplex systems for time-division multiplexing
    • H04Q11/0428Integrated services digital network, i.e. systems for transmission of different types of digitised signals, e.g. speech, data, telecentral, television signals
    • H04Q11/0478Provisions for broadband connections
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/54Store-and-forward switching systems 
    • H04L12/56Packet switching systems
    • H04L12/5601Transfer mode dependent, e.g. ATM
    • H04L2012/5619Network Node Interface, e.g. tandem connections, transit switching
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/54Store-and-forward switching systems 
    • H04L12/56Packet switching systems
    • H04L12/5601Transfer mode dependent, e.g. ATM
    • H04L2012/5625Operations, administration and maintenance [OAM]
    • H04L2012/5627Fault tolerance and recovery
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/25Routing or path finding in a switch fabric
    • H04L49/253Routing or path finding in a switch fabric using establishment or release of connections between ports

Definitions

  • Protection usually denotes fast recovery (e.g., < 50 ms) from a failure without accessing a central server or database or attempting to know the full topology of the network.
  • protection can be achieved either by triggering a preplanned action or by running a very fast distributed algorithm.
  • restoration usually denotes a more leisurely process (e.g., minutes) of re-optimizing the network after having collected precise topology and traffic information.
  • Protection can occur at several different levels, including automatic protection switching, line switching and path switching.
  • the most basic protection mechanism is 1:N automatic protection switching (APS).
  • APS can be used when there are at least N+1 links between two points in a network. N of these links are active while one is a spare that is automatically put in service when one of the active links fails.
  • APS is a local action that involves no changes elsewhere in the network.
  • path switching In path switching, the protection that is provided in the network is path specific and generally traffic loops can be avoided. Path switching is generally the most bandwidth efficient protection mechanism; however, it suffers from the so-called “failure multiplication" problem wherein a single link failure causes many path failures. There are two approaches to path protection: passive and active.
  • the node discovering the failure sends a message upstream on all paths that use the failed element. This message should eventually reach a recovery point.
  • the process of scanning lists and sending numerous distinct messages can be time consuming.
  • the node discovering the failure broadcasts a notification message to every node in the network. That message contains the identity of the failed element.
  • a node scans all the protection paths passing through it and takes appropriate actions for paths affected by the failure.
  • the implicit method is generally faster because it requires fewer sequential message transmissions and because the propagation of messages takes place in parallel with recovery actions.
  • having a node find out which of its paths uses a failed network element can be a lengthy process, potentially more demanding than finding all paths using a failed network element.
  • the broadcasting includes detecting the link failure at one or both of the nodes connected to the failed link, identifying nodes connected to the one or both detecting nodes that belong to the same area as the failed link and sending the failure message only to such identified nodes.
  • nodes connected thereto which belong to the same areas as the failed link are identified and the failure message is sent only to such identified nodes.
  • a reliable transmission protocol is provided wherein at one or more of the nodes, a LAPD (link access protocol - D channel) protocol unnumbered information frame containing the failure message is sent to connected nodes. The failure message is resent in another unnumbered information frame after a time interval unless an unnumbered acknowledgment frame containing or referencing the failure message is received from the connected node
  • Each protection path is assigned a bandwidth
  • the assigned protection path bandwidth is a fixed amount that can range from 0 to 100 percent of the bandwidth associated with the corresponding working path
  • the relationship between the protection path bandwidth and the corresponding working path is statistical or variable.
  • the working paths that include the at least one failed link are switched to their respective protection paths, with a higher priority protection path preempting one or more lower priority paths that share at least one link if the link capacity of the at least one shared link is otherwise exceeded by addition of the preempting protection path.
  • the higher priority protection paths can preempt lower priority protection paths and lower priority working paths that share at least one link.
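
For illustration, the preemption rule above can be sketched in C. The structure fields, the function name and the policy of dropping only preemptable, strictly lower priority paths are assumptions made for this sketch; the patent only requires that preemption occur when the capacity of a shared link would otherwise be exceeded.

    /* Sketch of the preemption rule: a higher priority protection path may
     * displace lower priority paths on a shared link only if the link would
     * otherwise be over capacity.  All names are illustrative. */
    #include <stdbool.h>

    struct path {
        int priority;      /* larger value = higher priority          */
        int bandwidth;     /* Mbps currently reserved on the link     */
        bool preemptable;  /* low priority paths are preemptable      */
        bool active;
    };

    /* Try to admit protection path 'pp' onto a link with the given capacity
     * and resident paths; preempt preemptable, lower priority paths only as
     * far as needed.  Returns true if the protection path fits. */
    static bool admit_protection_path(struct path *pp, struct path *on_link,
                                      int n_paths, int link_capacity)
    {
        int used = 0;
        for (int i = 0; i < n_paths; i++)
            if (on_link[i].active)
                used += on_link[i].bandwidth;

        if (used + pp->bandwidth <= link_capacity)
            return true;                    /* no preemption needed */

        for (int i = 0; i < n_paths && used + pp->bandwidth > link_capacity; i++) {
            if (on_link[i].active && on_link[i].preemptable &&
                on_link[i].priority < pp->priority) {
                on_link[i].active = false;  /* preempt lower priority path */
                used -= on_link[i].bandwidth;
            }
        }
        return used + pp->bandwidth <= link_capacity;
    }
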
  • a method of protection path switching includes establishing a plurality of working paths, each working path including a working path connection between ports of a switch fabric in each node of a series of interconnected nodes.
  • a protection path activation list is maintained for each communications link in the network, each list comprising an ordered listing of path entries, each path entry associated with a particular working path for that communications link and including at least one path activation command for effecting activation of a protection path connection between ports of the switch fabric.
  • the method includes implementing the path activation commands for each of the path entries of the particular protection path activation list associated with the failed link.
  • a drop list is maintained for each switch fabric output port, each drop list comprising an ordered listing of path entries, each path entry including at least one path deactivation command for effecting deactivation of a path connection using that switch fabric output port if the protection path data rate is greater than the available port capacity.
  • a method of path protection in a network of nodes interconnected by communications links includes establishing a working path through a first series of nodes, the working path having a working path bandwidth.
  • a protection path is assigned to the working path through a second series of nodes, the protection path having a protection path bandwidth in relation to the working path bandwidth.
  • the working path is switched to the assigned protection path.
  • the working path can include a working path established from a customer node over a primary communications link and through the first series of nodes.
  • the protection path can include a protection path assigned from the customer node over a secondary communications link and through the second series of nodes.
  • the primary and secondary links comprise different media, for example, optical fiber, copper wire facilities, wireless and free-space optics.
  • the working path can be established between one of the nodes of the first series of nodes and a node of a third series of nodes in a second network over a primary communications link.
  • the protection path can be assigned between one of the nodes of the second series of nodes and a node of a fourth series of nodes in the second network over a secondary communications link.
  • FIGs. 2A and 2B show the network of FIG. 1 reconfigured with protection paths to handle particular link failures in the working paths.
  • FIG. 2C shows the network of FIG. 1 reconfigured with protection paths to handle link failures with preemption.
  • FIG. 3 A shows a communications network of switching nodes connected to a customer node with a configured working path.
  • FIG. 3B shows the network of FIG. 3 A reconfigured with a protection path over a secondary network facility.
  • FIG. 4A shows a pair of communications networks interconnected by a primary link with a configured working path.
  • FIG. 4B shows the network arrangement of FIG. 4A reconfigured with a protection path over a secondary network facility.
  • FIG. 5 is a block diagram showing a preferred embodiment of a switching node.
  • FIG. 6A is a schematic block diagram showing the switching node of FIG. 5.
  • FIG. 6B is a schematic block diagram of the control module portion of the fabric controller card in FIG. 6A.
  • FIG. 6C is a schematic block diagram of the message bus interface logic.
  • FIG. 6D illustrates a message bus frame format.
  • FIG. 6E is a timing diagram relating to message bus arbitration.
  • FIG. 6F is a timing diagram relating to message transfer.
  • FIG. 7 shows a network of nodes arranged in overlapping areas.
  • FIG. 8 shows the network of FIG. 7 reconfigured with protection paths to handle link failures in two areas.
  • FIG. 9 shows another network node arrangement using overlapping areas.
  • FIG. 10 shows the network of FIG. 9 reconfigured with a protection path to handle a link failure in one of the two areas.
  • FIG. 11 shows the network of FIG. 9 reconfigured with a protection path to handle a link failure in the other of the two areas.
  • FIG. 12 illustrates a flow diagram of a reliable transmission protocol.
  • FIGs. 13A-13C illustrate the broadcast algorithm in the network of FIG. 7.
  • FIG. 14 is a schematic diagram illustrating the relationship between working paths and linked lists for the switchover mechanism.
  • FIG. 15 is a schematic diagram illustrating an embodiment of linked lists for squelching and activating paths.
  • FIG. 16 is a schematic diagram illustrating an embodiment of a table and linked list for dropping paths
  • FIG. 17 is a table indicating the structure for keeping port capacities and drop pointers associated with the table and drop list of FIG. 16
  • FIG. 1 illustrates in schematic form a communications network which includes several switching nodes denoted A, B, C, D, E, F, G and H.
  • the nodes are interconnected by physical communications links 12, 14, 18, 20, 24, 28, 30, 34 and 36.
  • the network further includes endpoints U, V, W, X, Y and Z which are connected to corresponding nodes A, C, D, E, F and H by links 10, 16, 22, 26, 32 and 38, respectively.
  • An embodiment of the switching node is described further herein.
  • the network is used to configure logical connections or working paths between endpoints. Each working path begins at one endpoint, traverses one or more nodes and communications links and terminates at a second endpoint.
  • the first working path WP1 begins at endpoint U, traverses nodes A, B, C and links 10, 12, 14, 16 and terminates at endpoint V.
  • the second working path WP2 starts at endpoint W and passes through nodes D, E and links 22, 24, 26 and terminates at endpoint X.
  • the third working path WP3 begins at endpoint Y and traverses nodes F, G, H and links 32, 34, 36, 38 and terminates at endpoint Z.
  • the communications links each have a fixed capacity or bandwidth for carrying logical channels.
  • Each working path uses a logical channel on each of the links along the particular path.
  • the number of working paths passing through any particular link should not exceed the link capacity
  • working paths WP1 and WP2 each require a bandwidth of 75 Mbps while working path WP3 requires a bandwidth of 50 Mbps.
  • the bandwidth capacity of communications link 24 is shown as 150 Mbps.
  • link 24 can accommodate additional working paths having bandwidth requirements up to 100 Mbps
  • each of the working paths and protection paths is assigned a priority level. A protection path and its associated working path are not necessarily assigned the same priority. Those working paths and protection paths having low priority are deemed preemptable by higher priority protection paths.
  • a path that cannot be preempted is also referred to as being non- preemptable.
  • a high priority protection path can preempt one or more low priority paths that share a communications link if the link capacity of the shared link would otherwise be exceeded by addition of the preempting protection path.
  • working paths WP1 and WP2 are assigned high priority and the working path WP3 is assigned low priority. It should be understood that there can also be a range of priority levels such that one protection path can have a higher priority than another protection path.
  • FIG. 2A illustrates the network of FIG. 1 reconfigured to handle a failure in the first working path WP1.
  • a failure has occurred on communications link 14 and the logical connection that traversed the path defined by working path WP1 is now provided using a protection path PP1.
  • a mechanism for effecting fast switchover to the protection path is described further herein.
  • the protection path PP1 is precalculated at the time the working path WP1 is configured in the network.
  • the bandwidth for the protection path can be provisioned in a range from 0 to 100% of the working path bandwidth.
  • FIG. 2B illustrates the network of FIG. 1 reconfigured to handle a failure in the second working path WP2.
  • a failure has occurred on communications link 34 and the logical connection that traversed the path defined by working path WP2 is now provided using a protection path PP2.
  • the protection path PP2 is precalculated at the time the working path WP2 is configured in the network and the provisioned bandwidth is 70 Mbps.
  • the protection path PP2 starts at endpoint Y, traverses nodes F, D, E, H and links 32, 28, 24, 30, 38 and terminates at endpoint Z.
  • the protection path PP2 shares the communications link 24 between nodes D and E that is used to carry working path WP2. Again, since the total bandwidth (120 Mbps) remains within the 150 Mbps capacity of link 24, no preemption is required.
  • if protection path PP1 instead has a higher priority than protection path PP2, then protection path PP1 can also preempt protection path PP2 should the need arise due to differing capacity constraints on the shared link 24.
  • a centralized network management system attempts to find routes with enough capacity for all working and protection paths
  • the network management system also finds routes for the preemptable paths, reusing the protection capacity of non-preemptable paths
  • a communications network 722 includes several switching nodes denoted BB, CC, DD and EE.
  • the nodes are interconnected by physical communications links 710, 712, 714 and 716.
  • Endpoints U and V are connected to corresponding nodes AA and DD by links 702, 718, respectively.
  • Node AA is a customer node which is connected to network node BB over primary communications link 704.
  • the links 706, 708 of the secondary network 720 can be, for example, DS3 copper lines, additional optical fiber facilities or other media such as wireless or free-space optics.
  • the secondary network 720 may belong to the same or a different network service provider.
  • a working path WP11 begins at endpoint U, traverses nodes AA, BB, CC, DD and links 702, 704, 710, 712, 718 and terminates at endpoint V.
  • working path WP11 requires a bandwidth of 75 Mbps. This particular bandwidth is given only by way of example and is not meant to limit the invention.
  • FIG. 3B illustrates the network arrangement of FIG. 3A reconfigured to handle a failure in working path WP11.
  • a failure has occurred on primary link 704 and the logical connection that traversed the path defined by working path WP11 is now provided using a protection path PP11.
  • Switchover to the protection path can be provided in accordance with the fast switchover mechanism described further herein.
  • the protection path PP11 is assigned to the working path WP11.
  • the bandwidth for the protection path can be provisioned in a range from 0 to 100% of the working path bandwidth. In this case, the bandwidth of protection path PP11 is provisioned as 45 Mbps based on a given capacity provisioned on secondary network 720.
  • the protection path PP11 starts at endpoint U, traverses nodes AA, CC, DD and links 702, 706, 708, 712, 718 and terminates at endpoint V. Note that preemptable, lower priority traffic can share the communications bandwidth provided by the secondary links 706, 708.
  • FIG. 4A A network arrangement bridging two networks is shown in FIG. 4A.
  • communications networks 722A, 722B each include switching nodes denoted BB, CC, DD and EE.
  • the nodes are interconnected by physical communications links 710A, 712A, 714A, 716A and 710B, 712B, 714B, 716B, respectively.
  • Node DD of network 722A is connected to node BB of network 722B over primary communications link 724.
  • the link 724 is non-diverse in that a failure in the link would leave service between the networks 722A, 722B incomplete.
  • diverse routing is provided between the networks by connecting node DD of network 722A to node CC of network 722B via secondary links 726, 728 of secondary network 720.
  • a working path WP22 traverses nodes EE, DD of network 722A and nodes BB, EE, DD of network 722B over links 716A, 724, 714B, 716B.
  • FIG. 4B illustrates the network arrangement of FIG. 4A reconfigured to handle a failure in working path WP22.
  • a failure has occurred on primary link 724 and the logical connection that traversed the path defined by working path WP22 is now provided using a protection path PP22.
  • the protection path switchover can be provided according to the fast switchover mechanism described further herein.
  • the protection path PP22 traverses nodes EE, DD of network 722A and nodes CC, DD of network 722B over links 716A, 726, 728, 712B. Note that preemptable, lower priority traffic can share the communications bandwidth provided by the secondary network links 726, 728.
  • FIGs. 5, 6A and 6B An embodiment of a switching node 100 is now described at a high level with reference to FIGs. 5, 6A and 6B.
  • the terms “fabric” and “switch fabric” are used interchangeably herein to refer to the combined control and cell/packet buffer storage components of the system.
  • the fabric memory card 110 provides the cell buffer storage and includes static RAM 110A, address generation logic 110B, memory buffers 110C and clocking 110D.
  • the memory buffers 110C buffer cells between memory 110A and the port interface circuits 104B, 108B on the line cards 104 and system controller 108, respectively.
  • the address generation logic 110B derives the physical addresses for cell storage by snooping control messages transported on the midplane 102.
  • the memory card 110 further includes multiplexers 110E which multiplex the cell data paths between the midplane 102 and the memory buffers 110C.
  • the fabric controller card 106 performs many of the functions that relate to aspects of the present invention.
  • the fabric controller includes four control modules 120A, 120B, 120C, 120D and a control module interface 118 for interfacing the control modules to the midplane 102.
  • Each control module manages cell flows for a subset of the I/O ports.
  • CM 120A-120D System-wide messaging paths exist between the fabric controller card 106, system controller 108, and the line cards 104. Normal cell data paths are between the line cards and the fabric memory card 110. CPU cell data paths are between the fabric controller card and the fabric memory or between the system controller and the fabric memory. Finally, cell header paths are between the line cards and the fabric controller card, or between the system controller and the fabric controller card.
  • the fabric controller card 106 uses the controller portion of the AnyFlow 5500™ chip set provided by MMC Networks. These five chips completely determine the behavior of the fabric.
  • Each control module (CM) 120A- 120D includes 4 of the 5 chips, and manages 16 I/O ports of the switching node 100.
  • Each of the control modules 120A-120D includes two different modular switch controllers (MSC1) 204A-204D and (MSC2), a per-flow queue controller (PFQ) 212A-212D and a per-flow scheduler (PFS) 216A-216D.
  • the CMIs 118A, 118B are shared between CM pairs 120A, 120B and 120C, 120D, respectively. The chip set runs synchronously at 50 MHz.
  • Each MSC1, MSC2 pair communicates with other MSC pairs in the system via the CMIs 118A, 118B using dedicated internal buses 220.
  • the messages passed between MSCs contain the information needed for each CM to maintain its own set of captive data structures, which together comprise the complete state of the cell switching fabric.
  • Each MSC1 204A-204D has a CPU port (not shown) for internal register access.
  • Both the MSC1 and the MSC2 have interfaces to the cell header portion of the fabric interconnect matrix 110 (FIG. 6A), but only the MSC2 drives this bus. Both devices have unique captive memories 202A-202D and 206A-206D, respectively, for their own data structures.
  • the PFQ 212A-212D manages the cell queues for each output flow associated with its 16 output ports. It connects to the MSC2 and its own local memories 210A-210D.
  • the PFS 216A-216D supports an assortment of scheduling algorithms used to manage Quality of Service (QoS) requirements.
  • QoS Quality of Service
  • the PFS has its own local memories 214A-214D and its own CPU register interface.
  • the PFQ and PFS communicate via flow activation and deactivation messages.
  • the CMIs 118A, 118B route messages between MSCs in CM pairs.
  • the CMIs are meshed together in a specific fashion depending on the number of CM pairs, and therefore the total number of supported ports and fabric bandwidth.
  • the fabric controller card 106 further includes a control processor 116.
  • the control processor 116, which is, for example, a Motorola MPC8x0, provides for setup of the MMC data structures and the internal registers of the CM chip set.
  • the control processor 116 has a path to the system-wide message bus provided on the midplane 102 through message interface 106C for communication with the main processor 108A on the system controller card 108.
  • the fabric controller card 106 further includes local Flash PROM 136 for boot and diagnostic code and local SDRAM memory 134 into which its real-time image can be loaded and from which it executes.
  • the card supports a local UART connection 140 and an Ethernet port 142 which are used for lab debugging.
  • the card includes system health monitoring logic 138, stats engine 132, stats memory 130, path protection accelerator 122, path protection memory 124, registers 126 and switch command accelerator 128.
  • the path protection accelerator 122, which in an embodiment is implemented as an FPGA, is used to speed up the process of remapping traffic flows in the fabric and is described in further detail herein below.
  • the switch command accelerator 128 facilitates the sending and receiving of certain types of cells (e.g., Operations, Administration and Management cells) between the fabric control processor 116 and the MSC1 204A-204D (FIG. 6B).
  • the stats engine 132 and stats memory 130 are used for accumulating statistics regarding the cell traffic through the switching node 100.
  • the message bus interface 108C includes a 60x Bus Interface 402; descriptor engines 404, 406, 408 and 410; DMA engines 414, 416, 418 and 420; FIFOs 424, 426, 428 and 430; receive (RX) engines 432A, 432B and transmit (TX) engine 434.
  • the message bus interface 108C includes slave registers 412, arbiter 422 and arbiter/control 436. Note that the message bus interfaces 104C and 106C are configured similarly.
  • the 60x bus interface logic 402 interfaces an external 60x bus to the internal FPGA logic of the message bus interface 108C. Primary features of the 60x bus interface logic include support of single and burst transfers as a master and support of single beat slave operations. The latter are required to access internal registers for initialization and to read interrupt status.
  • the message bus interface 108C supports four external memory-resident circular queues (not shown). The queues contain descriptors used for TX and RX operations.
  • the descriptor engines, which include high-priority RX and TX descriptor engines 404, 408 and low-priority RX and TX descriptor engines 406, 410, respectively, fetch descriptors from these external queues.
  • the DMA engines, which include high-priority RX and TX DMA engines 414, 418 and low-priority RX and TX DMA engines 416, 420, respectively, transfer data between FIFOs 424, 426, 428 and 430 and the external 60x bus.
  • the address and byte count are loaded in the corresponding DMA engine.
  • the byte count is sourced from the descriptor during TX and sourced from a frame header during RX.
  • the high and low priority TX DMA engines 418, 420 read data from external memory and the high and low priority RX DMA engines 414, 416 write data to external memory.
  • the TX DMA engines 418, 420 support descriptor chaining. At the end of a normal (not chained) transfer, the DMA engine places a CRC word and an EOF marker in the FIFO. This marker informs the TX engine that the message is over. If the descriptor's chain bit is set, upon completion of the DMA transfer, no CRC word or EOF marker is placed in the FIFO. Once a descriptor without the chain bit set is encountered, completion of the DMA transfer results in the writing of a CRC word and EOF marker. The arbiter 422 determines which master is allowed to use the 60x bus next.
  • the RX engines 432 A, 432B monitor the message bus and begin assembling data into 64 bit quantities prior to storing them in the corresponding FIFOs 424, 426.
  • the RX engine simply loads the FIFO until an almost full watermark occurs. At that point, the RX engine asserts flow control and prevents the transmitter from sending new data until the FIFO drains.
  • the arbiter/control logic 436 arbitrates for the message buses 102A, 102B and controls external transceiver logic. Normally this logic requests on both message buses 102A, 102B and uses whichever one is granted. Slave register bits (and also the descriptor header) can force usage of a single message bus to prevent requests to a broken bus. Also present in the logic 436 is a timer that measures bus request length. If the timer reaches a terminal count, the request gets dropped and an error is reported back to the associated processor.
  • Each message bus 102A, 102B requires a centralized arbitration resource.
  • the system requires 32 request lines (for high and low priority) and 16 grant lines per message bus. Arbitration is done in a round-robin fashion in a centralized arbitration resource located on the system controller card 108, with high-priority requests given precedence over low priority requests.
  • Each message bus includes the following signals: FR (frame) 604 and VALID (valid bit) 612, together with the request, grant, data and flow control signals described below.
  • Message bus arbitration signaling for the message bus 102A, 102B, as seen by a bus requestor using message bus interface 108C, is shown in FIG. 6E wherein the following signals are used: CLK - 25 MHz clock signal 602; FR - message bus frame signal 604; REQ - message bus request signal 606; GNT - message bus grant signal 608 and qualified grant signal 610.
  • the FR signal 604 indicates the message time inclusive of SOF and EOF.
  • the new master may drive FR and other signals one cycle later. This allows one dead cycle between frames.
  • Message bus transfer signaling for the message bus 102 A, 102B is shown in FIG. 6F wherein a typical (but very short) message bus transfer is illustrated.
  • the DATA bus signal 618 is shown with H1, H2 indicating header bytes, P1, P2, P3, P4, P5, P6, P7, P8 indicating payload bytes, C indicating the CRC byte, and X indicating invalid data.
  • FC signal 620 does not need to be responded to immediately.
  • the present invention includes a scheme for implicit failure notification which features fast and reliable distributed broadcast of failure messages both between and within nodes.
  • Another important aspect of the broadcast notification according to the present invention is the notion of confining broadcast messages within a network area.
  • node area 40 includes nodes A1, B1, C1, F1, G1 and H1.
  • Node area 42 includes nodes C1, D1, E1, H1, J1 and K1. Note that the overlap occurs such that nodes C1 and H1 and link 56 are fully included in both areas.
  • a working path WP4 is also shown which starts at node A1, traverses nodes B1, C1, D1 and links 44, 46, 58, 60 and terminates at node E1. As noted, it is preferable to define a protection path within each area.
  • Thus, as shown in FIG. 8, protection path PP4A, which starts at node A1, traverses nodes F1, G1 and links 48, 52, 50 and terminates at node C1, provides protection against a failure event, e.g., failed link 44, for working path WP4 in area 40.
  • protection path PP4B, which starts at node C1, traverses nodes H1, J1 and links 56, 64, 62 and terminates at node E1, provides protection against a failure event, e.g., failed link 60, for working path WP4 in area 42.
  • the termination of protection path PP4A in node C1 is connected to the start of protection path PP4B. Referring now to FIGs. 9-11:
  • Node area 40' includes nodes A1, B1, C1, D1, F1, G1 and H1.
  • Node area 42 includes nodes C1, D1, E1, H1, J1 and K1 as described in the example shown in FIGs. 7 and 8. In this example, the overlap occurs such that nodes C1, D1 and H1 and links 56, 58 are fully included in both areas.
  • a protection path PP4A, which starts at node A1, traverses nodes F1, G1 and links 48, 52, 50' and terminates at node D1, provides protection against a failure event, e.g., failed link 44, for working path WP4 in area 40' as shown in FIG. 10. Note that the termination of protection path PP4A in node D1 is connected to working path segment WP4B which represents that portion of working path WP4 in area 42.
  • protection path PP4B, which starts at node C1, traverses nodes H1, J1 and links 56, 64, 62 and terminates at node E1, provides protection against a failure event, e.g., failed link 60, for working path WP4 in area 42 as shown in FIG. 11.
  • the start of protection path PP4B in node C1 is connected to working path segment WP4A which represents that portion of working path WP4 in area 40'.
  • link 58 connecting nodes C1 and D1 belongs to both areas 40', 42. A failure of link 58 is protected by one of the two protection paths PP4A, PP4B.
  • the arrangement of FIGs. 7 and 8 provides protection against double link failures, one in each area. However, such an arrangement cannot protect against a failure in node C1.
  • the network arrangement in FIGs. 9-11 provides protection against a single failure in either area and is resilient to failure of node C1.
  • protection path While only one protection path is associated with a particular working path per area for the particular embodiment described herein above, it should be understood that in other embodiments, there can be multiple protection paths per area that are associated with a working path.
  • a broadcast algorithm for fast failure notification and protection switching according to the present invention is now described.
  • the broadcast algorithm is intended for use in link failure notification.
  • a circuit management service responsible for managing the pair of working/protection paths can handle such matters as revertive or non-revertive restoration by using other signaling mechanisms.
  • the broadcast notification has two aspects: notification within a node and broadcast messaging between nodes.
  • each network switching node includes one or more line cards for terminating a particular communications link to another node. A link failure is detected by one or both of the line cards which terminate the failed link.
  • the line card uses a message bus within the node to notify other elements of the node with a high priority multicast message.
  • These other node elements, described above, include: 1. The other line card processors, which then disseminate the broadcast inside the appropriate network area(s), using the fast (line layer) SONET data communication channel (DCC);
  • 2. the fabric controller card 106 (FIG. 6A), which activates the protection switchover mechanism described further herein; and
  • 3. the system controller card (FIG. 6A), which performs a high level cleanup and alarming. Note that in case of a line card processor failure, the system controller sends the message on its behalf. If the system controller fails, an alternate controller takes over.
  • the format of the broadcast message is shown in the following table:
  • the first two bytes identify the protocol ID
  • the next two bytes are used to indicate a failure counter
  • the following six bytes are used to indicate the node ID
  • the identification of the failed link is provided by the remaining two bytes
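
For illustration only, the 12-byte message layout just described could be represented as the following C structure; the field names, byte order and packing directives are assumptions, since the text only gives the field sizes.

    /* Possible packing of the 12-byte failure message described above. */
    #include <stdint.h>

    #pragma pack(push, 1)
    struct failure_msg {
        uint16_t protocol_id;     /* first two bytes: protocol ID        */
        uint16_t failure_counter; /* next two bytes: failure counter     */
        uint8_t  node_id[6];      /* six bytes: ID of the reporting node */
        uint16_t link_id;         /* last two bytes: failed link ID      */
    };
    #pragma pack(pop)
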
  • the line cards send and receive broadcast messages over the SONET DCC.
  • the line cards have local information available to determine if the broadcast is about an already known failure or about a new failure, and whether the link is in their local area. In the case of a known failure, the broadcast is extinguished. If the line card determines that the link failure is a new failure, the same process for disseminating the message over the message bus occurs. Note that a fiber cable cut can result in several (almost simultaneous) broadcasts, one per affected optical wavelength or color.
  • the broadcast messages are numbered with a "failure counter".
  • the counter value can be modulo 2 (a single bit), although it is preferable to number the counter values modulo 255, reserving 0xFF. In the latter case, the comparison can be done in arithmetic modulo 255. That is, numbers in [I-127, I-1] mod 255 are "less than I" and those in [I+1, I+127] mod 255 are "greater than I".
  • the failure counter can be either line card specific or node specific. The trade-off is between table size (larger for line card counters) and complexity (race condition: two simultaneous failures inside a node must have distinct numbers). The following describes the case of a single network area; description of the multi-area case follows. When a line card receives an update originating at a link L, the line card compares a previously stored failure counter value for link L with the value in the broadcast message.
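
A minimal sketch of the modulo-255 counter comparison described above. The value range 0-254 with 0xFF reserved follows the text; the function names and the place where the comparison is performed are illustrative.

    #include <stdint.h>

    /* Returns a negative, zero, or positive value as 'a' is older than,
     * equal to, or newer than 'b' in the modulo-255 circular ordering
     * (counters take values 0..254; 0xFF is reserved). */
    static int counter_cmp(uint8_t a, uint8_t b)
    {
        if (a == b)
            return 0;
        /* distance from b forward to a around the 255-value circle */
        uint8_t d = (uint8_t)((a + 255 - b) % 255);
        return (d <= 127) ? 1 : -1;   /* within 127 steps ahead => newer */
    }

    /* A received update for link L is acted on only if it is newer than
     * the previously stored counter value for that link. */
    static int update_is_new(uint8_t stored, uint8_t received)
    {
        return counter_cmp(received, stored) > 0;
    }
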
  • a line card broadcasts the message throughout the node, but the outgoing line cards only forward the message within the correct areas. Note that since the outgoing line cards may want to look at the failure counter in the message in order not to send a duplicate, the extra processing associated with this option is not significant.
  • the message is broadcast on all line cards; however, on reception a line card checks that it belongs to the proper area, discarding the message if necessary. Note that discarded messages must still be acknowledged per the transmission protocol described below.
  • OSPF Open Shortest Path First
  • OSPF Version 2 Since OSPF propagation is independent of the broadcast protocol of the present invention, it may not be in synch with the broadcast information. To remedy this problem, the OSPF messaging can include the latest failure counter sent by each link.
  • the system controller When receiving an OSPF message, the system controller will compare failure counters (in the modulo 255 sense) in the message with those values stored locally. If the OSPF message appears to be late, the information contained therein is discarded. OSPF includes a mechanism (time out) to determine that a node has become disconnected. When such an event occurs, the system controller will set the failure counters associated with all links of disconnected nodes to the reserved value (0xFF) in an internal table and in the tables of the line cards in the node. Reliance on the OSPF timeout simplifies the broadcast protocol. It should be understood that other routing protocols, such as private network-to-network interface (PNNI), can also be used.
  • LAPD link access protocol - D channel
  • I Information
  • UI Unnumbered Information
  • a reliable transmission protocol is made possible by using the unnumbered mode of LAPD and taking advantage of the fact that the failure message format provides for messages that are already numbered
  • the protocol can be understood with reference to the flow diagram of FIG. 12.
  • a line card sends a broadcast message in a UI frame at block 80 and initializes a timer at block 82. It is preferable to have a timer in the line card dedicated to each link in the network.
  • a node receiving a UI frame replies with a UA frame containing the same information as contained in the UI frame. If the line card receives such a UA frame at block 84, the timer is disabled at block 86. If no UA frame is received at block 84, then the timer is incremented at block 88 and the line card checks for time out of the timer at block 90. On time out, the line card retransmits the broadcast message at block 80. The time out can be less than the link round trip delay, but in that case retransmitted messages can have lower priority.
  • the LAPD protocol adds 6 bytes (reusing the closing flag as an opening flag) to the failure message format, so that the overall length of the message is 18 bytes (before possible bit stuffing).
  • the same basic retransmission algorithm without LAPD formatting can be used to provide reliable transmission on the message bus inside a node as described herein above.
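
The retransmission loop of FIG. 12 (blocks 80-90) might look roughly like the following sketch. The tick-driven timer, the 12-byte message buffer and the send_ui_frame() helper are illustrative stand-ins for the per-link timer and LAPD UI transmission described in the text.

    #include <stdbool.h>
    #include <string.h>

    struct link_tx_state {
        unsigned char msg[12];   /* failure message awaiting acknowledgment */
        bool waiting;            /* timer enabled for this link             */
        int  ticks;              /* ticks since the last (re)transmission   */
        int  timeout_ticks;      /* may be less than the link round trip    */
    };

    extern void send_ui_frame(int link, const unsigned char msg[12]); /* hypothetical */

    void broadcast_on_link(struct link_tx_state *s, int link, const unsigned char msg[12])
    {
        memcpy(s->msg, msg, sizeof s->msg);
        s->waiting = true;                   /* block 82: initialize timer   */
        s->ticks = 0;
        send_ui_frame(link, msg);            /* block 80: send UI frame      */
    }

    void on_ua_frame(struct link_tx_state *s, const unsigned char msg[12])
    {
        if (s->waiting && memcmp(msg, s->msg, sizeof s->msg) == 0)
            s->waiting = false;              /* blocks 84, 86: disable timer */
    }

    void on_timer_tick(struct link_tx_state *s, int link)
    {
        if (s->waiting && ++s->ticks >= s->timeout_ticks) {
            s->ticks = 0;
            send_ui_frame(link, s->msg);     /* blocks 88, 90, 80: retransmit */
        }
    }
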
  • In FIG. 13A, a failure is shown having occurred in communications link 44 which spans nodes A1 and B1.
  • the respective line cards of nodes Al and Bl which terminate the link 44 detect the failure.
  • a failure message is formatted by the detecting line cards and multicast over the message bus of the respective node in accordance with the procedures described herein above.
  • each of the nodes A1 and B1 happens to only have one additional link, namely link 48 from node A1 to node F1 and link 46 from node B1 to node C1. Accordingly, a broadcast message BMAF is sent from node A1 to node F1 and a broadcast message BMBC is sent from node B1 to node C1 by the respective line cards.
  • At nodes C1 and F1, reception of the respective broadcast messages BMAF, BMBC is acknowledged as shown in FIG. 13B.
  • the failure message is further multicast on the message bus of each of nodes C1 and F1 to other line cards within these nodes.
  • Node C1 has three additional links, namely link 50 to node G1, link 56 to node H1 and link 58 to node D1. Since link 58 terminates outside area 40, node C1 only sends a broadcast message BMCG to node G1 and a broadcast message BMCH to node H1.
  • Node F1 has only one additional link, namely link 52 to node G1. Accordingly, node F1 sends a broadcast message BMFG to node G1.
  • Node H1 acknowledges reception of message BMCH and multicasts the message on its message bus. Since link 64 terminates outside area 40, node H1 only sends a broadcast message BMHG to node G1 on link 54. Nodes G1 and H1 each will acknowledge and extinguish the respective messages BMHG and BMGH since such messages will contain the same failure counter value as previously received in messages BMCG and BMCH, respectively.
  • An alternate broadcast algorithm omits the failure counter and relies only on time-outs.
  • a line card maintains a list of link failures received for the first time within the N previous seconds (or other time unit).
  • the link ID and the time of reception are entered on the list, and the message is acknowledged and forwarded as described previously.
  • if the link is already on the list, the message is acknowledged but not forwarded, thus extinguishing the broadcast.
  • the list entry is deleted N seconds after being posted.
  • N must be chosen to be longer than the period of time in which broadcast messages about a link might be in transit in the network. Also, a link that has failed should not be brought back up until N seconds plus the maximum broadcast propagation time have elapsed since the failure. This extra delay is the tradeoff for the elimination of the failure counter.
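
A sketch of the timer-based duplicate suppression described above. The value of N, the fixed-size table and the function name are illustrative; only the expire-after-N-seconds behaviour comes from the text.

    #include <stdbool.h>
    #include <time.h>

    #define N_SECONDS   10    /* must exceed the worst-case broadcast transit time */
    #define MAX_RECENT  256

    struct recent_failure { int link_id; time_t received; bool used; };
    static struct recent_failure recent[MAX_RECENT];

    /* Returns true if the message should be forwarded (first sighting of this
     * link failure); it is acknowledged in either case. */
    bool note_failure(int link_id, time_t now)
    {
        int free_slot = -1;
        for (int i = 0; i < MAX_RECENT; i++) {
            if (recent[i].used && now - recent[i].received >= N_SECONDS)
                recent[i].used = false;             /* delete entry after N seconds */
            if (recent[i].used && recent[i].link_id == link_id)
                return false;                       /* duplicate: acknowledge only  */
            if (!recent[i].used && free_slot < 0)
                free_slot = i;
        }
        if (free_slot >= 0) {
            recent[free_slot].used = true;          /* enter link ID and time       */
            recent[free_slot].link_id = link_id;
            recent[free_slot].received = now;
        }
        return true;                                /* acknowledge and forward      */
    }
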
  • OSPF messages do not include failure counters.
  • OSPF messages announcing that a link is up are discarded if they arrive less than M seconds after a link has been placed on the list described above. Indeed, this UP value must refer to the state of the link before the latest failure announced by the broadcast algorithm. This event happens when a failure broadcast overtakes an OSPF broadcast.
  • the time M must be chosen to be less than N minus the maximum failure broadcast propagation time, but greater than the OSPF broadcast and processing time.
  • the working path output is disabled or "squelched” before enabling the protection path using a "squelch” list for each link in the local area.
  • the switching node For each network link in its local network area, the switching node maintains an "activate" list for protection paths that have a working path using that link (using the information carried in the path establishment messages described above). The relationship between the activation list for different links and working paths is illustrated in FIG. 14. As shown, working path WP 1 has an entry 302 in the linked list 300 for each of links a, b and c. Working path WP2 has an entry in the linked list for links a and b.
  • the activation list entries include commands for quickly activating the protection paths.
  • the position of a path on any of the lists can be determined by the priority assignments noted herein above. Further, to avoid poor capacity utilization in case of multiple failures, if working path WP1 appears before working path WP2 at one node, it should appear before working path WP2 at all common nodes. Otherwise, it is possible for two protection paths that exhaust bandwidth on different links to prevent each other from being activated.
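
One way to obtain this ordering property, sketched here as an assumption rather than the patent's stated mechanism, is for every node to sort its activation lists by the same globally known key, for example priority first with a network-wide path identifier as the tie-break:

    /* Comparator giving every node the same relative ordering of paths. */
    struct act_entry { int priority; unsigned int path_id; };

    static int act_entry_cmp(const struct act_entry *a, const struct act_entry *b)
    {
        if (a->priority != b->priority)
            return (a->priority > b->priority) ? -1 : 1; /* higher priority first */
        if (a->path_id != b->path_id)
            return (a->path_id < b->path_id) ? -1 : 1;   /* tie-break: global ID  */
        return 0;
    }
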
  • the switching node For each port, the switching node maintains a "drop" list of preemptable paths.
  • the list entries include commands for quickly disabling the output flow.
  • the position of a path on the list can be determined by a priority scheme.
  • FIG. 15 illustrates two linked lists that are maintained by software in path protection memory 124 of the fabric controller card 106 (FIG. 6A).
  • the first list is known as the squelch list 310. It represents those paths that should be disabled upon notification of a corresponding failure.
  • the second list is the activate list 312, which lists those previously provisioned paths that should be activated to complete the switchover.
  • There is one pair of lists for each possible failure that is protected by a predetermined path (only one list pair is shown in FIG. 15).
  • Each list contains a series of paths 318, 320 respectively, with each path in the lists containing data structures 322, 324 that include an input port number, output port number, a list of fabric switch commands, a data rate for that path, and status.
  • the input and output port numbers identify physical ports in the fabric which correspond to the input and output of the path, respectively.
  • software also keeps a table 330 with two entries per port as shown in FIG. 16.
  • the first entry 332 is the port capacity, which is updated each time software adds or deletes a connection using that output port. It represents the current working utilization as an absolute number.
  • the second entry 334 is a pointer to the head of a drop list 336 for that output port.
  • the drop list 336 is a linked list of preemptable traffic paths which hardware is allowed to disable to free-up output port capacity for a protection switchover.
  • the drop list 336 has a format 338 similar to that of the squelch list 310 and the activate list 312, although the output port field points only to itself in this case.
  • the output port capacity table 330 and the drop list 336 are organized as adjacent entries 350, 352 for each of the 128 output ports of the system as shown in the table structure of FIG. 17.
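
For illustration, the squelch/activate/drop list entries and the per-port table of FIGs. 15-17 might be declared as follows; the field widths follow the text, while the exact C layout, the number of switch command words and the pointer-based list representation are assumptions.

    #include <stdint.h>

    struct path_entry {
        uint8_t  in_port;             /* path input port, 0-127                   */
        uint8_t  out_port;            /* path output port, 0-127                  */
        uint16_t data_rate;           /* 0 = zero rate, one increment per LSB     */
        uint16_t status;              /* Working/Protecting/Failed/Dropped/Squelched bits */
        uint32_t switch_cmds[8];      /* fabric CPU-port writes (MSC1/MSC2/PFS)   */
        struct path_entry *next;      /* next entry on the linked list            */
    };

    struct port_entry {               /* two entries per output port (FIG. 16)    */
        uint32_t capacity;            /* capacity available on the output port    */
        struct path_entry *drop_head; /* head of the drop list for that port      */
    };

    struct failure_lists {            /* one list pair per protected failure (FIG. 15) */
        struct path_entry *squelch;
        struct path_entry *activate;
    };
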
  • the squelch function first invalidates the VPI/VCI mapping, which causes the switch to discard these cells at the output port. Next, it adds the output flow to the reset queue of the scheduler.
  • In FIG. 15, assume that Failure A has been identified.
  • Software sets the squelch pointer 311 to the head of the list containing the paths denoted SP[0], SP[1], and SP[2].
  • the path protection accelerator 122 (FIG. 6A) then traverses the squelch list, disabling each of these paths in turn.
  • path protection accelerator 122 Before activating a path, path protection accelerator 122 compares the current port capacity indexed by the output port in APOP[n] against the required path rate of the activation path found in APDR[n]. Assume for this example that paths AP[0] and AP[2] do not need extra capacity freed.
  • path protection accelerator 122 finds that this capacity is already greater than that required by APDR[0], meaning it is safe to activate protection path AP[0].
  • the switch commands are executed, consisting of CPU port writes to the particular MSC1 chips (204A-204D in FIG. 6B) controlling the input translation for that path and the corresponding PFS chips (216A-216D in FIG. 6B) controlling the scheduling. In this case, the proper MSC1 to access must be supplied as part of the switch commands. Since there are more paths on the activate list, the path protection accelerator 122 moves on to AP[1].
  • Path protection accelerator 122 uses APOP[l] to point to the head of the appropriate drop list 336.
  • the process of dropping lower priority output traffic is similar to the squelch process, except that the drop list is only traversed as far as necessary, until the capacity of that output port exceeds APDR[1].
  • each dropped path status DPSF[port,m] is updated along the way to reflect its deactivation and its data rate DPDR[port,m] is added to the capacity for APOP[1]. If the path protection accelerator 122 reaches the end of the drop list and APDR[1] still exceeds the newly computed capacity of the output port APOP[1], the attempted protection switchover has failed and is terminated. Assuming that activation of AP[1] was successful, path protection accelerator 122 repeats the process for AP[2], after which it reaches the end of the activate list, indicating the successful completion of the switchover. The network management system may subsequently reroute or restore the paths that have been dropped.
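
The activation walk just described can be summarized in the following sketch, reusing the structures from the previous sketch. execute_switch_cmds() stands in for the CPU-port writes to the MSC1/PFS devices, the status bit values are arbitrary, and treating the port table entry as available capacity reflects one reading of the text rather than the hardware's exact bookkeeping.

    #include <stdbool.h>

    extern void execute_switch_cmds(const struct path_entry *p); /* hypothetical */

    bool run_switchover(struct failure_lists *f, struct port_entry port_tbl[128])
    {
        /* 1. Squelch the failed working paths on the squelch list. */
        for (struct path_entry *sp = f->squelch; sp; sp = sp->next) {
            execute_switch_cmds(sp);
            sp->status |= 0x10;                        /* mark Squelched (bit illustrative) */
        }

        /* 2. Activate each protection path, freeing capacity only if needed. */
        for (struct path_entry *ap = f->activate; ap; ap = ap->next) {
            struct port_entry *pe = &port_tbl[ap->out_port];

            struct path_entry *dp = pe->drop_head;
            while (pe->capacity < ap->data_rate && dp) {  /* traverse drop list only as far as necessary */
                execute_switch_cmds(dp);
                dp->status |= 0x08;                       /* mark Dropped (bit illustrative) */
                pe->capacity += dp->data_rate;            /* reclaimed output port capacity  */
                dp = dp->next;
            }
            if (pe->capacity < ap->data_rate)
                return false;                             /* end of drop list: switchover fails */

            execute_switch_cmds(ap);                      /* activate the protection path */
            ap->status |= 0x02;                           /* mark Protecting (bit illustrative) */
            pe->capacity -= ap->data_rate;
        }
        return true;                                      /* end of activate list: success */
    }
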
  • the data structures that have been referred to above in connection with the squelch, activate and drop lists are now described
  • the Path Output Port is a 7-bit number, ranging from 0 to 127, which represents the range of line card ports, per the MMC numbering convention used in the fabric.
  • the Path Input Port is a 7-bit number, ranging from 0 to 127, which represents the range of line card ports, per the MMC numbering convention used in the fabric.
  • the Path Data Rate represents the data rate, where all 0's indicates zero data rate. Each increment represents a bandwidth increment.
  • the Path Status Flags (PSF) reflect the state of a path that can be, or has been, squelched, dropped, or activated. States can include the following bits: Working, Protecting, Failed, Dropped, Squelched.
  • the Switch Commands give the hardware directions about the exact operations it must perform at the CPU interface to the Control Module (MSC1 and PFS).
  • the following accesses are required: writes to the Input Translation Table (ITT) via the MSC1 controlling the input port (activate); writes to the Output Translation Table (OTT) via the MSC2 controlling the output port (squelch, drop); and writes to the Scheduler External Memory (SEM) via the PFS controlling the output port (squelch, drop).
  • ITT Input Translation Table
  • OTT Output Translation Table
  • SEM Scheduler External Memory
  • the second operation for path squelching and dropping is to put a flow on the Reset Queue, by accessing the Scheduler External Memory attached to the output PFS, which has its own CPU interface, Command Register, and General Purpose Registers (G0-G2). Two (2) 16-bit writes are needed, plus the write of the CMR.
  • the Output Flow ID and Scheduler Address must be supplied by software
  • the other values are fixed and can be supplied by hardware
  • the third hardware-assisted access into the Control Module involves modifying an Input Translation Table (ITT) entry via the MSC1 associated with the input port. This access is used to activate the protection path, and it is similar to the one used to squelch a path. Five (5) 16-bit writes are required, plus the write of the CMR.
  • the values in R0-R4 must be supplied by software. Hardware can supply the CMR value.
  • Software builds the linked lists of path structures in the memory 124 attached to the path protection accelerator 122, which is implemented as an FPGA (FIG. 6A).
  • reg_save Squelch Path Flags
  • reg_save Squelch Output Port
  • reg_save Squelch Path Rate
  • reg_save Switch Parameter 0
  • Fail update_flags Activate Path Flags to Add_Failed
  • mem_write Current Activate Pointer, Activate Path Flags
  • the pseudo-code disclosed herein above provides a framework for the protection hardware, and allows bookkeeping of the memory operations that are required.
  • a computer program product that includes a computer usable medium
  • a computer usable medium can include a readable memory device, such as a hard drive device, a CD-ROM, a DVD-ROM, or a computer diskette, having computer readable program code segments stored thereon.
  • the computer readable medium can also include a communications or transmission medium, such as a bus or a communications link, either optical, wired, or wireless, having program code segments carried thereon as digital or analog data signals.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Security & Cryptography (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

A system and method for fast and reliable failure notification and accelerated switchover for path protection in a communications network of nodes interconnected by communications links is described. A method of path protection includes establishing plural working paths through the nodes. For each working path, an associated protection path is assigned. Upon a failure event, working paths that include the failed link are switched to their respective protection paths. The working and protection paths can include links on different networks having different media. At each node, linked lists for protection path activation, working path deactivation and path preemption are implemented upon a failure event.

Description

METHOD AND SYSTEM FOR PATH PROTECTION IN A COMMUNICATIONS NETWORK
RELATED APPLICATION
This application is related to U.S. Patent Application No. 09/324,454, filed June 2, 1999, and U.S. Patent Application No. 09/524,479, filed March 13, 2000, the entire teachings of which are incorporated herein by reference.
BACKGROUND
In communications networks, there are two types of mechanisms for handling network failures: protection and restoration. Protection usually denotes fast recovery (e.g., < 50 ms) from a failure without accessing a central server or database or attempting to know the full topology of the network. Typically, protection can be achieved either by triggering a preplanned action or by running a very fast distributed algorithm. By contrast, restoration usually denotes a more leisurely process (e.g., minutes) of re-optimizing the network after having collected precise topology and traffic information.
Protection can occur at several different levels, including automatic protection switching, line switching and path switching. The most basic protection mechanism is 1:N automatic protection switching (APS). APS can be used when there are at least N+1 links between two points in a network. N of these links are active while one is a spare that is automatically put in service when one of the active links fails. APS is a local action that involves no changes elsewhere in the network.
Line switching is another protection mechanism which is similar to APS except that the protection "line" is actually a multi-hop "virtual line" through the network. In
the case of line switching, all of the traffic using the failed line is switched over the protection "virtual line", which can potentially cause traffic loops in the network. An example of line protection switching occurs in the case of a SONET (synchronous optical network) bidirectional line switched ring (BLSR). A third protection mechanism is path switching. In path switching, the protection that is provided in the network is path specific and generally traffic loops can be avoided. Path switching is generally the most bandwidth efficient protection mechanism; however, it suffers from the so-called "failure multiplication" problem wherein a single link failure causes many path failures. There are two approaches to path protection: passive and active.
In the passive approach, data is transmitted in parallel on both a working path and a protection path. The destination node selects between the two paths, without requiring any action from upstream nodes. Passive path switching is prevalent in the case of a SONET unidirectional path switched ring (UPSR) in which all of the traffic goes to (or comes from) a hub node. One drawback with the passive approach is that it wastes line and switch capacities.
In the active approach, a message is sent toward the source (starting from the point of failure) to signal the failure and to request a switchover to a protection path at some recovery point. There are two basic ways of signaling the failure: explicit and implicit.
In the explicit method, the node discovering the failure sends a message upstream on all paths that use the failed element. This message should eventually reach a recovery point. Unfortunately, the process of scanning lists and sending numerous distinct messages (possibly thousands in a large network) can be time consuming. In the implicit method, the node discovering the failure broadcasts a notification message to every node in the network. That message contains the identity of the failed element. Upon receiving such a message, a node scans all the protection paths passing through it and takes appropriate actions for paths affected by the failure. Except in very large networks where the number of links vastly exceeds the number of paths per link, the implicit method is generally faster because it requires fewer sequential message transmissions and because the propagation of messages takes place in parallel with recovery actions. However, having a node find out which of its paths uses a failed network element can be a lengthy process, potentially more demanding than finding all paths using a failed network element.
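
As a rough sketch of the implicit method (not taken from the patent text), a node receiving the broadcast need only scan its own protection paths and act on those whose working path uses the failed element; the types and fixed-size link array below are illustrative.

    /* Each node reacts to the broadcast locally: scan its own protection
     * paths and activate those whose working path uses the failed link. */
    struct prot_path {
        int working_links[8];   /* links used by the protected working path   */
        int n_links;
        int active;             /* 1 once the protection path is switched in  */
    };

    void on_failure_broadcast(int failed_link, struct prot_path *paths, int n_paths)
    {
        for (int i = 0; i < n_paths; i++) {
            for (int j = 0; j < paths[i].n_links; j++) {
                if (paths[i].working_links[j] == failed_link) {
                    paths[i].active = 1;   /* take the recovery action for this path */
                    break;
                }
            }
        }
    }
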
SUMMARY
A need exists for a capability for accelerating implicit failure notification in a network. There is a further need for a failure notification mechanism that provides for reliable broadcast of failure messages.
The approach of the present system and method provides for fast and reliable failure notification and accelerated switchover for path protection. Accordingly, the present system for path protection includes a method of failure notification in a communications network in which there can be several overlapping areas of nodes interconnected by communications links. In the system and method for path protection described herein, a "failure event" contemplates and includes failed communications links and failed nodes. In particular, if a node fails, adjacent nodes can detect the node failure as one or more failed links. Upon a failure event involving one of the communications links, a failure message is broadcast identifying the failed link, the broadcast being confined within the areas which include the failed link. The broadcasting includes detecting the link failure at one or both of the nodes connected to the failed link, identifying nodes connected to the one or both detecting nodes that belong to the same area as the failed link and sending the failure message only to such identified nodes. At each node that receives the broadcast failure message, nodes connected thereto which belong to the same areas as the failed link are identified and the failure message is sent only to such identified nodes. According to another aspect of the system, a reliable transmission protocol is provided wherein at one or more of the nodes, a LAPD (link access protocol - D channel) protocol unnumbered information frame containing the failure message is sent to connected nodes. The failure message is resent in another unnumbered information frame after a time interval unless an unnumbered acknowledgment frame containing or referencing the failure message is received from the connected node.
According to yet another aspect of the system, each node includes plural line cards, each of which terminates a link to another node. Link failures are detected at one of the line cards connected to the failed link and a failure message is sent to the other line cards on a message bus within the node of the detecting line card. At each of the other line cards, the failure message is sent to the associated connected node.
According to still another aspect of the present system, a method of path protection in a network of nodes interconnected by communications links includes establishing a plurality of working paths through the nodes, each working path comprising logical channels of a series of links. For each working path, an associated protection path comprising logical channels of a different series of links is precalculated and a priority is assigned to each working path and associated protection path. The assigned priority can differ between the working path and its associated protection path. In a network having overlapping areas of nodes interconnected by links, a protection path is precalculated for each area through which a particular working path traverses. Each protection path is assigned a bandwidth. In an embodiment, the assigned protection path bandwidth is a fixed amount that can range from 0 to 100 percent of the bandwidth associated with the corresponding working path. In an alternate embodiment, the relationship between the protection path bandwidth and the corresponding working path is statistical or variable. Upon a failure event involving at least one of the links, the working paths that include the at least one failed link are switched to their respective protection paths, with a higher priority protection path preempting one or more lower priority paths that share at least one link if the link capacity of the at least one shared link is otherwise exceeded by addition of the preempting protection path. The higher priority protection paths can preempt lower priority protection paths and lower priority working paths that share at least one link. In accordance with another aspect, a method of protection path switching includes establishing a plurality of working paths, each working path including a working path connection between ports of a switch fabric in each node of a series of interconnected nodes. At each node, a protection path activation list is maintained for each communications link in the network, each list comprising an ordered listing of path entries, each path entry associated with a particular working path for that communications link and including at least one path activation command for effecting activation of a protection path connection between ports of the switch fabric. Upon a failure event involving one of the communications links, the method includes implementing the path activation commands for each of the path entries of the particular protection path activation list associated with the failed link. In a further aspect, a working path deactivation list is maintained for each communications link in the network, each list comprising an ordered listing of path entries, each path entry associated with a particular working path for that communications link and including at least one path deactivation command for effecting deactivation of one of the working path connections between ports of the switch fabric. Upon a failure event involving one of the communications links, the method includes implementing the path deactivation commands for each of the path entries of the particular working path deactivation list associated with the failed link prior to implementing the path activation commands of the corresponding protection path activation list.
In yet another aspect, a drop list is maintained for each switch fabric output port, each drop list comprising an ordered listing of path entries, each path entry including at least one path deactivation command for effecting deactivation of a path connection using that switch fabric output port if the protection path data rate is greater than the available port capacity.
According to another aspect of the present system, a method of path protection in a network of nodes interconnected by communications links includes establishing a working path through a first series of nodes, the working path having a working path bandwidth. A protection path is assigned to the working path through a second series of nodes, the protection path having a protection path bandwidth in relation to the working path bandwidth. Upon a failure event involving at least one node of the first series, the working path is switched to the assigned protection path. According to an aspect, the working path can include a working path established from a customer node over a primary communications link and through the first series of nodes. The protection path can include a protection path assigned from the customer node over a secondary communications link and through the second series of nodes. The primary and secondary links comprise different media, for example, optical fiber, copper wire facilities, wireless and free-space optics.
According to another aspect, the working path can be established between one of the nodes of the first series of nodes and a node of a third series of nodes in a second network over a primary communications link. The protection path can be assigned between one of the nodes of the second series of nodes and a node of a fourth series of nodes in the second network over a secondary communications link.
BRIEF DESCRIPTION OF THE DRAWINGS
The foregoing and other objects, features and advantages will be apparent from the following more particular description of preferred embodiments of the method and system for path protection in a communications network, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the invention. FIG. 1 shows a communications network of switching nodes with several working paths configured through the network.
FIGs. 2A and 2B show the network of FIG. 1 reconfigured with protection paths to handle particular link failures in the working paths. FIG. 2C shows the network of FIG. 1 reconfigured with protection paths to handle link failures with preemption.
FIG. 3A shows a communications network of switching nodes connected to a customer node with a configured working path.
FIG. 3B shows the network of FIG. 3A reconfigured with a protection path over a secondary network facility.
FIG. 4A shows a pair of communications networks interconnected by a primary link with a configured working path.
FIG. 4B shows the network arrangement of FIG. 4A reconfigured with a protection path over a secondary network facility. FIG. 5 is a block diagram showing a preferred embodiment of a switching node.
FIG. 6A is a schematic block diagram showing the switching node of FIG. 5.
FIG. 6B is a schematic block diagram of the control module portion of the fabric controller card in FIG. 6A.
FIG. 6C is a schematic block diagram of the message bus interface logic. FIG. 6D illustrates a message bus frame format.
FIG. 6E is a timing diagram relating to message bus arbitration.
FIG. 6F is a timing diagram relating to message transfer.
FIG. 7 shows a network of nodes arranged in overlapping areas.
FIG. 8 shows the network of FIG. 7 reconfigured with protection paths to handle link failures in two areas.
FIG. 9 shows another network node arrangement using overlapping areas.
FIG. 10 shows the network of FIG. 9 reconfigured with a protection path to handle a link failure in one of the two areas. FIG. 11 shows the network of FIG. 9 reconfigured with a protection path to handle a link failure in the other of the two areas.
FIG. 12 illustrates a flow diagram of a reliable transmission protocol.
FIGs. 13A-13C illustrate the broadcast algorithm in the network of FIG. 7. FIG. 14 is a schematic diagram illustrating the relationship between working paths and linked lists for the switchover mechanism.
FIG. 15 is a schematic diagram illustrating an embodiment of linked lists for squelching and activating paths.
FIG. 16 is a schematic diagram illustrating an embodiment of a table and linked list for dropping paths.
FIG. 17 is a table indicating the structure for keeping port capacities and drop pointers associated with the table and drop list of FIG. 16.
DETAILED DESCRIPTION
FIG. 1 illustrates in schematic form a communications network which includes several switching nodes denoted A, B, C, D, E, F, G and H. The nodes are interconnected by physical communications links 12, 14, 18, 20, 24, 28, 30, 34 and 36. The network further includes endpoints U, V, W, X, Y and Z which are connected to corresponding nodes A, C, D, E, F and H by links 10, 16, 22, 26, 32 and 38, respectively. An embodiment of the switching node is described further herein. The network is used to configure logical connections or working paths between endpoints. Each working path begins at one endpoint, traverses one or more nodes and communications links and terminates at a second endpoint. Three such working paths WP1, WP2 and WP3 are shown in FIG. 1. These three paths are shown as examples, and it should be evident that other working paths can be configured through different combinations of nodes. The first working path WP1 begins at endpoint U, traverses nodes A, B, C and links 10, 12, 14, 16 and terminates at endpoint V. The second working path WP2 starts at endpoint W and passes through nodes D, E and links 22, 24, 26 and terminates at endpoint X. The third working path WP3 begins at endpoint Y and traverses nodes F, G, H and links 32, 34, 36, 38 and terminates at endpoint Z.
The communications links each have a fixed capacity or bandwidth for carrying logical channels. Each working path uses a logical channel on each of the links along the particular path. In general, the number of working paths passing through any particular link should not exceed the link capacity. As indicated in FIG. 1, working paths WP1 and WP2 each require a bandwidth of 75 Mbps while working path WP3 requires a bandwidth of 50 Mbps. The bandwidth capacity of communications link 24 is shown as 150 Mbps. Thus, link 24 can accommodate additional working paths having bandwidth requirements up to 100 Mbps. These particular bandwidths are given only by way of example and are not meant to limit the invention.
It should be noted that for simplicity and ease of explanation, only a single communications link is shown between nodes. In certain embodiments, multiple links can be used between nodes, each such link carrying one of many possible optical wavelengths or "colors". In such a case, the multiple links are carried in one or more optical fiber cables. Thus, a fiber cable cut or failure can result in several simultaneous optical link failures. It should also be noted that principles of the approach described herein can be applied in embodiments in which the communications links include wired and wireless links. In accordance with an aspect of the present system, each of the working paths and protection paths is assigned a priority level. A protection path and its associated working path are not necessarily assigned the same priority. Those working paths and protection paths having low priority are deemed preemptable by higher priority protection paths. A path that cannot be preempted is also referred to as being non-preemptable. As described further herein, a high priority protection path can preempt one or more low priority paths that share a communications link if the link capacity of the shared link would otherwise be exceeded by addition of the preempting protection path. In the exemplary network of FIG. 1, working paths WP1 and WP2 are assigned high priority and the working path WP3 is assigned low priority. It should be understood that there can also be a range of priority levels such that one protection path can have a higher priority than another protection path.
FIG. 2A illustrates the network of FIG. 1 reconfigured to handle a failure in the first working path WP1. In this example, a failure has occurred on communications link 14 and the logical connection that traversed the path defined by working path WP1 is now provided using a protection path PP1. In accordance with another aspect of the system, a mechanism for effecting fast switchover to the protection path is described further herein. As described further herein, the protection path PP1 is precalculated at the time the working path WP1 is configured in the network. The bandwidth for the protection path can be provisioned in a range from 0 to 100% of the working path bandwidth. In this case, the bandwidth of protection path PP1 is provisioned as 70 Mbps. The protection path PP1 starts at endpoint U, traverses nodes A, D, E, C and links 10, 18, 24, 20, 16 and terminates at endpoint V. As shown in FIG. 2A, the protection path PP1 shares the communications link 24 between nodes D and E that is used to carry working path WP3. Since the total bandwidth (120 Mbps) required to handle protection path PP1 and working path WP3 is less than the capacity of link 24, no preemption is needed.
FIG. 2B illustrates the network of FIG. 1 reconfigured to handle a failure in the second working path WP2. In this example, a failure has occurred on communications link 34 and the logical connection that traversed the path defined by working path WP2 is now provided using a protection path PP2. The protection path PP2 is precalculated at the time the working path WP2 is configured in the network and the provisioned bandwidth is 70 Mbps. The protection path PP2 starts at endpoint Y, traverses nodes F, D, E, H and links 32, 28, 24, 30, 38 and terminates at endpoint Z. As shown in FIG. 2B, the protection path PP2 shares the communications link 24 between nodes D and E that is used to carry working path WP3. Again, since the total bandwidth (120 Mbps) required to handle protection path PP2 and working path WP3 is less than the capacity of link 24, no preemption is needed.
FIG. 2C illustrates the network of FIG. 1 which has been reconfigured to handle multiple failures in the links. In particular, a failure on links 14 and 34 has occurred. As was shown in FIGs. 2A and 2B, these failures are handled by switching the working paths WP1, WP2 to protection paths PP1, PP2. However, since the capacity of link 24 would be otherwise exceeded by the addition of the high priority protection paths PP1 and PP2, working path WP3 is preempted, that is, the path is dropped and the associated bandwidth is made available to protection paths PP1 and PP2. It should be understood that, if protection path PP1 instead has a higher priority than protection path PP2, then protection path PP1 can also preempt protection path PP2 should the need arise due to differing capacity constraints on the shared link 24.
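By way of illustration only, the capacity accounting described for FIG. 2C can be expressed as the following C sketch; the variable names and the integer Mbps units are assumptions made for this example and do not form part of the embodiments.

#include <stdio.h>

int main(void) {
    int link24_capacity = 150;          /* Mbps, capacity of link 24            */
    int available = link24_capacity;

    available -= 50;                    /* WP3 already carried on link 24       */
    available -= 70;                    /* activate PP1 over link 24            */
    available -= 70;                    /* activate PP2 over link 24            */

    if (available < 0) {
        /* Capacity exceeded: preempt the lower priority path (WP3) and return
         * its bandwidth to the link. */
        available += 50;
        printf("WP3 preempted; remaining capacity on link 24: %d Mbps\n",
               available);
    }
    return 0;
}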
To configure paths, a centralized network management system (not shown) attempts to find routes with enough capacity for all working and protection paths. The network management system also finds routes for the preemptable paths, reusing the protection capacity of non-preemptable paths.
In the network of FIG. 1 described above, a working path and its corresponding protection path(s) are diverse with respect to routing. For example, node A can be reached over both communications links 12 and 18. Likewise, node F can be reached over links 28 and 34.
Referring now to FIG. 3A, a network arrangement is shown. In the arrangement, a communications network 722 includes several switching nodes denoted BB, CC, DD and EE. The nodes are interconnected by physical communications links 710, 712, 714 and 716. Endpoints U and V are connected to corresponding nodes AA and DD by links 702, 718, respectively. Node AA is a customer node which is connected to network node BB over primary communications link 704.
In a typical application, the links 704, 710, 712, 714, 716 are provided using optical fiber facilities. The primary communications link 704 is referred to as a "tail circuit" or "spur circuit" since it comprises a circuit connection at the edge of the network 722. The link 704 is non-diverse in that a failure in the link would leave the customer node AA without service to the network 722. To avoid this problem, diverse routing is provided between the customer node AA and the network 722 by connecting node AA to network node CC via secondary links 706, 708 of a secondary or alternate network 720. The links 706, 708 of the secondary network 720 can be, for example, DS3 copper lines, additional optical fiber facilities or other media such as wireless or free-space optics. The secondary network 720 may belong to the same or a different network service provider. A working path WP11 begins at endpoint U, traverses nodes AA, BB, CC, DD and links 702, 704, 710, 712, 718 and terminates at endpoint V. As indicated, working path WP11 requires a bandwidth of 75 Mbps. This particular bandwidth is given only by way of example and is not meant to limit the invention.
FIG. 3B illustrates the network arrangement of FIG. 3A reconfigured to handle a failure in working path WP11. In this example, a failure has occurred on primary link 704 and the logical connection that traversed the path defined by working path WP11 is now provided using a protection path PP11. Switchover to the protection path can be provided in accordance with the fast switchover mechanism described further herein. The protection path PP11 is assigned to the working path WP11. The bandwidth for the protection path can be provisioned in a range from 0 to 100% of the working path bandwidth. In this case, the bandwidth of protection path PP11 is provisioned as 45 Mbps based on a given capacity provisioned on secondary network 720. Again, this bandwidth is given as an example only and is not meant to limit the invention. The protection path PP11 starts at endpoint U, traverses nodes AA, CC, DD and links 702, 706, 708, 712, 718 and terminates at endpoint V. Note that preemptable, lower priority traffic can share the communications bandwidth provided by the secondary links 706, 708.
A network arrangement bridging two networks is shown in FIG. 4A. In this arrangement, communications networks 722A, 722B each include switching nodes denoted BB, CC, DD and EE. The nodes are interconnected by physical communications links 710A, 712A, 714A, 716A and 710B, 712B, 714B, 716B, respectively. Node DD of network 722A is connected to node BB of network 722B over primary communications link 724. The link 724 is non-diverse in that a failure in the link would leave service between the networks 722A, 722B incomplete. To avoid this problem, diverse routing is provided between the networks by connecting node DD of network 722A to node CC of network 722B via secondary links 726, 728 of secondary network 720.
A working path WP22 traverses nodes EE, DD of network 722A and nodes BB, EE, DD of network 722B over links 716A, 724, 714B, 716B.
FIG. 4B illustrates the network arrangement of FIG. 4A reconfigured to handle a failure in working path WP22. In this example, a failure has occurred on primary link 724 and the logical connection that traversed the path defined by working path WP22 is now provided using a protection path PP22. The protection path switchover can be provided according to the fast switchover mechanism described further herein.
The protection path PP22 traverses nodes EE, DD of network 722A and nodes CC, DD of network 722B over links 716A, 726, 728, 712B. Note that preemptable, lower priority traffic can share the communications bandwidth provided by the secondary network links 726, 728.
An embodiment of a switching node 100 is now described at a high level with reference to FIGs. 5, 6A and 6B.
In FIG. 5 a block diagram of a system arrangement for switching node 100 is shown. The switching node 100 provides cell and packet switching and includes a system midplane 102 to which are connected different types of system cards. The system cards include line cards 104, fabric controller cards 106, system controller cards 108 and fabric memory cards 110. FIG. 6A shows a schematic block diagram of the switching node 100. For simplicity of discussion, only one line card 104 is shown. Each line card 104 includes a physical interface 104A for an I/O port that connects to an external communications link. The line card 104 further includes port interface circuits 104B for buffering cells, a message bus interface 104C which is used to communicate over a message bus that is carried on the midplane 102 and a processor 104D. The system controller 108 also includes a message bus interface 108C, port interface circuits 108B and a processor 108A.
The terms "fabric" and "switch fabric" are used interchangeably herein to refer to the combined control and cell/packet buffer storage components of the system. The fabric memory card 110 provides the cell buffer storage and includes static RAM 1 ION address generation logic HOB, memory buffers 1 10C and clocking 110D. The memory buffers 110C buffer cells between memory 1 10A and the port interface circuits 104B, 108B on the line cards 104 and system controller 108, respectively. The address generation logic HOB derives the physical addresses for cell storage by snooping control messages transported on the midplane 102. The memory card 110 further includes multiplexers HOE which multiplex the cell data paths between the midplane 102 and the memory buffers 110A.
In an embodiment, the port interface circuits 104B, 108B each use a PIF2 chip, the memory buffers 110C each use an MBUF2 chip, and the multiplexers 110E use ViX™ interconnect logic, all of which are provided by MMC Networks.
The fabric controller card 106 performs many of the functions that relate to aspects of the present invention. The fabric controller includes four control modules 120A, 120B, 120C, 120D and a control module interface 118 for interfacing the control modules to the midplane 102. Each control module manages cell flows for a subset of the I/O ports.
System-wide messaging paths exist between the fabric controller card 106, system controller 108, and the line cards 104. Normal cell data paths are between the line cards and the fabric memory card 110. CPU cell data paths are between the fabric controller card and the fabric memory or between the system controller and the fabric memory. Finally, cell header paths are between the line cards and the fabric controller card, or between the system controller and the fabric controller card. In an embodiment, the fabric controller card 106 uses the controller portion of the AnyFlow 5500™ chip set provided by MMC Networks. These five chips completely determine the behavior of the fabric. Each control module (CM) 120A-120D includes 4 of the 5 chips, and manages 16 I/O ports of the switching node 100. Each CM pair is cross-coupled using the 5th chip of the set, the CMI 118, which provides a hierarchical communication path between CMs. A single fabric controller card 106 has four complete CMs, allowing it to control up to 64 ports of the fabric. When two FCCs 106 are installed, 128 fabric ports are supported.
Referring now to FIG. 6B, a block diagram is shown of a layout and interconnect scheme for the MMC chip set. Each of the control modules 120A-120D includes two different modular switch controllers (MSC1) 204A-204D and (MSC2) 208A-208D, respectively, a per-flow queue controller (PFQ) 212A-212D and a per-flow scheduler (PFS) 216A-216D. The CMIs 118A, 118B are shared between CM pairs 120A, 120B and 120C, 120D, respectively. The chip set runs synchronously at 50 MHz. Each MSC1, MSC2 pair communicates with other MSC pairs in the system via the CMIs 118A, 118B using dedicated internal buses 220. The messages passed between MSCs contain the information needed for each CM to maintain its own set of captive data structures, which together comprise the complete state of the cell switching fabric. Each MSC1 204A-204D has a CPU port (not shown) for internal register access. Both the MSC1 and the MSC2 have interfaces to the cell header portion of the fabric interconnect matrix 110 (FIG. 6A), but only the MSC2 drives this bus. Both devices have unique captive memories 202A-202D and 206A-206D, respectively, for their own data structures. The PFQ 212A-212D manages the cell queues for each output flow associated with its 16 output ports. It connects to the MSC2 and its own local memories 210A-210D. The PFS 216A-216D supports an assortment of scheduling algorithms used to manage Quality of Service (QoS) requirements. The PFS has its own local memories 214A-214D and its own CPU register interface. The PFQ and PFS communicate via flow activation and deactivation messages.
The CMIs 118A, 118B route messages between MSCs in CM pairs. The CMIs are meshed together in a specific fashion depending on the number of CM pairs, and therefore the total number of supported ports and fabric bandwidth. Referring again to FIG. 6A, the fabric controller card 106 further includes a control processor 116. The control processor 116, which is, for example, a Motorola MPC8x0, provides for setup of the MMC data structures and the internal registers of the CM chip set. The control processor 116 has a path to the system-wide message bus provided on the midplane 102 through message interface 106C for communication with the main processor 108A on the system controller card 108.
The fabric controller card 106 further includes local Flash PROM 136 for boot and diagnostic code and local SDRAM memory 134 into which its real-time image can be loaded and from which it executes. The card supports a local UART connection 140 and an Ethernet port 142 which are used for lab debugging. In addition, the card includes system health monitoring logic 138, stats engine 132, stats memory 130, path protection accelerator 122, path protection memory 124, registers 126 and switch command accelerator 128.
The path protection accelerator 122, which in an embodiment is implemented as an FPGA, is used to speed-up the process of remapping traffic flows in the fabric and is described in further detail herein below. The switch command accelerator 128 facilitates the sending and receiving of certain types of cells (e.g., Operations, Administration and Management cells) between the fabric control processor 116 and the MSC1 204A-204D (FIG. 6B). The stats engine 132 and stats memory 130 are used for accumulating statistics regarding the cell traffic through the switching node 100.
As noted herein above, the processors 108A, 104D, and 116 (FIG. 6A) in the system controller card 108, the line card 104 and the fabric controller card 106, respectively, communicate via a redundant message bus carried on the midplane 102 through corresponding message bus interfaces 108C, 104C, 106C. The message bus interface 108C, which can be implemented in an FPGA, is shown connected to message bus 102A, 102B in FIG. 6C and includes the following features:
Packet based data transfers on two independent rails (102A, 102B);
Peak transmit rate of 400 Mbit/sec (16 bits * 25 MHz) using one rail;
Peak receive rate of 800 Mbit/sec (both rails active);
CRC based error detection;
Flow control on both rails.
The message bus interface 108C includes a 60x Bus Interface 402; descriptor engines 404, 406, 408 and 410; DMA engines 414, 416, 418 and 420; FIFOs 424, 426, 428 and 430; receive (RX) engines 432A, 432B and transmit (TX) engine 434. In addition, the message bus interface 108C includes slave registers 412, arbiter 422 and arbiter/control 436. Note that the message bus interfaces 104C and 106C are configured similarly. The 60x bus interface logic 402 interfaces an external 60x bus to the internal
FPGA logic of the message bus interface 108C. Primary features of the 60x bus interface logic include support of single and burst transfers as a master and support of single beat slave operations. The latter are required to access internal registers for initialization and to read interrupt status. The message bus interface 108C supports four external memory-resident circular queues (not shown). The queues contain descriptors used for TX and RX operations. The descriptor engines, which include high-priority RX and TX descriptor engines 404, 408 and low-priority RX and TX descriptor engines 406, 410, respectively, fetch from
these external memory queues and initiate DMA operations whenever they have a valid descriptor and there is data to be transferred.
The DMA engines, which include high-priority RX and TX DMA engines 414, 418 and low-priority RX and TX DMA engines 416, 420, respectively, transfer data between FIFOs 424, 426, 428 and 430 and the external 60x bus. When a valid descriptor is present, the address and byte count are loaded in the corresponding DMA engine. The byte count is sourced from the descriptor during TX and sourced from a frame header during RX. The high and low priority TX DMA engines 418, 420 read data from external memory and the high and low priority RX DMA engines 414, 416 write data to external memory.
The RX DMA engines 414, 416 include a special feature to prevent stuck flow controls if the data bus is not available to the corresponding DMA engine or if the corresponding descriptor engine is idle. Normally the associated FIFO will fill to its watermark and then assert flow control. DMA transfers to memory or FIFO flushing can clear the almost full indication and thus turn off flow control. Whenever the descriptor engine is idle and new message bus data is arriving, the DMA engine will drain the FIFO until an EOF (end of frame) or SOF (start of frame) condition occurs. The latter indicates a dropped EOF. This continues until the descriptor engine goes non-idle. The transition to non-idle is only checked inter-frame; therefore partial frames are never transferred into memory.
The TX DMA engines 418, 420 support descriptor chaining. At the end of a normal (not chained) transfer, the DMA engine places a CRC word and an EOF marker in the FIFO. This marker informs the TX engine that the message is over. If the descriptor's chain bit is set, upon completion of the DMA transfer, no CRC word or EOF marker is placed in the FIFO. Once a descriptor without the chain bit set is encountered, completion of the DMA transfer results in the writing of a CRC word and EOF marker. The arbiter 422 determines which master is allowed to use the 60x bus next.
Highest priority is given to descriptor accesses since requiring a descriptor implies no data transfer can take place and descriptor accesses should be more rare than data accesses. Receive has priority over transmit and of course, higher priority queues are serviced before low priority queues. CPU accesses ultimately have the highest priority since ownership of the 60x bus is implied if the CPU is trying to access this logic.
Overall priority highest to lowest is:
CPU slave accesses
Hi-priority RX descriptor fetch
Hi-priority TX descriptor fetch
Low-priority RX descriptor fetch
Low-priority TX descriptor fetch
Hi-priority RX DMA
Hi-priority TX DMA
Low-priority RX DMA
Low-priority TX DMA
The TX engine 434 monitors the status of FIFOs 428, 430 and initiates a request to the message bus logic when a SOF is present in the FIFO. Once granted access to one of the message buses 102A, 102B, the TX engine streams the FIFO data out in 16 bit quantities until an EOF condition occurs. Two events can inhibit transmission
(indicated by lack of a valid bit on the message bus), namely an empty FIFO or flow control from a receiver.
The RX engines 432 A, 432B monitor the message bus and begin assembling data into 64 bit quantities prior to storing them in the corresponding FIFOs 424, 426. The RX engine simply loads the FIFO until an almost full watermark occurs. At that point, the RX engine asserts flow control and prevents the transmitter from sending new data until the FIFO drains. The arbiter/control logic 436 arbitrates for the message buses 102A, 102B and controls external transceiver logic Normally this logic requests on both message buses 102 A, 102B and uses whichever one is granted Slave register bits (and also the descnptor header) can force usage of a single message bus to prevent requests to a broken bus Also present m the logic 436 is a timer that measures bus request length If the timer reaches a terminal count, the request gets dropped and an error is reported back to the associated processor
Each message bus 102A, 102B requires a centralized arbitration resource. In an embodiment having 16 primary card slots, the system requires 32 request lines (for high and low priority) and 16 grant lines per message bus. Arbitration is done in a round-robin fashion in a centralized arbitration resource located on the system controller card 108, with high-priority requests given precedence over low priority requests.
Each message bus includes the following signals:
FR - Frame 604
VALID - Valid bit 612
SOF - Start-of-frame 614
EOF - End-of-frame 616
DATA[15:0] - Data bus signal 618
FC - Flow control 620
Messages sent over the message bus 102A, 102B have the frame format shown in FIG. 6D. The message frame includes start of frame (SOF) 502, a reserved field 504, a priority bit 506, a source ID (SID) 508, a count/slot mask (SM) 510, payload bytes 512, CRC 513 and an end of frame (EOF) 514. The SOF 502 is always asserted with the first byte of a frame. The priority bit 506 and SID 508 are valid during SOF. The next four bytes are the remainder of the header, count and slot mask 510. The next byte(s) are the variable size payload 512, with a minimum size of one byte. The final two bytes are the CRC 513, followed by EOF 514. The CRC covers all header and payload bytes.
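Purely as an illustrative aid, the frame format described above might be represented in software by a structure such as the following C sketch; the field widths, the payload bound and the structure name are assumptions, since the actual layout is defined by the message bus interface logic.

#include <stdint.h>
#include <stddef.h>

#define MBUS_MAX_PAYLOAD 64     /* assumed bound; messages are length-limited */

struct mbus_frame {
    uint8_t  priority;                   /* priority bit, valid during SOF    */
    uint8_t  sid;                        /* source (slot) ID, valid during SOF*/
    uint32_t count_slot_mask;            /* remaining four header bytes       */
    uint8_t  payload[MBUS_MAX_PAYLOAD];  /* variable payload, at least 1 byte */
    size_t   payload_len;
    uint16_t crc;                        /* covers all header and payload bytes */
};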
Message bus arbitration signaling for the message bus 102 A. 102B, as seen by a bus requestor using message bus interface 108C. is shown in FIG. 6E wherein the following signals are used: CLK - 25Mhz clock signal 602; FR - message bus frame signal 604; REQ - message bus request signal 606; GNT - message bus grant signal 608 and qualified grant signal 610. Note that the FR signal 604 indicates the message time inclusive of SOF and EOF. The requestor must ignore the GNT signal 608 until FR de-asserts, e.g., at time t=D. Once the grant is qualified by FR de-assertion at time t-=D, with corresponding qualified grant signal 610 assertion, the new master may drive FR and other signals one cycle later. This allows one dead cycle between frames.
Message bus transfer signaling for the message bus 102A, 102B is shown in FIG. 6F wherein a typical (but very short) message bus transfer is illustrated. The DATA bus signal 618 is shown with H1, H2 indicating header bytes, P1, P2, P3, P4, P5, P6, P7, P8 indicating payload bytes, C indicating the CRC byte, and X indicating invalid data. Note that the valid signal 612 can be de-asserted autonomously, e.g., at time t=A in any non SOF/EOF cycle. This indicates, for example, that the TX FIFO (428, 430, FIG. 6C) went empty during the transfer and is awaiting new data. Some internal FPGA pipelining is allowed to occur such that the FC signal 620 does not need to be responded to immediately. The second de-assertion of the valid signal 612 at time t=B is the result of the assertion of the FC signal 620 at time t=Y two cycles earlier.
Broadcast Algorithm
The present invention includes a scheme for implicit failure notification which features fast and reliable distributed broadcast of failure messages both between and within nodes.
Another important aspect of the broadcast notification according to the present invention is the notion of confining broadcast messages within a network area. The task
of computing paths, either in a centralized or in a decentralized manner, becomes complex in large networks. In order to effectively manage large networks, it is helpful to divide them into smaller areas. The need to limit area size stems from considerations relating to network manageability, protection algorithm scalability, and the need to reduce switching delays. A related issue is that of reducing the number of notification messages by limiting them to a local area. In order to do that, the segment of a working path in a particular area is protected by a protection path in the same area. Thus, adjacent areas may overlap somewhat. Another requirement is that each area must provide enough internal connectivity to provide the necessary protection elements. It is generally preferable to divide the network nodes into doubly-connected areas that overlap as little as possible, with just enough overlap to guarantee double connectivity. These concepts find application in SONET, wherein areas can be mapped to UPSR and BLSR rings.
Referring now to FIG. 7, a network arrangement is shown which includes two overlapping node areas 40 and 42. In particular, node area 40 includes nodes A1, B1, C1, F1, G1 and H1. Node area 42 includes nodes C1, D1, E1, H1, J1 and K1. Note that the overlap occurs such that nodes C1 and H1 and link 56 are fully included in both areas. A working path WP4 is also shown which starts at node A1, traverses nodes B1, C1, D1 and links 44, 46, 58, 60 and terminates at node E1. As noted, it is preferable to define a protection path within each area. Thus, as shown in FIG. 8, protection path PP4A, which starts at node A1, traverses nodes F1, G1 and links 48, 52, 50 and terminates at node C1, provides protection against a failure event, e.g., failed link 44, for working path WP4 in area 40. Likewise, protection path PP4B, which starts at node C1, traverses nodes H1, J1 and links 56, 64, 62 and terminates at node E1, provides protection against a failure event, e.g., failed link 60, for working path WP4 in area 42. Note that the termination of protection path PP4A in node C1 is connected to the start of protection path PP4B. Referring now to FIGs. 9-11, another network arrangement is shown which includes two overlapping node areas 40' and 42. Node area 40' includes nodes A1, B1, C1, D1, F1, G1 and H1. Node area 42 includes nodes C1, D1, E1, H1, J1 and K1 as described in the example shown in FIGs. 7 and 8. In this example, the overlap occurs such that nodes C1, D1 and H1 and links 56, 58 are fully included in both areas.
A protection path PP4A, which starts at node A1, traverses nodes F1, G1 and links 48, 52, 50' and terminates at node D1, provides protection against a failure event, e.g., failed link 44, for working path WP4 in area 40' as shown in FIG. 10. Note that the termination of protection path PP4A in node D1 is connected to working path segment WP4B which represents that portion of working path WP4 in area 42.
Likewise, protection path PP4B, which starts at node C1, traverses nodes H1, J1 and links 56, 64, 62 and terminates at node E1, provides protection against a failure event, e.g., failed link 60, for working path WP4 in area 42 as shown in FIG. 11. Note that the start of protection path PP4B in node C1 is connected to working path segment WP4A which represents that portion of working path WP4 in area 40'. Also note that link 58 connecting nodes C1 and D1 belongs to both areas 40', 42. A failure of link 58 is protected by one of the two protection paths PP4A, PP4B.
From the preceding description, it should be understood that the network arrangement shown in FIGs. 7 and 8 provides protection against double link failures, one in each area. However, such an arrangement cannot protect against a failure in node C1. The network arrangement in FIGs. 9-11 provides protection against a single failure in either area and is resilient to failure of node C1.
While only one protection path is associated with a particular working path per area for the particular embodiment described herein above, it should be understood that in other embodiments, there can be multiple protection paths per area that are associated with a working path.
A broadcast algorithm for fast failure notification and protection switching according to the present invention is now described. The broadcast algorithm is intended for use in link failure notification. A circuit management service responsible for managing the pair of working/protection paths can handle such matters as revertive or non-revertive restoration by using other signaling mechanisms.
The broadcast notification has two aspects: notification within a node and broadcast messaging between nodes.
The dissemination of failure notification messages within the node has three key characteristics:
1) multicast transmission to a selected set of node elements over a pair of redundant message buses; 2) two levels of non-preemptive priority, with the maximum message length being limited to ensure small delays for the high priority messages; and 3) reliable transmission using a retransmission protocol described herein below. As described above, each network switching node includes one or more line cards for terminating a particular communications link to another node. A link failure is detected by one or both of the line cards which terminate the failed link. The line card uses a message bus within the node to notify other elements of the node with a high priority multicast message. These other node elements, described above, include: 1. The other line card processors, which then disseminate the broadcast inside the appropriate network area(s), using the fast (line layer) SONET data communication channel (DCC);
2. The fabric controller card 106 (FIG. 6A), which activates the protection switchover mechanism described further herein; and
3. The system controller card (FIG. 6A), which performs a high level cleanup and alarming. Note that in case of a line card processor failure, the system controller sends the message on its behalf. If the system controller fails, an alternate controller takes over. The format of the broadcast message is shown in the following table:
Protocol ID - 2 bytes
Failure Counter - 2 bytes
Node ID - 6 bytes
Failed Link ID - 2 bytes
The first two bytes identify the protocol ID. The next two bytes are used to indicate a failure counter. The following six bytes are used to indicate the node ID. The identification of the failed link is provided by the remaining two bytes.
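As an illustration, the 12-byte failure message described above could be declared as the following C structure; the field names, the byte ordering and the packing attribute are assumptions for this sketch only.

#include <stdint.h>

struct failure_msg {
    uint16_t protocol_id;     /* 2 bytes: protocol ID                       */
    uint16_t failure_counter; /* 2 bytes: failure counter                   */
    uint8_t  node_id[6];      /* 6 bytes: ID of the node reporting failure  */
    uint16_t link_id;         /* 2 bytes: ID of the failed link             */
} __attribute__((packed));    /* 12 bytes total */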
The broadcast of failure notification messages between nodes is now described. In the preferred embodiment, the line cards send and receive broadcast messages over the SONET DCC. The line cards have local information available to determine if the broadcast is about an already known failure or about a new failure, and whether the link is in their local area. In the case of a known failure, the broadcast is extinguished. If the line card determines that the link failure is a new failure, the same process for disseminating the message over the message bus occurs. Note that a fiber cable cut can result in several (almost simultaneous) broadcasts, one per affected optical wavelength or color.
To ensure extinction of the broadcast, the broadcast messages are numbered with a "failure counter". The counter value can be modulo 2 (a single bit), although it is preferable to number the counter values modulo 255, reserving 0xFF. In the latter case, the comparison can be done in arithmetic modulo 255. That is, numbers in [I-127, I-1] mod 255 are "less than I" and those in [I+1, I+127] mod 255 are "greater than I". The failure counter can be either line card specific or node specific. The trade-off is between table size (larger for line card counters) and complexity (race condition: two simultaneous failures inside a node must have distinct numbers). The following describes the case of a single network area. Description of the multi-area case follows. When a line card receives an update originating at a link L, the line card compares a previously stored failure counter value for link L with the value in the broadcast message. The line card discards the message if the values match or if the value in the broadcast message is less than the previously stored value. If there is not a match, the line card updates the stored failure counter value and propagates the message.
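A minimal C sketch of the modulo 255 counter comparison and the discard/propagate decision described above is given below; the function names are illustrative and the handling of the reserved value 0xFF is left outside the sketch.

#include <stdbool.h>
#include <stdint.h>

/* Counters take values 0..254; 0xFF is reserved (e.g., for links of
 * disconnected nodes) and is not produced by this comparison. */

/* Returns true if counter b is "greater than" (newer than) counter a,
 * i.e., b lies in [a+1, a+127] mod 255. */
bool counter_newer(uint8_t a, uint8_t b)
{
    uint8_t diff = (uint8_t)((b + 255 - a) % 255);
    return diff >= 1 && diff <= 127;
}

/* Called when a broadcast about link L arrives with counter 'received';
 * returns true if the message should be propagated (stored value updated). */
bool accept_update(uint8_t *stored, uint8_t received)
{
    if (received == *stored || !counter_newer(*stored, received))
        return false;             /* duplicate or stale: discard the message */
    *stored = received;
    return true;                  /* update stored counter and propagate     */
}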
Broadcasts must occur only in the network area(s) of the failed link. There are several ways to limit the broadcast, including:
1. Selective broadcast at the receiving line card.
On reception, a line card only multicasts the message to the correct outgoing line cards.
2. Selective discard at the transmitting line card.
A line card broadcasts the message throughout the node, but the outgoing line cards only forward the message within the correct areas. Note that since the outgoing line cards may want to look at the failure counter in the message in order not to send a duplicate, the extra processing associated with this option is not significant.
3. Selective discard at the receiving line card.
The message is broadcast on all line cards; however, on reception a line card checks that it belongs to the proper area, discarding the message if necessary. Note that discarded messages must still be acknowledged per the transmission protocol described below. To disseminate detailed information about links, a protocol such as the Open Shortest Path First (OSPF) routing protocol can be used (J. Moy, "OSPF Version 2", RFC 2328, April 1998). Since OSPF propagation is independent of the broadcast protocol of the present invention, it may not be in synch with the broadcast information. To remedy this problem, the OSPF messaging can include the latest failure counter sent by each link. When receiving an OSPF message, the system controller will compare failure counters (in the modulo 255 sense) in the message with those values stored locally. If the OSPF message appears to be late, the information contained therein is discarded. OSPF includes a mechanism (time out) to determine that a node has become disconnected. When such an event occurs, the system controller will set the failure counters associated with all links of disconnected nodes to the reserved value (0xFF) in an internal table and in the tables of the line cards in the node. Reliance on the OSPF timeout simplifies the broadcast protocol. It should be understood that other routing protocols, such as private network-to-network interface (PNNI), can also be used.
A protocol for reliable transmission of the broadcast failure notification messages is now described. SONET links are normally very reliable, but the network must still be able to deal with errors in the broadcast. The present system employs the standard protocol known as LAPD (link access protocol - D channel) which is specified in ITU Recommendation Q.921. In LAPD, data transmission can occur in one of two formats: Information (I) frames (numbered and with reliable ARQ) or Unnumbered Information (UI) frames (unnumbered and without reliable ARQ). The I frames are only numbered modulo 8, which is not good enough for the broadcast mechanism as there could easily be more than 7 short frames outstanding on a link.
A reliable transmission protocol is made possible by using the unnumbered mode of LAPD and taking advantage of the fact that the failure message format provides for messages that are already numbered. The protocol can be understood with reference to the flow diagram of FIG. 12. A line card sends a broadcast message in a UI frame at block 80 and initializes a timer at block 82. It is preferable to have a timer in the line card dedicated to each link in the network. A node receiving a UI frame replies with a UA frame containing the same information as contained in the UI frame. If the line card receives such a UA frame at block 84, the timer is disabled at block 86. If no UA frame is received at block 84, then the timer is incremented at block 88 and the line card checks for time out of the timer at block 90. On time out, the line card retransmits the broadcast message at block 80. The time out can be less than the link round trip delay, but in that case retransmitted messages can have lower priority. The number of retransmissions is specific to the network implementation. The link is declared down upon lack of acknowledgment.
Note that the LAPD protocol adds 6 bytes (reusing the closing flag as an opening flag) to the failure message format, so that the overall length of the message is 18 bytes (before possible bit stuffing). The same basic retransmission algorithm without LAPD formatting can be used to provide reliable transmission on the message bus inside a node as described herein above.
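The retransmission behavior of FIG. 12 can be sketched in C as follows; send_ui_frame(), ua_received() and MAX_RETRIES are hypothetical placeholders standing in for the LAPD stack and a per-link hardware timer.

#include <stdbool.h>
#include <stdio.h>

#define MAX_RETRIES 3            /* implementation specific */

static bool send_ui_frame(int link, const char *msg)
{
    printf("UI frame on link %d: %s\n", link, msg);   /* stand-in for LAPD send */
    return true;
}

static bool ua_received(int link)
{
    (void)link;
    return false;                /* stand-in: no UA acknowledgment observed */
}

static bool notify_with_retransmit(int link, const char *msg)
{
    for (int attempt = 0; attempt <= MAX_RETRIES; attempt++) {
        send_ui_frame(link, msg);        /* block 80: send UI frame             */
        if (ua_received(link))           /* blocks 82-90: wait for UA, or time  */
            return true;                 /* out and retransmit                  */
    }
    return false;                        /* declare the link down               */
}

int main(void)
{
    if (!notify_with_retransmit(44, "link 44 failed"))
        printf("no acknowledgment: link declared down\n");
    return 0;
}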
Having described aspects of the broadcast algorithm of the present invention, an example of the broadcast algorithm is now described with reference to FIGs. 13A-13C. In FIG. 13A, a failure is shown having occurred in communications link 44 which spans nodes A1 and B1. The respective line cards of nodes A1 and B1 which terminate the link 44 detect the failure. Upon such detection, a failure message is formatted by the detecting line cards and multicast over the message bus of the respective node in accordance with the procedures described herein above. In this example, each of the nodes A1 and B1 happens to only have one additional link, namely link 48 from node A1 to node F1 and link 46 from node B1 to node C1. Accordingly, a broadcast message BMAF is sent from node A1 to node F1 and a broadcast message BMBC is sent from node B1 to node C1 by the respective line cards. At nodes C1 and F1, reception of the respective broadcast messages BMAF,
BMBC are acknowledged as shown in FIG. 13B. The failure message is further multicast on the message bus of each of nodes C1 and F1 to other line cards within these nodes. Node C1 has three additional links, namely link 50 to node G1, link 56 to node H1 and link 58 to node D1. Since link 58 terminates outside area 40, node C1 only sends a broadcast message BMCG to node G1 and a broadcast message BMCH to node H1. Node F1 has only one additional link, namely link 52 to node G1. Accordingly, node F1 sends a broadcast message BMFG to node G1.
Node G1 receives two broadcast messages BMFG and BMCG and will extinguish whichever message is received later in accordance with the procedure for extinction described herein above. Both messages are also acknowledged as shown in FIG. 13C. Node G1 multicasts the message on its message bus to all of its line cards. The only remaining link at node G1 is link 54. Accordingly, node G1 sends a broadcast message BMGH to node H1. It should be noted that it is possible for node G1 to also send a broadcast message to either node F1 or node C1 depending on the timing and order of message receipt from those nodes.
Node H1 acknowledges reception of message BMCH and multicasts the message on its message bus. Since link 64 terminates outside area 40, node H1 only sends a broadcast message BMHG to node G1 on link 54. Nodes G1 and H1 each will acknowledge and extinguish the respective messages BMHG and BMGH since such messages will contain the same failure counter value as previously received in messages BMCG and BMCH, respectively.
An alternate broadcast algorithm omits the failure counter and relies only on time-outs. In this method, a line card maintains a list of link failures received for the first time within the N previous seconds (or other time unit). When receiving a broadcast message about a link that is not on the list, the link ID and the time of reception are entered on the list, and the message is acknowledged and forwarded as described previously. On the other hand if the link is already on the list, the message is acknowledged but not forwarded, thus extinguishing the broadcast. The list entry is deleted N seconds after being posted.
For this method to work correctly, N must be chosen to be longer than the period of time in which broadcast messages about a link might be in transit in the network. Also, a link that has failed should not be brought back up until N seconds plus the maximum broadcast propagation time have elapsed since the failure. This extra delay is the tradeoff for the elimination of the failure counter.
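A simplified C sketch of this counter-free duplicate suppression follows; the fixed-size table, the time source and the function name are simplifications assumed for illustration.

#include <stdbool.h>
#include <time.h>

#define N_SECONDS   10           /* must exceed the broadcast transit time */
#define MAX_ENTRIES 256

static struct { int link_id; time_t first_seen; bool valid; } seen[MAX_ENTRIES];

/* Returns true if a broadcast about link_id should be forwarded; in either
 * case the message is still acknowledged by the caller. */
bool first_time_seen(int link_id, time_t now)
{
    int free_slot = -1;
    for (int i = 0; i < MAX_ENTRIES; i++) {
        if (seen[i].valid && difftime(now, seen[i].first_seen) >= N_SECONDS)
            seen[i].valid = false;               /* expire entries after N sec */
        if (seen[i].valid && seen[i].link_id == link_id)
            return false;                        /* already on the list: do not forward */
        if (!seen[i].valid && free_slot < 0)
            free_slot = i;
    }
    if (free_slot >= 0) {
        seen[free_slot].link_id = link_id;       /* record first reception time */
        seen[free_slot].first_seen = now;
        seen[free_slot].valid = true;
    }
    return true;                                 /* new failure: forward        */
}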
The interaction with OSPF is also modified for the alternate broadcast algorithm. Firstly, the OSPF messages do not include failure counters. Secondly, OSPF messages announcing that a link is up are discarded if they arrive less than M seconds after a link has been placed on the list described above. Indeed, this UP value must refer to the state of the link before the latest failure announced by the broadcast algorithm. This event happens when a failure broadcast overtakes an OSPF broadcast. The time M must be chosen to be less than N minus the maximum failure broadcast propagation time, but greater than the OSPF broadcast and processing time.
Protection Path Switchover Mechanism
Having described the aspects of the invention relating to broadcast failure notification, the switchover mechanism for activating protection paths is now described. The goal of the path protection switchover mechanism is to terminate traffic which was using paths affected by a failure, and to activate the new paths that allow the traffic to once again flow through the switching node. In the process, it may be necessary to terminate lower priority, preemptable traffic that had been using the paths that were designated as the protection paths. The operations are time-critical and somewhat computationally intense.
To provide for fast processing of an activation request, several linked list structures are used. While the following describes singly-linked lists, it should be understood that doubly-linked lists can also be implemented. Three kinds of linked lists are maintained:
1) To avoid briefly oversubscribing output links at nodes where the working and protection paths merge (e.g., node H in FIG. 2B), the working path output is disabled or "squelched" before enabling the protection path using a "squelch" list for each link in the local area. 2) For each network link in its local network area, the switching node maintains an "activate" list for protection paths that have a working path using that link (using the information carried in the path establishment messages described above). The relationship between the activation list for different links and working paths is illustrated in FIG. 14. As shown, working path WP1 has an entry 302 in the linked list 300 for each of links a, b and c. Working path WP2 has an entry in the linked list for links a and b. Similar observations can be made concerning working paths WP3, WP4 and WP5 as each working path traverses several links. As described further herein below, the activation list entries include commands for quickly activating the protection paths. The position of a path on any of the lists can be determined by the priority assignments noted herein above. Further, to avoid poor capacity utilization in case of multiple failures, if working path WP1 appears before working path WP2 at one node, it should appear before working path WP2 at all common nodes. Otherwise, it is possible for two protection paths that exhaust bandwidth on different links to prevent each other from being activated.
3) For each port, the switching node maintains a "drop" list of preemptable paths. The list entries include commands for quickly disabling the output flow. The position of a path on the list can be determined by a priority scheme.
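The per-link activate lists of FIG. 14 can be pictured with the short Python sketch below. It is illustrative only: register_working_path and the priority field are assumed names, and the real entries carry the switch commands and data rates described later rather than empty placeholders.

from collections import defaultdict

# Illustrative sketch only: each working path is entered on the activate list of
# every link that its working route traverses, and every list is kept in the same
# global priority order so that two protection paths cannot block each other
# when multiple failures occur.
activate_lists = defaultdict(list)  # link id -> ordered list of path entries

def register_working_path(path_id, priority, working_links, activation_commands):
    entry = {"path": path_id, "priority": priority, "commands": activation_commands}
    for link in working_links:
        activate_lists[link].append(entry)
        activate_lists[link].sort(key=lambda e: e["priority"])  # lower value = higher priority

# Example mirroring FIG. 14: WP1 uses links a, b and c; WP2 uses links a and b.
register_working_path("WP1", priority=1, working_links=["a", "b", "c"], activation_commands=[])
register_working_path("WP2", priority=2, working_links=["a", "b"], activation_commands=[])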
When a switch learns through broadcast that a link has failed, commands driven by the path protection accelerator 122 (FIG. 6A) activate the protection paths on the corresponding list. As each path is activated, the associated bandwidth is subtracted from the available capacity for the corresponding link. If the available capacity on a link becomes negative, enough preemptable paths of lower priority than the path to be activated are dropped to make the capacity positive again. If the available capacity cannot be made positive, which should only happen for multiple major failures, an error message is sent from the node to a central management system.
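A minimal sketch of this reaction to a failure broadcast is given below, again in Python and again illustrative only: the flat dictionaries and helper names (on_link_failure, report_error) are assumptions and stand in for the hardware-assisted operation of the path protection accelerator described in the following paragraphs.

# Illustrative sketch only: squelch the broken working paths, then activate the
# pre-provisioned protection paths, preempting lower-priority paths when an
# output port would otherwise be oversubscribed.  Lower priority values mean
# higher priority; capacity maps each output port to its remaining bandwidth.
def on_link_failure(failed_link, squelch_lists, activate_lists, capacity, drop_lists, report_error):
    # 1) Squelch working paths that used the failed link and return their bandwidth.
    for path in squelch_lists.get(failed_link, []):
        path["active"] = False
        capacity[path["out_port"]] += path["rate"]

    # 2) Walk the activate list for the failed link in priority order.
    for path in activate_lists.get(failed_link, []):
        port = path["out_port"]
        victims = iter(drop_lists.get(port, []))
        # Drop preemptable paths of lower priority until enough capacity is free.
        while capacity[port] < path["rate"]:
            victim = next(victims, None)
            if victim is None:
                break
            if victim["active"] and victim["priority"] > path["priority"]:
                victim["active"] = False
                capacity[port] += victim["rate"]
        if capacity[port] < path["rate"]:
            report_error(path)      # should only happen for multiple major failures
            continue
        path["active"] = True
        capacity[port] -= path["rate"]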
The particular details of an embodiment for providing the path protection switchover mechanism are now given.
FIG. 15 illustrates two linked lists that are maintained by software in path protection memory 124 of the fabric controller card 106 (FIG. 6A). The first list is known as the squelch list 310. It represents those paths that should be disabled upon notification of a corresponding failure. The second list is the activate list 312, which lists those previously provisioned paths that should be activated to complete the switchover. There is one pair of lists for each possible failure that is protected by a predetermined path (only one list pair is shown in FIG. 15). Each list contains a series of paths 318, 320 respectively, with each path in the lists containing data structures 322, 324 that include an input port number, output port number, a list of fabric switch commands, a data rate for that path, and status. The input and output port numbers identify physical ports in the fabric which correspond to the input and output of the path, respectively. In addition to the squelch and activate lists shown in FIG. 15, software also keeps a table 330 with two entries per port, as shown in FIG. 16. The first entry 332 is the port capacity, which is updated each time software adds or deletes a connection using that output port. It represents the current working utilization as an absolute number. The second entry 334 is a pointer to the head of a drop list 336 for that output port. The drop list 336 is a linked list of preemptable traffic paths which hardware is allowed to disable to free up output port capacity for a protection switchover. The drop list 336 has a format 338 similar to that of the squelch list 310 and the activate list 312, although the output port field points only to itself in this case.
The output port capacity table 330 and the drop list 336 are organized as adjacent entries 350, 352 for each of the 128 output ports of the system as shown in the table structure of FIG. 17.
An example of the path protection switchover mechanism is now described. Upon notification that there has been a failure from which to recover, the initial action is to "walk" the squelch list. These paths are already considered broken, but the switching node does not know it, and they are still consuming switch bandwidth and cell buffers. The squelch function first invalidates the VPI/VCI mapping, which causes the switch to discard these cells at the output port. Next, it adds the output flow to the reset queue of the scheduler. Using FIG. 15 as an example, assume that Failure A has been identified. Software sets the squelch pointer 311 to the head of the list containing the paths denoted SP[0], SP[1], and SP[2]. The path protection accelerator 122 (FIG. 6A) reads the SP[0] structure from memory, and executes the squelch commands, which consist of CPU port writes to the MSC2 and PFS chips (208A-208D, 216A-216D in FIG. 6B) that control the output port for that path. It also looks up the port capacity for the output port and
modifies it, adding the data rate (SPDR[n]) of the path being disabled. Path status (SPSF[n]) is updated to reflect the newly squelched state. The process is repeated for paths SP[1] and SP[2]. After updating the SP[2] structure, the nil pointer 313 indicates the end of the squelch list 310. The next step is to walk the activate list 312, which in this example contains three paths AP[0], AP[1], and AP[2]. As with the squelch pointer, software sets the activate pointer 323 to the head of the list containing AP[0:2]. For each path in the activate list it may or may not be possible to perform the activation without freeing up additional capacity. Before activating a path, path protection accelerator 122 compares the current port capacity indexed by the output port in APOP[n] against the required path rate of the activation path found in APDR[n]. Assume for this example that paths AP[0] and AP[2] do not need extra capacity freed.
Using the output port in APOP[0] as an index into the capacity table 330, path protection accelerator 122 finds that this capacity is already greater than that required by APDR[0], meaning it is safe to activate protection path AP[0]. The switch commands are executed, consisting of CPU port writes to the particular MSC1 chips (204A-204D in FIG. 6B) controlling the input translation for that path and the corresponding PFS chips (216A-216D in FIG. 6B) controlling the scheduling. In this case, the proper MSC1 to access must be supplied as part of the switch commands. Since there are more paths on the activate list, the path protection accelerator 122 moves on to AP[1]. For this path the comparison of the capacity table entry indexed by APOP[1] shows that APDR[1] is greater, meaning there is not enough output port capacity to completely activate the protection path. More output port bandwidth must be freed by removing low-priority output traffic. Path protection accelerator 122 uses APOP[1] to point to the head of the appropriate drop list 336. The process of dropping lower priority output traffic is similar to the squelch process, except that the drop list is only traversed as far as necessary, until the capacity of that output port exceeds APDR[1]. As when squelching broken paths, each dropped path status DPSF[port,m] is updated along the way to reflect its deactivation, and its data rate DPDR[port,m] is added to the capacity for APOP[1]. If the path protection accelerator 122 reaches the end of the drop list and APDR[1] still exceeds the newly computed capacity of the output port APOP[1], the attempted protection switchover has failed and is terminated. Assuming that activation of AP[1] was successful, path protection accelerator 122 repeats the process for AP[2], after which it reaches the end of the activate list, indicating the successful completion of the switchover. The network management system may subsequently reroute or restore the paths that have been dropped. The data structures referred to above in connection with the squelch, activate and drop lists are now described.
The Path Output Port (POP) is a 7-bit number, ranging from 0 to 127, which represents the range of line card ports, per the MMC numbering convention used in the fabric. The Path Input Port (PIP) is a 7-bit number, ranging from 0 to 127, which represents the range of line card ports, per the MMC numbering convention used in the fabric.
The Path Data Rate (PDR) represents the data rate, where all 0's indicates zero data rate; each increment represents a bandwidth increment. The Path Status Flags (PSF) reflect the state of a path that can be, or has been, squelched, dropped, or activated. States can include the following bits: Working, Protecting, Failed, Dropped, Squelched. The Switch Commands give the hardware directions about the exact operations it must perform at the CPU interface to the Control Module (MSC1 and PFS). For purposes of the switchover mechanism, the following accesses are required: writes to the Input Translation Table (ITT) via the MSC1 controlling the input port (activate); writes to the Output Translation Table (OTT) via the MSC2 controlling the output port (squelch, drop); and writes to the Scheduler External Memory (SEM) via the PFS controlling the output port (squelch, drop). In order to derive the command structure for the protection switchover, it helps to understand the mechanism used by the MMC chip set to access internal fabric registers and tables. The data structures that must be managed are the Output Translation Table (OTT), which is a captive memory accessed only by the MSC2; the Scheduler External Memory, associated with the PFS; and the Input Translation Table (ITT), attached to the MSC1. None of these memories can be accessed directly by software (or non-MMC hardware). The MSC1 and PFS, which are the only devices that have CPU ports, provide an indirect access mechanism through registers that are accessible from the respective CPU ports. The MMC chips control the accesses using their internal switch cycle and chip-to-chip communication paths. For path squelch and path drop operations, the first access required is a modification of the OTT. This is done using the Write MSC Tables command in the MSC1, which requires multiple writes to the General Purpose Registers (R0-R8) followed by a write to the Command Register (CMR). Four (4) 16-bit writes are needed, plus the write for the CMR. The address in the OTT must be determined by software and is a function of the Connection ID (CID). All other values are fixed and can be supplied by hardware.
The second operation for path squelching and dropping is to put a flow on the Reset Queue, by accessing the Scheduler External Memory attached to the output PFS, which has its own CPU interface, Command Register, and General Purpose Registers (G0-G2). Two (2) 16-bit writes are needed, plus the write of the CMR. The Output Flow ID and Scheduler Address must be supplied by software; the other values are fixed and can be supplied by hardware. The third hardware-assisted access into the Control Module involves modifying an Input Translation Table (ITT) entry via the MSC1 associated with the input port. This access is used to activate the protection path, and it is similar to the one used to squelch a path. Five (5) 16-bit writes are required, plus the write of the CMR. The values in R0-R4 must be supplied by software; hardware can supply the CMR value. Software builds the linked lists of path structures in the memory 124 attached to the path protection accelerator 122, which is implemented as an FPGA (FIG. 6A). Each structure must be aligned to a 16-byte boundary. The path data structures for activate, squelch and drop operations include a path status which uses at least three (3) bits: [0] = path is in use, i.e., working; [1] = path is reserved for protection; [2] = path activation failed.
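The field layout of these list entries can be summarized with the illustrative Python model below. It follows the POP, PIP, PDR and PSF definitions above, but the class itself, its field names, and the integer encoding of the switch commands are assumptions of the sketch rather than the actual 16-byte-aligned memory format built by software.

from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative model only of a squelch/activate/drop list entry.
@dataclass
class PathEntry:
    output_port: int        # POP: 7 bits, 0..127, MMC port numbering in the fabric
    input_port: int         # PIP: 7 bits, 0..127, MMC port numbering in the fabric
    data_rate: int          # PDR: all 0's means zero rate; each increment is one bandwidth step
    status_flags: int       # PSF: bit 0 = in use (working), bit 1 = reserved for protection,
                            #      bit 2 = activation failed (plus squelched/dropped states)
    switch_commands: List[int] = field(default_factory=list)  # register writes for MSC1/MSC2/PFS
    next_entry: Optional["PathEntry"] = None                  # link to the next path on the list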
An algorithm for the switchover mechanism is described in the following pseudo-code, written from the point of view of memory operations. Synchronization requirements relative to the other FCC and to the MMC switch cycle are not shown.
Initiate:
    CPU_write (Next Squelch Pointer, CPU Data Input Register)
    CPU_write (Next Activate Pointer, CPU Data Input Register)
    Goto Squel_Strt

Squel_Strt:
    if (Next Squelch Pointer = nil) goto Actv_Strt
    mem_read (Next Squelch Pointer)
    reg_save (Current Squelch Pointer)
    reg_save (Next Squelch Pointer)
    mem_read (Current Squelch Pointer)
    reg_save (Squelch Path Flags)
    reg_save (Squelch Output Port)
    reg_save (Squelch Path Rate)
    reg_save (Switch Parameter 0)
    if (Squelch Path Flags = Not_Working) goto Squel_Strt
    mem_read (Capacity Table [Squelch Output Port])
    reg_save (Output Path Capacity)
    mem_read (Current Squelch Pointer)
    reg_save (Switch Parameters 1:0)
    do_squelch (Squelch Output Port, Switch Parameters 2:0)
    update_flags (Squelch Path Flags to Not_Working)
    add (Output Path Capacity, Squelch Path Rate)
    reg_save (Output Path Capacity)
    mem_write (Capacity Table [Squelch Output Port], Output Port Capacity)
    mem_write (Current Squelch Pointer, Squelch Path Flags)
    Goto Squel_Strt

Actv_Strt:
    if (Next Activate Pointer = nil) Goto Complete
    mem_read (Next Activate Pointer)
    reg_save (Current Activate Pointer)
    reg_save (Next Activate Pointer)
    mem_read (Current Activate Pointer)
    reg_save (Activate Path Flags)
    reg_save (Activate Output Port)
    reg_save (Activate Path Rate)
    reg_save (Activate Input Port)
    if (Activate Path Flags = Is_Protecting) goto Actv_Strt
    mem_read (Capacity Table [Activate Output Port])
    reg_save (Output Path Capacity)
    reg_save (Next Drop Pointer)

Compare:
    if (Output Path Capacity >= Activate Path Rate) goto Actv_Path

Drop_Strt:
    if (Next Drop Pointer = nil) goto Actv_Path
    mem_read (Next Drop Pointer)
    reg_save (Current Drop Pointer)
    reg_save (Next Drop Pointer)
    mem_read (Current Drop Pointer)
    reg_save (Drop Path Flags)
    reg_save (Drop Output Port)
    reg_save (Drop Path Rate)
    reg_save (Switch Parameter 0)
    if (Drop Path Flags = Not_Working) goto Drop_Strt
    mem_read (Current Drop Pointer)
    reg_save (Switch Parameters 1:0)
    do_squelch (Drop Output Port, Switch Parameters 2:0)
    update_flags (Drop Path Flags to Not_Working)
    add (Output Path Capacity, Drop Path Rate)
    reg_save (Output Path Capacity)
    mem_write (Current Drop Pointer, Drop Path Flags)
    Goto Compare

Actv_Path:
    if (Output Path Capacity < Activate Path Rate) goto Fail
    mem_read (Current Activate Pointer)
    reg_save (Switch Parameters 3:0)
    mem_read (Current Activate Pointer)
    reg_save (Switch Parameters 4)
    do_activate (Activate Input Port, Switch Parameters 3:0)
    update_flags (Activate Path Flags to Is_Protecting)
    subtract (Output Port Capacity, Activate Path Rate)
    reg_save (Output Port Capacity)
    mem_write (Capacity Table [Activate Output Port], Output Port Capacity)
    mem_write (Current Activate Pointer, Activate Path Flags)
    Goto Actv_Strt

Fail:
    update_flags (Activate Path Flags to Add_Failed)
    mem_write (Current Activate Pointer, Activate Path Flags)
Complete: Set status bit, and interrupt if enabled
The pseudo-code disclosed herein above provides a framework for the protection hardware, and allows bookkeeping of the memory operations that are required.
While this invention has been particularly shown and described with references to preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. It will be apparent to those of ordinary skill in the art that methods involved in the present invention may be embodied in a computer program product that includes a computer usable medium. For example, such a computer usable medium can include a readable memory device, such as a hard drive device, a CD-ROM, a DVD-ROM, or a computer diskette, having computer readable program code segments stored thereon. The computer readable medium can also include a communications or transmission medium, such as a bus or a communications link, either optical, wired, or wireless, having program code segments carried thereon as digital or analog data signals.

Claims

CLAIMS
What is claimed is:
1. A method of path protection in a network of nodes interconnected by communications links, the method comprising: establishing a working path through a first series of interconnected nodes, the working path having a working path bandwidth; assigning to the working path a protection path through a second series of interconnected nodes, the protection path having a protection path bandwidth in relation to the working path bandwidth; upon a failure event involving at least one node of the first series, switching the working path to the assigned protection path.
2. The method of Claim 1 wherein establishing includes establishing the working path from a customer node over a primary communications link and through the first series of interconnected nodes and wherein assigning includes assigning the protection path from the customer node over a secondary communications link and through the second series of interconnected nodes.
3. The method of Claim 2 wherein the primary and secondary communications links comprise different media.
4. The method of Claim 3 wherein the primary and secondary communications links comprise different media selected from the group consisting of optical fiber, copper wire, wireless and free-space optics.
5. The method of Claim 1 wherein establishing includes establishing the working path between one of the nodes of the first series and a node of a third series of interconnected nodes in a second network over a primary communications link, and wherein assigning includes assigning the protection path between one of the nodes of the second series and a node of a fourth series of interconnected nodes in the second network over a secondary communications link.
6. The method of Claim 1 wherein the working path includes a working path connection between ports of a switch fabric in each node of the first series of interconnected nodes; and at each node: maintaining a protection path activation list for each communications link in the network, each list comprising an ordered listing of path entries, each path entry associated with a particular working path for that communications link and including at least one path activation command for effecting activation of a protection path connection between ports of the switch fabric; upon a failure of one of the communications links, implementing the at least one path activation command for each of the path entries of the particular protection path activation list associated with the failed link.
7. The method of Claim 6 further comprising at each node: maintaining a working path deactivation list for each communications link in the network, each list comprising an ordered listing of path entries, each path entry associated with a particular working path for that communications link and including at least one path deactivation command for effecting deactivation of one of the working path connections between ports of the switch fabric; upon the failure of one of the communications links, implementing the at least one path deactivation command for each of the path entries of the particular working path deactivation list associated with the failed link prior to implementing the at least one path activation command of the corresponding protection path activation list.
8. The method of Claim 6 further comprising at each node: specifying a path data rate and the particular input and output ports for the protection path connection in each path entry of the path protection lists; monitoring available capacity of each switch fabric output port; maintaining a drop list for each switch fabric output port, each drop list comprising an ordered listing of path entries, each path entry including at least one path deactivation command for effecting deactivation of a path connection using that switch fabric output port; wherein implementing the at least one path activation command includes comparing the path data rate with the monitored available capacity for the corresponding switch fabric output port; if the protection path data rate is greater than the available port capacity, implementing the at least one path deactivation command for path entries of the drop list until either the drop list terminates or the available port capacity exceeds the path data rate.
9. A method of path protection comprising: connecting a customer node to a network of interconnected nodes over a primary communications link and over a secondary communications link; establishing a working path from the customer node over the primary communications link and through a first series of network nodes; assigning to the working path a protection path from the customer node over the secondary communications link and through a second series of network nodes; upon a failure event affecting the primary communications link, switching the working path to the assigned protection path.
10. The method of Claim 9 wherein assigning includes assigning an associated protection path bandwidth as a percentage of a working path bandwidth associated with the working path.
The method of Claim 10 wherein the pnmarv and secondary communications links compnse different media
12. The method of Claim 11 wherein the primary and secondary communications links comprise different media selected from the group consisting of optical fiber, copper wire, wireless and free-space optics.
13. The method of Claim 9 wherein the working path includes a working path connection between ports of a switch fabric in each node of the first series of interconnected nodes; and at each node: maintaining a protection path activation list for each communications link in the network, each list comprising an ordered listing of path entries, each path entry associated with a particular working path for that communications link and including at least one path activation command for effecting activation of a protection path connection between ports of the switch fabric; upon a failure of one of the communications links, implementing the at least one path activation command for each of the path entries of the particular protection path activation list associated with the failed link.
14. The method of Claim 13 further comprising at each node: maintaining a working path deactivation list for each communications link in the network, each list comprising an ordered listing of path entries, each path entry associated with a particular working path for that communications link and including at least one path deactivation command for effecting deactivation of one of the working path connections between ports of the switch fabric; upon the failure of one of the communications links, implementing the at least one path deactivation command for each of the path entries of the particular working path deactivation list associated with the failed link prior to implementing the at least one path activation command of the corresponding protection path activation list.
15. The method of Claim 13 further comprising at each node: specifying a path data rate and the particular input and output ports for the protection path connection in each path entry of the path protection lists; monitoring available capacity of each switch fabric output port; maintaining a drop list for each switch fabric output port, each drop list comprising an ordered listing of path entries, each path entry including at least one path deactivation command for effecting deactivation of a path connection using that switch fabric output port; wherein implementing the at least one path activation command includes comparing the path data rate with the monitored available capacity for the corresponding switch fabric output port; if the protection path data rate is greater than the available port capacity, implementing the at least one path deactivation command for path entries of the drop list until either the drop list terminates or the available port capacity exceeds the path data rate.
16. A method of path protection in a network of nodes interconnected by communications links, each link having a capacity for carrying logical channels between nodes, the method comprising: establishing a plurality of working paths through the nodes, each working path comprising logical channels of a series of links; for each working path, precalculating an associated protection path comprising logical channels of a different series of links; assigning a priority to each working path and associated protection path; upon a failure event involving at least one of the links, switching the working paths that include the at least one failed link to their respective protection paths, with a higher priority protection path preempting one or more lower priority paths that share at least one link if the link capacity of the at least one shared link is otherwise exceeded by addition of the preempting protection path.
17. The method of Claim 16 wherein the higher priority protection paths preempt lower priority protection paths that share at least one link.
18. The method of Claim 16 wherein precalculating an associated protection path includes assigning an associated protection path bandwidth as a percentage of a working path bandwidth associated with the corresponding working path.
19. The method of Claim 16 wherein the network comprises at least two overlapping areas of nodes and wherein establishing includes establishing a working path which traverses one or more areas of nodes and precalculating includes precalculating an associated protection path for each area through which the working path traverses; and upon a failure event involving the working path, switching a portion of the working path to the associated protection path for one of the areas that includes the failure event.
20. A method of failure notification in a communications network, the method comprising: providing a communications network having at least two overlapping areas of nodes interconnected by communications links; upon a failure event involving one of the communications links, broadcasting a failure message identifying the failed link, the broadcast being confined within the areas which include the failed link.
The method of Claim 20 wherein broadcasting includes detecting the link failure at one or both of the nodes connected to the failed link, identifying nodes connected to the one or both detecting nodes that belong to the same areas as the failed link and sending the failure message only to such identified nodes
22. The method of Claim 21 wherein broadcasting further includes, at each node that receives the broadcast failure message, identifying nodes connected thereto which belong to the same areas as the failed link and sending the failure message only to such identified nodes.
23. The method of Claim 22 further comprising, at each node that receives the broadcast failure message: maintaining a list of link failures, each list entry of the list comprising a link identifier and a time of reception, wherein a list entry is deleted a timeout interval after the time of reception; upon receiving a broadcast failure message concerning a link not on the list, creating a list entry and sending the broadcast failure message to identified nodes; otherwise, upon receiving a broadcast failure message concerning a link on the list, extinguishing the broadcast failure message.
24. The method of Claim 22 wherein the broadcast failure message includes a failure counter associated with the failed link, and wherein the method further includes: at each node that detects the link failure, updating the failure counter for the failed link and inserting the updated failure counter value into the broadcast failure message; at each node that receives the broadcast failure message, comparing a stored failure counter value for the failed link with the updated failure counter value in the received broadcast failure message and, if the updated failure counter value is less than or equal to the stored failure counter value, discarding the broadcast failure message; otherwise, replacing the stored failure counter value with the updated failure counter value and sending the broadcast failure message to identified nodes.
25. The method of Claim 24 further comprising synchronizing the failure message with a routing protocol message by including the updated failure counter in the routing protocol message.
26. The method of Claim 25 further comprising, at each node that receives the routing protocol message, comparing a stored failure counter value for the failed link with the updated failure counter value in the received routing protocol message to determine whether the routing protocol message is synchronized with the broadcast failure message, and discarding the routing protocol message if not synchronized.
27. The method of Claim 20 wherein broadcasting includes: at one or more of the nodes, sending to a connected node a LAPD protocol unnumbered information frame containing the failure message and resending the failure message in another unnumbered information frame after a time interval unless an unnumbered acknowledgment frame containing the failure message is received from the connected node.
28. The method of Claim 20 wherein broadcasting includes: at one or more of the nodes, sending to a connected node a LAPD protocol unnumbered information frame containing the failure message and periodically resending the failure message until an unnumbered acknowledgment frame containing the failure message is received from the connected node.
29. The method of Claim 20 wherein each node includes plural line cards, each line card terminating a link to another node, and wherein broadcasting includes: detecting the link failure at one of the line cards connected to the failed link; sending a failure message to the other line cards on a message bus within the node of the detecting line card; at each of the other line cards, sending the failure message to the associated connected node.
30. The method of Claim 29 wherein sending at the detecting line card includes multicasting the failure message and periodically resending the failure message until an acknowledgment message is received from each of the other line cards.
31. The method of Claim 29 wherein the message bus carries high and low priority messages and wherein sending at the detecting line card includes sending the failure message at high priority.
32. The method of Claim 20 further comprising: establishing a working path which traverses one or more areas of nodes, the working path comprising a series of links; for each area through which the working path traverses, precalculating an associated protection path comprising a different series of links; and if the working path includes the failed link, switching the working path to the associated protection path for one of the areas that includes the failed link.
33. In a network of nodes interconnected by communications links, apparatus at a node comprising: a message bus; and plural line cards connected to the message bus, each line card including a message bus interface circuit for sending and receiving messages on the bus, the messages comprising high and low priority messages having a message length that is bounded such that latency on the message bus is bounded.
34. The apparatus of Claim 33 wherein the message bus comprises a pair of redundant buses and the node includes an arbitration circuit for arbitrating access by the line cards to the redundant buses in a round-robin fashion.
35. The apparatus of Claim 33 wherein each line card includes an interface port for terminating a communications link to another node, the port having means for detecting a failure event involving the associated link, and wherein in response to such failure detection, a detecting line card sends a failure message at high priority on the message bus to other line cards.
36. In a network of nodes interconnected by communications links, a method of protection path switching comprising: establishing a plurality of working paths, each working path including a working path connection between ports of a switch fabric in each node of a series of interconnected nodes; at each node, maintaining a protection path activation list for each communications link in the network, each list comprising an ordered listing of path entries, each path entry associated with a particular working path for that communications link and including at least one path activation command for effecting activation of a protection path connection between ports of the switch fabric; upon a failure of one of the communications links, implementing the at least one path activation command for each of the path entries of the particular protection path activation list associated with the failed link.
37. The method of Claim 36 further comprising at each node: maintaining a working path deactivation list for each communications link in the network, each list comprising an ordered listing of path entries, each path entry associated with a particular working path for that communications link and including at least one path deactivation command for effecting deactivation of one of the working path connections between ports of the switch fabric; upon the failure of one of the communications links, implementing the at least one path deactivation command for each of the path entries of the particular working path deactivation list associated with the failed link prior to implementing the at least one path activation command of the corresponding protection path activation list.
38. The method of Claim 36 further comprising at each node: specifying a path data rate and the particular input and output ports for the protection path connection in each path entry of the path protection lists; monitoring available capacity of each switch fabric output port; maintaining a drop list for each switch fabric output port, each drop list comprising an ordered listing of path entries, each path entry including at least one path deactivation command for effecting deactivation of a path connection using that switch fabric output port; wherein implementing the at least one path activation command includes comparing the path data rate with the monitored available capacity for the corresponding switch fabric output port; if the protection path data rate is greater than the available port capacity, implementing the at least one path deactivation command for path entries of the drop list until either the drop list terminates or the available port capacity exceeds the path data rate.
39. In a network of nodes interconnected by communications links and having a plurality of working paths, each working path including a working path connection between ports of a switch fabric in each node of a series of interconnected nodes, apparatus in a node for protection path switching, the apparatus comprising: a memory; a protection path activation list stored in the memory for each communications link in the network, each list comprising an ordered listing of path entries, each path entry associated with a particular working path for that communications link and including at least one path activation command for effecting activation of a protection path connection between ports of the switch fabric; and a path protection accelerator for retrieving the protection path activation list from the memory and implementing, upon a failure of one of the communications links, the at least one path activation command for each of the path entries of the particular protection path activation list associated with the failed link.
40. The apparatus of Claim 39 further comprising a working path deactivation list stored in the memory for each communications link in the network, each list comprising an ordered listing of path entries, each path entry associated with a particular working path for that communications link and including at least one path deactivation command for effecting deactivation of one of the working path connections between ports of the switch fabric, wherein the path protection accelerator is operable to retrieve the working path deactivation list from the memory upon the failure of one of the communications links, and implement the at least one path deactivation command for each of the path entries of the particular working path deactivation list associated with the failed link prior to implementing the at least one path activation command of the corresponding protection path activation list.
PCT/US2000/015457 1999-06-02 2000-05-31 Method and system for path protection in a communications network WO2000074310A2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU53235/00A AU5323500A (en) 1999-06-02 2000-05-31 Method and system for path protection in a communications network

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US09/324,454 US6992978B1 (en) 1999-06-02 1999-06-02 Method and system for path protection in a communications network
US09/324,454 1999-06-02
US52447900A 2000-03-13 2000-03-13
US09/524,479 2000-03-13

Publications (2)

Publication Number Publication Date
WO2000074310A2 true WO2000074310A2 (en) 2000-12-07
WO2000074310A3 WO2000074310A3 (en) 2001-06-07

Family

ID=26984465

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2000/015457 WO2000074310A2 (en) 1999-06-02 2000-05-31 Method and system for path protection in a communications network

Country Status (2)

Country Link
AU (1) AU5323500A (en)
WO (1) WO2000074310A2 (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1213879A2 (en) * 2000-12-08 2002-06-12 Alcatel Canada Inc. An MPLS implementation on an ATM platform
EP1330060A2 (en) * 2001-12-26 2003-07-23 Akara Corporation Service protection method and apparatus for TDM or WDM communications networks
EP1428133A1 (en) * 2001-06-05 2004-06-16 Marconi Intellectual Property (Ringfence) Inc. Ethernet protection system
US7330424B2 (en) * 2001-08-02 2008-02-12 Fujitsu Limited Node device in network, and network system
US7796503B2 (en) 2002-09-03 2010-09-14 Fujitsu Limited Fault tolerant network routing
CN103931227A (en) * 2011-11-11 2014-07-16 日本电气株式会社 Wireless transmission device, failure-information forwarding method, and failure-information notification method
CN113632558A (en) * 2019-03-29 2021-11-09 华为技术有限公司 Wi-Fi communication method and device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0828400A1 (en) * 1996-08-20 1998-03-11 Nec Corporation Communication network recoverable from link failure using prioritized recovery classes
EP0836344A2 (en) * 1996-08-19 1998-04-15 Nec Corporation ATM virtual path switching node

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0836344A2 (en) * 1996-08-19 1998-04-15 Nec Corporation ATM virtual path switching node
EP0828400A1 (en) * 1996-08-20 1998-03-11 Nec Corporation Communication network recoverable from link failure using prioritized recovery classes

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
RYUTARO KAWAMURA ET AL: "SELF-HEALING VIRTUAL PATH ARCHITECTURE IN ATM NETWORKS" IEEE COMMUNICATIONS MAGAZINE,US,IEEE SERVICE CENTER. PISCATAWAY, N.J, vol. 33, no. 9, 1 September 1995 (1995-09-01), pages 72-79, XP000528012 ISSN: 0163-6804 *
VEITCH P ET AL: "ATM NETWROK RESILIENCE" IEEE NETWORK: THE MAGAZINE OF COMPUTER COMMUNICATIONS,US,IEEE INC. NEW YORK, vol. 11, no. 5, 1 September 1997 (1997-09-01), pages 26-33, XP000699938 ISSN: 0890-8044 *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1213879A3 (en) * 2000-12-08 2003-06-25 Alcatel Canada Inc. An MPLS implementation on an ATM platform
US8018939B2 (en) 2000-12-08 2011-09-13 Alcatel Lucent MPLS implementation of an ATM platform
EP1213879A2 (en) * 2000-12-08 2002-06-12 Alcatel Canada Inc. An MPLS implementation on an ATM platform
US7260083B2 (en) 2000-12-08 2007-08-21 Alcatel Canada Inc.; MPLS implementation on an ATM platform
EP1428133A4 (en) * 2001-06-05 2007-12-26 Ericsson Ab Ethernet protection system
EP1428133A1 (en) * 2001-06-05 2004-06-16 Marconi Intellectual Property (Ringfence) Inc. Ethernet protection system
US7330424B2 (en) * 2001-08-02 2008-02-12 Fujitsu Limited Node device in network, and network system
EP1330060A3 (en) * 2001-12-26 2004-05-12 Akara Corporation Service protection method and apparatus for TDM or WDM communications networks
US7130264B2 (en) 2001-12-26 2006-10-31 Ciena Corporation Service protection method and apparatus for TDM or WDM communications networks
EP1330060A2 (en) * 2001-12-26 2003-07-23 Akara Corporation Service protection method and apparatus for TDM or WDM communications networks
US7796503B2 (en) 2002-09-03 2010-09-14 Fujitsu Limited Fault tolerant network routing
CN103931227A (en) * 2011-11-11 2014-07-16 日本电气株式会社 Wireless transmission device, failure-information forwarding method, and failure-information notification method
CN113632558A (en) * 2019-03-29 2021-11-09 华为技术有限公司 Wi-Fi communication method and device

Also Published As

Publication number Publication date
WO2000074310A3 (en) 2001-06-07
AU5323500A (en) 2000-12-18

Similar Documents

Publication Publication Date Title
US6992978B1 (en) Method and system for path protection in a communications network
US11916722B2 (en) System and method for resilient wireless packet communications
US7630300B2 (en) Methods and apparatus for trunking in fibre channel arbitrated loop systems
US7382790B2 (en) Methods and apparatus for switching fibre channel arbitrated loop systems
US7660316B2 (en) Methods and apparatus for device access fairness in fibre channel arbitrated loop systems
US7397788B2 (en) Methods and apparatus for device zoning in fibre channel arbitrated loop systems
US7664018B2 (en) Methods and apparatus for switching fibre channel arbitrated loop devices
JP2577269B2 (en) High-speed mesh connection local area network
JPH0817385B2 (en) High-speed mesh connection type local area network reconfiguration system
CN105324960A (en) Can fd
JPH0222580B2 (en)
WO2000074310A2 (en) Method and system for path protection in a communications network
GB2401518A (en) Efficient arbitration using credit based flow control
Cisco System Error Messages
Perlis et al. 21. Distributed Systems
JPH0417425A (en) Data transmission system and network system
RAISES et al. 27. Distributed Transactions

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
AK Designated states

Kind code of ref document: A3

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A3

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase in:

Ref country code: JP