US20120281705A1 - Protection switching in multiprotocol label switching (mpls) networks - Google Patents


Info

Publication number
US20120281705A1
Authority
US
United States
Prior art keywords
label
working
packet
merge
node
Prior art date
Legal status
Granted
Application number
US13/464,229
Other versions
US8989195B2 (en)
Inventor
Jinrong Ye
Current Assignee
Hewlett Packard Enterprise Development LP
Original Assignee
Hangzhou H3C Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou H3C Technologies Co Ltd filed Critical Hangzhou H3C Technologies Co Ltd
Assigned to HANGZHOU H3C TECHNOLOGIES CO., LTD. reassignment HANGZHOU H3C TECHNOLOGIES CO., LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: YE, Jinrong
Publication of US20120281705A1 publication Critical patent/US20120281705A1/en
Application granted granted Critical
Publication of US8989195B2 publication Critical patent/US8989195B2/en
Assigned to HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP reassignment HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: H3C TECHNOLOGIES CO., LTD., HANGZHOU H3C TECHNOLOGIES CO., LTD.


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
        • H04L45/00: Routing or path finding of packets in data switching networks
            • H04L45/28: Routing or path finding of packets in data switching networks using route fault recovery
            • H04L45/50: Routing or path finding of packets in data switching networks using label swapping, e.g. multi-protocol label switch [MPLS]
        • H04L12/00: Data switching networks
            • H04L12/28: Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
                • H04L12/42: Loop networks
                    • H04L12/437: Ring fault isolation or reconfiguration

Definitions

  • Packets may be forwarded on working LSP 130 in a similar manner using forwarding information stored at ingress node F, transit nodes E to B and egress node A.
  • The packets are ‘wrapped’ around a failed link or node during protection switching. Since the failure is detected locally at the first and second nodes, the example method described with reference to FIG. 5 and FIG. 6 facilitates recovery within a desired 50 ms.
  • FIG. 7 shows an example of how packets are transmitted in the event of a failure on the working LSP 120 in the network in FIG. 3 according to the example method in FIG. 5 and FIG. 6.
  • Transit node F (“first node”) and node D (“second node”) are disconnected from an adjacent node E, which is downstream to node F but upstream from node D on the working LSP 120.
  • FIG. 8 shows another example of how packets are transmitted in the event of a link or node failure on the working LSP 130 for service connection 132 in the network in FIG. 3 according to the example method in FIG. 5 and FIG. 6.
  • This example illustrates, inter alia, how the protection LSP 110 for working LSP 120 in FIG. 7 is also used for working LSP 130 in FIG. 8.
  • Ingress node F (“first node”) and node D (“second node”) are disconnected from an adjacent node E, which is downstream to node F but upstream from node D on the working LSP 130.
  • In FIG. 9, the same working label [W1] is used by all nodes on the working LSP 120 in FIG. 7.
  • The working label [W1] also uniquely identifies the working LSP 120 and therefore is also used as a merge label.
  • In this example, node E and a link between nodes C and D have failed.
  • Node F (“first node”) detects its disconnection from adjacent node E, while node C (“second node”) detects its disconnection from adjacent node D.
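Combining the forwarding entries described elsewhere in this document, the ‘wrap’ in the FIG. 7 scenario can be sketched as follows. This is a simplified illustration, not code from the patent: the intermediate Px->Py swaps on the protection LSP between nodes F and D are omitted, and the choice of [P1] as the pushed protection label is an assumption of this sketch.

```python
# Hedged sketch of the 'wrap' at the first and second nodes on
# working LSP 120 (FIG. 7): node F wraps traffic around failed node E,
# and node D merges it back. The label stack is a Python list whose
# last element is the topmost label.

def wrap_at_first_node(stack):
    """Node F: swap the working label to the merge label using the
    [W16] -> [Wm1] entry, then push a protection label so the packet
    travels the protection LSP (which label is pushed is assumed)."""
    stack[-1] = "Wm1"
    stack.append("P1")

def merge_at_second_node(stack):
    """Node D: pop the protection label to reveal the merge label,
    then swap it to D's working label using the [Wm1] -> [W13] entry."""
    stack.pop()
    stack[-1] = "W13"

stack = ["W16"]                # packet arriving at node F on working LSP 120
wrap_at_first_node(stack)
print(stack)                   # ['Wm1', 'P1'] -- forwarded on the protection LSP
merge_at_second_node(stack)
print(stack)                   # ['W13'] -- back on the working LSP
```

Because the merge label [Wm1] uniquely identifies working LSP 120, node D can restore the correct working label even though the protection LSP 110 is shared by multiple working LSPs.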
  • The above examples can be implemented by hardware, software or firmware, or a combination thereof.
  • Referring to FIG. 10, an example structure of a network device capable of acting as a node (such as A to H, J, K and U) in the MPLS network 100 is shown.
  • The example network device 150 includes a processor 152, a memory 154 and a network interface device 158 that communicate with each other via bus 156.
  • Forwarding information 154a is stored in the memory 154.
  • The processor 152 implements functional units in the form of receiving unit 152a, processing unit 152b and transmission unit 152c.
  • Information may be transmitted and received via the network interface device 158 , which may include one or more logical or physical ports that connect the network device 150 to another network device.
  • The various methods, processes and functional units described herein may be implemented by the processor 152.
  • The term ‘processor’ is to be interpreted broadly to include a CPU, processing unit, ASIC, logic unit, or programmable gate array etc.
  • The processes, methods and functional units may all be performed by a single processor or split between several processors (not shown in FIG. 10 for simplicity); reference in this disclosure or the claims to a ‘processor’ should thus be interpreted to mean ‘one or more processors’.
  • Although one network interface device 158 is shown in FIG. 10, processes performed by the network interface device 158 may be split between several network interface devices. As such, reference in this disclosure to a ‘network interface device’ should be interpreted to mean ‘one or more network interface devices’.
  • The processes, methods and functional units may be implemented as machine-readable instructions 154b executable by one or more processors, hardware logic circuitry of the one or more processors, or a combination thereof.
  • The machine-readable instructions 154b are stored in the memory 154.
  • One or more of the receiving unit 152a, processing unit 152b and transmission unit 152c may be implemented as hardware or a combination of hardware and software.
  • The processes, methods and functional units described in this disclosure may be implemented in the form of a computer software product. The computer software product is stored in a storage medium and comprises a plurality of instructions that cause a computer device (which can be a personal computer, a server, or a network device such as a router, switch, bridge, host, access point, etc.) to implement the methods recited in the examples of the present disclosure.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

Protection switching in a multi-protocol label switching (MPLS) ring network with a protection label switch path that is shared by multiple working label switch paths may include receiving a packet intended for transmission towards a disconnected adjacent node on a working label switch path, and adding, to the received packet, a protection label and a merge label. The merge label may uniquely identify the working label switch path on which the packet is received. The packet may be transmitted on the protection label switch path.

Description

    BACKGROUND
  • Multiprotocol label switching (MPLS) is a label switching protocol designed to transport data packets from a source node to a destination node based on short, fixed-length path labels. Since nodes in the network are not required to perform complex network address lookup and route calculation, label switching allows data packets to be transported more efficiently through the network. The path along which the packets are transmitted on the network is known as a label switch path (LSP), which is a connection-oriented path over a connectionless Internet Protocol (IP) network.
  • BRIEF DESCRIPTION OF DRAWINGS
  • By way of non-limiting examples, protection switching will be described with reference to the following drawings, in which:
  • FIG. 1 is a schematic diagram of an example MPLS network in which protection switching is performed;
  • FIG. 2 is a flowchart of an example method for assigning labels and storing forwarding information;
  • FIG. 3 is a schematic diagram of the example MPLS network in FIG. 1 with corresponding forwarding information;
  • FIG. 4 is a flowchart of an example method for packet forwarding on a working label switch path;
  • FIG. 5 is a flowchart of an example method for packet forwarding on a protection label switch path;
  • FIG. 6 is a continuation of the flowchart in FIG. 5;
  • FIG. 7 is an example of packet forwarding on a protection label switch path in the network in FIG. 1 and FIG. 3;
  • FIG. 8 is another example of packet forwarding on a protection label switch path in the network in FIG. 1 and FIG. 3;
  • FIG. 9 is an example of packet forwarding in the example in FIG. 7, but with all nodes having the same working label; and
  • FIG. 10 is a block diagram of an example structure of a network device.
  • DETAILED DESCRIPTION
  • FIG. 1 shows an example multiprotocol label switching (MPLS) network 100 with a ring topology. The network 100 may use any MPLS-based protocol, such as the MPLS transport profile (MPLS-TP), which enhances existing MPLS standards to include support for transport operational modules, such as the Operations, Administration and Maintenance (OAM) protection mechanism.
  • In the example in FIG. 1, the ring network 100 is formed by multiple network devices in the form of nodes A, B, C, D, E, F, G and H. Each node may be connected to other devices outside of the ring network, such as devices J, K and U, which are connected to nodes G, A and F respectively. A network device may be a router, bridge, switch, host, etc.
  • The network 100 is configured with a fully closed protection label switch path (LSP) 110 shared by multiple working LSPs 120, 130. A working LSP 120, 130 may be established between any network devices in the ring network 100 to, for example, facilitate a service connection etc. In the example in FIG. 1, service connection 122 is established to forward packets from device J to device K; and service connection 132 to forward packets from device U to device K.
  • Two corresponding working LSPs 120, 130 are established for the service connections 122, 132:
      • (i) working LSP 120 between nodes G and A, in which case G serves as an ingress node; nodes F, E, D, C and B are transit nodes; and A serves as an egress node for service connection 122; and
      • (ii) working LSP 130 between nodes F and A, in which case F serves as an ingress node; nodes E, D, C and B are transit nodes; and A serves as an egress node for service connection 132.
  • The protection LSP 110 enables protection switching in the network 100 to allow recovery from a link or node failure on a working LSP 120, 130 while minimising disruption to traffic. For example, link or node failure may be caused by degradation in the quality of service, network congestion, and physical damage to the link etc.
  • Packets are generally transmitted on the working LSPs 120, 130, but when a link or node failure is detected, packets are ‘switched’ to travel in an opposite direction on the protection LSP 110 and therefore away from the failure. In the example in FIG. 1, packets are transmitted clockwise on the working LSPs 120, 130 and anticlockwise on the protection LSP 110. Of course, in another implementation, the direction of the working 120, 130 and protection 110 LSPs may be reversed. A node is ‘downstream’ of another node if the former receives packets forwarded by the latter. For example, network device E is downstream to network device F on the clockwise working LSP 120, 130 whereas network device D is downstream to network device F on the anticlockwise protection LSP 110.
  • In the MPLS network 100, packets are forwarded in the network 100 based on labels. The process of placing labels on a packet is known as label stacking, and packets need only be routed based on the topmost label in its label stack. As will be explained in further detail below, each node is assigned a working label and a protection label to facilitate packet transmission on the working LSP 120, 130 and the protection LSP 110.
  • To facilitate sharing of the protection LSP 110 by multiple working LSPs during protection switching, each working LSP 120, 130 is assigned a merge label. The merge label uniquely identifies a particular working LSP 120, 130 in the network 100 and is known to nodes on that working LSP 120, 130. The example protection switching method explained below may be used for both link and node failure protection.
  • Label Assignment
  • Referring also to FIG. 2, an example method for assigning labels to nodes in the network 100 will now be explained.
  • At block 210, each node on a working LSP 120, 130 is assigned a working label. The nodes on the same working LSP 120, 130 may have different working labels (see also FIG. 7 and FIG. 8) or share the same working label (see also FIG. 9).
  • In the example network 100 in FIG. 1, the following working labels are assigned to the nodes on working LSP 120: [W16] to ingress node G; [W15] to transit node F; [W14] to transit node E; [W13] to transit node D; [W12] to transit node C; and [W11] to transit node B.
  • Different working labels, however, are assigned to different working LSPs. In the example in FIG. 1, the following working labels are assigned to the nodes on working LSP 130: [W25] to ingress node F; [W24] to transit node E; [W23] to transit node D; [W22] to transit node C; and [W21] to transit node B.
  • An egress node of a working LSP (node A in the above examples) does not require a working label because a packet exits the working LSP 120, 130 via the egress node.
  • At block 220, each node is assigned a merge label that uniquely identifies a particular working LSP 120, 130. In the example in FIG. 1, merge label [Wm1] is assigned to working LSP 120, and merge label [Wm2] to working LSP 130. In other words, different merge labels are assigned to different working LSPs 120, 130 in the network 100.
  • Each merge label is known to nodes on the particular working LSP 120, 130 identified by the merge label. For example, [Wm1] is known to nodes G, F, E, D, C, B and A on working LSP 120 while [Wm2] is known to nodes F, E, D, C, B and A on working LSP 130. The merge labels may be assigned by a network controller, or one of the nodes.
  • The merge label may be the same as or different to the working labels of a particular working LSP. In the example in FIG. 1, the merge labels [Wm1], [Wm2] are different to the working labels assigned to the respective working LSP 120 ([W16] to [W11]) and working LSP 130 ([W25] to [W21]). In another example shown in FIG. 9, the same working label [W1] is assigned to all nodes on a particular working LSP 140, in which case the working label may be used as a merge label.
  • At block 230, each node on the protection LSP 110 is assigned a protection label. In the example in FIG. 1, the following protection labels are assigned to nodes A to H: [P1] to node F; [P2] to node G; [P3] to node H; [P4] to node A; [P5] to node B; [P6] to node C; [P7] to node D; and [P8] to node E.
  • The protection labels may be assigned by a network controller, or one of the nodes. Each node on the protection LSP 110 may have different protection labels like in the example in FIG. 1, or have the same protection label, [P1] for example. The protection label of a node is different to its working label or merge label.
  • Forwarding Information
  • To facilitate packet forwarding within the network 100, each node stores forwarding information to help it determine how a packet should be forwarded and any necessary label operation on the packet.
  • In one implementation, the forwarding information may be stored in the form of forwarding table entries. A forwarding entry may be a next hop label forwarding entry (NHFLE), which generally includes information on a next hop of a packet and a label operation on a label stack of the packet. A label operation may be one of the following:
      • a ‘swap’ operation where a topmost label (“incoming label”) on a label stack of the packet is replaced with a new label (“outgoing label”);
      • a ‘pop’ operation where the topmost label (“incoming label”) on a label stack of the packet is removed to reveal an inner label on the label stack; or
      • a ‘push’ operation where a new label (“outgoing label”) is added or pushed onto the topmost label (“incoming label”) on a label stack of the packet.
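As a minimal sketch (not from the patent), the three label operations above can be modeled on a Python list whose last element is the topmost label. All label values here are illustrative.

```python
# Sketch of the three label operations on a packet's label stack,
# modeled as a plain Python list; the last element is the topmost label.

def swap(stack, outgoing):
    """Replace the topmost (incoming) label with a new outgoing label."""
    stack[-1] = outgoing

def pop(stack):
    """Remove the topmost label, revealing the inner label (if any)."""
    return stack.pop()

def push(stack, outgoing):
    """Add a new outgoing label on top of the stack."""
    stack.append(outgoing)

stack = []
push(stack, "W16")          # an ingress node pushes its working label
swap(stack, "W15")          # a transit node swaps it for its own label
push(stack, "P1")           # a protection label can be stacked on top
print(stack)                # ['W15', 'P1'] -- routing uses 'P1', the topmost label
pop(stack)                  # removing 'P1' reveals 'W15' again
print(stack)                # ['W15']
```

Because packets are routed only on the topmost label, stacking a protection label over a working or merge label lets the inner label survive the detour untouched.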
  • An incoming label map (ILM) may be used to map each incoming label to an outgoing label. A forwarding equivalence class (FEC) to NHFLE (FTN) map maps each FEC to an NHFLE. In the MPLS network 100, packets with the same features, such as destination or service level etc., are classified as one class or FEC. Packets belonging to the same FEC receive the same treatment in the network 100.
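The ILM and FTN maps might be sketched as plain dictionaries, using entries from working LSP 120 in FIG. 3. As a simplifying assumption, each NHFLE is reduced here to an (operation, outgoing label) pair, with next-hop details omitted.

```python
# Hypothetical sketch of the two lookup structures, populated with
# entries from working LSP 120 in FIG. 3. An NHFLE is modeled as a
# (label_operation, outgoing_label) pair; next-hop details omitted.

# FTN map at ingress node G: classifies unlabeled packets by FEC.
ftn_at_G = {
    "FEC1": ("push", "W16"),
}

# ILM at transit node F: maps each incoming label to an NHFLE.
ilm_at_F = {
    "W16": ("swap", "W15"),   # normal forwarding on the working LSP
    "Wm1": ("swap", "W15"),   # merge-label entry, used when traffic
                              # returns from the protection LSP
}

op, label = ftn_at_G["FEC1"]
print(op, label)              # push W16
op, label = ilm_at_F["W16"]
print(op, label)              # swap W15
```

An unlabeled packet consults the FTN map once at the ingress; every labeled hop afterwards is a single ILM lookup, which is what makes label switching cheaper than a full address lookup.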
  • Referring to FIG. 2 and FIG. 3, an example method for storing forwarding information at each node in the network 100 is explained. The forwarding information is generally stored locally at each node.
  • At block 240, forwarding information associated with each working LSP 120, 130 is stored at each node on the respective working LSP 120, 130.
  • At an ingress node, the following forwarding information is stored:
      • (a) a forwarding entry (FECu->Wy) that maps an FEC of a packet (FECu) to a working label (Wy) of the ingress node and a push operation; and
      • (b) a forwarding entry (FECu->Wm) that maps an FEC of a packet (FECu) to a merge label (Wm) and a push operation.
      • FECu represents an FEC of a packet sent by a node outside of the ring, such as devices J and K in service connections 122 and 132 respectively; Wy represents a working label of the ingress node on a working LSP, and Wm represents a merge label of the working LSP.
  • At a transit node, the following forwarding information is stored:
      • (a) a forwarding entry (Wx->Wy) that maps a working label (Wx) of an adjacent node upstream from the transit node on a working LSP 120, 130 with a working label (Wy) of the transit node and a swap operation;
      • (b) a forwarding entry (Wx->Wm) that maps a working label (Wx) of an adjacent node upstream from the transit node on a working LSP 120, 130 with a merge label (Wm) identifying the working LSP 120, 130 and a swap operation; and
      • (c) a forwarding entry (Wm->Wy) that maps a merge label (Wm) of a working LSP 120, 130 to a working label (Wy) of the transit node when traffic is switched from the protection LSP to a working LSP.
  • At an egress node, the following forwarding information is stored:
      • (a) a forwarding entry (Ww) that maps a working label (Ww) of a node upstream from the egress node on a working LSP to a pop label operation; and
      • (b) a forwarding entry (Ww->Wm) that maps an incoming working label (Ww) of a node upstream from the egress node on a working LSP to an outgoing merge label (Wm) and a swap operation.
  • Forwarding information associated with a merge label facilitates protection switching in the event of link or node failure. Note that if the working labels of all nodes on a particular working LSP are the same, and they are the same as the merge label, it is not necessary to store forwarding information for both the merge and working labels.
  • At block 250, forwarding information associated with the protection LSP is stored at each node on the protection LSP 110. In particular, at each node, the following forwarding information is stored:
  • forwarding entry (Px->Py) that maps an incoming protection label (Px) of an adjacent node upstream from the current node on the protection LSP 110 to an outgoing protection label of the current node (Py), and a swap operation.
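Given the protection labels assigned in FIG. 1, the (Px -> Py) entries chain around the ring. The sketch below derives each node's entry from its upstream neighbour; the anticlockwise node order used here is inferred from the clockwise working direction and is an assumption of this illustration.

```python
# Sketch of how the (Px -> Py, swap) entries chain around the ring,
# using the protection labels from FIG. 1.

protection_label = {
    "F": "P1", "G": "P2", "H": "P3", "A": "P4",
    "B": "P5", "C": "P6", "D": "P7", "E": "P8",
}

# Anticlockwise (protection) direction around the ring, inferred from
# the clockwise working direction G -> F -> E -> D -> C -> B -> A.
ring = ["G", "H", "A", "B", "C", "D", "E", "F"]

# At each node, map the upstream neighbour's protection label to the
# node's own protection label with a swap operation.
entries = {}
for i, node in enumerate(ring):
    upstream = ring[i - 1]            # previous hop on the protection LSP
    entries[node] = (protection_label[upstream], protection_label[node])

print(entries["H"])   # ('P2', 'P3'): H swaps G's label P2 for its own P3
print(entries["F"])   # ('P8', 'P1'): F swaps E's label P8 for its own P1
```

Chaining the swaps this way means any node can inject a wrapped packet onto the protection LSP and have it carried around the ring without per-working-LSP state on the protection path.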
  • Using working LSP 120 in FIG. 3 as an example, the following forwarding information is stored at the ingress node G, transit nodes F, E, D, C, and B and egress node A:
  • At ingress node G (see 312 in FIG. 3):
      • (a) FEC1->[W16], which maps an incoming packet with FEC information FEC1 to working label [W16] and a push label operation;
      • (b) FEC1->[Wm1], which maps FEC information FEC1 to merge label [Wm1] and a push operation; and
      • (c) [Wm1]->[W16], which maps merge label [Wm1] to working label [W16] and a swap operation.
  • At transit node F (see 314 in FIG. 3):
      • (a) [W16]->[W15], which maps a packet with incoming working label [W16] to an outgoing working label [W15] and a swap label operation;
      • (b) [W16]->[Wm1], which maps a packet with incoming working label [W16] to an outgoing merge label [Wm1] and a swap label operation; and
      • (c) [Wm1]->[W15], which maps merge label [Wm1] to working label [W15] and a swap operation.
  • At transit node E (see 316 in FIG. 3):
      • (a) [W15]->[W14], which maps a packet with incoming working label [W15] to an outgoing working label [W14] and a swap label operation;
      • (b) [W15]->[Wm1], which maps a packet with incoming working label [W15] to an outgoing merge label [Wm1] and a swap label operation; and
      • (c) [Wm1]->[W14], which maps merge label [Wm1] to working label [W14] and a swap operation.
  • At transit node D (see 318 in FIG. 3):
      • (a) [W14]->[W13], which maps a packet with incoming working label [W14] to an outgoing working label [W13] and a swap label operation;
      • (b) [W14]->[Wm1], which maps a packet with incoming working label [W14] to an outgoing merge label [Wm1] and a swap label operation; and
      • (c) [Wm1]->[W13], which maps merge label [Wm1] to working label [W13] and a swap operation.
  • At transit node C (see 320 in FIG. 3):
      • (a) [W13]->[W12], which maps a packet with incoming working label [W13] to an outgoing working label [W12] and a swap label operation;
      • (b) [W13]->[Wm1], which maps a packet with incoming working label [W13] to an outgoing merge label [Wm1] and a swap label operation during protection switching; and
      • (c) [Wm1]->[W12], which maps merge label [Wm1] to working label [W12] and a swap operation.
  • At transit node B (see 322 in FIG. 3):
      • (a) [W12]->[W11], which maps a packet with incoming working label [W12] to an outgoing working label [W11] and a swap label operation;
      • (b) [W12]->[Wm1], which maps a packet with incoming working label [W12] to an outgoing merge label [Wm1] and a swap label operation during protection switching; and
      • (c) [Wm1]->[W11], which maps merge label [Wm1] to working label [W11] and a swap operation.
  • At egress node A (324 in FIG. 3):
      • (a) [W11], which maps an incoming working label [W11] to a pop operation; and
      • (b) [Wm1], which maps merge label [Wm1] to a pop operation.
  • Using working LSP 130 in FIG. 3 as another example, the following forwarding information is stored at the ingress node F, transit nodes E, D, C, and B and egress node A:
  • At ingress node F (see 314 in FIG. 3):
      • (a) FEC2->[W25], which maps an incoming packet with FEC information FEC2 to working label [W25] and a push operation;
      • (b) FEC2->[Wm2], which maps FEC information FEC2 to merge label [Wm2] and a push operation; and
      • (c) [Wm2]->[W25], which maps merge label [Wm2] to working label [W25] and a swap operation.
  • At transit node E (see 316 in FIG. 3):
      • (a) [W25]->[W24], which maps a packet with incoming working label [W25] to an outgoing working label [W24] and a swap label operation;
      • (b) [W25]->[Wm2], which maps a packet with incoming working label [W25] to an outgoing merge label [Wm2] and a swap label operation during protection switching; and
      • (c) [Wm2]->[W24], which maps merge label [Wm2] to working label [W24] and a swap operation.
  • At transit node D (see 318 in FIG. 3):
      • (a) [W24]->[W23], which maps a packet with incoming working label [W24] to an outgoing working label [W23] and a swap label operation;
      • (b) [W24]->[Wm2], which maps a packet with incoming working label [W24] to an outgoing merge label [Wm2] and a swap label operation during protection switching; and
      • (c) [Wm2]->[W23], which maps merge label [Wm2] to working label [W23] and a swap operation.
  • At transit node C (see 320 in FIG. 3):
      • (a) [W23]->[W22], which maps a packet with incoming working label [W23] to an outgoing working label [W22] and a swap label operation;
      • (b) [W23]->[Wm2], which maps a packet with incoming working label [W23] to an outgoing merge label [Wm2] and a swap label operation during protection switching; and
      • (c) [Wm2]->[W22], which maps merge label [Wm2] to working label [W22] and a swap operation.
  • At transit node B (see 322 in FIG. 3):
      • (a) [W22]->[W21], which maps a packet with incoming working label [W22] to an outgoing working label [W21] and a swap label operation;
      • (b) [W22]->[Wm2], which maps a packet with incoming working label [W22] to an outgoing merge label [Wm2] and a swap label operation during protection switching; and
      • (c) [Wm2]->[W21], which maps merge label [Wm2] to working label [W21] and a swap operation.
  • At egress node A (see 324 in FIG. 3):
      • (a) [W21], which maps an incoming working label [W21] to a pop operation; and
      • (b) [Wm2], which maps merge label [Wm2] to a pop operation.
  • The following forwarding information associated with protection switching is also stored on nodes A to H, as detailed below:
      • At node G (see 312 in FIG. 3): [P1]->[P2] and swap operation;
      • At node H (see 326 in FIG. 3): [P2]->[P3] and swap operation;
      • At node A (see 324 in FIG. 3): [P3]->[P4] and swap operation;
      • At node B (see 322 in FIG. 3): [P4]->[P5] and swap operation;
      • At node C (see 320 in FIG. 3): [P5]->[P6] and swap operation;
      • At node D (see 318 in FIG. 3): [P6]->[P7] and swap operation;
      • At node E (see 316 in FIG. 3): [P7]->[P8] and swap operation; and
      • At node F (see 314 in FIG. 3): [P8]->[P1] and swap operation.
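The per-node entries above can be modelled as simple lookup tables. The sketch below is illustrative only (data structures and variable names are not part of the disclosure); node and label names follow FIG. 3, with working LSP 120 shown together with the shared protection LSP 110:

```python
# Illustrative model of the forwarding information above.
# Each node maps an incoming key (FEC or label) to (outgoing_label, operation).

# Working LSP 120 (clockwise G -> F -> E -> D -> C -> B -> A), per FIG. 3.
# Each node also stores the merge-label entry [Wm1] -> working label.
working_120 = {
    "G": {"FEC1": ("W16", "push"), "Wm1": ("W16", "swap")},
    "F": {"W16": ("W15", "swap"), "Wm1": ("W15", "swap")},
    "E": {"W15": ("W14", "swap"), "Wm1": ("W14", "swap")},
    "D": {"W14": ("W13", "swap"), "Wm1": ("W13", "swap")},
    "C": {"W13": ("W12", "swap"), "Wm1": ("W12", "swap")},
    "B": {"W12": ("W11", "swap"), "Wm1": ("W11", "swap")},
    "A": {"W11": (None, "pop"),  "Wm1": (None, "pop")},
}

# Shared protection LSP 110 (counter-clockwise); one swap entry per node.
protection_110 = {
    "G": ("P1", "P2"), "H": ("P2", "P3"), "A": ("P3", "P4"),
    "B": ("P4", "P5"), "C": ("P5", "P6"), "D": ("P6", "P7"),
    "E": ("P7", "P8"), "F": ("P8", "P1"),
}
```

Storing the merge-label entry at every node is what allows any node on the ring to act as the "second node" and steer a wrapped packet back onto the working LSP.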
    Packet Forwarding on a Working LSP
  • An example process for packet forwarding on a working LSP 120, 130 will now be explained with reference to FIG. 4.
      • At block 410, an ingress node on a working LSP 120, 130 receives a packet from a source node outside of the ring 100.
      • At block 420, the ingress node determines an FEC of the packet, and then searches for forwarding information (FECu->Wy) that associates the FEC with a working label (Wy) and a push operation. The ingress node performs the push operation to add the working label to the packet.
      • At block 430, the ingress node forwards the packet on the working LSP 120, 130 to a downstream node.
      • At block 440, a transit node on the working LSP 120, 130 receives the packet, and searches for forwarding information that associates a working label on the packet (incoming label Wx) with a new working label (outgoing label Wy). The transit node then carries out a swap operation to replace the incoming label with the outgoing label, after which the packet is forwarded to a downstream node on the working LSP 120, 130.
      • At block 450, an egress node on the working LSP 120, 130 receives the packet, and searches for forwarding information that associates the working label on the packet (incoming label Ww) with a pop operation. The egress node then proceeds to remove the working label from the packet.
      • At block 460, the egress node forwards the packet to its destination, which may be the egress node itself or a node outside of the ring 100.
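Blocks 410 to 460 amount to a push at the ingress node, a chain of swaps at the transit nodes, and a pop at the egress node. A minimal sketch, with hypothetical helper names and label values following working LSP 120 in FIG. 3:

```python
# Sketch of blocks 410-460: push at ingress, swap at each transit, pop at egress.
def forward_on_working_lsp(fec, ingress_map, transit_maps, egress_labels):
    """Return the sequence of working labels the packet carries between nodes."""
    label = ingress_map[fec]            # block 420: FEC -> working label, push
    trace = [label]
    for swap_map in transit_maps:       # block 440: swap at each transit node
        label = swap_map[label]
        trace.append(label)
    assert label in egress_labels       # block 450: egress recognizes and pops
    return trace

# Working LSP 120 (G -> F -> E -> D -> C -> B -> A), labels per FIG. 3.
trace = forward_on_working_lsp(
    "FEC1",
    {"FEC1": "W16"},
    [{"W16": "W15"}, {"W15": "W14"}, {"W14": "W13"},
     {"W13": "W12"}, {"W12": "W11"}],
    {"W11"},
)
# trace is the label sequence seen on packets 342 to 352 in FIG. 3
```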
  • Using working LSP 120 in FIG. 3 as an example, the process for forwarding packets in a clockwise direction from ingress node G to egress node A is as follows:
  • At ingress node G:
      • Upon receiving a packet 340 from source device J, ingress node G searches for forwarding information 312 associated with the FEC information (FEC1) on the packet 340. According to forwarding information FEC1->[W16], node G pushes working label [W16] onto the packet 342 and forwards it to downstream node F on working LSP 120.
  • At transit node F:
      • Transit node F searches for forwarding information 314 based on working label [W16] on the packet 342. According to entry [W16]->[W15], transit node F swaps working label [W16] with new label [W15] and forwards the packet 344 to downstream node E on the working LSP.
  • At transit nodes E, D, C and B:
      • Similar swap operations are performed at transit nodes E, D, C and B, in which case the working label is changed from [W15] to [W14] at node E, to [W13] at node D, to [W12] at node C, and finally to [W11] at node B. See forwarding information 316, 318, 320, 322 and outgoing packets 346, 348, 350, 352 in FIG. 3.
  • At egress node A:
      • Upon receiving the packet 352 from transit node B, egress node A searches for forwarding information 324 based on working label [W11], and removes the working label [W11] from the packet 352. Egress node A then forwards the packet 354 to destination device K, which is outside of the ring.
  • Packets (not shown in FIG. 3) may be forwarded on working LSP 130 in a similar manner using forwarding information stored at ingress node F, transit nodes E to B and egress node A.
  • Packet Forwarding on a Protection LSP
  • In the event of a link or node failure on a working LSP 120, 130, packets are transmitted on the protection LSP 110 in an opposite direction and therefore away from the failure. An example method of protection switching will now be explained with reference to FIG. 5 and FIG. 6.
      • At block 510, the first node detects that it is disconnected from an adjacent node downstream to the first node on a working LSP 120, 130. The disconnection may be due to the failure of the adjacent node, or a failure of a link between the first node and the adjacent node.
      • The first node then activates protection switching such that any packets arriving on the working LSP 120, 130 and intended for the disconnected adjacent node are redirected to the protection LSP 110. The first node also blocks any packet received on the protection LSP 110 from the disconnected adjacent node, such as by discarding such packets until the first node is reconnected to the adjacent node.
      • At block 520, the second node also detects that it is disconnected from an adjacent node upstream from the second node on a working LSP 120, 130. Again, the disconnection may be due to the failure of the adjacent node, or a failure of a link between the second node and the adjacent node.
      • The second node updates its local forwarding information such that any packets received on the protection LSP 110 are redirected onto the working LSP 120, 130. More specifically, a label operation associated with an incoming protection label is updated from ‘swap’ to ‘pop’ to remove the protection label of an incoming packet arriving on the protection LSP 110. This ensures the packet is not forwarded to the disconnected adjacent node on the protection LSP 110, and the direction of the packet transmission is changed from the protection LSP 110 to the working LSP 120, 130.
      • At block 530, the first node receives a packet that is intended for transmission towards the disconnected adjacent node on the working LSP 120, 130.
      • At block 540, the first node adds a protection label and a merge label. The merge label uniquely identifies the working LSP 120, 130 on which the packet is received and is known to the nodes on the working LSP 120, 130 identified by the merge label. More specifically, if the first node is also an ingress node, the first node:
        • analyses a header of the packet to determine its FEC,
        • determines the merge label from forwarding information (FEC->Wm) that associates the FEC with the merge label Wm and a push operation, and
        • performs a push operation to add the merge label to the packet.
      • If the first node is a transit node, the first node:
        • determines the working label (Wx) of the received packet,
        • determines the merge label (Wm) from forwarding information (Wx->Wm) that associates the working label (Wx) with the merge label (Wm) and a swap operation; and
        • performs a swap operation to replace the working label (Wx) with the merge label (Wm).
      • At block 550, the first node transmits the packet to a downstream node on the protection LSP 110, which operates in a direction opposite to that of the working LSP and therefore away from the failure on the working LSP 120, 130.
      • At block 560, one or more nodes on the protection LSP 110 receive the packet, perform a swap operation on the incoming protection label of the packet according to stored forwarding information and forward the packet to a downstream node on the protection LSP 110.
      • At block 610 in FIG. 6, the second node receives, on the protection LSP 110, the packet with the merge label and protection label.
      • At block 620, the second node removes the protection label from the packet. More specifically, the second node searches for forwarding information associated with the protection label on the packet, and based on the pop operation updated at block 520 in FIG. 5, removes the protection label from the packet to reveal the inner merge label.
      • At block 630, the second node replaces the merge label with a working label. In particular, the second node searches for forwarding information associated with the merge label, and based on the swap operation associated with the merge label, replaces the merge label with the working label.
      • At block 640 in FIG. 6, the second node transmits the packet with the working label on the working LSP 120, 130.
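The wrap in FIG. 5 and FIG. 6 can be sketched with the MPLS label stack modelled as a list (top of stack first). This is a minimal sketch, assuming the forwarding entries described above; function names are illustrative and label values follow FIG. 7:

```python
# Sketch of blocks 540-640, with the label stack as a list (top first).

def first_node_wrap(stack, merge_label, protection_label):
    """Block 540 (transit case): replace the working label with the merge
    label, then push the protection label on top."""
    stack = [merge_label] + stack[1:]   # swap working -> merge
    return [protection_label] + stack   # push protection label

def protection_swap(stack, swap_map):
    """Block 560: swap the topmost (protection) label at a transit node
    on the protection LSP."""
    return [swap_map[stack[0]]] + stack[1:]

def second_node_unwrap(stack, merge_to_working):
    """Blocks 620-630: pop the protection label, then swap the revealed
    merge label for the local working label."""
    stack = stack[1:]                   # pop protection label (block 620)
    return [merge_to_working[stack[0]]] + stack[1:]  # swap merge -> working

# A packet arrives at the first node with working label [W16]:
stack = first_node_wrap(["W16"], "Wm1", "P1")       # stack: [P1, Wm1]
stack = protection_swap(stack, {"P1": "P2"})        # stack: [P2, Wm1]
stack = second_node_unwrap(stack, {"Wm1": "W13"})   # stack: [W13]
```

Note that the merge label travels untouched beneath the protection label, which is why a single protection LSP can serve multiple working LSPs: only the second node inspects the merge label.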
  • Using the above process, the packets are ‘wrapped’ around a failed link or node during protection switching. Since the failure is detected locally at the first and second nodes, the example method described with reference to FIG. 5 and FIG. 6 facilitates recovery within a desired 50 ms.
  • EXAMPLES
  • FIG. 7 shows an example of how packets are transmitted in the event of a failure on the working LSP 120 in the network in FIG. 3 according to the example method in FIG. 5 and FIG. 6.
  • In this case, transit node F (“first node”) and node D (“second node”) are disconnected from an adjacent node E, which is downstream to node F but upstream from the node D on the working LSP 120.
  • At node F:
      • Node F detects that it is not connected to an adjacent node E. In the example shown in FIG. 7, node E is shown to have failed but the disconnection may also be caused by a failure of the link connecting nodes E and F. Node F proceeds to activate protection switching and blocks any packets received on the protection LSP from node E.
  • At node G:
      • Ingress node G receives a packet 710 from device J that is intended for transmission to device K. Based on the FEC information 312 of the packet, node G adds a working label [W16] to the packet and forwards the packet 712 to node F on the working LSP 120.
  • At node F
      • Node F receives the packet 712, and searches for forwarding information 314 associated with the working label [W16] on the packet. Since protection switching is activated due to the disconnection from node E, node F swaps working label [W16] with merge label [Wm1] and pushes protection label [P1] onto the packet 714.
  • At node G:
      • Node G receives and processes the packet 714 based on its topmost label [P1]. Based on forwarding information 312 associated with protection label [P1], node G swaps the protection label [P1] with protection label [P2] and sends the packet 716 to node H.
  • At nodes H, A, B and C:
      • Similar swap operations are performed at nodes H, A, B and C. More specifically, node H replaces [P2] in packet 716 with [P3] based on forwarding information 326. Node A replaces [P3] in packet 718 with [P4] based on forwarding information 324. Node B replaces [P4] in packet 720 with [P5] based on forwarding information 322. Node C replaces [P5] in packet 722 with [P6] based on forwarding information 320.
  • At node D:
      • Node D receives the packet 724 with protection label [P6] and merge label [Wm1]. Based on forwarding information 318, node D removes protection label [P6] from the packet 724, and replaces the merge label [Wm1] with a working label [W13]. The packet 726 is then transmitted on the working LSP 120 to a downstream node C.
  • At nodes C and B:
      • Node C receives the packet 726 and replaces the working label [W13] with [W12] based on its forwarding information 320. Similarly, node B receives the packet 728 and replaces the working label [W12] with [W11] based on its forwarding information 322.
  • At node A:
      • Egress node A receives the packet 730 on the working LSP 120. Based on forwarding information 324, node A removes working label [W11] and forwards the packet 732 to device K.
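The label transitions in the FIG. 7 walkthrough above can be checked end to end with a short simulation (illustrative only; the swap tables restate the forwarding information 312 to 326):

```python
# End-to-end label trace for the FIG. 7 example (node E failed).
# The label stack is a list, top of stack first.
protection_swaps = {"G": {"P1": "P2"}, "H": {"P2": "P3"},
                    "A": {"P3": "P4"}, "B": {"P4": "P5"}, "C": {"P5": "P6"}}
working_swaps = {"C": {"W13": "W12"}, "B": {"W12": "W11"}}

stack = ["W16"]                         # node G pushed [W16] (packet 712)
stack = ["P1", "Wm1"]                   # node F: swap to [Wm1], push [P1] (714)
for node in ["G", "H", "A", "B", "C"]:  # protection LSP swaps (packets 716-724)
    stack = [protection_swaps[node][stack[0]]] + stack[1:]
stack = stack[1:]                       # node D pops [P6] (block 620)
stack = [{"Wm1": "W13"}[stack[0]]]      # node D swaps [Wm1] -> [W13] (726)
for node in ["C", "B"]:                 # back on working LSP 120 (728, 730)
    stack = [working_swaps[node][stack[0]]]
# node A pops the remaining working label and delivers the packet to device K
```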
  • FIG. 8 shows another example of how packets are transmitted in the event of a link or node failure on the working LSP 130 for service connection 132 in the network in FIG. 3 according to the example method in FIG. 5 and FIG. 6. This example illustrates, inter alia, how the protection LSP 110 for working LSP 120 in FIG. 7 is also used for working LSP 130 in FIG. 8.
  • In this case, ingress node F (“first node”) and node D (“second node”) are disconnected from an adjacent node E, which is downstream of node F but upstream of node D on the working LSP 130.
  • At node F (“first node”):
      • Node F detects that it is not connected to an adjacent node E and proceeds to activate protection switching and block any packets received on the protection LSP 110 from node E.
      • Node F, being also an ingress node, receives a packet 810 with FEC2 from device U that is intended for transmission to device K. Since protection switching is activated, node F adds a merge label [Wm2] and protection label [P1] to the packet 812, which is then sent to a node G downstream to node F on the protection LSP 110.
  • At node G:
      • Based on forwarding information 312 associated with protection label [P1], node G swaps the protection label [P1] with protection label [P2] and forwards the packet 816 on.
  • At nodes H, A, B and C:
      • Similar swap operations are performed at nodes H, A, B and C. Node H replaces [P2] in packet 816 with [P3] based on forwarding information 326. Node A replaces [P3] in packet 818 with [P4] based on forwarding information 324. Node B replaces [P4] in packet 820 with [P5] based on forwarding information 322. Node C replaces [P5] in packet 822 with [P6] based on forwarding information 320.
  • At node D (“second node”):
      • Node D receives the packet 824 with protection label [P6] and merge label [Wm2]. Based on forwarding information 318, node D removes protection label [P6] from the packet 824, and replaces the merge label [Wm2] with a working label [W23]. The packet 826 is then transmitted on the working LSP 130 to a downstream node C.
  • At nodes C and B:
      • Node C receives the packet 826 and replaces the working label [W23] with [W22] based on its forwarding information 320. Similarly, node B receives the packet 828 and replaces the working label [W22] with [W21] based on its forwarding information 322.
  • At node A:
      • Egress node A receives the packet 830 on the working LSP 130. Based on forwarding information 324, node A removes working label [W21] and forwards the packet 832 to device K.
  • In another example in FIG. 9, the same working label [W1] is used by all nodes on the working LSP 120 in FIG. 7. In this case, the working label [W1] also uniquely identifies the working LSP 120 and therefore is also used as a merge label.
  • In this case, node E and a link between nodes C and D have failed. Node F (“first node”) detects its disconnection from adjacent node E, while node C (“second node”) detects its disconnection from adjacent node D.
  • At node F:
      • Node F detects that it is not connected to an adjacent node E and proceeds to activate protection switching and block any packets received on the protection LSP 110 from node E.
  • At node G:
      • Ingress node G receives a packet 910 from device J that is intended for transmission to device K. Based on the FEC information of the packet, node G adds a working label [W1] to the packet and forwards the packet 912 to node F on the working LSP 120.
  • At node F
      • Node F receives the packet 912, and searches for forwarding information 944 associated with the working label [W1] on the packet. Since protection switching is activated due to disconnection with node E, node F also adds protection label [P1] to the packet 914.
  • At node G:
      • Based on forwarding information 942 associated with protection label [P1], node G swaps the protection label [P1] with [P2].
  • At nodes H, A and B:
      • Similar swap operations are performed at nodes H, A and B. More specifically, node H replaces [P2] in packet 916 with [P3] based on forwarding information 956. Node A replaces [P3] in packet 918 with [P4] based on forwarding information 954. Node B replaces [P4] in packet 920 with [P5] based on forwarding information 952.
  • At node C:
      • Node C receives the packet 922 with protection label [P5] and working label [W1]. Based on forwarding information 950, node C removes protection label [P5] from the packet 922, and swaps the working label [W1] with [W1], which is unchanged because the merge label is the same as the working label. The packet 924 is then transmitted on the working LSP 120 to a downstream node B.
  • At node B:
      • Node B receives the packet 924 and swaps the working label [W1] with [W1] based on its forwarding information 952; the label is unchanged because all nodes on the working LSP 120 share the same label.
  • At node A:
      • Egress node A receives the packet 926 on the working LSP 120. Based on forwarding information 954, node A removes working label [W1] and forwards the packet 930 to device K.
    Network Device
  • The above examples can be implemented by hardware, software or firmware or a combination thereof. Referring to FIG. 10, an example structure of a network device capable of acting as a node (such as A to H, J, K and U) in the MPLS network 100 is shown. The example network device 150 includes a processor 152, a memory 154 and a network interface device 158 that communicate with each other via bus 156. Forwarding information 154 a is stored in the memory 154.
  • The processor 152 implements functional units in the form of receiving unit 152 a, processing unit 152 b and transmission unit 152 c. Information may be transmitted and received via the network interface device 158, which may include one or more logical or physical ports that connect the network device 150 to another network device.
  • In case of a network device capable of acting as a “first node”:
      • The receiving unit 152 a is to receive, on a working label switch path, a packet intended for transmission towards a disconnected adjacent node on the working label switch path.
      • The processing unit 152 b is to add, to the received packet, a protection label and a merge label based on the forwarding information 154 a in the memory 154. The merge label uniquely identifies the working label switch path on which the packet is received and is known to nodes on the working label switch path identified by the merge label.
      • The transmitting unit 152 c is to transmit the packet on the protection label switch path.
  • In case of a network device capable of acting as a “second node”:
      • The receiving unit 152 a is to receive, on the protection label switch path, a packet with a protection label and a merge label. The merge label uniquely identifies a working label switch path in the network and is known to nodes on the working label switch path identified by the merge label.
      • The processing unit 152 b is to remove the protection label from the packet and replace the merge label with a working label based on the forwarding information 154 a in the memory 154, wherein the working label is associated with the working label switch path identified by the merge label.
      • The transmitting unit 152 c is to transmit the packet on the working label switch path.
  • For example, the various methods, processes and functional units described herein may be implemented by the processor 152. The term ‘processor’ is to be interpreted broadly to include a CPU, processing unit, ASIC, logic unit, or programmable gate array etc. The processes, methods and functional units may all be performed by a single processor 152 or split between several processors (not shown in FIG. 10 for simplicity); reference in this disclosure or the claims to a ‘processor’ should thus be interpreted to mean ‘one or more processors’.
  • Although one network interface device 158 is shown in FIG. 10, processes performed by the network interface device 158 may be split between several network interface devices. As such, reference in this disclosure to a ‘network interface device’ should be interpreted to mean ‘one or more network interface devices’.
  • The processes, methods and functional units, which may include one or more of the receiving unit 152 a, the processing unit 152 b and the transmission unit 152 c, may be implemented as machine-readable instructions 154 b executable by one or more processors, hardware logic circuitry of the one or more processors or a combination thereof. In the example in FIG. 10, the machine-readable instructions 154 b are stored in the memory 154. One or more of the receiving unit 152 a, processing unit 152 b and transmission unit 152 c may be implemented as hardware or a combination of hardware and software.
  • Further, the processes, methods and functional units described in this disclosure may be implemented in the form of a computer software product. The computer software product is stored in a storage medium and comprises a plurality of instructions for making a computer device (which can be a personal computer, a server or a network device such as a router, switch, bridge, host, access point etc.) to implement the methods recited in the examples of the present disclosure.
  • The figures are only illustrations of an example, wherein the units or procedure shown in the figures are not necessarily essential for implementing the present disclosure. Those skilled in the art will understand that the units in the device in the example can be arranged in the device in the examples as described, or can be alternatively located in one or more devices different from that in the examples. The units in the examples described can be combined into one module or further divided into a plurality of sub-units.
  • Although the flowcharts described show a specific order of execution, the order of execution may differ from that which is depicted. For example, the order of execution of two or more blocks may be changed relative to the order shown. Also, two or more blocks shown in succession may be executed concurrently or with partial concurrence. All such variations are within the scope of the present disclosure.
  • It will be appreciated that numerous variations and/or modifications may be made to the processes, methods and functional units as shown in the examples without departing from the scope of the disclosure as broadly described. The examples are, therefore, to be considered in all respects as illustrative and not restrictive.

Claims (13)

1. A method for protection switching in a multi-protocol label switching (MPLS) ring network with a protection label switch path that is shared by multiple working label switch paths, the method comprising:
receiving, on a working label switch path, a packet intended for transmission towards a disconnected adjacent node on the working label switch path;
adding, to the received packet, a protection label and a merge label,
wherein the merge label uniquely identifies the working label switch path on which the packet is received and is known to nodes on the working label switch path identified by the merge label; and
transmitting the packet on the protection label switch path.
2. The method of claim 1, wherein adding the merge label to the received packet further comprises:
determining a forwarding equivalence class of the packet;
determining the merge label from forwarding information that associates the forwarding equivalence class with the merge label and a push label operation; and
performing a push label operation to add the merge label to the received packet.
3. The method of claim 1, wherein adding the merge label to the received packet further comprises:
determining a working label of the received packet;
determining the merge label from forwarding information that associates the working label with the merge label and a swap label operation; and
performing a swap label operation to replace the working label of the packet with the merge label.
4. The method of claim 3, wherein the merge label is the same as the working label.
5. The method of claim 1, further comprising, prior to receiving the packet,
detecting disconnection from the adjacent node on the working label switch path; and
upon the detection, blocking any packet received on the protection label switch path from the adjacent node.
6. The method of claim 5, wherein the disconnection is due to a failure of the adjacent node or a failure of a link leading to the adjacent node.
7. A network device for protection switching in a multi-protocol label switching (MPLS) ring network with a protection label switch path shared by multiple working label switch paths, the network device comprising:
a memory to store forwarding information;
a receiving unit to receive, on a working label switch path, a packet intended for transmission towards a disconnected adjacent node on the working label switch path;
a processing unit to add, to the received packet, a protection label and a merge label based on the forwarding information,
wherein the merge label uniquely identifies the working label switch path on which the packet is received and is known to nodes on the working label switch path identified by the merge label; and
a transmitting unit to transmit the packet on the protection label switch path.
8. A method for protection switching in a multi-protocol label switching (MPLS) ring network having a protection label switch path shared by multiple working label switch paths, the method comprising:
receiving, on the protection label switch path, a packet with a protection label and a merge label, wherein the merge label uniquely identifies a working label switch path in the network and is known to nodes on the working label switch path identified by the merge label;
removing the protection label from the packet and replacing the merge label with a working label, wherein the working label is associated with the working label switch path uniquely identified by the merge label; and
transmitting the packet on the working label switch path.
9. The method of claim 8, wherein replacing the merge label with the working label further comprises:
determining the working label from forwarding information that associates the merge label with the working label and a swap label operation; and
performing a swap label operation to replace the merge label with the working label.
10. The method of claim 8, wherein the merge label is the same as the working label, and the working label is also used by other network devices in the network.
11. The method of claim 8, further comprising:
detecting disconnection from the adjacent node on the working label switch path; and
upon the detection, updating forwarding information to associate the protection label of the packet with a pop operation.
12. The method of claim 11, wherein the disconnection is due to a failure of the adjacent node or a failure of a link leading to the adjacent node.
13. A network device for protection switching in a multi-protocol label switching (MPLS) ring network having a protection label switch path shared by multiple working label switch paths, the network device comprising:
a memory to store forwarding information;
a receiving unit to receive, on the protection label switch path, a packet with a protection label and a merge label, wherein the merge label uniquely identifies a working label switch path in the network and is known to nodes on the working label switch path identified by the merge label;
a processing unit to remove the protection label from the packet and replace the merge label with a working label based on the forwarding information, wherein the working label is associated with the working label switch path identified by the merge label; and
a transmitting unit to transmit the packet on the working label switch path.
US13/464,229 2011-05-06 2012-05-04 Protection switching in multiprotocol label switching (MPLS) networks Active 2032-09-23 US8989195B2 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN201110117304.2A CN102201985B (en) 2011-05-06 2011-05-06 Ring protection switching method adopting multi-protocol label switching transport profile (MPLS TP) and node
CN201110117304.2 2011-05-06
CN201110117304 2011-05-06

Publications (2)

Publication Number Publication Date
US20120281705A1 true US20120281705A1 (en) 2012-11-08
US8989195B2 US8989195B2 (en) 2015-03-24

Family

ID=44662386

Family Applications (1)

Application Number Title Priority Date Filing Date
US13/464,229 Active 2032-09-23 US8989195B2 (en) 2011-05-06 2012-05-04 Protection switching in multiprotocol label switching (MPLS) networks

Country Status (2)

Country Link
US (1) US8989195B2 (en)
CN (1) CN102201985B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102299865B (en) * 2011-09-30 2014-05-14 杭州华三通信技术有限公司 Ring protection switching method of MPLS TP (multi-protocol label switching transport profile) and nodes
CN102315972B (en) * 2011-10-14 2013-12-25 杭州华三通信技术有限公司 Method for realizing label switching path (LSP) switching and device
EP2779530B1 (en) * 2012-06-20 2017-04-26 Huawei Technologies Co., Ltd. Method, system, and node device for establishing recovery path
CN103916300B (en) * 2012-12-31 2017-07-14 新华三技术有限公司 A kind of MPLS ring net protection methods and device
CN103856404B (en) * 2013-12-24 2017-06-27 华为技术有限公司 Data transmission method, device and system in a kind of looped network
CN106330701B (en) * 2015-07-01 2020-09-25 深圳市中兴通讯技术服务有限责任公司 Rapid rerouting method and device for ring network
CN112910772B (en) * 2019-11-19 2023-01-13 中国移动通信有限公司研究院 Message forwarding method and device based on segmented routing

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030043792A1 (en) * 2001-08-31 2003-03-06 Carpini Walter Joseph Label switched communication network, a method of conditioning the network and a method of data transmission
US20030081589A1 (en) * 2001-10-19 2003-05-01 Marian Constantin Viorel Label switched routing system and method
US20030108029A1 (en) * 2001-12-12 2003-06-12 Behnam Behzadi Method and system for providing failure protection in a ring network that utilizes label switching
US20040062197A1 (en) * 2000-02-29 2004-04-01 Siemens Aktiengesellschaft Method for the providing an equivalent circuit for transmission devices in ring architectures that route MPLS packets
US20050097219A1 (en) * 2003-10-07 2005-05-05 Cisco Technology, Inc. Enhanced switchover for MPLS fast reroute
US20050201273A1 (en) * 2004-03-09 2005-09-15 Nec Corporation Label-switched path network with alternate routing control
US20070159961A1 (en) * 2005-11-17 2007-07-12 Huawei Technologies Co., Ltd. Method and Devices for Implementing Group Protection in MPLS Network
US20070274207A1 (en) * 2005-04-04 2007-11-29 Huawei Technologies Co., Ltd. Method for implementing network protection combining network element dual homing and ring network protection
US20070280251A1 (en) * 2004-09-27 2007-12-06 Huawei Technologies Co., Ltd. Ring Network And A Method For Implementing The Service Thereof
US7339887B2 (en) * 2003-05-06 2008-03-04 Overture Networks, Inc. Multipoint protected switching ring
US20080170496A1 (en) * 2007-01-15 2008-07-17 Fujitsu Limited Management of protection path bandwidth and changing of path bandwidth
US20080304407A1 (en) * 2004-09-16 2008-12-11 Alcatel Telecom Israel Efficient Protection Mechanisms For Protecting Multicast Traffic in a Ring Topology Network Utilizing Label Switching Protocols
US20090040922A1 (en) * 2004-05-06 2009-02-12 Umansky Igor Efficient protection mechanisms in a ring topology network utilizing label switching protocols
US20090238084A1 (en) * 2008-03-18 2009-09-24 Cisco Technology, Inc. Network monitoring using a proxy
US20100189115A1 (en) * 2007-10-25 2010-07-29 Fujitsu Limited Edge node redundant system in label switching network
US20100188970A1 (en) * 2009-01-28 2010-07-29 Fujitsu Limited Node apparatus, ring network, and protection path bandwidth control method
US20110205885A1 (en) * 2010-02-22 2011-08-25 Telefonaktiebolaget L M Ericsson Optimized Fast Re-Route In MPLS Ring Topologies

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100359860C (en) * 2004-09-27 2008-01-02 华为技术有限公司 Multiprotocol label switching network protection switching method
CN101146115B (en) * 2005-04-15 2011-08-10 华为技术有限公司 Implementation method for bidirectional protective switching of multi-protocol label switching
CN101159690B (en) * 2007-11-19 2010-10-27 杭州华三通信技术有限公司 Multi-protocol label switch forwarding method, device and label switching path management module

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140348028A1 (en) * 2012-01-17 2014-11-27 Huawei Technologies Co., Ltd. Method for creating ring network label switched path, related device, and communications system
EP2797259A4 (en) * 2012-01-17 2014-12-31 Huawei Tech Co Ltd Ring network label switch path creating method, related device and communication system
US9350620B2 (en) * 2012-01-17 2016-05-24 Huawei Technologies Co., Ltd. Method for creating ring network label switched path, related device, and communications system
CN103841017A (en) * 2012-11-22 2014-06-04 华为技术有限公司 Method and device for automatically distribute label in ring network protection
US20140211664A1 (en) * 2012-11-22 2014-07-31 Huawei Technologies Co., Ltd. Method and device for label automatic allocation in ring network protection
US9391883B2 (en) * 2012-11-22 2016-07-12 Huawei Technologies Co., Ltd. Method and device for label automatic allocation in ring network protection
KR101750844B1 (en) * 2012-11-22 2017-06-26 후아웨이 테크놀러지 컴퍼니 리미티드 Method and device for automatically distributing labels in ring network protection
EP2955877A4 (en) * 2013-02-05 2016-08-10 China Mobile Comm Corp Point to multi-point ring network protection method and device
US10075328B2 (en) 2013-02-05 2018-09-11 China Mobile Communications Corporation Point-to-multipoint ring network protection method and device
CN112543147A (en) * 2019-09-20 2021-03-23 瞻博网络公司 PING/traceroute for static label switched path and segment routing traffic engineering tunnels

Also Published As

Publication number Publication date
CN102201985A (en) 2011-09-28
US8989195B2 (en) 2015-03-24
CN102201985B (en) 2014-02-05

Similar Documents

Publication Publication Date Title
US8989195B2 (en) Protection switching in multiprotocol label switching (MPLS) networks
JP7152533B2 (en) Method, apparatus, and system for handling transmission path failures
US7957306B2 (en) Providing reachability information in a routing domain of an external destination address in a data communications network
EP1844579B1 (en) System and methods for network path detection
EP1609279B1 (en) Method for recursive bgp route updates in mpls networks
CN108702326B (en) Method, device and non-transitory machine-readable medium for detecting SDN control plane loops
WO2019120042A1 (en) Method and node for transmitting packet in network
US8804501B2 (en) Link failure recovery method and apparatus
EP3151485A1 (en) Egress node protection in evpn all-active topology
EP3148127A1 (en) Egress protection for bum traffic with link failures in evpn
AU2011306508B2 (en) Method and apparatus to improve LDP convergence using hierarchical label stacking
JP4389221B2 (en) Network, router device, switching method used therefor, program therefor, and recording medium
US10397044B2 (en) Network function virtualization (“NFV”) based communications network resilience
US7447149B1 (en) Virtual interface with active and backup physical interfaces
US20110164508A1 (en) Network relay apparatus, network system, and control method of network relay apparatus
CN1969492A (en) Dynamic forwarding adjacency
EP2676390A1 (en) Automated transitioning of networks between protocols
US7457248B1 (en) Graceful shutdown of network resources in data networks
EP3874722A1 (en) Coordinated offloaded oam recording within packets
CN102299865B (en) Ring protection switching method of MPLS TP (multi-protocol label switching transport profile) and nodes
CN104717143B (en) For returning the method and apparatus of scene muticast data transmission more
WO2015192496A1 (en) Method and device for processing mpls load sharing
KR20210037086A (en) network switching administrating method utilizing virtual anycast node
US20070076706A1 (en) Fast reroute in a multiprotocol label switching network
US9794168B2 (en) Scalable continuity test for a group of communication paths

Legal Events

Date Code Title Description
AS Assignment

Owner name: HANGZHOU H3C TECHNOLOGIES CO., LTD., CHINA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:YE, JINRONG;REEL/FRAME:029060/0599

Effective date: 20120509

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP, TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:H3C TECHNOLOGIES CO., LTD.;HANGZHOU H3C TECHNOLOGIES CO., LTD.;REEL/FRAME:039767/0263

Effective date: 20160501

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8