EP2761829B1 - Point-to-point based multicast label distribution protocol local protection solution - Google Patents

Point-to-point based multicast label distribution protocol local protection solution

Info

Publication number
EP2761829B1
Authority
EP
European Patent Office
Prior art keywords
node
backup
lsp
nodes
merge point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
EP12778894.1A
Other languages
German (de)
French (fr)
Other versions
EP2761829A2 (en)
Inventor
Qianglin Quintin ZHAO
Ying Chen
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Publication of EP2761829A2 publication Critical patent/EP2761829A2/en
Application granted granted Critical
Publication of EP2761829B1 publication Critical patent/EP2761829B1/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00 Routing or path finding of packets in data switching networks
    • H04L45/50 Routing or path finding of packets in data switching networks using label swapping, e.g. multi-protocol label switch [MPLS]
    • H04L45/507 Label distribution
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00 Routing or path finding of packets in data switching networks
    • H04L45/16 Multipoint routing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00 Routing or path finding of packets in data switching networks
    • H04L45/22 Alternate routing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00 Routing or path finding of packets in data switching networks
    • H04L45/28 Routing or path finding of packets in data switching networks using route fault recovery

Definitions

  • LDP Label Distribution Protocol
  • P2P Point-to-Point
  • P2MP Point-to-MultiPoint
  • MP2P Multipoint-to-Point
  • LSP Label Switched Path
  • IETF Internet Engineering Task Force
  • RFC Request for Comments
  • RSVP-TE Resource Reservation Protocol-Traffic Engineering
  • TE Traffic Engineered
  • MPLS Multi-Protocol Label Switching
  • GMPLS Generalized MPLS
  • IP Internet Protocol
  • LER Label Edge Router
  • MBB Make Before Break
  • PHP Penultimate Hop Popping
  • ROM Read Only Memory
  • RAM Random Access Memory
  • I/O Input/Output

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Description

    BACKGROUND
  • Label Distribution Protocol (LDP) defines mechanisms for setting up Point-to-Point (P2P) and Multipoint-to-Point (MP2P) Label Switched Paths (LSPs) in a network. The set of LDP extensions for setting up P2MP or MP2MP LSPs may be referred to as multipoint LDP (mLDP), which may be specified in Internet Engineering Task Force (IETF) Request for Comments (RFC) 6388, titled "Label Distribution Protocol Extensions for Point-to-Multipoint and Multipoint-to-Multipoint Label Switched Paths," which is hereby incorporated by reference.
  • Service providers continue to deploy real-time multicast applications using mLDP across Multiprotocol Label Switching (MPLS) networks. There is a clear need to protect these real-time applications and to provide the shortest switching times in the event of failure. Protecting services may include the establishment of backup paths for link or node failure. Under such practices, traffic should be switched onto the backup path once a failure has been detected on the primary path. Current solutions suffer from several known problems, particularly with respect to efficiently timing the establishment and disestablishment of backup paths. A rapidly established backup path is preferable for data continuity. If the backup path is disestablished too quickly, the receiver experiences packet loss; if too slowly, packet duplication results.
  • CHEN, H. (Huawei Technologies) and SO, N. (Verizon Inc.), "Extensions to RSVP-TE for P2MP LSP Ingress Local Protection", draft-chen-mpls-p2mp-ingress-protection-03.txt, IETF Internet-Draft, 11 July 2011, pages 1-11, XP015077163, describes extensions to Resource Reservation Protocol-Traffic Engineering (RSVP-TE) for locally protecting the ingress node of a Traffic Engineered (TE) Point-to-MultiPoint (P2MP) Label Switched Path (LSP) in a Multi-Protocol Label Switching (MPLS) and Generalized MPLS (GMPLS) network.
  • KINI, S. and NARAYANAN, S. (Ericsson), "MPLS Fast Re-route using extensions to LDP", draft-kini-mpls-frr-ldp-01.txt, IETF Internet-Draft, 11 July 2011, pages 1-11, XP015077288, discloses that LDP is widely deployed in MPLS networks to signal LSPs. Since LDP establishes LSPs along IGP-routed paths, its failure recovery is gated by the IGP's re-convergence. Mechanisms such as IP FRR and RSVP-TE based FRR have been used to provide faster re-route for LDP LSPs.
  • BRYANT, S., OSBORNE, E. (Cisco), SPRECHER, N. (Nokia Siemens Networks), FULIGNOLI, A., et al., "MPLS-TP Linear Protection", draft-ietf-mpls-tp-linear-protection-09.txt, IETF Internet-Draft, 4 August 2011, pages 1-42, XP015077572, addresses the functionality described in the MPLS-TP Survivability Framework document [SurvivFwk] and defines a protocol that may be used to fulfill the function of the Protection State Coordination for linear protection, as described in that document.
  • AWDUCHE, D. (Movaz Networks), et al., "RSVP-TE: Extensions to RSVP for LSP Tunnels", RFC 3209, December 2001, ISSN 0000-0003, XP015008988, describes the use of RSVP (Resource Reservation Protocol), including all the necessary extensions, to establish label-switched paths (LSPs) in MPLS. Since the flow along an LSP is completely identified by the label applied at the ingress node of the path, these paths may be treated as tunnels.
  • SUMMARY
  • In one aspect, the disclosure includes an upstream node, which is upstream from a downstream protected node in a primary label switched path, LSP, in a network, wherein the upstream node comprises: a processor configured to: receive node protection backup route data from the downstream protected node, wherein traffic is transmitted from the upstream node to a receiver, passing through the downstream protected node and at least one merge point node; determine at least one backup LSP for the primary LSP according to the node protection backup route data, wherein the backup LSP does not include the downstream protected node; transmit traffic to the at least one merge point node along the backup LSP using forwarding information contained in the node protection backup route data following failure of the downstream protected node; cease transmitting traffic along the backup LSP using the forwarding information following a trigger event; and delete the backup LSP when the backup LSP is no longer needed; wherein the trigger event is selected from one of the following: expiration of label reserve timer; and receipt of a make before break (MBB) route completion status message.
  • These and other features will be more clearly understood from the following detailed description taken in conjunction with the accompanying drawings and claims.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • For a more complete understanding of this disclosure, reference is now made to the following brief description, taken in connection with the accompanying drawings and detailed description, wherein like reference numerals represent like parts.
    • FIG. 1 depicts a schematic diagram of an embodiment of a label switched system.
    • FIG. 2 depicts an embodiment of an illustrative network before failure of a protected node on the primary LSP.
    • FIG. 3 depicts an embodiment of an illustrative network during failure of a protected node on the primary LSP.
    • FIG. 4 depicts an embodiment of an illustrative network after failure of a protected node on the primary LSP.
    • FIG. 5 depicts an embodiment of an illustrative LDP multi-point Status Value Element.
    • FIG. 6 depicts an embodiment of a Downstream Element of an LDP multi-point Status Value Element.
    • FIG. 7 is a flow chart of an embodiment of a point-to-point based multicast label distribution protocol local protection solution.
    • FIG. 8 depicts a typical, general-purpose network component suitable for implementing one or more embodiments of the disclosed components.
    DETAILED DESCRIPTION
  • It should be understood at the outset that, although an illustrative implementation of one or more embodiments are provided below, the disclosed systems and/or methods may be implemented using any number of techniques, whether currently known or in existence. The disclosure should in no way be limited to the illustrative implementations, drawings, and techniques illustrated below, including the exemplary designs and implementations illustrated and described herein, but may be modified within the scope of the appended claims along with their full scope of equivalents.
  • Disclosed herein are methods, apparatuses, and systems for employing a point-to-point (P2P) protection mechanism for P2MP LSP link/node protection. Prior to failure, a protected node (N) may provide its upstream point of local repair (PLR) router with information regarding N's downstream node(s), also referred to herein as merge point(s). The PLR may identify the P2P backup tunnel(s) for the merge point(s) prior to link or node failure at N. Upon a link or node failure at N, the PLR may promptly begin forwarding traffic around failed node N to the merge point(s) through the backup P2P tunnel(s). Backup forwarding may cease, and traffic to the merge point(s) may resume normal transmission on the new LSP, once either (1) a timer at the PLR times out, or (2) the PLR receives a make before break (MBB) re-route completion message from the merge point(s). This process avoids breadth-first, depth-first, or other backup path identification schemes, as well as the duplicate packet transmission or packet loss caused by disestablishing the backup path too late or too early.
  • FIG. 1 depicts one embodiment of a label switched system 100, where a plurality of P2P LSPs and P2MP LSPs may be established between at least some of the components. The P2P LSPs and P2MP LSPs may be used to transport data traffic, e.g., using packets and packet labels for routing and forwarding. The label switched system 100 may comprise a label switched network 101, which may be a packet switched network that transports data traffic using packets or frames along network paths or routes. The packets may be routed or switched along the paths, which may be established by a label switching protocol, such as MPLS or generalized MPLS (GMPLS).
  • The label switched network 101 may comprise a plurality of edge nodes, including a first ingress node 111, a second ingress node 112, a plurality of first egress nodes 121, and a plurality of second egress nodes 122. When a P2MP LSP in the label switched network 101 comprises ingress and egress edge nodes, the first ingress node 111 and second ingress node 112 may be referred to as root nodes or head nodes, and the first egress nodes 121 and second egress nodes 122 may be referred to as leaf nodes or tail end nodes. Additionally, the label switched network 101 may comprise a plurality of internal nodes 130, which may communicate with one another and with the edge nodes. The first ingress node 111 and the second ingress node 112 may communicate with a source node 145 at a first external network 140, such as an Internet Protocol (IP) network, which may be coupled to the label switched network 101. First egress nodes 121 and second egress nodes 122 may communicate with destination nodes 150 or other networks 160. As such, the first ingress node 111 and the second ingress node 112 may transport data, e.g., data packets, from the external network 140 to destination nodes 150.
  • In an embodiment, the edge nodes and internal nodes 130 (collectively, network nodes) may be any devices or components that support transportation of the packets through the label switched network 101. For example, the network nodes may include switches, routers, or various combinations of such devices. Each network node may comprise a receiver that receives packets from other network nodes, a processor or other logic circuitry that determines which network nodes to send the packets to, and a transmitter that transmits the packets to the other network nodes. In some embodiments, at least some of the network nodes may be LSRs, which may be configured to modify or update the labels of the packets transported in the label switched network 101. Further, at least some of the edge nodes may be label edge routers (LERs), which may be configured to insert or remove the labels of the packets transported between the label switched network 101 and the external network 140.
  • The label switched network 101 may comprise a first P2MP LSP 105, which may be established to multicast data traffic from the first external network 140 to the destination nodes 150 or 160. The first P2MP LSP 105 may comprise the first ingress node 111 and at least some of the first egress nodes 121. The first P2MP LSP 105 is shown using solid arrow lines in FIG. 1. To protect the first P2MP LSP 105 against link or node failures, the label switched network 101 may comprise a second P2MP LSP 106, which may comprise the second ingress node 112 and at least some of the second egress nodes 122. The second P2MP LSP 106 is shown using dashed arrow lines in FIG. 1. Each second egress node 122 may be paired with a first egress node 121 of the first P2MP LSP 105. The second P2MP LSP 106 may also comprise some of the same or completely different internal nodes 130. The second P2MP LSP 106 may provide a backup path to the first P2MP LSP 105 and may be used to forward traffic from the first external network 140 to the first P2MP LSP 105 or second P2MP LSP 106, e.g., to egress node 123, when a network component of P2MP LSP 105 fails.
  • FIGS. 2-4 depict an embodiment of an illustrative P2MP network map before, during and after a failure of a protected node. The components of FIGS. 2-4 may be substantially the same as the corresponding components of FIG. 1. The root 200 may transmit data through the internal nodes 202 (R3), 204 (R2), and 206 (R1) to the receiver 208. This path is referred to herein as the Primary Path 201. The root 200 may also transmit data through the internal nodes 202 (R3), 204 (R2), and 210 (R4) to the receiver 212. This path is referred to herein as the Primary Path 203. Nodes 206 (R1) and 210 (R4) may be called merge points of node 204 (R2).
  • Prior to failure, node 202 (R3) may inform node 204 (R2) of its protection protocol capability, e.g., by sending a notification message. Node 204 (R2) may subsequently inform node 202 (R3) of information related to merge points 206 (R1) and 210 (R4). The information may be sent, e.g., in the format depicted in FIGS. 5 and 6. This information may include, without limitation, the following information regarding nodes downstream of node 204 (R2): the number of merge point nodes, the merge point node address(es), label reserve time(s), and forwarding label(s). The data content and format of the information which node 202 (R3) may receive will be discussed further below. A first P2P backup LSP 218 may be established through internal nodes 202 (R3), 214 (R6), 216 (R5), 210 (R4) and 206 (R1) to receiver 208. The P2P backup LSP 218 may use label L1. Similarly, a second P2P backup LSP 220 may be established through internal nodes 202 (R3), 214 (R6), 216 (R5) and 210 (R4) to receiver 212. The P2P backup LSP 220 may use label L4.
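  • By way of illustration only, the following Python sketch shows one way a PLR such as node 202 (R3) might hold the per-merge-point data received from node 204 (R2) and pair each merge point with a pre-computed P2P backup tunnel that avoids the protected node. All names, label values, and field choices below are assumptions made for the sketch and are not part of the signaled format.

      from dataclasses import dataclass

      @dataclass
      class MergePointInfo:
          """Per-merge-point data carried in the node protection backup route data."""
          address: str           # merge point LSR address, e.g. "R1"
          forwarding_label: int  # label the merge point expects on the primary LSP
          reserve_time_ms: int   # label reserve timer value for this merge point

      # Information the protected node R2 might advertise to its upstream PLR R3.
      merge_points = [
          MergePointInfo(address="R1", forwarding_label=1, reserve_time_ms=100),
          MergePointInfo(address="R4", forwarding_label=4, reserve_time_ms=100),
      ]

      # The PLR pairs each merge point with a P2P backup tunnel that avoids R2,
      # mirroring backup LSPs 218 and 220 of FIG. 2.
      backup_tunnels = {
          "R1": ["R3", "R6", "R5", "R4", "R1"],
          "R4": ["R3", "R6", "R5", "R4"],
      }

      for mp in merge_points:
          print(mp.address, "protected via", " -> ".join(backup_tunnels[mp.address]))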
  • FIG. 3 depicts the embodiment of FIG. 2 with a failure at node 204 (R2). When node 202 (R3) detects the failure of node 204 (R2), node 202 (R3) may enter a fast reroute (FRR) scheme, e.g., by routing traffic through the first P2P backup LSP 218 using inner label L1 and through the second P2P backup LSP 220 using inner label L4. For packets on the first P2P backup LSP 218, node 210 (R4) may act as the Penultimate Hop Popping (PHP) node. For packets on the second P2P backup LSP 220, node 216 (R5) may act as the PHP node. The PHP node may remove, or pop, the backup tunnel labels, also called outer labels, from the packets, permitting nodes 206 (R1) and 210 (R4) to receive packets carrying the same forwarding information as before the failure of node 204 (R2), although the packets may arrive on a different interface or port. Once the PHP nodes have popped the backup tunnel labels, nodes 210 (R4) and 206 (R1) may receive and process the packets in the same manner as if they had been received on Primary Path 201 and Primary Path 203. The described sequence of tunneling, forwarding, and penultimate hop popping may continue as long as a backup LSP is needed.
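  • A minimal sketch, assuming a simple list-based label stack, of how the PLR might push the backup-tunnel (outer) label over the merge point's forwarding (inner) label, and how the PHP node then pops the outer label; the concrete label values are hypothetical.

      def plr_encapsulate(inner_label, outer_label):
          """At the PLR: keep the merge point's forwarding (inner) label and push
          the backup-tunnel (outer) label on top of the label stack."""
          return [outer_label, inner_label]

      def php_pop(label_stack):
          """At the penultimate hop: pop the outer backup-tunnel label so the merge
          point sees the same forwarding information as before the failure."""
          return label_stack[1:]

      stack = plr_encapsulate(inner_label=1, outer_label=6001)  # 6001 is an assumed tunnel label
      print(stack)           # [6001, 1] while riding the backup tunnel
      print(php_pop(stack))  # [1] as delivered to merge point R1, matching the primary path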
  • In one embodiment, at least one timer, also referred to herein as a reserve-timer or label reserve timer, associated with one or more leaf nodes may be maintained by a network component, e.g., by node 202 (R3). Upon failure of protected node 204 (R2) and establishment of the backup tunnels, the label reserve timer(s) may begin counting down. Expiration of the timer(s) may serve as a trigger event for cessation of packet forwarding on the relevant backup LSP(s) and node 202 (R3) may tear down the relevant P2P backup tunnel, e.g., by removing the forwarding state which is being protected by the FRR scheme. Node 202 (R3) may subsequently commence routing packets to receivers 208 and 212 through the new LSPs, depicted in FIG. 4. The reserve timer(s) may be set to less than about five seconds, less than about one second, or from about 5 to about 200 milliseconds.
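  • The reserve-timer behavior could be sketched as follows; Python's threading.Timer is used purely for illustration, and the timer delays and LSP names are assumptions.

      import threading

      def tear_down_backup(lsp_name):
          # Remove the FRR forwarding state so traffic follows the newly converged LSP.
          print(f"reserve timer expired: removing backup forwarding state for {lsp_name}")

      # One timer per protected merge point; the delay comes from the advertised reserve time.
      reserve_timers = {
          "backup-LSP-218": threading.Timer(0.1, tear_down_backup, args=("backup-LSP-218",)),
          "backup-LSP-220": threading.Timer(0.1, tear_down_backup, args=("backup-LSP-220",)),
      }

      def on_protected_node_failure():
          """Start counting down once the failure is detected and FRR forwarding begins."""
          for timer in reserve_timers.values():
              timer.start()

      on_protected_node_failure()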
  • In another embodiment, upon failure of Primary Path 201 and Primary Path 203 and commencement of packet forwarding through backup LSPs 218 and 220, node 202 (R3) may send notifications with a Make Before Break (MBB) "Request Status" code to nodes 206 (R1) and 210 (R4) requesting the status of MBB completion. After nodes 206 (R1) and 210 (R4) complete the MBB sequence, they may send node 202 (R3) notification messages with status codes indicating completion of the MBB routine. Further information concerning MBB may be found in RFC 3209, titled "RSVP-TE: Extensions to RSVP for LSP Tunnels". Upon receiving a completion status code from 206 (R1) and/or 210 (R4), node 202 (R3) may remove the old forwarding state for backup LSP 218 and/or backup LSP 220, as applicable. Subsequently, node 202 (R3) may stop forwarding packets along the relevant P2P backup LSPs as backup LSPs and may commence routing packets through them as newly established primary LSPs 221 and 223, depicted in FIG. 4. Although shown as P2P backup LSPs, in another embodiment the backup paths are P2MP rather than P2P backup LSPs.
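  • The MBB-based trigger could be sketched as below; the numeric status codes and function names are placeholders for illustration, not values defined by the protocol.

      MBB_REQUEST_STATUS = 1   # assumed placeholder for the "Request Status" code
      MBB_COMPLETE = 2         # assumed placeholder for the completion status code

      pending_mbb = {"R1": False, "R4": False}   # merge points that still owe a completion

      def send_notification(node, status_code):
          print(f"PLR R3 -> {node}: MBB notification, status={status_code}")

      def on_mbb_notification(node, status_code):
          """Handle an MBB completion notification received over targeted LDP."""
          if status_code == MBB_COMPLETE:
              pending_mbb[node] = True
              print(f"{node} finished MBB: remove old forwarding state for its backup LSP")

      # After FRR starts, the PLR asks each merge point for its MBB status ...
      for node in pending_mbb:
          send_notification(node, MBB_REQUEST_STATUS)

      # ... and the merge points later answer with completion codes.
      on_mbb_notification("R1", MBB_COMPLETE)
      on_mbb_notification("R4", MBB_COMPLETE)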
  • FIG. 5 depicts an embodiment of a new type of LDP MP Status Value Element (SVE) 300. The LDP MP SVE may utilize targeted LDP (T-LDP) as documented in RFC 5036, titled "LDP Specification", and RFC 5561, titled "LDP Capabilities". The LDP MP SVE 300 may contain one or more Downstream Elements 310, 1 through N, where N is an integer representing the number of applicable downstream LSRs. An LDP MP SVE 300 may be sent, e.g., from node 204 (R2) to node 202 (R3) in FIG. 2 to inform node 202 (R3) of the downstream nodes of node 204 (R2). The relevant LSRs for the LDP MP SVE 300 may be designated within the Downstream Elements 310. The "Type" field 312 may indicate the type of the LDP MP SVE 300, including without limitation types specified by the Internet Assigned Numbers Authority (IANA). For example, a '2' in the Type field 312 may indicate that the type-length-value (TLV) is of the mLDP P2P type. The "Length" field 314 of the LDP MP SVE 300 may indicate the length of the SVE 300 in octets. The "Status Code" field 316 may indicate (e.g., using a first status code) whether the LDP MP SVE 300 advertises the existing downstream LSRs or (e.g., using a second status code) withdraws the deleted downstream LSRs.
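  • As a rough illustration of the SVE layout of FIG. 5, the following encoder packs a Type, a Length, a Status Code, and the Downstream Elements; the field widths chosen here (1-octet Type, 2-octet Length, 1-octet Status Code) and the status-code value are assumptions for the sketch, not the definitive wire format.

      import struct

      def encode_mp_status_value_element(element_type, status_code, downstream_elements):
          """Pack an LDP MP Status Value Element: Type, Length (value length in
          octets), Status Code, then the concatenated Downstream Elements."""
          body = struct.pack("!B", status_code) + b"".join(downstream_elements)
          return struct.pack("!BH", element_type, len(body)) + body

      TYPE_MLDP_P2P = 2     # '2' indicating the mLDP P2P type, per the description
      STATUS_ADVERTISE = 1  # assumed code: advertise the existing downstream LSRs
      sve = encode_mp_status_value_element(TYPE_MLDP_P2P, STATUS_ADVERTISE, [b"\x00" * 10])
      print(sve.hex())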
  • FIG. 6 depicts an embodiment of a Downstream Element 310 of an LDP MP SVE 300. The "Backup Label" field 402 indicates the backup label assigned to the backup tunnel for the PLR. The "D Bit" field 404 may be a Delete Flag that indicates the type of deletion routine specified for the backup tunnel. A '1' in the D Bit field 404 may indicate an 'explicit-delete' routine, i.e., deleting the backup tunnel following an MBB completion notification message received through targeted LDP (T-LDP). A '0' in the D Bit field 404 may indicate an 'implicit-delete' routine, i.e., deleting the backup tunnel upon reserve-timer expiration. The "N Bit" field 406 may be a Node Failure Required Flag that indicates the condition under which the PLR switches traffic to the backup path. A '1' in the N Bit field 406 may indicate a 'Yes', i.e., that the PLR should switch traffic to a P2P backup path only when the PLR detects the node failure. A '0' in the N Bit field 406 may indicate a 'No', i.e., that the PLR should switch traffic to a P2P backup path when the PLR detects any failure. The "Res-time" field 408 may indicate the timer delay limit value for the reserve-timer. The Res-time field 408 may be effective when the D Bit field 404 is set to 'implicit-delete' and may be ignored when the D Bit field 404 is set to 'explicit-delete'. The "Downstream Node Address" field 410 may indicate the downstream node's LSR identification address. The Downstream Element 310 may also contain an "Optional Parameters" field 412 to accommodate any optional parameters a system designer may desire to include.
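  • Continuing the same illustrative encoding, a Downstream Element carrying the Backup Label, D Bit, N Bit, Res-time, and Downstream Node Address fields of FIG. 6 might be packed as below; the bit positions and octet counts are assumptions made for the sketch.

      import struct

      def encode_downstream_element(backup_label, d_bit, n_bit, res_time,
                                    node_address, optional_params=b""):
          """Pack one Downstream Element: a 20-bit backup label in 3 octets, a flags
          octet carrying the D and N bits, a 2-octet Res-time, and a 4-octet IPv4
          LSR address, followed by any optional parameters."""
          flags = ((d_bit & 0x1) << 7) | ((n_bit & 0x1) << 6)
          label_bytes = struct.pack("!I", backup_label & 0xFFFFF)[1:]  # low 3 octets
          addr_bytes = bytes(int(part) for part in node_address.split("."))
          return label_bytes + struct.pack("!BH", flags, res_time) + addr_bytes + optional_params

      # D bit = 0 -> implicit-delete (reserve timer); N bit = 1 -> switch only on node failure.
      element = encode_downstream_element(backup_label=1, d_bit=0, n_bit=1,
                                          res_time=100, node_address="192.0.2.1")
      print(element.hex())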
  • FIG. 7 depicts a flow chart of an embodiment of a point-to-point based multicast label distribution protocol local protection solution. Method 500 begins with informing a protected node of the protection capability of an upstream node, depicted at block 502. For example, in FIG. 2 node 202 may inform node 204 of its protection capability. The protected node responds to the upstream node by sending label mapping message information regarding its merge point nodes, depicted at block 504. For example, in FIG. 2 node 204 may inform node 202 of the number of merge point nodes, the merge point node addresses, label reserve times, and forwarding labels specific to nodes 206 and 210. With the information received from the protected node, the upstream node may establish backup tunnels to serve the downstream nodes in the event of a failure of the protected node, depicted at block 506. In one embodiment, the backup tunnels can be P2P or P2MP LSPs. For example, in FIG. 2 node 202 may establish backup LSPs 218 and 220. Upon failure of the protected node, depicted at block 508, the upstream node may initiate an FRR protocol to route traffic through the backup tunnels to the merge point nodes, depicted at block 510. For example, in FIG. 3 node 202 may route traffic to nodes 206 and 210 using backup LSPs 218 and 220 following failure of node 204. The flowchart splits at block 512 based on whether the system utilizes a label reserve timer. In another embodiment, both the label reserve timer and the MBB status monitor criteria are used. In FIG. 7, if a label reserve timer is used, the upstream node waits for the reserve timer to time out, depicted at block 516. For example, in FIG. 3 node 202 may wait a predefined time, specified in the information passed from node 204, for the downstream nodes 206 and/or 210. If a label reserve timer is not used, the upstream node may wait for a status update from the downstream nodes indicating that the backup tunnel is established using an MBB protocol. For example, in FIG. 3 node 202 may wait for a status update message from nodes 206 and/or 210, or other indicia, indicating that the backup LSPs 218 and/or 220 are established. Upon expiration of the label reserve timer for a downstream node, or upon confirmation of MBB completion from a downstream node, the upstream node makes the appropriate backup tunnel(s) the new primary tunnel(s), depicted at block 518. For example, in FIG. 4 node 202 may transmit data along LSPs 218 and 220, which may be designated the new primary LSPs for nodes 206 and 210, by removing the secondary forwarding states, e.g., by deleting the associated backup label. In one embodiment, each downstream node has a label reserve timer value specific to that downstream node. In another embodiment, all downstream nodes are served by the same label reserve timer.
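  • The flow of method 500 can be summarized with the following sketch; the Node class and all method names are invented purely to narrate the blocks of FIG. 7 and do not correspond to any defined API.

      class Node:
          """Minimal stand-in for an LSR, used only to narrate the flow."""
          def __init__(self, name):
              self.name = name

          def notify(self, message):
              print(f"{self.name}: {message}")

      def method_500(upstream, protected, merge_points, use_reserve_timer=True):
          upstream.notify(f"block 502: advertise protection capability to {protected.name}")
          upstream.notify(f"block 504: receive merge point info for {merge_points} from {protected.name}")
          upstream.notify("block 506: establish P2P (or P2MP) backup tunnels to the merge points")
          upstream.notify(f"block 508: detect failure of {protected.name}")
          upstream.notify("block 510: FRR - forward traffic over the backup tunnels")
          if use_reserve_timer:
              upstream.notify("block 516: wait for the label reserve timer(s) to expire")
          else:
              upstream.notify("wait for MBB completion status from the merge points")
          upstream.notify("block 518: promote the backup tunnels to the new primary LSPs")

      method_500(Node("R3"), Node("R2"), ["R1", "R4"])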
  • The schemes described above may be implemented on any general-purpose network component, such as a computer or network component with sufficient processing power, memory resources, and network throughput capability to handle the necessary workload placed upon it. FIG. 8 illustrates a typical, general-purpose network component or computer system 800 suitable for implementing one or more embodiments of methods disclosed herein, such as one or more steps of method 500. The general-purpose network component or computer system 800 includes a processor 802 (which may be referred to as a central processor unit or CPU) that is in communication with memory devices including secondary storage 804, read only memory (ROM) 806, random access memory (RAM) 808, input/output (I/O) 810 devices, and network connectivity devices 812. The processor 802 may be implemented as one or more CPU chips, or one or more cores (e.g., a multi-core processor), or may be part of one or more application specific integrated circuits (ASICs) and/or digital signal processors (DSPs). The processor 802 may be configured to implement any of the schemes described herein which may be implemented using hardware, software, or both. General-purpose network component or computer system 800 may comprise an mLDP node or a P2P LDP node.
  • The secondary storage 804 is typically comprised of one or more disk drives or tape drives and is used for non-volatile storage of data and as an over-flow data storage device if RAM 808 is not large enough to hold all working data. Secondary storage 804 may be used to store programs that are loaded into RAM 808 when such programs are selected for execution. The ROM 806 is used to store instructions and perhaps data that are read during program execution. ROM 806 is a non-volatile memory device that typically has a small memory capacity relative to the larger memory capacity of secondary storage. The RAM 808 is used to store volatile data and perhaps to store instructions. Access to both ROM 806 and RAM 808 is typically faster than to secondary storage 804.
  • At least one embodiment is disclosed and variations, combinations, and/or modifications of the embodiment(s) and/or features of the embodiment(s) made by a person having ordinary skill in the art are within the scope of the disclosure. Alternative embodiments that result from combining, integrating, and/or omitting features of the embodiment(s) are also within the scope of the disclosure. Where numerical ranges or limitations are expressly stated, such express ranges or limitations should be understood to include iterative ranges or limitations of like magnitude falling within the expressly stated ranges or limitations (e.g., from about 1 to about 10 includes 2, 3, 4, etc.; greater than 0.10 includes 0.11, 0.12, 0.13, etc.). For example, whenever a numerical range with a lower limit, R1, and an upper limit, Ru, is disclosed, any number falling within the range is specifically disclosed. In particular, the following numbers within the range are specifically disclosed: R = R1 + k * (Ru - R1), wherein k is a variable ranging from 1 percent to 100 percent with a 1 percent increment, i.e., k is 1 percent, 2 percent, 3 percent, 4 percent, 5 percent, ..., 50 percent, 51 percent, 52 percent, ..., 95 percent, 96 percent, 97 percent, 98 percent, 99 percent, or 100 percent. Moreover, any numerical range defined by two R numbers as defined in the above is also specifically disclosed. The use of the term "about" means ±10% of the subsequent number, unless otherwise stated. Use of the term "optionally" with respect to any element of a claim means that the element is required, or alternatively, the element is not required, both alternatives being within the scope of the claim. Use of broader terms such as comprises, includes, and having should be understood to provide support for narrower terms such as consisting of, consisting essentially of, and comprised substantially of. Accordingly, the scope of protection is not limited by the description set out above but is defined by the claims that follow, that scope including all equivalents of the subject matter of the claims. Each and every claim is incorporated as further disclosure into the specification and the claims are embodiment(s) of the present disclosure. The discussion of a reference in the disclosure is not an admission that it is prior art, especially any reference that has a publication date after the priority date of this application.
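  • As an illustrative calculation only: taking the reserve-timer range of about 5 to about 200 milliseconds disclosed above as R1 = 5 ms and Ru = 200 ms, the value k = 50 percent yields R = 5 + 0.5 * (200 - 5) = 102.5 ms, which is therefore among the specifically disclosed values.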
  • While several embodiments have been provided in the present disclosure, it may be understood that the disclosed systems and methods might be embodied in many other specific forms without departing from the spirit or scope of the present disclosure. The present examples are to be considered as illustrative and not restrictive, and the intention is not to be limited to the details given herein. For example, the various elements or components may be combined or integrated in another system or certain features may be omitted, or not implemented.
  • In addition, techniques, systems, subsystems, and methods described and illustrated in the various embodiments as discrete or separate may be combined or integrated with other systems, modules, techniques, or methods without departing from the scope of the present disclosure. Other items shown or discussed as coupled or directly coupled or communicating with each other may be indirectly coupled or communicating through some interface, device, or intermediate component whether electrically, mechanically, or otherwise.

Claims (3)

  1. An upstream node (202), which is upstream from a downstream protected node (204) in a primary label switched path (201, 203), LSP, in a network, wherein the upstream node (202) comprises:
    a processor configured to:
    receive node protection backup route data from the downstream protected node (204), wherein traffic is transmitted from the upstream node (202) to a receiver (208, 212), passing through the downstream protected node (204) and at least one merge point node (206, 210);
    determine at least one backup LSP (218, 220) for the primary LSP (201, 203) according to the node protection backup route data, wherein the backup LSP (218, 220) does not include the downstream protected node (204);
    transmit traffic to the at least one merge point node (206, 210) along the backup LSP (218, 220) using forwarding information contained in the node protection backup route data following failure of the downstream protected node (204);
    cease transmitting traffic along the backup LSP (218, 220) using the forwarding information following a trigger event; and
    delete the backup LSP (218, 220) when the backup LSP (218, 220) is no longer needed;
    wherein the trigger event is selected from one of the following:
    expiration of label reserve timer; and
    receipt of a make before break, MBB, route completion status message.
  2. The upstream node of claim 1,
    wherein the received node protection backup route data comprises one or more of the following data elements related to one or more merge point nodes of a protected node: a number of merge point nodes, a plurality of merge point node addresses, a plurality of merge point node label reserve times and a plurality of merge point node forwarding labels.
  3. The upstream node of claim 2, wherein the processor is further configured to notify the downstream protected node of the capability of the upstream node to receive and utilize the data elements for establishing the backup LSP.
EP12778894.1A 2011-10-10 2012-10-09 Point-to-point based multicast label distribution protocol local protection solution Active EP2761829B1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201161545397P 2011-10-10 2011-10-10
PCT/US2012/059372 WO2013055696A2 (en) 2011-10-10 2012-10-09 Point-to-point based multicast label distribution protocol local protection solution

Publications (2)

Publication Number Publication Date
EP2761829A2 EP2761829A2 (en) 2014-08-06
EP2761829B1 true EP2761829B1 (en) 2018-12-05

Family

ID=47080844

Family Applications (1)

Application Number Title Priority Date Filing Date
EP12778894.1A Active EP2761829B1 (en) 2011-10-10 2012-10-09 Point-to-point based multicast label distribution protocol local protection solution

Country Status (4)

Country Link
US (1) US9036642B2 (en)
EP (1) EP2761829B1 (en)
CN (1) CN104067573B (en)
WO (1) WO2013055696A2 (en)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9083636B2 (en) * 2012-02-13 2015-07-14 Cisco Technology, Inc. System and method for multipoint label distribution protocol node protection using a targeted session in a network environment
WO2013188802A1 (en) * 2012-06-14 2013-12-19 Huawei Technologies Co., Ltd. Mrsvp-te based fast reroute in facility (1:n) protection mode
WO2013188801A1 (en) * 2012-06-14 2013-12-19 Huawei Technologies Co., Ltd. Mrsvp-te based fast reroute in detour (1:1) protection mode
US9001671B2 (en) * 2012-10-17 2015-04-07 Verizon Patent And Licensing Inc. Feature peer network representations and scalable feature peer network management
US9660897B1 (en) 2013-12-04 2017-05-23 Juniper Networks, Inc. BGP link-state extensions for segment routing
US10020984B1 (en) * 2014-01-10 2018-07-10 Juniper Networks, Inc. RSVP local protection signaling reduction
US9363169B2 (en) * 2014-03-31 2016-06-07 Juniper Networks, Inc. Apparatus, system, and method for reconfiguring point-to-multipoint label-switched paths
US9887874B2 (en) * 2014-05-13 2018-02-06 Cisco Technology, Inc. Soft rerouting in a network using predictive reliability metrics
US9838246B1 (en) * 2014-09-30 2017-12-05 Juniper Networks, Inc. Micro-loop prevention using source packet routing
CN105610708B (en) * 2014-10-31 2019-11-12 新华三技术有限公司 The implementation method and RB equipment of multicast FRR in a kind of TRILL network
US9886665B2 (en) * 2014-12-08 2018-02-06 International Business Machines Corporation Event detection using roles and relationships of entities
US9853915B2 (en) 2015-11-04 2017-12-26 Cisco Technology, Inc. Fast fail-over using tunnels
US9781029B2 (en) 2016-02-04 2017-10-03 Cisco Technology, Inc. Loop detection and prevention
CN107181677A (en) * 2016-03-09 2017-09-19 中兴通讯股份有限公司 A kind of method and device of the main tunnel nodes protections of P2MP
US10104139B2 (en) * 2016-03-31 2018-10-16 Juniper Networks, Inc. Selectively signaling selective tunnels in multicast VPNs
CN110620829B (en) * 2018-06-19 2022-08-19 中兴通讯股份有限公司 Method, device and equipment for distributing multicast service identification number and storage medium
CN108924044B (en) * 2018-06-22 2020-12-11 迈普通信技术股份有限公司 Link maintenance method, PE device and readable storage medium
CN110177044B (en) * 2019-06-27 2021-08-24 烽火通信科技股份有限公司 Method and system for creating protection tunnel
US11870684B2 (en) * 2021-09-09 2024-01-09 Ciena Corporation Micro-loop avoidance in networks

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7626925B1 (en) * 2003-06-19 2009-12-01 Cisco Technology, Inc. Methods for finding a merge point node for MPLS fast re-route
KR100693052B1 (en) * 2005-01-14 2007-03-12 삼성전자주식회사 Apparatus and method of fast reroute for mpls multicast
US7602702B1 (en) * 2005-02-10 2009-10-13 Juniper Networks, Inc Fast reroute of traffic associated with a point to multi-point network tunnel
US7889641B2 (en) * 2006-07-18 2011-02-15 Opnet Technologies, Inc. Path flow formulation for fast reroute bypass tunnels in MPLS networks
CN101335695B (en) * 2007-06-27 2012-11-07 华为技术有限公司 Head node protection method, apparatus and device for point-to-multipoint label switching path
IL192397A0 (en) * 2008-06-23 2012-06-28 Eci Telecom Ltd Technique for fast reroute protection of logical paths in communication networks
US8879384B2 (en) * 2009-09-14 2014-11-04 Alcatel Lucent Fast upstream source failure detection
US8848519B2 (en) * 2011-02-28 2014-09-30 Telefonaktiebolaget L M Ericsson (Publ) MPLS fast re-route using LDP (LDP-FRR)
CN102123097B (en) * 2011-03-14 2015-05-20 杭州华三通信技术有限公司 Method and device for protecting router
US8576708B2 (en) * 2011-06-02 2013-11-05 Cisco Technology, Inc. System and method for link protection using shared SRLG association

Also Published As

Publication number Publication date
US9036642B2 (en) 2015-05-19
CN104067573B (en) 2018-01-02
WO2013055696A2 (en) 2013-04-18
WO2013055696A3 (en) 2013-07-11
EP2761829A2 (en) 2014-08-06
CN104067573A (en) 2014-09-24
US20130089100A1 (en) 2013-04-11

Similar Documents

Publication Publication Date Title
EP2761829B1 (en) Point-to-point based multicast label distribution protocol local protection solution
EP2645644B1 (en) Protecting ingress and egress of a label switched path
US8565098B2 (en) Method, device, and system for traffic switching in multi-protocol label switching traffic engineering
US9860161B2 (en) System and method for computing a backup ingress of a point-to-multipoint label switched path
US7765306B2 (en) Technique for enabling bidirectional forwarding detection between edge devices in a computer network
US7746796B2 (en) Directed echo requests and reverse traceroute
US8830826B2 (en) System and method for computing a backup egress of a point-to-multi-point label switched path
US8218432B2 (en) Routing method in a label switching network
US20110199891A1 (en) System and Method for Protecting Ingress and Egress of a Point-to-Multipoint Label Switched Path
US8976646B2 (en) Point to multi-point based multicast label distribution protocol local protection solution
US20090292943A1 (en) Techniques for determining local repair connections
US20090292942A1 (en) Techniques for determining optimized local repair paths
EP2767052B1 (en) Failure detection in the multiprotocol label switching multicast label switched path's end-to-end protection solution
CN103795625A (en) Multi-protocol label switching network quick rerouting implementation method and device
Agarwal et al. Ingress failure recovery mechanisms in MPLS network

Legal Events

Date Code Title Description
PUAI  Public reference made under article 153(3) epc to a published international application that has entered the european phase; Free format text: ORIGINAL CODE: 0009012

17P   Request for examination filed; Effective date: 20140430

AK    Designated contracting states; Kind code of ref document: A2; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAX   Request for extension of the european patent (deleted)

STAA  Information on the status of an ep patent application or granted ep patent; Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q   First examination report despatched; Effective date: 20161202

REG   Reference to a national code; Ref country code: DE; Ref legal event code: R079; Ref document number: 602012054388; Country of ref document: DE; Free format text: PREVIOUS MAIN CLASS: H04L0012700000; Ipc: H04L0012703000

GRAP  Despatch of communication of intention to grant a patent; Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA  Information on the status of an ep patent application or granted ep patent; Free format text: STATUS: GRANT OF PATENT IS INTENDED

RIC1  Information provided on ipc code assigned before grant; Ipc: H04L 12/723 20130101ALI20180411BHEP; Ipc: H04L 12/703 20130101AFI20180411BHEP; Ipc: H04L 12/707 20130101ALI20180411BHEP; Ipc: H04L 12/761 20130101ALI20180411BHEP

INTG  Intention to grant announced; Effective date: 20180507

GRAS  Grant fee paid; Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA  (expected) grant; Free format text: ORIGINAL CODE: 0009210

GRAA  (expected) grant; Free format text: ORIGINAL CODE: 0009210

STAA  Information on the status of an ep patent application or granted ep patent; Free format text: STATUS: THE PATENT HAS BEEN GRANTED

AK    Designated contracting states; Kind code of ref document: B1; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG   Reference to a national code; Ref country code: GB; Ref legal event code: FG4D

REG   Reference to a national code; Ref country code: CH; Ref legal event code: EP

REG   Reference to a national code; Ref country code: AT; Ref legal event code: REF; Ref document number: 1074488; Country of ref document: AT; Kind code of ref document: T; Effective date: 20181215

REG   Reference to a national code; Ref country code: IE; Ref legal event code: FG4D

REG   Reference to a national code; Ref country code: DE; Ref legal event code: R096; Ref document number: 602012054388; Country of ref document: DE

REG   Reference to a national code; Ref country code: NL; Ref legal event code: MP; Effective date: 20181205

REG   Reference to a national code; Ref country code: AT; Ref legal event code: MK05; Ref document number: 1074488; Country of ref document: AT; Kind code of ref document: T; Effective date: 20181205

REG   Reference to a national code; Ref country code: LT; Ref legal event code: MG4D

PG25  Lapsed in a contracting state [announced via postgrant information from national office to epo]; Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; Ref country codes and effective dates: LV (20181205), LT (20181205), ES (20181205), BG (20190305), HR (20181205), FI (20181205), NO (20190305), AT (20181205)

PG25  Lapsed in a contracting state [announced via postgrant information from national office to epo]; Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; Ref country codes and effective dates: RS (20181205), GR (20190306), AL (20181205), SE (20181205)

PG25  Lapsed in a contracting state [announced via postgrant information from national office to epo]; Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; Ref country code and effective date: NL (20181205)

PG25  Lapsed in a contracting state [announced via postgrant information from national office to epo]; Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; Ref country codes and effective dates: PL (20181205), IT (20181205), CZ (20181205), PT (20190405)

PG25  Lapsed in a contracting state [announced via postgrant information from national office to epo]; Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; Ref country codes and effective dates: SK (20181205), EE (20181205), SM (20181205), RO (20181205), IS (20190405)

REG   Reference to a national code; Ref country code: DE; Ref legal event code: R097; Ref document number: 602012054388; Country of ref document: DE

PLBE  No opposition filed within time limit; Free format text: ORIGINAL CODE: 0009261

STAA  Information on the status of an ep patent application or granted ep patent; Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

PG25  Lapsed in a contracting state [announced via postgrant information from national office to epo]; Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; Ref country codes and effective dates: SI (20181205), DK (20181205)

26N   No opposition filed; Effective date: 20190906

PG25  Lapsed in a contracting state [announced via postgrant information from national office to epo]; Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; Ref country code and effective date: TR (20181205)

PG25  Lapsed in a contracting state [announced via postgrant information from national office to epo]; Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; Ref country code and effective date: MC (20181205)

REG   Reference to a national code; Ref country code: CH; Ref legal event code: PL

PG25  Lapsed in a contracting state [announced via postgrant information from national office to epo]; Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES; Ref country codes and effective dates: LI (20191031), LU (20191009), CH (20191031)

REG   Reference to a national code; Ref country code: BE; Ref legal event code: MM; Effective date: 20191031

PG25  Lapsed in a contracting state [announced via postgrant information from national office to epo]; Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES; Ref country code and effective date: BE (20191031)

GBPC  Gb: european patent ceased through non-payment of renewal fee; Effective date: 20191009

PG25  Lapsed in a contracting state [announced via postgrant information from national office to epo]; Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES; Ref country codes and effective dates: GB (20191009), IE (20191009), FR (20191031)

PG25  Lapsed in a contracting state [announced via postgrant information from national office to epo]; Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; Ref country code and effective date: CY (20181205)

PG25  Lapsed in a contracting state [announced via postgrant information from national office to epo]; HU: Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO; Effective date: 20121009; MT: Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; Effective date: 20181205

REG   Reference to a national code; Ref country code: DE; Ref legal event code: R079; Ref document number: 602012054388; Country of ref document: DE; Free format text: PREVIOUS MAIN CLASS: H04L0012703000; Ipc: H04L0045280000

PG25  Lapsed in a contracting state [announced via postgrant information from national office to epo]; Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; Ref country code and effective date: MK (20181205)

PGFP  Annual fee paid to national office [announced via postgrant information from national office to epo]; Ref country code: DE; Payment date: 20230830; Year of fee payment: 12