EP1994666A2 - A method and apparatus for distributing labels in a label distribution protocol multicast network - Google Patents

A method and apparatus for distributing labels in a label distribution protocol multicast network

Info

Publication number
EP1994666A2
Authority
EP
European Patent Office
Prior art keywords
node
label
network
path vector
nexthop
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
EP07779743A
Other languages
German (de)
French (fr)
Other versions
EP1994666A4 (en)
EP1994666B1 (en)
Inventor
Alex E. Raj
Eric C. Rosen
Robert H. Thomas
Ijsbrand Wijnands
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cisco Technology Inc
Original Assignee
Cisco Technology Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cisco Technology Inc filed Critical Cisco Technology Inc
Publication of EP1994666A2 publication Critical patent/EP1994666A2/en
Publication of EP1994666A4 publication Critical patent/EP1994666A4/en
Application granted granted Critical
Publication of EP1994666B1 publication Critical patent/EP1994666B1/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00Routing or path finding of packets in data switching networks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00Routing or path finding of packets in data switching networks
    • H04L45/02Topology update or discovery
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00Routing or path finding of packets in data switching networks
    • H04L45/18Loop-free operations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00Routing or path finding of packets in data switching networks
    • H04L45/50Routing or path finding of packets in data switching networks using label swapping, e.g. multi-protocol label switch [MPLS]
    • H04L45/507Label distribution

Definitions

  • the present invention generally relates to distribution of labels, for example, Multi Protocol Label Switching (MPLS) labels.
  • the invention relates more specifically to a method and apparatus for distributing labels in a Label Distribution Protocol (LDP) multicast network.
  • LDP Label Distribution Protocol
  • packets of data are sent from a source to a destination via a network of elements including links (communication paths such as telephone or optical lines) and nodes (for example, routers directing the packet along one or more of a plurality of links connected to it) according to one of various routing protocols.
  • links communication paths such as telephone or optical lines
  • nodes for example, routers directing the packet along one or more of a plurality of links connected to it
  • MPLS is a protocol that is well known to the skilled reader and which is described in document "Multi Protocol Label Switching Architecture" which is available at the time of writing on the file “rfc3031.txt” in the directory “rfc” of the domain “ietf.org” on the World Wide Web.
  • according to MPLS, a path for a source-destination pair is established, and values required for forwarding a packet between adjacent routers in the path together with headers or "labels" are prepended to the packet.
  • the labels are used to direct the packet to the correct interface and next hop.
  • the labels precede the IP or other header allowing smaller outer headers.
  • LSP Label Switched Path
  • LDP Label Distribution Protocol
  • TLV Type Length Value
  • FEC forwarding equivalent class
  • LFIB Label Forwarding Information Base
  • MPLS LDP approaches have further been applied to multicast networks.
  • Conventionally multicast networks rely on unicast routing protocols.
  • Unicast routing protocol relies on a routing algorithm resident at each node. Each node on the network advertises the routes throughout the network. The routes are stored in a routing information base (RIB) and based on these results a forwarding information base (FIB) or forwarding table is updated to control forwarding of packets appropriately.
  • RIB routing information base
  • FIB forwarding information base
  • a notification representing the change is flooded through the network by each node adjacent the change, each node receiving a notification sending it to each adjacent node.
  • Link state protocols can support multicast traffic comprising point to multipoint traffic (P2MP) and multipoint to multipoint traffic (MP2MP).
  • IP Internet protocol
  • Internet Protocol Multicast is well known to the skilled reader and is described in document "Internet Protocol Multicast" which is available at the time of writing on the file "ipmulti.htm" in the directory "univercd/cc/td/doc/cisintwk/ito_doc" of the domain "www.cisco.com" of the World Wide Web.
  • Multicast allows data packets to be forwarded to multiple destinations (or "receivers") without unnecessary duplication, reducing the amount of data traffic accordingly.
  • All hosts wishing to become a receiver for a multicast group perform a "join" operation to join the multicast group.
  • a multicast tree such as a shortest path tree is then created providing routes to all receivers in the group.
  • the multicast group in a P2MP group is denoted (S,G) where S is the address of the source or broadcasting host and G is an IP multicast address taken from a reserved address space.
  • a shared group is denoted (*,G) allowing multiple sources to send to multiple receivers.
  • the multicast tree is constructed as a shared tree including a shared root or rendezvous point (RP).
  • RP rendezvous point
  • each router ensures that data is only sent away from the source and towards the receiver as otherwise traffic would loop back, which is impermissible in multicast.
  • the router carries out a reverse path forwarding (RPF) check to ensure that the incoming packet has arrived on the appropriate input interface. If the check fails then the packet is dropped.
  • RPF reverse path forwarding
  • the router uses the unicast forwarding table to identify the appropriate upstream and downstream interfaces in the tree as part of the RPF and only forwards packets arriving from the upstream direction.
  • Multicast methods which make use of existing forwarding information in this manner belong to the family of "protocol independent multicast" (PIM) methods as they are independent of the specific routing protocol adopted at each router.
  • Fig. 1 is a network diagram illustrating a P2MP network
  • Fig. 2 is a flow diagram illustrating the steps involved in a node joining the network.
  • the network shown in Fig. 1 is designated generally 100 and includes nodes comprising, for example routers R1, reference 102, R2, reference numeral 104, R3, reference numeral 106 and R4, reference numeral 108.
  • Nodes R1, R2 and R4 are joined to node R3 via respective interfaces S0, S1, S2, reference numerals 110, 112, 114 respectively.
  • Nodes R1 and R2 comprise leaf or receiver nodes which can receive multicast traffic from root node R4 via transit node R3.
  • receiver node R2 joins the multicast tree according to any appropriate mechanism, and obtains the relevant identifiers of the tree, namely the root node and the FEC of traffic belonging to the tree. It then creates an LDP path from the root R4.
  • R2 identifies its nexthop to the root of the tree for example from its IP forwarding table, in the present case, node R3.
  • node R2 constructs a P2MP label mapping message 116 indicating the multicast tree FEC (for example an identifier "200"), the root R4 of the multicast tree and the label it pushes to R3, label L2.
  • the downstream direction for traffic is from R4 via R3 to R2 and hence the label mapping message is sent upstream from R2 to R3.
  • node R3 similarly allocates a label L5 and updates its forwarding state such that incoming packets with label L5 will have the label swapped for label L2 and forwarded along interface S1 to R2.
  • Node R3 further sends a P2MP label mapping message to node R4 indicating the FEC 200, the root R4 and its label L5 at step 208.
  • root node R4 updates its forwarding state with label L5 for the FEC 200. It will be noted that steps 200 to 210 are repeated for each leaf or receiver node joining the multicast tree. For example if node R1 joins the tree then it sends a P2MP label mapping message to R3 with FEC 200, root R4 and label L1. In this case, as is appropriate for multicast, R3 does not construct a further label to send to R4 but adds label L1 to the forwarding state corresponding to incoming packets with label L5.
  • P2MP LDP Multicast can be further understood with reference to Fig. 3 which shows the network of Fig. 1 with the datapath of multicast traffic, and Fig 4 which comprises a flow diagram showing the steps performed in the forwarding operation.
  • the root node R4, acting as ingress node to the P2MP network, recognizes in any appropriate manner traffic for example ingress IP traffic for the multicast tree 100 and forwards the traffic shown as packet 300 to which the label L5 302 is appended to an IP payload 304.
  • the forwarding table or multicast LFIB (mLFIB) 306 maintained at R3 for traffic incoming on interface S2 is shown in Fig. 3 for "down" traffic, that is, traffic from the root to the receivers.
  • mLFIB multicast LFIB
  • node R3 carries out an RPF check to ensure that the incoming packet with label L5 arrived on the correct interface S2. If so, then at step 404, labels L1 and L2 are swapped for label L5 for forwarding along respective interfaces S0 and S1. As a result packets 308, 310 are sent to the respective receivers with the appropriate label appended to the payload.
  • Fig. 5 is a flow diagram illustrating the steps performed in a label withdrawal transaction, where a node for example node R2 wishes to leave the multicast tree then at step 500 it sends a label withdraw message to its nexthop neighbor R3. At step 502, node R3 deletes the relevant state for example label L2 and at step 504 R3 sends a label release message to R2. It will be noted that if node R1 also leaves the tree then node R3 will remove all of the state corresponding to FEC 200 and will send a label withdraw message to node R4.
  • Fig. 6 is a flow diagram illustrating the steps performed when a nexthop changes but without removal of any receiver node from the multicast tree.
  • FIG. 7 is a network diagram corresponding to Figs. 1 and 3 but with an additional node R5 700 as node R3's nexthop to node R4, and an additional node R6 702 as an alternative nexthop for node R2 to node R4.
  • Node R2's nexthop to node R4 will change if the link between node R5 and node R4 fails, and change to, for example, node R6.
  • node R2 sends a label withdraw message to node R3 and at step 602 node R2 clears the relevant entries in its mLFIB.
  • node R2 sends its new label for example L6 to node R6 following the label mapping procedures described above with reference to Fig. 2.
  • node R6 installs the label L6 and forwards a label mapping message to root R4 again in the manner described above.
  • LDP allocates a local label for every FEC it learns, and if the FEC is removed, the local label and an associated binding (i.e., remote corresponding labels) for the FEC are preserved for a timeout period. If the FEC is reinstated before the timeout expires, LDP uses the same local label binding for that FEC. Accordingly where there is a network change which changes the route of the multicast tree's unicast nexthop, the same local label binding is used and rewritten in an ingress interface independent manner such that the label rewrite is used on the data plane, i.e., in the mLFIB, before and after the network change.
  • each leaf can either be a receiver from the root node as with a P2MP network, or a sender of multicast traffic to the other leaves on the network.
  • traffic can be considered as either “down traffic” i.e., from the root to the leaves acting as receivers, or "up traffic” in the form of traffic from the leaves, acting as senders, towards the root.
  • down traffic i.e., from the root to the leaves acting as receivers
  • up traffic in the form of traffic from the leaves, acting as senders
  • Fig. 8 is a network diagram showing an MP2MP network. The network shown is different to that shown in Figs. 1 and 3 and hence different numbering is used although the nodes are named similarly.
  • the network is designated generally 800 and includes receiver/sender nodes R1, R2, reference numbers 802, 806, a transit node R3, reference number 808, a root node R4, reference numeral 810 and a further receiver/sender node R5, reference number 812.
  • Nodes R1, R2 and R4 are joined to node R3 via respective interfaces S0, 814, S1, 816, and S2, 818.
  • Node R4 is joined to node R5 by a further interface S3.
  • the root node R4 is a shared root although it may in addition be an ingress or receiver or sender node as appropriate.
  • Fig. 9 is a flow diagram illustrating the manner in which a receiver/sender node for example node R2 joins an MP2MP multicast tree.
  • node R2 joins the tree and at step 902 node R2 identifies its nexthop to the root node R4, namely node R3, in the manner described above with respect to P2MP.
  • node R2 sends a "pseudo label mapping message" or pseudo label 820 to node R3.
  • the pseudo label request includes identification of the FEC 200, the root R4, and R2's ingress label L2.
  • the message is generally in a similar form to a P2MP label mapping message; however, it is termed here a pseudo label request label mapping message as it must be distinguishable from a standard P2MP label mapping message, as described in more detail below.
  • the message can be recognizable as a pseudo label request message in any appropriate manner.
  • node R3 recognizes the message as a pseudo label request message and sends a return MP2MP label mapping to node R2 identifying the FEC 200 and providing its own ingress label L3.
  • node R3 provides a label to node R2 for use with "up traffic" from R2 towards the root.
  • node R3 sends a pseudo label request message 824 to node R4 indicating the FEC 200, root R4 and node R3's ingress label L5.
  • node R4 sends its MP2MP label mapping 826 for up traffic to node R3 indicating FEC 200 and its ingress label L6.
  • each additional receiver/sender carries out the same procedure, for example node R1 will send a pseudo label request message 828 to node R3 indicating FEC 200, root R4 and label L1 and will receive a label mapping 830 from R3 indicating FEC 200 and label L4 for up traffic.
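The exchange just described can be illustrated with a short sketch. The following Python fragment models, under simplified assumptions, how a transit node such as R3 might react to a pseudo label request: it records the requester's down-traffic label, immediately returns its own label for up traffic, and propagates a single pseudo label request per FEC towards the root. The class, field and message names are assumptions made for the example, not part of LDP or of the patent.

```python
# Illustrative sketch of the MP2MP exchange of Fig. 9 at a transit node.
class Mp2mpNode:
    def __init__(self, name, nexthop_to_root, send):
        self.name = name
        self.nexthop_to_root = nexthop_to_root   # None if this node is the shared root
        self.send = send                         # send(destination, message dict)
        self.down_labels = {}                    # neighbour -> label for down traffic
        self.up_labels = {}                      # neighbour -> label we handed out
        self._requested = set()                  # FECs already requested upstream
        self._next = 10

    def _allocate(self):
        label = self._next
        self._next += 1
        return label

    def on_pseudo_label_request(self, fec, root, label, neighbour):
        # Record the neighbour's down-traffic label...
        self.down_labels[neighbour] = label
        # ...and immediately return an MP2MP label mapping so the neighbour
        # can send up traffic towards the root through this node.
        up = self._allocate()
        self.up_labels[neighbour] = up
        self.send(neighbour, {"type": "mp2mp_label_mapping", "fec": fec, "label": up})
        # Propagate a single pseudo label request per FEC towards the root.
        if self.nexthop_to_root and fec not in self._requested:
            self._requested.add(fec)
            self.send(self.nexthop_to_root,
                      {"type": "pseudo_label_request", "fec": fec,
                       "root": root, "label": self._allocate()})

# Usage mirroring Fig. 9: R2's pseudo label request to R3 triggers R3's own
# request towards the root R4, plus an up-traffic mapping back to R2.
messages = []
r3 = Mp2mpNode("R3", nexthop_to_root="R4",
               send=lambda dst, msg: messages.append((dst, msg)))
r3.on_pseudo_label_request(fec="200", root="R4", label=2, neighbour="R2")
for dst, msg in messages:
    print(dst, msg)
```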
  • Fig. 10 is a network diagram corresponding to Fig. 8 and showing some of the forwarding state or mLFIB's constructed following the transactions described with reference to Fig. 9.
  • the forwarding table is shown at 840.
  • Fig. 11 which is a flow diagram illustrating forwarding of MP2MP multicast traffic
  • traffic arriving with label L5 is RPF checked to ensure that it arrived on ingress interface S2.
  • label L5 is replaced by label L1 and the traffic is forwarded on interface S0 to node R1.
  • label L2 is added and the traffic forwarded on interface S1 to node R2.
  • forwarding table 842 For up traffic from node Rl towards the root on interface SO forwarding table 842 is shown and forwarding of such traffic at node R3 can be understood with reference to Fig. 12 which is a flow diagram illustrating forwarding of incoming traffic on interface SO.
  • an RPF check is carried out on traffic with label L4 to ensure that it ar- rives on interface SO.
  • traffic to node R4 is forwarded on interface S2 with label L6. It will be noted that this label is learnt from the MP2MP label mapping from node R4.
  • label L2 is added for traffic on interface S1 for node R2. It will be noted that this forwarding information can be inherited from the downstream state table 840.
  • Table 844 shows the forwarding state for up traffic received at node R3 on interface S1 from node R2.
  • Fig. 13 is a flow diagram illustrating the steps in forwarding said up traffic.
  • an RPF check is carried out on traffic carrying label L3 to ensure that it arrived on interface S1.
  • traffic towards the root R4 is forwarded on interface S2 with label L6 which again is learnt from the MP2MP label mapping from node R4.
  • traffic for node R1 is forwarded on interface S0 with label L1 which again is inherited from the downstream state.
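The up-traffic behaviour of Figs. 12 and 13 can be summarised in a brief sketch: up traffic is forwarded towards the root and also replicated onto the other down-traffic branches, excluding the interface it arrived on. The table layout below is an assumption made for the example, not the patent's mLFIB format.

```python
# Sketch of up-traffic forwarding at a transit node such as R3.
def forward_up_traffic(state, in_label, in_interface, payload):
    entry = state.get(in_label)
    if entry is None or in_interface != entry["rpf_interface"]:
        return []                                        # RPF check failed: drop
    out = [(entry["root_interface"], (entry["root_label"], payload))]
    out += [(out_if, (out_label, payload))
            for out_if, out_label in entry["down_branches"]
            if out_if != in_interface]                   # inherited from the down state
    return out

# Node R3: up traffic from R1 arrives with label L4 on S0; it goes to root R4
# on S2 with label L6 and to the other leaf R2 on S1 with label L2.
r3_up = {4: {"rpf_interface": "S0", "root_interface": "S2", "root_label": 6,
             "down_branches": [("S0", 1), ("S1", 2)]}}
print(forward_up_traffic(r3_up, 4, "S0", b"up-data"))
```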
  • micro loops occur when a network change takes place and nodes converge on the new network at different times. While the nodes are not all converged, there is a risk that one node will forward according to an old topology whereas another node will forward according to a new topology such that traffic will be sent back and forth between two or more nodes in a micro loop.
  • transient micro loops can occur for example because of control plane inconsistency between local and remote devices (that is, for example, inconsistencies in the RIB), control and data plane inconsistency on a local device (that is, inconsistencies between the RIB and the FIB if the FIB has not yet been updated), and inconsistencies on the data plane between local and remote devices, for example where the FIB or LFIB of respective nodes are converged on different topologies.
  • control plane inconsistency between local and remote devices that is, for example, inconsistencies in the RIB
  • control and data plane inconsistency on a local device that is inconsistencies between the RIB and the FIB if the FIB has not yet been updated
  • inconsistencies on the data plane between local and remote devices for example where the FIB or LFIB of respective nodes are converged on different topologies.
  • Transient micro loops are in fact common in IP networks, and in unicast IP routing the impact and number of devices affected is restricted. However, in the case of multicast networks there is the risk of exponential-traffic loops during convergence. For example, if there are 100,000 multicast trees through a multicast core router such as router R3 then during a network change, transient micro loops could bring down the entire network.
  • ingress traffic from the new tree could be forwarded to the old tree and traffic from the old tree could be forwarded to the new tree forming a transient micro loop.
  • traffic from the old tree could be forwarded to the new tree which is then forwarded back to the old tree to form a transient micro loop.
  • FIG. 1 is a schematic diagram illustrating a P2MP network
  • FIG. 2 is a flow diagram illustrating the steps involved in addition of a leaf to a P2MP network
  • FIG. 3 is a network diagram corresponding to Fig. 1 showing the forwarding of multicast traffic on a P2MP network
  • FIG. 4 is a flow diagram illustrating the steps involved in forwarding multicast data on a P2MP network
  • FIG. 5 is a flow diagram illustrating the steps involved in a label withdraw session when a leaf leaves a P2MP network
  • FIG. 6 is a flow diagram illustrating the steps involved in a label withdraw session when a nexthop changes in a P2MP network
  • FIG. 7 is a schematic diagram of a network as shown in FIG. 3 with additional nodes to illustrate a nexthop change
  • FIG. 8 is a schematic diagram illustrating an MP2MP network
  • FIG. 9 is a flow diagram illustrating the steps involved in addition of a leaf to an MP2MP network
  • FIG. 10 is a schematic diagram of an MP2MP network corresponding to that of FIG. 8 showing forwarding of up and down traffic;
  • FIG. 11 is a flow diagram showing steps involved in forwarding down traffic in an MP2MP network;
  • FIG. 12 is a flow diagram showing steps involved in forwarding up traffic from a first leaf in an MP2MP network
  • FIG. 13 is a flow diagram showing steps involved in forwarding up traffic from a further leaf in an MP2MP network
  • FIG. 14 is a flow diagram showing at a high level steps involved in avoiding loops in an LDP multicast network
  • FIG. 15a is a schematic diagram showing a first circumstance in which the loops may occur in a network
  • FIG. 15b is a schematic diagram showing a second circumstance in which looping may occur in a network
  • FIG. 16 is a schematic diagram showing an MP2MP network in steady state
  • FIG. 17 is a network diagram corresponding to that of Fig. 16 showing forwarding of up traffic
  • FIG. 18 is a network diagram corresponding to the network of Fig. 16 after a route change
  • FIG. 19 is a network diagram corresponding to the network of Fig. 18 showing a forwarding loop for up traffic
  • FIG. 20 is a flow diagram showing in more detail steps involved in preventing looping
  • FIG. 21A is a flow diagram showing in more detail steps involved in preventing loops according to another aspect
  • FIG. 21B is a continuation of the flow diagram of FIG. 21A;
  • FIG. 22 is a network diagram corresponding to Fig. 18 showing steps involved in loop prevention
  • FIG. 23 is a block diagram that illustrates a computer system upon which a method for distributing labels in an LDP multicast network may be implemented.
  • a method of distributing labels in a label distribution protocol multicast network having a root node and at least one leaf node.
  • the method comprises the steps, performed at a receiving node, of receiving a label and path vector from a distributing node, carrying out loop or convergence detection from the received path vector and, if convergence or no loop is detected, sending a receiving node label and path vector to its nexthop node in the network together with a path root.
  • a transient micro loop prevention technique for P2MP and MP2MP trees is provided in relation to both down and up traffic related micro loops.
  • an improved clean up procedure is provided according to which a node's local rewrites and mappings are cleared after every nexthop change, removing local label and FEC mappings or bindings, and label release and withdraw messages are sent for all local and remote labels, and an ordered mFIB/mLFIB installation is carried out using strict path vector-based ordered label distribution and/or rewrite installation procedures. In addition, old and new multicast LSPs are made disjoint by ensuring that new labels are assigned if the nexthop is changed.
  • Fig. 14 is a flow diagram showing, at a high level, the steps involved according to the method described herein.
  • step 1400 when a change takes place, the corresponding local data plane rewrites are cleaned up immediately and, if the nexthop is changed, label withdrawal and release for the local and remote labels are also sent immediately.
  • the multicast LDP label hold time is set to zero such that the labels are not held open in case the FEC is reintroduced.
  • all local labels associated with a nexthop change are replaced with new labels.
  • a node such as a distributing node sends a pseudo label request (label mapping) message to its nexthop towards the root node, in conjunction with its path vector, that is, information representing the path to the root.
  • the distributing node may be a node adjacent to a nexthop change and may comprise multiple nodes.
  • the nexthop node towards the root node carries out a loop or convergence detection step and checks the path vector against its own path vector and, if it is not in the path vector (i.e., there is no loop), it sends a pseudo label request (label mapping) message to its nexthop towards the root node together with its path vector. This is repeated at each node until the root node receives a pseudo label request (label mapping) message and path vector which are checked against its own path vector. If they match then the root node sends its label mapping message with an up traffic label and its path vector back down the path.
  • the receiving node once again checks the received path vector against its own path vector and, if there is a match, sends a further label mapping message and its path vector to the next node in the path and so forth until the distributing node receives its label mapping.
  • each downstream node in the up traffic sense only sends its up traffic label mapping to an upstream node once it has received such a mapping from its upstream nexthop accordingly.
  • the up traffic tables are updated in order from the root.
  • alternatively, it is possible for each downstream node in the up traffic sense to send a label mapping response to the pseudo label request (label mapping) from the upstream node without waiting to hear from its own downstream node, in which case the path vector match step still ensures that all nodes are converged.
  • the path vector match step still ensures that all nodes are converged.
  • transient micro loops are prevented in the multicast network as an ordered convergence is achieved in one or both directions.
  • as each node towards the root checks its path vector and will only forward its pseudo label request (label mapping) for down traffic if its received path vector corresponds, it is ensured that the nodes are converged and in the correct order for down traffic.
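A minimal sketch of the loop and convergence check described above is given below, representing a path vector simply as the ordered list of node names from the sender towards the root. The exact acceptance rule is a plausible reading of the text rather than a definitive specification.

```python
def check_path_vector(my_name, my_path_vector, received_path_vector):
    """Classify a received pseudo label request carrying the sender's path
    vector as 'loop', 'not_converged' or 'converged'."""
    if my_name not in received_path_vector:
        return "not_converged"        # the sender does not route through this node
    i = received_path_vector.index(my_name)
    if my_name in received_path_vector[i + 1:]:
        return "loop"                 # this node appears again further along the path
    if received_path_vector[i:] != my_path_vector:
        return "not_converged"        # the two views of the path to the root differ
    return "converged"

# R1, converged on R1 - R4 - R5, accepts R2's new path vector; R4, still
# holding the stale path vector R4 - R2 - R5, does not match and rejects.
print(check_path_vector("R1", ["R1", "R4", "R5"], ["R2", "R1", "R4", "R5"]))  # converged
print(check_path_vector("R4", ["R4", "R2", "R5"], ["R1", "R4", "R5"]))        # not_converged
```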
  • Figs. 15a and 15b are diagrams showing potential inconsistencies between data plane (the mLFIB held on each line card (LC) of a router) and control plane (the FIB, LFIB and mLFIB from which the data plane is duplicated).
  • node R1 is shown generally at 1500 and includes a control plane 1502 and a data plane 1504.
  • if the control plane 1502 and data plane 1504 are not in synchronization, there will be stale forwarding entries at R1 which can give rise to looping even if, for example, the data plane 1508 of a node R2 1506 is synchronized with the control plane 1502 of node R1.
  • in Fig. 15b an alternative possible configuration is shown in which the control plane 1502 and data plane 1504 of node R1 are synchronized with one another, but the control plane 1502 of node R1 is not synchronized with the data plane 1508 of node R2.
  • a network includes nodes R1 to R5, reference numerals 1600, 1602, 1604, 1606, 1608, 1610 respectively.
  • Nodes R3, R2 and R4 are connected to node R1.
  • Nodes R2 and R4 are further connected, and both provide paths to root node R5, with node R2 as nexthop to node R5.
  • Node R3's path vector to node R5 is therefore R3 - R1 - R4 - R2 - ... R5 as is shown generally by arrow 1612.
  • Node R3 sends a pseudo label request (label mapping) 1614 with FEC 200, root R5, label L3 for down traffic to node R1 which returns a label mapping 1616 for up traffic with FEC 200 and label L13.
  • Node R1 sends a pseudo label request (label mapping) 1618 for down traffic to node R4 with FEC 200, root R5, label L1 and node R4 returns a label mapping 1620 with FEC 200 and label L41.
  • Node R4 sends a pseudo label request (label mapping) 1622 for down traffic to node R2 with FEC 200, root R5 and label L4 and node R2 returns a label mapping 1624 for up traffic with FEC 200 and label L2.
  • Node R2 carries out further label mappings of the same type with its nexthop to node R5 and so forth.
  • Fig. 17 which is a network diagram corresponding to Fig. 16 showing forwarding of up traffic
  • Nodes R1 and R3 communicate over respective interfaces S0, reference numeral 1700
  • nodes R1 and R4 communicate via interfaces S1, 1702
  • nodes R1 and R2 communicate over interfaces S3, 1704
  • nodes R2 and R4 communicate over interfaces S2, 1706 and nodes R2 and R5 communicate over interfaces S4, 1708.
  • node R3 sends up traffic with label L13 to node R1, then node R1's forwarding table swaps label L41 for label L13 and sends traffic on interface S1 to node R4.
  • label L41 is swapped for label L2 and the packet is sent on interface S2 to node R2.
  • label L2 is swapped for label L25 and the packet is forwarded on interface S4.
  • Fig. 18 is a network diagram corresponding to that of Fig. 16 but in which node R2 no longer has a path to node R5 for example because of component failure on the path as a result of which node R4 is nexthop to root node R5.
  • problems can arise, for example, when node R2 has converged on the new topology but node R4 has not. In that case the up and down label mapping between nodes Rl and R3 and between nodes Rl and R4 are unchanged.
  • node R2 has sent a pseudo label request (label mapping) 1800 to node R1 indicating FEC 200, root R5, label L21 for down traffic and node R1 has returned an up traffic label mapping with FEC 200 and label L12 at 1802.
  • node R3's path vector to node R5 is R3 - R1 - R4 ... R5.
  • if node R4 has not converged, for example, because R2 has not yet sent a label withdrawal to R4 or because R4 has not yet processed it, then upon receipt of a packet with label L41 on interface S1, this will be forwarded to node R2 on interface S2 with label L2.
  • As node R2 has converged on the network and has as its nexthop R1 (for example because the link between node R2 and node R4 is at too high a cost) it swaps label L2 with label L12 and forwards to node R1, which returns a packet to R4, setting up a loop shown generally at 1804 and following the path R3 - R1 - R4 - R2 - R1 - R4 - R2 and so forth.
  • nexthop R1 for example because the link between node R2 and node R4 is at too high a cost
  • Fig. 19 is a network diagram corresponding to Fig. 18 and showing the respective forwarding tables or forwarding table portions for nodes R1, R4 and R2.
  • R1's forwarding table 1900 replaces label L13 with label L41 and forwards the packet on interface S1.
  • R4's forwarding table 1902 replaces L41 with label L2 and forwards on S2.
  • Node R2's forwarding table 1904 replaces label L2 with L12 and forwards on interface S3.
  • R1's forwarding table replaces L12 with L41 and forwards to node R4 on S1 and so forth. It will be seen, therefore, that looping arises in particular as a result of failure or delay in converging between nodes.
  • Fig. 20 is a flow diagram illustrating steps carried out in a clean up procedure of the data plane. At step 2000 the local data plane rewrites are cleaned up and, at step 2002, if the nexthop is changed, label withdraw and release for the local and remote labels are sent immediately. It will be noted that in this case a multicast LDP label hold time period is set at zero. At step 2004 local label, FEC mappings/bindings and rewrites are removed. All of these steps are carried out substantially immediately such that the time during which micro looping can occur is minimized because the data plane converges as quickly as possible.
  • at step 2006, if an ingress interface of a P2MP tree changed at a node then all associated MP2MP local labels are changed at that hop to make the old and new multicast LSPs disjoint.
  • all associated MP2MP local labels are changed at that hop to make the old and new multicast LSPs disjoint.
  • the new label need only be allocated if the nexthop is changed which is preferable as label space is not unnecessarily reduced.
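The clean up procedure of Fig. 20 can be sketched as follows, assuming a per-FEC record of the local label, the labels learnt from neighbours and the data plane rewrites; the record layout and the messaging callbacks are assumptions made for the example, not the patent's data structures.

```python
# Sketch of the immediate clean up of Fig. 20 on a route change.
def clean_up_on_route_change(fec_state, fec, nexthop_changed,
                             send_withdraw, send_release, allocate_label):
    state = fec_state[fec]
    state["rewrites"].clear()                      # step 2000: clean local rewrites now
    if nexthop_changed:
        # Step 2002: withdraw the local label and release remote labels
        # immediately; the multicast LDP label hold time is zero, so nothing
        # is parked for possible reuse.
        for neighbour in list(state["remote_labels"]):
            send_withdraw(neighbour, fec, state["local_label"])
            send_release(neighbour, fec, state["remote_labels"].pop(neighbour))
        # Steps 2004/2006: drop the old mappings and make the old and new LSPs
        # disjoint by allocating a brand new local label.
        state["local_label"] = allocate_label()
    return state

# Usage with illustrative values.
fec_state = {"200": {"local_label": 25, "remote_labels": {"R3": 2},
                     "rewrites": {2: ("S1", 5)}}}
clean_up_on_route_change(fec_state, "200", nexthop_changed=True,
                         send_withdraw=print, send_release=print,
                         allocate_label=lambda: 77)
print(fec_state["200"]["local_label"])   # 77: a fresh, disjoint label
```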
  • ordered mFIB/mLFIB installation is achieved by using a strict ordered convergence which may be for down traffic or in both directions i.e., for up and down traffic.
  • a strict ordered convergence which may be for down traffic or in both directions i.e., for up and down traffic.
  • Figs. 21A and 21B show a flow diagram illustrating the steps in ensuring ordered convergence in relation to the network change set out above with reference to Figs. 18 and 19.
  • node R2 acting as distributing node is considered.
  • R2 allocates its new label L21 and sends a pseudo label mapping to its new nexthop R1 with the label L21 and its new unicast path vector R2 - R1 - R4 - R5.
  • node R1 receives the pseudo label request (label mapping) and checks the received path vector against its own path vector to see if they match and/or checks whether it appears in the received path vector signifying a loop.
  • if the path vectors are the same then this indicates that the downstream node to R1 (in the sense of up traffic, i.e., node R2) is converged. However upstream nodes may not be converged. Accordingly, optionally without sending a label mapping response to node R2 at this stage, node R1 sends its own pseudo label request (label mapping) message and path vector to its nexthop node R4 at step 2106.
  • otherwise node R1 will reject the pseudo label request (label mapping) message and notify node R2 accordingly. Then, for example, node R2 may wait a pre-determined period (for example slightly exceeding the maximum convergence time for the network) and resend its label mapping message and path vector which should then be converged.
  • node R4 checks its path vector against the received path vector and if a match is established forwards its label request on to the nexthop to node R5, again optionally without returning a label mapping for up traffic to node R1.
  • node R1 and node R4 will, upon path vector match, update their down traffic forwarding tables to include the label learnt from their downstream neighbor in the down traffic sense (node R2 and R1 respectively).
  • node R1 will install node R2's label L12
  • node R4 will install node R1's label L41 and so forth.
  • root R5 receives a pseudo label request (label mapping) message, checks the path vector against its own path vector for loops and, if there is a match, at step 2112, sends a label mapping response and its path vector back upstream, in the up traffic sense.
  • the nexthop node, for example node R4, checks the path vector against its path vector and, if they are matched at step 2116, then at step 2118, if it has optionally withheld doing so, node R4 sends its label mapping for up traffic and path vector to node R1 which carries out the same steps at step 2120.
  • node R4 sends a reject and notify message to its downstream node in the up traffic sense which can then delay resending its label mapping and path vector for a suitable period as discussed above with reference to step 2105.
  • node R2 receives the up traffic label mapping response from node R1.
  • as each node receives its up traffic label mapping from the preceding node, starting at the root node, and checks that the path vector is matched, it is assured that the downstream nodes, in the up traffic sense, are converged and it can install the received label.
  • the path vector matching step in any case ensures that all of the nodes are converged.
  • This alternative approach has the advantage that there is potentially less delay in updating the up traffic forwarding table at each node as it does not have to wait for a "cascade" of up traffic label mappings down from the root.
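The ordered convergence of Figs. 21A and 21B can be sketched as follows. The sketch shows the strict variant in which a node withholds its up traffic label mapping until the cascade from the root reaches it; all structures, message names and the exact path vector matching rule are assumptions made for illustration, not the patent's implementation.

```python
class OrderedLdpNode:
    def __init__(self, name, path_vector, is_root, send):
        self.name = name
        self.path_vector = path_vector   # [this node, ..., root]
        self.is_root = is_root
        self.send = send                 # send(destination, message dict)
        self.down_labels = {}            # fec -> (downstream neighbour, its label)
        self.up_labels = {}              # fec -> up traffic label learnt from the root side
        self._next = 50

    def _allocate(self):
        self._next += 1
        return self._next

    def _converged(self, received_pv):
        """Plausible reading of the check: this node appears exactly once in the
        received path vector and the tail from it onwards equals its own."""
        if self.name not in received_pv:
            return False
        i = received_pv.index(self.name)
        return (self.name not in received_pv[i + 1:]
                and received_pv[i:] == self.path_vector)

    def on_pseudo_label_request(self, fec, label, path_vector, neighbour):
        if not self._converged(path_vector):
            # Reject and notify; the sender retries after roughly the maximum
            # convergence time of the network (step 2105).
            self.send(neighbour, {"type": "reject", "fec": fec})
            return
        self.down_labels[fec] = (neighbour, label)        # down traffic rewrite
        if self.is_root:
            self._send_up_mapping(fec)                    # step 2112: start the cascade
        else:
            # Step 2106: propagate towards the root, withholding our own up
            # traffic mapping until the root's cascade reaches us.
            self.send(self.path_vector[1],
                      {"type": "pseudo_label_request", "fec": fec,
                       "label": self._allocate(), "path_vector": self.path_vector})

    def on_up_label_mapping(self, fec, label, path_vector, neighbour):
        # Steps 2114-2118: accept only if the sender's path vector continues
        # our own, then cascade our up traffic mapping further downstream.
        if self.path_vector[1:] != path_vector:
            self.send(neighbour, {"type": "reject", "fec": fec})
            return
        self.up_labels[fec] = label
        self._send_up_mapping(fec)

    def _send_up_mapping(self, fec):
        if fec in self.down_labels:
            neighbour, _ = self.down_labels[fec]
            self.send(neighbour, {"type": "up_label_mapping", "fec": fec,
                                  "label": self._allocate(),
                                  "path_vector": self.path_vector})

# Usage: the chain R1 -> R4 -> root R5 after the change of Fig. 18.
inbox = []
send = lambda dst, msg: inbox.append((dst, msg))
r5 = OrderedLdpNode("R5", ["R5"], is_root=True, send=send)
r4 = OrderedLdpNode("R4", ["R4", "R5"], is_root=False, send=send)
r4.on_pseudo_label_request("200", label=21, path_vector=["R1", "R4", "R5"], neighbour="R1")
r5.on_pseudo_label_request("200", label=inbox[-1][1]["label"],
                           path_vector=["R4", "R5"], neighbour="R4")
r4.on_up_label_mapping("200", label=inbox[-1][1]["label"],
                       path_vector=["R5"], neighbour="R5")
print(inbox[-1])   # R4's up traffic mapping cascading towards R1
```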
  • Fig. 22 which is a network diagram corresponding to Fig. 18.
  • Node R2 sends a pseudo label request 2202 with its down traffic label mapping FEC 200, root R5, label L21 and its converged path vector R2 - R1 - R4 - ... R5.
  • Node R1, which is also converged, matches this path vector and forwards its pseudo label request (label mapping) 2204 to node R4.
  • node R4's path vector is R4 - R2 - R5 and, as node R4 appears, implying a loop, it sends a reject and notify message to R1.
  • Similarly, if node R4 has received a label release it sends a pseudo label request (label mapping) 2206 with FEC 200, root R5, label L4 and path vector R4 - R2 - R5 which is rejected and notified by node R2.
  • nodes may also be triggered to send pseudo label requests according to the route table. For example if there has been a unicast path vector change at node R3 it sends a pseudo label request with FEC 200, root R5, label L3 and path vector R3 - R1 - R4 - ... R5 to node R1. It will be appreciated that further optimizations are available. For example if the ordered convergence is also being used for unicast, then instead of sending the path vectors multiple times for each, the unicast path vector can be obtained and used to update the forwarding tables and carry out additional steps as described above appropriately. If there is a path vector change downstream in the up traffic sense it can be notified to an upstream node in an event driven mode allowing a separate check as to where the convergence has taken place.
  • the pseudo label request message and unicast path vector need only be sent to "affected nodes", that is all those downstream nodes in the up traffic sense which have a nexthop change.
  • if the request reaches a destination node where there is no nexthop change, it can act as though it were the root node inasmuch as it sends an up traffic label mapping response.
  • nodes unaffected by the change do not need to update their forwarding tables and hence additional signaling and computing time is avoided using this approach.
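A small sketch of the "affected nodes" test follows, assuming that each node's old and new unicast nexthop towards the tree root is known; the example values are illustrative only and loosely based on Figs. 16 and 18.

```python
def affected_nodes(old_nexthop_to_root, new_nexthop_to_root):
    """Nodes whose unicast nexthop towards the tree root changed; only these
    need to take part in the ordered label distribution described above. A
    node that is unaffected can respond as though it were the root, returning
    an up traffic label mapping instead of propagating the request."""
    return {node for node, nexthop in new_nexthop_to_root.items()
            if old_nexthop_to_root.get(node) != nexthop}

# Illustrative values: R2 now reaches root R5 via R1 and R4 now reaches R5
# directly, while R1 and R3 keep their previous nexthops.
old = {"R3": "R1", "R1": "R4", "R4": "R2", "R2": "R5"}
new = {"R3": "R1", "R1": "R4", "R4": "R5", "R2": "R1"}
print(sorted(affected_nodes(old, new)))   # ['R2', 'R4']
```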
  • downstream nodes can start using the path immediately.
  • looping is reduced.
  • effectively unicast routing is used to keep the base P2MP (down traffic) tree loop free, relying on ordered label distribution upstream (in the down traffic sense) through the affected nodes as a result of which convergence progresses in an ordered manner in an upstream direction.
  • up traffic is also accommodated in the MP2MP case as the up traffic label mappings are distributed in the opposite direction on the same tree but again corresponding to ordered convergence in the upstream direction in the up traffic sense.
  • FIG. 23 is a block diagram that illustrates a computer system 140 upon which the method may be implemented. The method is implemented using one or more computer programs running on a network element such as a router device. Thus, in this embodiment, the computer system 140 is a router.
  • Computer system 140 includes a bus 142 or other communication mechanism for communicating information, and a processor 144 coupled with bus 142 for processing information.
  • Computer system 140 also includes a main memory 146, such as a random access memory (RAM), flash memory, or other dynamic storage device, coupled to bus 142 for storing information and instructions to be executed by processor 144.
  • Main memory 146 may also be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 144.
  • Computer system 140 further includes a read only memory (ROM) 148 or other static storage device coupled to bus 142 for storing static information and instructions for processor 144.
  • a storage device 150 such as a magnetic disk, flash memory or optical disk, is provided and coupled to bus 142 for storing information and instructions.
  • a communication interface 158 may be coupled to bus 142 for communicating information and command selections to processor 144.
  • Interface 158 is a conventional serial interface such as an RS-232 or RS-422 interface.
  • An external terminal 152 or other computer system connects to the computer system 140 and provides commands to it using the interface 158.
  • Firmware or software running in the computer system 140 provides a terminal interface or character-based command interface so that external commands can be given to the computer system.
  • a switching system 156 is coupled to bus 142 and has an input interface and a respective output interface (commonly designated 159) to external network elements.
  • the external network elements may include a plurality of additional routers 160 or a local network coupled to one or more hosts or routers, or a global network such as the Internet having one or more servers.
  • the switching system 156 switches information traffic arriving on the input interface to output interface 159 according to pre-determined protocols and conventions that are well known. For example, switching system 156, in cooperation with processor 144, can determine a destination of a packet of data arriving on the input interface and send it to the correct destination using the output interface.
  • the destinations may include a host, server, other end stations, or other routing and switching devices in a local network or Internet.
  • the computer system 140 implements, as a node acting as root, leaf or transit node, the above described method.
  • the implementation is provided by computer system 140 in response to processor 144 executing one or more sequences of one or more instructions contained in main memory 146.
  • Such instructions may be read into main memory 146 from another computer-readable medium, such as storage device 150. Execution of the sequences of instructions contained in main memory 146 causes processor 144 to perform the process steps described herein.
  • processors in a multiprocessing arrangement may also be employed to execute the sequences of instructions contained in main memory 146.
  • hard-wired circuitry may be used in place of or in combination with software instructions to implement the method. Thus, embodiments are not limited to any specific combination of hardware circuitry and software.
  • Non-volatile media includes, for example, optical or magnetic disks, such as storage device 150.
  • Volatile media includes dynamic memory, such as main memory 146.
  • Transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus 142. Transmission media can also take the form of wireless links such as acoustic or electromagnetic waves, such as those generated during radio wave and infrared data communications.
  • Computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read.
  • Various forms of computer readable media may be involved in carrying one or more sequences of one or more instructions to processor 144 for execution.
  • the instructions may initially be carried on a magnetic disk of a remote computer.
  • the remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem.
  • a modem local to computer system 140 can receive the data on the telephone line and use an infrared transmitter to convert the data to an infrared signal.
  • An infrared detector coupled to bus 142 can receive the data carried in the infrared signal and place the data on bus 142.
  • Bus 142 carries the data to main memory 146, from which processor 144 retrieves and executes the instructions.
  • Interface 159 also provides a two-way data communication coupling to a network link that is connected to a local network.
  • the interface 159 may be an integrated services digital network (ISDN) card or a modem to provide a data communication connection to a corresponding type of telephone line.
  • ISDN integrated services digital network
  • the interface 159 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN.
  • LAN local area network
  • Wireless links may also be implemented.
  • the interface 159 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
  • the network link typically provides data communication through one or more networks to other data devices.
  • the network link may provide a connection through a local network to a host computer or to data equipment operated by an Internet Service Provider (ISP).
  • ISP Internet Service Provider
  • the ISP in turn provides data communication services through the world wide packet data communication network now commonly referred to as the "Internet".
  • the local network and the Internet both use electrical, electromagnetic or optical signals that carry digital data streams.
  • the signals through the various networks and the signals on the network link and through the interface 159, which carry the digital data to and from computer system 140, are exemplary forms of carrier waves transporting the information.
  • Computer system 140 can send messages and receive data, including program code, through the network(s), network link and interface 159.
  • a server might transmit a requested code for an application program through the Internet, ISP, local network and communication interface 158.
  • One such downloaded application provides for the method as described herein.
  • the received code may be executed by processor 144 as it is received, and/or stored in storage device 150, or other non-volatile storage for later execution. In this manner, computer system 140 may obtain application code in the form of a carrier wave.
  • the method steps set out can be carried out in any appropriate order, and aspects from the examples and the embodiments described may be juxtaposed or interchanged as appropriate. The method can be applied in any network of any topology supporting multicast, in relation to any component change in the network, for example a link or node failure or the introduction or removal of a network component by an administrator, and in relation to up or down traffic loops.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

A method of distributing labels in a label distribution protocol multicast network having a root node and at least one leaf node comprises the steps, performed at a receiving node, of receiving a label and path vector from a distributing node, carrying out loop or convergence detection from the received path vector (1404) and, if convergence or no loop is detected, sending a receiving node label and path vector (1406) to its nexthop node in the network.

Description

A METHOD AND APPARATUS FOR DISTRIBUTING LABELS IN A LABEL DISTRIBUTION PROTOCOL MULTICAST NETWORK
BACKGROUND OF THE INVENTION
Field of the Invention The present invention generally relates to distribution of labels, for example,
Multi Protocol Label Switching (MPLS) labels. The invention relates more specifically to a method and apparatus for distributing labels in a Label Distribution Protocol (LDP) multicast network.
Background Information The approaches described in this section could be pursued, but are not necessarily approaches that have been previously conceived or pursued. Therefore, unless otherwise indicated herein, the approaches described in this section are not prior art to the claims in this application and are not admitted to be prior art by inclusion in this section.
In computer networks such as the Internet, packets of data are sent from a source to a destination via a network of elements including links (communication paths such as telephone or optical lines) and nodes (for example, routers directing the packet along one or more of a plurality of links connected to it) according to one of various routing protocols.
MPLS is a protocol that is well known to the skilled reader and which is described in document "Multi Protocol Label Switching Architecture" which is available at the time of writing on the file "rfc3031.txt" in the directory "rfc" of the domain "ietf.org" on the World Wide Web. According to MPLS, a path for a source-destination pair is established, and values required for forwarding a packet between adjacent routers in the path together with headers or "labels" are prepended to the packet. The labels are used to direct the packet to the correct interface and next hop. The labels precede the IP or other header allowing smaller outer headers.
The path for the source-destination pair, termed a Label Switched Path (LSP) can be established according to various different approaches. One such approach is Label Distribution Protocol (LDP) in which each router in the path sends its label to the neighbor routers according to its IP routing table. LDP labels are sent to the neighbor routers in a label mapping message which can include as one of its TLV (Type Length Value) fields a path vector specifying the LSP. For each LSP created, a forwarding equivalent class (FEC) is associated with the path specifying which packets are mapped to it. A Label Forwarding Information Base (LFIB) stores the FEC, the next-hop information for the LSP, and the label required by the next hop.
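As an illustration of the data structure just described, the following Python sketch models a simplified LFIB keyed by the incoming (local) label, with each entry recording the FEC, the outgoing label learnt from the next hop, the next hop and the outgoing interface. The class and field names are assumptions made for the example, not taken from any implementation.

```python
from dataclasses import dataclass

@dataclass
class LfibEntry:
    fec: str            # forwarding equivalence class, e.g. an address prefix or tree id
    out_label: int      # label required by the next hop
    next_hop: str       # next-hop router for the LSP
    out_interface: str  # interface leading to the next hop

class Lfib:
    def __init__(self):
        self.entries = {}   # incoming label -> LfibEntry

    def install(self, in_label, entry):
        self.entries[in_label] = entry

    def forward(self, in_label, payload):
        """Swap the incoming label for the next hop's label and return
        (interface, labelled packet), or None if no binding exists."""
        entry = self.entries.get(in_label)
        if entry is None:
            return None                      # unknown label: drop
        packet = (entry.out_label, payload)  # the label precedes the payload
        return entry.out_interface, packet

# Usage: a transit router swapping label 5 for label 2 towards interface S1.
lfib = Lfib()
lfib.install(5, LfibEntry(fec="200", out_label=2, next_hop="R2", out_interface="S1"))
print(lfib.forward(5, b"ip-payload"))  # ('S1', (2, b'ip-payload'))
```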
MPLS LDP approaches have further been applied to multicast networks. Conventionally multicast networks rely on unicast routing protocols. Unicast routing protocol relies on a routing algorithm resident at each node. Each node on the network advertises the routes throughout the network. The routes are stored in a routing information base (RIB) and based on these results a forwarding information base (FIB) or forwarding table is updated to control forwarding of packets appropriately. When there is a network change, a notification representing the change is flooded through the network by each node adjacent the change, each node receiving a notification sending it to each adjacent node.
As a result, when a data packet for a destination node arrives at a node, the node identifies the optimum route to that destination and forwards the packet via the correct interface to the next node ("NEXT_HOP") along that route. The next node repeats this step and so forth. Link state protocols can support multicast traffic comprising point to multipoint traffic (P2MP) and multipoint to multipoint traffic (MP2MP). For example IP (internet protocol) multicast is well known to the skilled reader and is described in document "Internet Protocol Multicast" which is available at the time of writing on the file "ipmulti.htm" in the directory "univercd/cc/td/doc/cisintwk/ito_doc" of the domain "www.cisco.com" of the World Wide Web.
Multicast allows data packets to be forwarded to multiple destinations (or "receivers") without unnecessary duplication, reducing the amount of data traffic accordingly. All hosts wishing to become a receiver for a multicast group perform a "join" operation to join the multicast group. A multicast tree such as a shortest path tree is then created providing routes to all receivers in the group. The multicast group in a P2MP group is denoted (S,G) where S is the address of the source or broadcasting host and G is an IP multicast address taken from a reserved address space. As a result routers receiving a packet from the source S to the multicast address G send the packet down each interface providing a next hop along the route to any receiver on the tree.
In the case of MP2MP multicasts, a shared group is denoted (*,G) allowing multiple sources to send to multiple receivers. The multicast tree is constructed as a shared tree including a shared root or rendezvous point (RP). During forwarding of multicast data at a router, when a packet is received at the router with a multicast address as destination address, the router consults the multicast forwarding table and sends the packet to the correct next hop via the corresponding interface. As a result, even if the path from the next hop subsequently branches to multiple receivers, only a single multicast packet needs to be sent to the next hop. If, at the router, more than one next hop is required, that is to say the multicast tree branches at the router, then the packet is copied and sent on each relevant output interface.
In order to avoid looping, each router ensures that data is only sent away from the source and towards the receiver as otherwise traffic would loop back, which is impermissible in multicast. In order to achieve this the router carries out a reverse path forwarding (RPF) check to ensure that the incoming packet has arrived on the appropriate input interface. If the check fails then the packet is dropped. The router uses the unicast forwarding table to identify the appropriate upstream and downstream interfaces in the tree as part of the RPF and only forwards packets arriving from the upstream direction. Multicast methods which make use of existing forwarding information in this manner belong to the family of "protocol independent multicast" (PIM) methods as they are independent of the specific routing protocol adopted at each router.
More recently the use of MPLS multicast has been explored and in particular the use of LDP has been discussed for building receiver driven multicast trees. One such approach is described in "Label Distribution Protocol Extensions for Point-to-Multipoint Label Switched Paths" of I. Minei et al., which is available at the time of writing on the file "draft-minei-wijnands-mpls-ldp-p2mp-00.txt" in the directory "wg/mpls" of the domain "tools.ietf.org". The approach described therein can be understood further with reference to Fig. 1 which is a network diagram illustrating a P2MP network and Fig. 2 which is a flow diagram illustrating the steps involved in a node joining the network. The network shown in Fig. 1 is designated generally 100 and includes nodes comprising, for example routers R1, reference 102, R2, reference numeral 104, R3, reference numeral 106 and R4, reference numeral 108. Nodes R1, R2 and R4 are joined to node R3 via respective interfaces S0, S1, S2, reference numerals 110, 112, 114 respectively. Nodes R1 and R2 comprise leaf or receiver nodes which can receive multicast traffic from root node R4 via transit node R3.
Referring to Fig. 2, at step 200, receiver node R2 joins the multicast tree according to any appropriate mechanism, and obtains the relevant identifiers of the tree, namely the root node and the FEC of traffic belonging to the tree. It then creates an LDP path from the root R4. In particular, at step 202 R2 identifies its nexthop to the root of the tree for example from its IP forwarding table, in the present case, node R3. At step 204 node R2 constructs a P2MP label mapping message 116 indicating the multicast tree FEC (for example an identifier "200"), the root R4 of the multicast tree and the label it pushes to R3, label L2. In the case of a P2MP network the downstream direction for traffic is from R4 via R3 to R2 and hence the label mapping message is sent upstream from R2 to R3.
At step 206 node R3 similarly allocates a label L5 and updates its forwarding state such that incoming packets with label L5 will have the label swapped for label L2 and forwarded along interface S1 to R2. Node R3 further sends a P2MP label mapping message to node R4 indicating the FEC 200, the root R4 and its label L5 at step 208. At step 210 root node R4 updates its forwarding state with label L5 for the FEC 200. It will be noted that steps 200 to 210 are repeated for each leaf or receiver node joining the multicast tree. For example if node R1 joins the tree then it sends a P2MP label mapping message to R3 with FEC 200, root R4 and label L1. In this case, as is appropriate for multicast, R3 does not construct a further label to send to R4 but adds label L1 to the forwarding state corresponding to incoming packets with label L5.
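The join procedure of Fig. 2 can be illustrated with a short sketch of how a transit node such as R3 might process P2MP label mapping messages from downstream receivers: one local label is allocated per (root, FEC), each advertised label becomes an output branch, and a single mapping is propagated towards the root only for the first receiver. The code is an illustrative sketch under these assumptions, not the patent's implementation.

```python
class P2mpTransitNode:
    def __init__(self, name, send_mapping):
        self.name = name
        self.send_mapping = send_mapping      # callback: (upstream_nexthop, fec, root, label)
        self.local_labels = {}                # (root, fec) -> local label
        self.branches = {}                    # local label -> list of (interface, out_label)
        self._next_label = 100

    def on_label_mapping(self, fec, root, label, from_interface, upstream_nexthop):
        key = (root, fec)
        first_receiver = key not in self.local_labels
        if first_receiver:
            self.local_labels[key] = self._next_label
            self.branches[self._next_label] = []
            self._next_label += 1
        local = self.local_labels[key]
        # Down traffic arriving with `local` is replicated on every branch.
        self.branches[local].append((from_interface, label))
        if first_receiver:
            # Only one mapping is sent towards the root per multicast FEC.
            self.send_mapping(upstream_nexthop, fec, root, local)

# Usage mirroring Fig. 2: R2 then R1 join FEC 200 rooted at R4 through R3.
sent = []
r3 = P2mpTransitNode("R3", lambda *m: sent.append(m))
r3.on_label_mapping(fec="200", root="R4", label=2, from_interface="S1", upstream_nexthop="R4")
r3.on_label_mapping(fec="200", root="R4", label=1, from_interface="S0", upstream_nexthop="R4")
print(sent)         # one mapping towards R4 carrying R3's local label
print(r3.branches)  # {100: [('S1', 2), ('S0', 1)]}
```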
P2MP LDP Multicast can be further understood with reference to Fig. 3 which shows the network of Fig. 1 with the datapath of multicast traffic, and Fig. 4 which comprises a flow diagram showing the steps performed in the forwarding operation. At step 400 the root node R4, acting as ingress node to the P2MP network, recognizes in any appropriate manner traffic for example ingress IP traffic for the multicast tree 100 and forwards the traffic shown as packet 300 to which the label L5 302 is appended to an IP payload 304. The forwarding table or multicast LFIB (mLFIB) 306 maintained at R3 for traffic incoming on interface S2 is shown in Fig. 3 for "down" traffic, that is, traffic from the root to the receivers. At step 402 node R3 carries out an RPF check to ensure that the incoming packet with label L5 arrived on the correct interface S2. If so, then at step 404, labels L1 and L2 are swapped for label L5 for forwarding along respective interfaces S0 and S1. As a result packets 308, 310 are sent to the respective receivers with the appropriate label appended to the payload.
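The forwarding steps of Fig. 4 reduce to an RPF check followed by per-branch replication with a label swap, as in the following sketch; the mLFIB layout shown is an assumption for the example, not the patent's table format.

```python
def forward_down_traffic(mlfib, in_label, in_interface, payload):
    entry = mlfib.get(in_label)
    if entry is None:
        return []                              # no state: drop
    if in_interface != entry["rpf_interface"]:
        return []                              # RPF check failed: drop
    # Replicate once per branch, swapping the incoming label for each
    # downstream neighbour's label.
    return [(out_if, (out_label, payload))
            for out_if, out_label in entry["branches"]]

# mLFIB at R3 for FEC 200: traffic with label 5 must arrive on S2 and is
# replicated to R1 (label 1 on S0) and R2 (label 2 on S1).
mlfib_r3 = {5: {"rpf_interface": "S2", "branches": [("S0", 1), ("S1", 2)]}}
print(forward_down_traffic(mlfib_r3, 5, "S2", b"data"))  # two copies
print(forward_down_traffic(mlfib_r3, 5, "S1", b"data"))  # [] - wrong interface
```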
Provision is also made for withdrawal of labels. For example, referring to Fig. 5, which is a flow diagram illustrating the steps performed in a label withdrawal transaction, where a node, for example node R2, wishes to leave the multicast tree then at step 500 it sends a label withdraw message to its nexthop neighbor R3. At step 502, node R3 deletes the relevant state, for example label L2, and at step 504 R3 sends a label release message to R2. It will be noted that if node R1 also leaves the tree then node R3 will remove all of the state corresponding to FEC 200 and will send a label withdraw message to node R4. Fig. 6 is a flow diagram illustrating the steps performed when a nexthop changes but without removal of any receiver node from the multicast tree. An example topology is shown in Fig. 7, which is a network diagram corresponding to Figs. 1 and 3 but with an additional node R5 700 as node R3's nexthop to node R4, and an additional node R6 702 as an alternative nexthop for node R2 to node R4. Node R2's nexthop to node R4 will change if the link between node R5 and node R4 fails, for example to node R6.
In that case at step 600 node R2 sends a label withdraw message to node R3 and at step 602 node R2 clears the relevant entries in its mLFIB. At step 604 node R2 sends its new label, for example L6, to node R6 following the label mapping procedures described above with reference to Fig. 2. At step 606 node R6 installs the label L6 and forwards a label mapping message to root R4, again in the manner described above.
It will be noted that LDP allocates a local label for every FEC it learns, and if the FEC is removed, the local label and an associated binding (i.e., the corresponding remote labels) for the FEC are preserved for a timeout period. If the FEC is reinstated before the timeout expires, LDP uses the same local label binding for that FEC. Accordingly, where there is a network change which changes the route of the multicast tree's unicast nexthop, the same local label binding is used and rewritten in an ingress interface independent manner such that the label rewrite is used on the data plane, i.e., in the mLFIB, before and after the network change.
In the case of an MP2MP multicast network, this is effectively treated as multiple individual P2MP networks in which each leaf can either be a receiver from the root node, as with a P2MP network, or a sender of multicast traffic to the other leaves on the network. Because of this bi-directionality it will be noted that traffic can be considered as either "down traffic", i.e., from the root to the leaves acting as receivers, or "up traffic" in the form of traffic from the leaves, acting as senders, towards the root. Accordingly the direction of "upstream" and "downstream" traffic depends on whether it is "up traffic", in which case the downstream direction is towards the root, or "down traffic", in which case the downstream direction is away from the root. Further discussion of MP2MP multicast with LDP is provided in "Multicast Extensions for LDP" of Wijnands et al., which is available at the time of writing in the file "draft-wijnands-mpls-ldp-mcast-ext-00.txt" in the directory "pub/id" of the domain "watersprings.org" on the World Wide Web. Fig. 8 is a network diagram showing an MP2MP network. The network shown is different to that shown in Figs. 1 and 3 and hence different numbering is used although the nodes are named similarly. In particular the network is designated generally 800 and includes receiver/sender nodes R1, R2, reference numerals 802, 806, a transit node R3, reference numeral 808, a root node R4, reference numeral 810 and a further receiver/sender node R5, reference numeral 812. Nodes R1, R2 and R4 are joined to node R3 via respective interfaces S0, 814, S1, 816, and S2, 818. Node R4 is joined to node R5 by a further interface S3. It will be noted that the root node R4 is a shared root although it may in addition be an ingress or receiver or sender node as appropriate.
Fig. 9 is a flow diagram illustrating the manner in which a receiver/sender node, for example node R2, joins an MP2MP multicast tree. At step 900 node R2 joins the tree and at step 902 node R2 identifies its nexthop to the root node R4, namely node R3, in the manner described above with respect to P2MP. At step 904 node R2 sends a pseudo label request label mapping message or pseudo label 820 to node R3. The pseudo label request includes identification of the FEC 200, the root R4, and R2's ingress label L2. Accordingly the message is generally in a similar form to a P2MP label mapping message; however, it is termed here a pseudo label request label mapping message as it must be distinguishable from a standard P2MP label mapping message, as described in more detail below. In practice, of course, the message can be recognizable as a pseudo label request message in any appropriate manner. At step 906, node R3 recognizes the message as a pseudo label request message and sends a return MP2MP label mapping to node R2 identifying the FEC 200 and providing its own ingress label L3. As a result node R3 provides a label to node R2 for use with "up traffic" from R2 towards the root. At step 908 node R3 sends a pseudo label request message 824 to node R4 indicating the FEC 200, root R4 and node R3's ingress label L5. At step 910, node R4 sends its MP2MP label mapping 826 for up traffic to node R3 indicating FEC 200 and its ingress label L6.
It will be noted that each additional receiver/sender carries out the same procedure; for example node R1 will send a pseudo label request message 828 to node R3 indicating FEC 200, root R4 and label L1 and will receive a label mapping 830 from R3 indicating FEC 200 and label L4 for up traffic.
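The exchange just described can be sketched as below. This is a minimal model under assumed data structures; the message and class names (PseudoLabelRequest, UpLabelMapping, Mp2mpNode) are hypothetical and not drawn from the patent or from any LDP implementation.

```python
from dataclasses import dataclass

@dataclass
class PseudoLabelRequest:
    fec: int
    root: str
    down_label: int       # sender's ingress label for down traffic

@dataclass
class UpLabelMapping:
    fec: int
    up_label: int         # label the requester should use for up traffic

class Mp2mpNode:
    def __init__(self, name, nexthop_to_root):
        self.name = name
        self.nexthop_to_root = nexthop_to_root
        self.next_label = 0
        self.joined = {}  # (fec, root) -> this node's down-traffic ingress label

    def allocate_label(self):
        self.next_label += 1
        return self.next_label

    def on_pseudo_label_request(self, msg, sender, send):
        # Return an up-traffic label to the requesting neighbor (e.g. R3 -> R2: L3).
        send(sender, UpLabelMapping(msg.fec, self.allocate_label()))
        key = (msg.fec, msg.root)
        if key not in self.joined and self.name != msg.root:
            # First join for this tree: propagate a pseudo label request rootward
            # carrying this node's own down-traffic ingress label (e.g. R3 -> R4: L5).
            self.joined[key] = self.allocate_label()
            send(self.nexthop_to_root,
                 PseudoLabelRequest(msg.fec, msg.root, self.joined[key]))
```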
Fig. 10 is a network diagram corresponding to Fig. 8 and showing some of the forwarding state or mLFIBs constructed following the transactions described with reference to Fig. 9. In particular, for down traffic at node R3, that is traffic arriving from root node R4 on interface S2, the forwarding table is shown at 840. Referring to Fig. 11, which is a flow diagram illustrating forwarding of MP2MP multicast traffic, at step 1100 traffic arriving with label L5 is RPF checked to ensure that it arrived on ingress interface S2. Then at step 1102 label L5 is replaced by label L1 and the traffic is forwarded on interface S0 to node R1. At step 1104 label L2 is added and the traffic forwarded on interface S1 to node R2.
For up traffic from node R1 towards the root on interface S0, forwarding table 842 is shown, and forwarding of such traffic at node R3 can be understood with reference to Fig. 12, which is a flow diagram illustrating forwarding of incoming traffic on interface S0. At step 1200 an RPF check is carried out on traffic with label L4 to ensure that it arrives on interface S0. At step 1202 traffic to node R4 is forwarded on interface S2 with label L6. It will be noted that this label is learnt from the MP2MP label mapping from node R4. At step 1204, label L2 is added for traffic on interface S1 for node R2. It will be noted that this forwarding information can be inherited from the downstream state table 840. Table 844 shows the forwarding state for up traffic received at node R3 on interface S1 from node R2. Fig. 13 is a flow diagram illustrating the steps in forwarding said up traffic. At step 1300 an RPF check is carried out on traffic carrying label L3 to ensure that it arrived on interface S1. At step 1302 traffic towards the root R4 is forwarded on interface S2 with label L6, which again is learnt from the MP2MP label mapping from node R4. At step 1304 traffic for node R1 is forwarded on interface S0 with label L1, which again is inherited from the downstream state.
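The inheritance of up-traffic state from the down-traffic state can be illustrated with the following sketch, which is an assumption-laden simplification rather than the patented code: the rootward entry comes from the MP2MP mapping learnt from the root side, while the remaining branches are copied from the down-traffic state, excluding the interface the up traffic arrived on.

```python
def build_up_rewrite(in_interface, rootward, down_branches):
    """rootward: (up_label, rootward_interface); down_branches: [(label, interface), ...]."""
    rewrite = [rootward]
    # Replicate to the other leaves using the down-traffic branches, but never back
    # out of the interface the up traffic arrived on.
    rewrite += [(lbl, intf) for (lbl, intf) in down_branches if intf != in_interface]
    return rewrite

# R3's table 842 for up traffic from R1 on S0: forward to R4 on S2 with L6 and to R2
# on S1 with L2 (inherited from the down-traffic state 840).
print(build_up_rewrite("S0", (6, "S2"), [(1, "S0"), (2, "S1")]))
# -> [(6, 'S2'), (2, 'S1')]
```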
It will be noted that as a result of this arrangement, only a restricted label space is required and labels are reused where possible. In addition, information can be inherited from appropriate routing tables. Yet further, it will be seen that up traffic does not need to proceed all of the way to the root before it can be multicast to all other receivers, but can be forwarded at transit nodes as appropriate. For example, traffic from node R1 acting as sender to node R2 acting as receiver is sent to R3, which then forwards it directly to node R2 rather than up to node R4 and back again. A problem inherent in both unicast and multicast traffic is that of micro looping.
In essence, micro loops occur when a network change takes place and nodes converge on the new network at different times. While the nodes are not all converged, there is a risk that one node will forward according to an old topology whereas another node will forward according to a new topology, such that traffic will be sent back and forth between two or more nodes in a micro loop. In IP networks, transient micro loops can occur, for example, because of control plane inconsistency between local and remote devices (that is, for example, inconsistencies in the RIB), control and data plane inconsistency on a local device (that is, inconsistencies between the RIB and the FIB if the FIB has not yet been updated), and inconsistencies on the data plane between local and remote devices, for example where the FIBs or LFIBs of respective nodes are converged on different topologies.
Transient micro loops are in fact common in IP networks, and in unicast IP routing the impact and number of devices affected is restricted. However, in the case of multicast networks there is the risk of exponential traffic loops during convergence. For example, if there are 100,000 multicast trees through a multicast core router such as router R3 then, during a network change, transient micro loops could bring down the entire network.
Other problems can arise as a result of re-using labels. Typically, on the control plane, a local label withdraw message is sent to the old nexthop and the same label may be distributed to the new nexthop. There is no strict timing for sending the label withdraws and releases and, even if the withdraw message is sent to the old nexthop before the label mapping message is sent to the new nexthop, because of the asynchronous nature of the communication and processing of the node or router, the old nexthop may not have been updated before the new nexthop uses the label, which can lead to a FIB/mLFIB inconsistency between local and remote devices for a period of time. In particular, because the local label is the same for the old and new trees, ingress traffic from the new tree could be forwarded to the old tree and traffic from the old tree could be forwarded to the new tree, forming a transient micro loop. Similarly the reverse can take place, whereby traffic from the old tree is forwarded to the new tree which is then forwarded back to the old tree to form a transient micro loop.
In fact, the problem is exacerbated as nodes do not withdraw or release their labels immediately but wait for the label hold down timer to expire, which again slows down the convergence process and increases the window in which errors can occur.
BRIEF DESCRIPTION OF THE DRAWINGS
The present invention is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which like reference numerals refer to similar elements and in which:
FIG. 1 is a schematic diagram illustrating a P2MP network; FIG. 2 is a flow diagram illustrating the steps involved in addition of a leaf to a P2MP network;
FIG. 3 is a network diagram corresponding to Fig. 1 showing the forwarding of multicast traffic on a P2MP network;
FIG. 4 is a flow diagram illustrating the steps involved in forwarding multicast data on a P2MP network;
FIG. 5 is a flow diagram illustrating the steps involved in a label withdraw session when a leaf leaves a P2MP network; FIG. 6 is a flow diagram illustrating the steps involved in a label withdraw session when a nexthop changes in a P2MP network;
FIG. 7 is a schematic diagram of a network as shown in FIG. 3 with additional nodes to illustrate a nexthop change; FIG. 8 is a schematic diagram illustrating an MP2MP network;
FIG. 9 is a flow diagram illustrating the steps involved in addition of a leaf to an MP2MP network;
FIG. 10 is a schematic diagram of an MP2MP network corresponding to that of FIG. 8 showing forwarding of up and down traffic; FIG. 11 is a flow diagram showing steps involved in forwarding down traffic in an MP2MP network;
FIG. 12 is a flow diagram showing steps involved in forwarding up traffic from a first leaf in an MP2MP network;
FIG. 13 is a flow diagram showing steps involved in forwarding up traffic from a further leaf in an MP2MP network;
FIG. 14 is a flow diagram showing at a high level steps involved in avoiding loops in an LDP multicast network;
FIG. 15a is a schematic diagram showing a first circumstance in which loops may occur in a network; FIG. 15b is a schematic diagram showing a second circumstance in which looping may occur in a network;
FIG. 16 is a schematic diagram showing an MP2MP network in steady state;
FIG. 17 is a network diagram corresponding to that of FIG. 16 showing forwarding of up traffic; FIG. 18 is a network diagram corresponding to the network of FIG. 16 after a route change; FIG. 19 is a network diagram corresponding to the network of FIG. 18 showing a forwarding loop for up traffic;
FIG. 20 is a flow diagram showing in more detail steps involved in preventing looping; FIG. 21A is a flow diagram showing in more detail steps involved in preventing loops according to another aspect;
FIG. 21B is a continuation of the flow diagram of FIG. 21A;
FIG. 22 is a network diagram corresponding to FIG. 18 showing steps involved in loop prevention; and FIG. 23 is a block diagram that illustrates a computer system upon which a method for distributing labels in an LDP multicast network may be implemented.
DETAILED DESCRIPTION OF AN ILLUSTRATIVE EMBODIMENT
A method and apparatus for distributing labels in a label distribution protocol multicast network is described. In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring the present invention.
Embodiments are described herein according to the following outline:
1.0 General Overview
2.0 Structural and Functional Overview
3.0 Method of distributing labels in a label distribution protocol multicast network
4.0 Implementation Mechanisms - Hardware Overview
5.0 Extensions and Alternatives
1.0 GENERAL OVERVIEW
The needs identified in the foregoing Background, and other needs and objects that will become apparent from the following description, are achieved in the present invention, which comprises, in one aspect, a method of distributing labels in a label distribution protocol multicast network having a root node and at least one leaf node. The method comprises the steps, performed at a receiving node, of receiving a label and path vector from a distributing node, carrying out loop or convergence detection from the received path vector and, if convergence or no loop is detected, sending a receiving node label and path vector to its nexthop node in the network together with a path root.
The needs identified in the foregoing Background, and other needs and objects that will become apparent from the following description, are achieved in the present invention, which comprises, in one aspect, a method of forwarding data in a data communications network having a plurality of nodes.
2.0 STRUCTURAL AND FUNCTIONAL OVERVIEW
In overview, a transient micro loop prevention technique for P2MP and MP2MP trees is provided in relation to both down and up traffic related micro loops. In particular an improved clean up procedure is provided according to which a node's local rewrites and mappings are cleared after every nexthop change, removing local label and FEC mappings or bindings, and label release and withdraw messages are sent for all local and remote labels, and an ordered mFIB/mLFIB installation is carried out using strict path vector-based ordered label distribution and/or rewrite installation procedures. In addition, old and new multicast LSPs are made disjoint by ensuring that new labels are assigned if the nexthop is changed.
Fig. 14 is a flow diagram showing, at a high level, the steps involved according to the method described herein. At step 1400, when a change takes place, the corresponding local data plane rewrites are cleaned up immediately and, if the nexthop is changed, label withdrawal and release for the local and remote labels are also sent immediately. Importantly, the multicast LDP label hold time is set to zero such that the labels are not held open in case the FEC is reintroduced. At step 1401, all local labels associated with a nexthop change are replaced with new labels.
At step 1402 a node such as a distributing node sends a pseudo label request (label mapping) message to its nexthop towards the root node, in conjunction with its path vector, that is, information representing the path to the root. The distributing node may be a node adjacent to a nexthop change and may comprise multiple nodes.
At step 1404, the nexthop node towards the root node carries out a loop or convergence detection step: it checks the received path vector against its own path vector and, if no loop is detected, it sends a pseudo label request (label mapping) message to its nexthop towards the root node together with its path vector. This is repeated at each node until the root node receives a pseudo label request (label mapping) message and path vector, which are checked against its own path vector. If they match, then the root node sends its label mapping message with an up traffic label and its path vector back down the path. The receiving node once again checks the received path vector against its own path vector and, if there is a match, sends a further label mapping message and its path vector to the next node in the path, and so forth, until the distributing node receives its label mapping. To ensure convergence in both directions (i.e., for down traffic and up traffic), in one optimization each downstream node in the up traffic sense only sends its up traffic label mapping to an upstream node once it has received such a mapping from its upstream nexthop. As a result the up traffic tables are updated in order from the root. However, it is also possible for each downstream node in the up traffic sense to send a label mapping response to the pseudo label request (label mapping) from the upstream node without waiting to hear from its own downstream node, in which case the path vector match step still ensures that all nodes are converged. As a result, transient micro loops are prevented in the multicast network as an ordered convergence is achieved in one or both directions. In particular, because each node towards the root checks its path vector and will only forward its pseudo label request (label mapping) for down traffic if the received path vector corresponds, it is ensured that the nodes are converged and in the correct order for down traffic. If it only sends its label mapping for up traffic when its path vector corresponds with that received from its nexthop nearer to the root, it is ensured that the preceding nodes towards the root are converged and in the correct order for up traffic. The process is accelerated because of the clean up, rewrite and reduction of the hold down time to zero at step 1400, and looping between old and new trees is avoided according to the label allocation procedure described.
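A minimal sketch of the path vector check is given below, under one plausible reading of the test described above: a node accepts a request only if the portion of the received path vector from itself to the root agrees with its own current path vector (converged, no loop); otherwise it rejects and notifies the sender so the request can be retried after convergence. The function name and the exact comparison are assumptions for illustration, not a definitive statement of the method.

```python
def accept_request(node, received_vector, own_vector):
    """received_vector: sender's path to the root, e.g. ["R2", "R1", "R4", "R5"];
    own_vector: this node's path to the root, e.g. ["R1", "R4", "R5"]."""
    if node not in received_vector:
        # This node should be the sender's nexthop toward the root.
        return False
    tail = received_vector[received_vector.index(node):]
    # Convergence: the sender's view of the path from this node onward must match
    # this node's own view. A mismatch also covers the loop case, where a node
    # earlier in the received vector reappears in this node's own path.
    return tail == own_vector

# Converged case: R1 accepts R2's request.
print(accept_request("R1", ["R2", "R1", "R4", "R5"], ["R1", "R4", "R5"]))   # True
# Unconverged case: R4 still routes via R2, so it rejects and notifies the sender.
print(accept_request("R4", ["R2", "R1", "R4", "R5"], ["R4", "R2", "R5"]))   # False
```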
3.0 METHOD OF DISTRIBUTING LABELS IN A LABEL DISTRIBUTION PROTOCOL MULTICAST NETWORK
The mechanism which can give rise to looping can be understood further with reference to Figs. 15a and 15b, which are diagrams showing potential inconsistencies between the data plane (the mLFIB held on each line card (LC) of a router) and the control plane (the FIB, LFIB and mLFIB from which the data plane is duplicated). Referring to Fig. 15a, node R1 is shown generally at 1500 and includes a control plane 1502 and a data plane 1504. It will be seen that where the control plane 1502 and data plane 1504 are not in synchronization, there will be stale forwarding entries at R1 which can give rise to looping even if, for example, the data plane 1508 of a node R2 1506 is synchronized with the control plane 1502 of node R1. Referring to Fig. 15b, an alternative possible configuration is shown in which the control plane 1502 and data plane 1504 of node R1 are synchronized with one another, but the control plane 1502 of node R1 is not synchronized with the data plane 1508 of node R2.
The problem can be understood in the specific case of up traffic in MP2MP with reference to Fig. 16, in which a further alternative network is shown such that the nodes, although commonly labeled, are numbered differently. In particular the network includes nodes R1 to R5, reference numerals 1600, 1602, 1604, 1606, 1608, 1610 respectively. Nodes R3, R2 and R4 are connected to node R1. Nodes R2 and R4 are further connected, and both provide paths to root node R5, with node R2 as nexthop to node R5. Node R3's path vector to node R5 is therefore R3 - R1 - R4 - R2 - ... R5, as is shown generally by arrow 1612. Node R3 sends a pseudo label request (label mapping) 1614 with FEC 200, root R5, label L3 for down traffic to node R1, which returns a label mapping 1616 for up traffic with FEC 200 and label L13. Node R1 sends a pseudo label request (label mapping) 1618 for down traffic to node R4 with FEC 200, root R5, label L1 and node R4 returns a label mapping 1620 with FEC 200 and label L41. Node R4 sends a pseudo label request (label mapping) 1622 for down traffic to node R2 with FEC 200, root R5 and label L4 and node R2 returns a label mapping 1624 for up traffic with FEC 200 and label L2. Node R2 carries out further label mappings of the same type with its nexthop to node R5, and so forth.
Referring to Fig. 17, which is a network diagram corresponding to Fig. 16 showing forwarding of up traffic, for ease of reference interfaces are commonly labeled for adjacent nodes. Nodes R1 and R3 communicate over respective interfaces S0, reference numeral 1700, nodes R1 and R4 communicate via interfaces S1, 1702, nodes R1 and R2 communicate over interfaces S3, 1704, nodes R2 and R4 communicate over interfaces S2, 1706 and nodes R2 and R5 communicate over interfaces S4, 1708. Accordingly, when node R3 sends up traffic with label L13 to node R1, node R1's forwarding table swaps label L41 for label L13 and sends traffic on interface S1 to node R4. According to node R4's forwarding table 1712, label L2 is swapped for label L41 and the packet is sent on interface S2 to node R2. According to node R2's forwarding table 1714, label L2 is swapped for label L25 and the packet is forwarded on interface S4.
An instance in which looping can occur in relation to up traffic can be further understood with reference to Fig. 18, which is a network diagram corresponding to that of Fig. 16 but in which node R2 no longer has a path to node R5, for example because of a component failure on the path, as a result of which node R4 is nexthop to root node R5. In particular, problems can arise, for example, when node R2 has converged on the new topology but node R4 has not. In that case the up and down label mappings between nodes R1 and R3 and between nodes R1 and R4 are unchanged. However node R2 has sent a pseudo label request (label mapping) 1800 to node R1 indicating FEC 200, root R5, label L21 for down traffic and node R1 has returned an up traffic label mapping with FEC 200 and label L12 at 1802. As a result node R3's path vector to node R5 is R3 - R1 - R4 ... R5. However, because node R4 has not converged, for example because R2 has not yet sent a label withdrawal to R4 or because R4 has not yet processed it, upon receipt of a packet with label L41 on interface S1 this will be forwarded to node R2 on interface S2 with label L2. As node R2 has converged on the network and has as its nexthop R1 (for example because the link between node R2 and node R4 is at too high a cost), it swaps label L2 with label L12 and forwards to node R1, which returns the packet to R4, setting up a loop shown generally at 1804 and following the path R3 - R1 - R4 - R2 - R1 - R4 - R2 and so forth.
This can be understood further with reference to Fig. 19, which is a network diagram corresponding to Fig. 18 and showing the respective forwarding tables or forwarding table portions for nodes R1, R4 and R2. In particular, for traffic incoming on interface S0, node R1's forwarding table 1900 replaces label L13 with label L41 and forwards the packet on interface S1. R4's forwarding table 1902 replaces L41 with label L2 and forwards on S2. Node R2's forwarding table 1904 replaces label L2 with L12 and forwards on interface S3, R1's forwarding table replaces L12 with L41 and forwards to node R4 on S1, and so forth. It will be seen, therefore, that looping arises in particular as a result of failure or delay in convergence between nodes.
It can be seen that the problem is exacerbated as convergence is required for both up traffic and down traffic, since a node issuing a pseudo label request may not converge at the same time as the node receiving the pseudo label request. Because the node receiving the pseudo label request responds to the request immediately by sending the corresponding label mapping message, whether or not further nodes closer to the root are converged, inconsistencies may arise between mLFIBs on local and remote devices, giving rise to potential micro loops. However, there is currently no mechanism to indicate the convergence of nodes closer to the root, and the following description relates to prevention of such an up event transient micro loop by distributing and installing label rewrites only after confirming that nodes further from the root are converged, providing strict ordered convergence and mFIB/mLFIB installation.
Firstly, therefore, referring to Fig. 20, which is a flow diagram illustrating steps carried out in a clean up procedure of the data plane, at step 2000 the local data plane rewrites are cleaned up and, at step 2002, if the nexthop is changed, label withdraw and release messages for the local and remote labels are sent immediately. It will be noted that in this case the multicast LDP label hold time period is set to zero. At step 2004 local label and FEC mappings/bindings and rewrites are removed. All of these steps are carried out substantially immediately such that the time during which micro looping can occur is minimized because the data plane converges as quickly as possible. At step 2006, if an ingress interface of a P2MP tree changed at a node then all associated MP2MP local labels are changed at that hop to make the old and new multicast LSPs disjoint. As a result, even if a remote node is using an old label it will not create a loop and, in particular, traffic will not be forwarded back and forth between the old and the new tree, which could be the case if labels were shared. It will be noted that the new label need only be allocated if the nexthop is changed, which is preferable as label space is not unnecessarily reduced.
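A hedged sketch of this clean up procedure follows: rewrites are removed immediately, withdraw and release messages are sent without waiting for a hold-down timer (hold time set to zero), and a fresh local label is allocated whenever the nexthop changed so that old and new LSPs stay disjoint. All helper names and the shape of the state dictionary are assumptions made for illustration only.

```python
LABEL_HOLD_TIME = 0  # multicast LDP label hold time is set to zero (no hold-down)

def handle_route_change(state, fec, root, old_nexthop, new_nexthop, allocate_label, send):
    """state: {"local_label": int, "rewrites": list, "remote_labels": {peer: label}}."""
    # 1. Clean up local data plane rewrites immediately (step 2000/2004).
    state["rewrites"].clear()
    if old_nexthop != new_nexthop:
        # 2. Send label withdraw/release for local and remote labels immediately (step 2002).
        send(old_nexthop, ("label_withdraw", fec, root, state["local_label"]))
        for peer, label in state["remote_labels"].items():
            send(peer, ("label_release", fec, root, label))
        state["remote_labels"].clear()
        # 3. Allocate a new local label so the old and new trees cannot share labels (step 2006).
        state["local_label"] = allocate_label()
    return state
```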
Furthermore, ordered mFIB/mLFIB installation is achieved by using a strict ordered convergence, which may be for down traffic only or in both directions, i.e., for up and down traffic. The manner in which this is achieved can be understood further with reference to Figs. 21A and 21B, which show a flow diagram illustrating the steps in ensuring ordered convergence in relation to the network change set out above with reference to Figs. 18 and 19. In particular, node R2 acting as distributing node is considered. At step 2100 R2 allocates its new label L21 and sends a pseudo label request (label mapping) to its new nexthop R1 with the label L21 and its new unicast path vector R2 - R1 - R4 - R5. It will be noted that there may be many nexthop changes associated with the same reroute, in which case each node having nexthop changes will initiate the pseudo label request towards the root. However, for the same FEC these label request messages are merged and a single message is sent to the root. When the response is received from the root (as discussed in more detail below), each requesting node receives the response separately.
At step 2102 node R1 receives the pseudo label request (label mapping) and checks the received path vector against its own path vector to see if they match and/or checks whether it appears in the received path vector, signifying a loop. At step 2104, if the path vectors are the same then this indicates that the downstream node to R1 (in the sense of up traffic, i.e., node R2) is converged. However, upstream nodes may not be converged. Accordingly, optionally without sending a label mapping response to node R2 at this stage, node R1 sends its own pseudo label request (label mapping) message and path vector to its nexthop node R4 at step 2106. If non-convergence or a loop is detected, then at step 2105 node R1 will reject the pseudo label request (label mapping) message and notify node R2 accordingly. Then, for example, node R2 may wait a pre-determined period (for example slightly exceeding the maximum convergence time for the network) and resend its label mapping message and path vector, which should then be converged. At step 2108 node R4 checks its path vector against the received path vector and, if a match is established, forwards its label request on to the nexthop to node R5, again optionally without returning a label mapping for up traffic to node R1.
In each case, node R1 and node R4 will, upon a path vector match, update their down traffic forwarding tables to include the label learnt from their downstream neighbor in the down traffic sense (nodes R2 and R1 respectively). In other words node R1 will install node R2's label L12, node R4 will install node R1's label L41, and so forth. As a result it will be seen that, for down traffic, ordered convergence is achieved in the upstream direction from the leaf node to the root. Hence, down traffic from the root will not be subjected to loops once the root has been updated as there is no possibility of down traffic traveling in a downstream direction from a converged node to a non-converged node.
At step 2110, root R5 receives a pseudo label request (label mapping) message, checks the path vector against its own path vector or carries out loop detection and, if there is a match, at step 2112 sends a label mapping response and its path vector back upstream, in the up traffic sense. At step 2114, the nexthop node, for example node R4, checks the received path vector against its path vector and, if they match at step 2116, then at step 2118, if it has optionally withheld doing so, node R4 sends its label mapping for up traffic and path vector to node R1, which carries out the same steps at 2120. It will be noted that if the path vector does not match at step 2116 then, at step 2117, node R4 sends a reject and notify message to its downstream node in the up traffic sense, which can then delay resending its label mapping and path vector for a suitable period as discussed above with reference to step 2105.
In the case where the up traffic label mapping was previously withheld at each node in response to a pseudo label request (label mapping) message, at step 2122 node R2 receives the up traffic label mapping response from node R1. In this case, as each node receives its up traffic label mapping from the preceding node starting at the root node, and checks that the path vector is matched, it is assured that the downstream nodes, in the up traffic sense, are converged and it can install the received label. As a result ordered label installation is carried out in the upstream direction in the up traffic sense, in much the same way that it was in the opposite direction for down traffic, such that up traffic from sender node R2 cannot loop once it has updated its up traffic forwarding table accordingly.
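The "withhold" variant just described can be sketched as follows. This is an illustrative simplification under assumed structures (the class and message names are hypothetical), and it omits the path vector check shown earlier for brevity: a node only releases its up traffic label mapping to a requester once it has itself received an up traffic mapping from its own nexthop toward the root, so up traffic state is installed in order from the root.

```python
class OrderedNode:
    def __init__(self, name, nexthop_to_root):
        self.name = name
        self.nexthop_to_root = nexthop_to_root
        self.pending_requesters = []      # neighbors waiting for our up-traffic mapping
        self.up_label_from_root_side = None
        self.next_label = 0

    def allocate_label(self):
        self.next_label += 1
        return self.next_label

    def on_pseudo_request(self, sender, send):
        if self.up_label_from_root_side is not None:
            # Rootward side already converged: answer immediately.
            send(sender, ("up_mapping", self.allocate_label()))
        else:
            # Withhold the response until the rootward nexthop has confirmed convergence.
            self.pending_requesters.append(sender)
            send(self.nexthop_to_root, ("pseudo_request", self.name))

    def on_up_mapping(self, label, send):
        # The rootward nexthop is converged; install its label and cascade downstream.
        self.up_label_from_root_side = label
        for sender in self.pending_requesters:
            send(sender, ("up_mapping", self.allocate_label()))
        self.pending_requesters.clear()
```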
In the alternative optional approach, whereby each node sends its up traffic label mapping once the path vector accompanying a pseudo label request (label mapping) message has been verified, the path vector matching step in any case ensures that all of the nodes are converged. This alternative approach has the advantage that there is potentially less delay in updating the up traffic forwarding table at each node, as it does not have to wait for a "cascade" of up traffic label mappings down from the root. This approach can be further understood with reference to Fig. 22, which is a network diagram corresponding to Fig. 18. Node R2 sends a pseudo label request 2202 with its down traffic label mapping FEC 200, root R5, label L21 and its converged path vector R2 - R1 - R4 - ... R5. Node R1, which is also converged, matches this path vector and forwards its pseudo label request (label mapping) 2204 to node R4. However, node R4's path vector is R4 - R2 - R5 and, as node R4 appears, implying a loop, it sends a reject and notify message to R1.
Similarly, if node R4 has received a label release it sends a pseudo label request (label mapping) 2206 with FEC 200, root R5, label L4 and path vector R4 - R2 - R5, which is rejected and notified by node R2.
It can further be seen that other nodes may also be triggered to send pseudo label requests according to the routing table. For example, if there has been a unicast path vector change at node R3, it sends a pseudo label request with FEC 200, root R5, label L3 and path vector R3 - R1 - R4 - ... R5 to node R1. It will be appreciated that further optimizations are available. For example, if ordered convergence is also being used for unicast, then instead of sending the path vectors multiple times for each, the unicast path vector can be obtained and used to update the forwarding tables and carry out the additional steps described above as appropriate. If there is a path vector change downstream in the up traffic sense, it can be notified to an upstream node in an event driven mode, allowing a separate check as to where the convergence has taken place.
It will be noted that additional signaling is required according to the methods described above, and optimizations to reduce the amount of signaling are available. For example, the pseudo label request message and unicast path vector need only be sent to "affected nodes", that is all those downstream nodes in the up traffic sense which have a nexthop change. When the request reaches a destination node where there is no nexthop change, it can act as though it were the root node in as much as it sends an up traffic label mapping response. It will be seen that nodes unaffected by the change do not need to update their forwarding tables and hence additional signaling and computing time is avoided using this approach.
During the route change the downstream node path is unaffected, so downstream nodes can start using the path immediately. It will be seen that, as a result of the approaches described above, looping is reduced. In particular, unicast routing is effectively used to keep the base P2MP (down traffic) tree loop free, relying on ordered label distribution upstream (in the down traffic sense) through the affected nodes, as a result of which convergence progresses in an ordered manner in an upstream direction. However, up traffic is also accommodated in the MP2MP case as the up traffic label mappings are distributed in the opposite direction on the same tree, but again corresponding to ordered convergence in the upstream direction in the up traffic sense.
Although the procedures are discussed above in relation to MP2MP trees, it will be noted that the same approach can be adapted in relation to P2MP trees simply by replacement of the pseudo label request with a standard label request message and/or ensuring that up traffic labels are not installed in the opposite direction.
The manner in which the method described herein is implemented may be in software, firmware, hardware or any combination thereof and with any appropriate changes as will be apparent to the skilled reader without the need for detailed description here. In particular it will be appreciated that the new signaling and label distribution approach described herein can be implemented in any appropriate manner.
4.0 IMPLEMENTATION MECHANISMS - HARDWARE OVERVIEW
FIG. 23 is a block diagram that illustrates a computer system 140 upon which the method may be implemented. The method is implemented using one or more computer programs running on a network element such as a router device. Thus, in this embodiment, the computer system 140 is a router.
Computer system 140 includes a bus 142 or other communication mechanism for communicating information, and a processor 144 coupled with bus 142 for processing information. Computer system 140 also includes a main memory 146, such as a random access memory (RAM), flash memory, or other dynamic storage device, coupled to bus 142 for storing information and instructions to be executed by processor 144. Main memory 146 may also be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 144. Computer system 140 further includes a read only memory (ROM) 148 or other static storage device coupled to bus 142 for storing static information and instructions for processor 144. A storage device 150, such as a magnetic disk, flash memory or optical disk, is provided and coupled to bus 142 for storing information and instructions.
A communication interface 158 may be coupled to bus 142 for communicating information and command selections to processor 144. Interface 158 is a conventional serial interface such as an RS-232 or RS-422 interface. An external terminal 152 or other computer system connects to the computer system 140 and provides commands to it using the interface 158. Firmware or software running in the computer system 140 provides a terminal interface or character-based command interface so that external commands can be given to the computer system.
A switching system 156 is coupled to bus 142 and has an input interface and a respective output interface (commonly designated 159) to external network elements. The external network elements may include a plurality of additional routers 160 or a local network coupled to one or more hosts or routers, or a global network such as the Internet having one or more servers. The switching system 156 switches information traffic arriving on the input interface to output interface 159 according to pre-determined protocols and conventions that are well known. For example, switching system 156, in cooperation with processor 144, can determine a destination of a packet of data arriving on the input interface and send it to the correct destination using the output interface. The destinations may include a host, server, other end stations, or other routing and switching devices in a local network or Internet. The computer system 140 implements, as a node acting as root, leaf or transit node, the above described method. The implementation is provided by computer system 140 in response to processor 144 executing one or more sequences of one or more instructions contained in main memory 146. Such instructions may be read into main memory 146 from another computer-readable medium, such as storage device 150. Execution of the sequences of instructions contained in main memory 146 causes processor 144 to perform the process steps described herein. One or more processors in a multi-processing arrangement may also be employed to execute the sequences of instructions contained in main memory 146. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions to implement the method. Thus, embodiments are not limited to any specific combination of hardware circuitry and software.
The term "computer-readable medium" as used herein refers to any medium that participates in providing instructions to processor 144 for execution. Such a medium may take many forms, including but not limited to, non-volatile media, volatile media, and transmission media. Non-volatile media includes, for example, optical or magnetic disks, such as storage device 150. Volatile media includes dynamic memory, such as main memory 146. Transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus 142. Transmission media can also take the form of wireless links such as acoustic or electromagnetic waves, such as those generated during radio wave and infrared data communications.
Common forms of computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read.
Various forms of computer readable media may be involved in carrying one or more sequences of one or more instructions to processor 144 for execution. For example, the instructions may initially be carried on a magnetic disk of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem. A modem local to computer system 140 can receive the data on the telephone line and use an infrared transmitter to convert the data to an infrared signal. An infrared detector coupled to bus 142 can receive the data carried in the infrared signal and place the data on bus 142. Bus 142 carries the data to main memory 146, from which processor 144 retrieves and executes the instructions. The instructions received by main memory 146 may optionally be stored on storage device 150 either before or after execution by processor 144. Interface 159 also provides a two-way data communication coupling to a network link that is connected to a local network. For example, the interface 159 may be an integrated services digital network (ISDN) card or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, the interface 159 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, the interface 159 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
The network link typically provides data communication through one or more networks to other data devices. For example, the network link may provide a connection through a local network to a host computer or to data equipment operated by an Internet Service Provider (ISP). The ISP in turn provides data communication services through the world wide packet data communication network now commonly referred to as the "Internet". The local network and the Internet both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on the network link and through the interface 159, which carry the digital data to and from computer system 140, are exemplary forms of carrier waves transporting the information.
Computer system 140 can send messages and receive data, including program code, through the network(s), network link and interface 159. In the Internet example, a server might transmit a requested code for an application program through the Internet, ISP, local network and communication interface 158. One such downloaded application provides for the method as described herein. The received code may be executed by processor 144 as it is received, and/or stored in storage device 150, or other non-volatile storage for later execution. In this manner, computer system 140 may obtain application code in the form of a carrier wave.
5.0 EXTENSIONS AND ALTERNATIVES
In the foregoing specification, the invention has been described with reference to specific embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention. The specification and drawings are, accordingly, to be regarded in an illustra- tive rather than a restrictive sense.
The method steps set out can be carried out in any appropriate order and aspects from the examples and the embodiments described can be juxtaposed or interchanged as appropriate. The method can be applied in any network of any topology supporting multicast, in relation to any component change in the network, for example a link or node failure or the introduction or removal of a network component by an administrator, and in relation to up or down traffic loops.
CLAIMS

1. A method for distributing labels in a label distribution protocol multicast network having a root node and at least one leaf node, the method comprising the steps, performed at a receiving node, of: receiving a label and path vector from a distributing node; carrying out at least one of a loop and a convergence detection from the received path vector; and in response to detecting at least one of convergence and no loops, sending a receiving node label and path vector to a nexthop node in the network.
2. A method as claimed in claim 1, further comprising the steps, performed at a distributing node, of: forwarding a label to a nexthop node in the network together with a path vector representing the path from the distributing node to the root node.
3. A method as claimed in claim 2, further comprising: updating, at the receiving node, a label forwarding table entry with the received label in response to detecting at least one of convergence and no loops.
4. A method as claimed in claim 1, further comprising the steps, performed at a destination node, of: receiving a label and path vector; carrying out at least one of a loop and a convergence detection from the received path vector; and in response to detecting at least one of convergence and no loops, forwarding a destination node label and path vector to a nexthop node in the network.
5. A method as claimed in claim 4, wherein the destination node is the root node.
6. A method as claimed in claim 4, wherein the nexthop of the destination node is unchanged.
7. A method as claimed in claim 4, further comprising the steps, performed at a receiving node, of: receiving a label and path vector from a nexthop node in the direction of the root node; carrying out at least one of a loop and a convergence detection from the received path vector; and in response to detecting at least one of convergence and no loops, forwarding a receiving node label and path vector to a nexthop node in the direction away from the root node.
8. A method as claimed in claim 7, further comprising: updating the receiving node forwarding table in the direction towards the root node with the received label in response to detecting at least one of convergence and no loops.
9. A method as claimed in claim 1, wherein the network is one of a P2MP and an MP2MP network.
10. A method as claimed in claim 1, wherein the receiving node is selected from a group comprising: a leaf node, a root node, and a transit node between a leaf node and a root node.
11. A method as claimed in claim 1, further comprising the steps, performed at the receiving node, of: updating a forwarding table in the direction away from the root with the path label at the time of sending the path label and path vector.
12. A method as claimed in claim 11, further comprising: removing outdated forwarding information from the forwarding table at the time of sending the path label and path vector.
13. A method as claimed in claim 1, further comprising the steps, performed at the receiving node, of: receiving a label and path vector from a nexthop node in a direction towards the root node; carrying out at least one of a loop and a convergence detection from the received path vector; and in response to detecting at least one of convergence and no loops, sending a label and path vector to a nexthop node in a direction away from the root node.
14. A method as claimed in claim 1, further comprising the steps, performed at the receiving node, of: receiving a label and path vector from a nexthop node in a direction away from the root node; carrying out at least one of a loop and a convergence detection from the received path vector; and in response to detecting at least one of convergence and no loops, forwarding a label and path vector to a nexthop node in the direction towards the root node.
15. A method as claimed in claim 1, further comprising: allocating a new label to a route that is distinguishable from an existing label.
16. A method as claimed in claim 1, wherein a label and path vector are forwarded by a receiving node in the event of a component change in the network.
17. A method for distributing labels in a label distribution protocol multicast network having a root node and at least one leaf node, comprising the steps, performed at a receiving node, of: receiving a pseudo label mapping message from a nexthop node in a direction further from the root node; forwarding a pseudo label mapping message to a nexthop node in a direction towards the root node; and forwarding a label mapping message to the nexthop node in the direction away from the root node upon receipt of a label mapping message from the nexthop node in the direction towards the root node.
18. A computer readable medium comprising one or more sequences of instructions for distributing labels in a label distribution protocol multicast network, which, when executed by one or more processors, cause the one or more processors to perform the steps of: receiving a label and path vector from a distributing node; carrying out at least one of a loop and a convergence detection from the received path vector; and in response to detecting at least one of convergence and no loops, sending a receiving node label and path vector to a nexthop node in the network.
19. An apparatus for distributing labels in a label distribution protocol multicast network having a root node and at least one leaf node, comprising the step, performed at a distributing node, of: sending a label to a nexthop node in the network in a direction selected from towards and away from the root node together with a path vector representing the path from the distributing node to the root node.
20. An apparatus for distributing labels in a label distribution multicast network comprising: one or more processors; a network interface communicatively coupled to the one or more processors and configured to communicate one or more packet flows among the one or more processors in a network; and a computer readable medium comprising one or more sequences of instructions for distributing labels in a label distribution protocol multicast network which, when executed by the one or more processors, cause the one or more processors to perform the steps of: receiving a label and path vector from a distributing node; carrying out at least one of a loop and a convergence detection from the received path vector; and in response to detecting at least one of convergence and no loops, sending a receiving node label and path vector to a nexthop node in the network.
EP07779743.9A 2006-03-16 2007-03-15 A method and apparatus for distributing labels in a label distribution protocol multicast network Active EP1994666B1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US11/377,807 US7684350B2 (en) 2006-03-16 2006-03-16 Method and apparatus for distributing labels in a label distribution protocol multicast network
PCT/US2007/064025 WO2007121016A2 (en) 2006-03-16 2007-03-15 A method and apparatus for distributing labels in a label distribution protocol multicast network

Publications (3)

Publication Number Publication Date
EP1994666A2 true EP1994666A2 (en) 2008-11-26
EP1994666A4 EP1994666A4 (en) 2010-04-07
EP1994666B1 EP1994666B1 (en) 2014-01-15

Family

ID=38517743

Family Applications (1)

Application Number Title Priority Date Filing Date
EP07779743.9A Active EP1994666B1 (en) 2006-03-16 2007-03-15 A method and apparatus for distributing labels in a label distribution protocol multicast network

Country Status (3)

Country Link
US (1) US7684350B2 (en)
EP (1) EP1994666B1 (en)
WO (1) WO2007121016A2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8644186B1 (en) 2008-10-03 2014-02-04 Cisco Technology, Inc. System and method for detecting loops for routing in a network environment

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070140265A1 (en) * 2003-12-26 2007-06-21 France Telecom Marking of a datagram transmitted over an ip network and transmission of one such datagram
US7899044B2 (en) * 2006-06-08 2011-03-01 Alcatel Lucent Method and system for optimizing resources for establishing pseudo-wires in a multiprotocol label switching network
US8391185B2 (en) * 2007-05-29 2013-03-05 Cisco Technology, Inc. Method to transport bidir PIM over a multiprotocol label switched network
JP4885819B2 (en) * 2007-10-22 2012-02-29 富士通株式会社 Communication device
US8355347B2 (en) * 2007-12-19 2013-01-15 Cisco Technology, Inc. Creating multipoint-to-multipoint MPLS trees in an inter-domain environment
US8223669B2 (en) * 2008-04-07 2012-07-17 Futurewei Technologies, Inc. Multi-protocol label switching multi-topology support
US8804718B2 (en) 2008-04-23 2014-08-12 Cisco Technology, Inc. Preventing traffic flooding to the root of a multi-point to multi-point label-switched path tree with no receivers
US8811388B2 (en) * 2008-11-14 2014-08-19 Rockstar Consortium Us Lp Service instance applied to MPLS networks
US8804719B2 (en) * 2010-06-29 2014-08-12 Cisco Technology, Inc. In-band multicast trace in IP and MPLS networks
CN102143018B (en) * 2010-12-07 2014-04-30 华为技术有限公司 Message loop detection method, routing agent equipment and networking system
CN103001871A (en) * 2011-09-13 2013-03-27 华为技术有限公司 Label distribution method and device
CN103716169B (en) * 2012-09-29 2017-11-24 华为技术有限公司 Point-to-multipoint method of realizing group broadcasting, network node and system
US20140241351A1 (en) * 2013-02-28 2014-08-28 Siva Kollipara Dynamic determination of the root node of an mldp tunnel
US9553796B2 (en) 2013-03-15 2017-01-24 Cisco Technology, Inc. Cycle-free multi-topology routing
US9128902B2 (en) * 2013-04-25 2015-09-08 Netapp, Inc. Systems and methods for managing disaster recovery in a storage system
US9148290B2 (en) 2013-06-28 2015-09-29 Cisco Technology, Inc. Flow-based load-balancing of layer 2 multicast over multi-protocol label switching label switched multicast
CN103401796B (en) * 2013-07-09 2016-05-25 北京百度网讯科技有限公司 Network flux cleaning system and method
US9489442B1 (en) * 2014-02-04 2016-11-08 Emc Corporation Prevention of circular event publication in publish/subscribe model using path vector
US10097622B1 (en) * 2015-09-11 2018-10-09 EMC IP Holding Company LLC Method and system for communication using published events
US9973411B2 (en) * 2016-03-21 2018-05-15 Nicira, Inc. Synchronization of data and control planes of routers
US10212069B2 (en) * 2016-12-13 2019-02-19 Cisco Technology, Inc. Forwarding of multicast packets in a network

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6374303B1 (en) * 1997-11-17 2002-04-16 Lucent Technologies, Inc. Explicit route and multicast tree setup using label distribution

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5845087A (en) * 1996-03-04 1998-12-01 Telebit Corporation Internetwork zone name filtering with selective placebo zone name substitution in a response to a request for zone name information
US5964841A (en) 1997-03-03 1999-10-12 Cisco Technology, Inc. Technique for handling forwarding transients with link state routing protocol
JP3688877B2 (en) * 1997-08-08 2005-08-31 Toshiba Corp Node device and label switching path loop detection method
US6202114B1 (en) 1997-12-31 2001-03-13 Cisco Technology, Inc. Spanning tree with fast link-failure convergence
US6512768B1 (en) 1999-02-26 2003-01-28 Cisco Technology, Inc. Discovery and tag space identifiers in a tag distribution protocol (TDP)
US6636509B1 (en) 1999-05-11 2003-10-21 Cisco Technology, Inc. Hardware TOS remapping based on source autonomous system identifier
US6879594B1 (en) * 1999-06-07 2005-04-12 Nortel Networks Limited System and method for loop avoidance in multi-protocol label switching
US7644177B2 (en) 2003-02-28 2010-01-05 Cisco Technology, Inc. Multicast-routing-protocol-independent realization of IP multicast forwarding
US6970464B2 (en) 2003-04-01 2005-11-29 Cisco Technology, Inc. Method for recursive BGP route updates in MPLS networks
US8176006B2 (en) 2003-12-10 2012-05-08 Cisco Technology, Inc. Maintaining and distributing relevant routing information base updates to subscribing clients in a device
US7423986B2 (en) 2004-03-26 2008-09-09 Cisco Technology, Inc. Providing a multicast service in a communication network
JP2006033275A (en) * 2004-07-14 2006-02-02 Fujitsu Ltd Loop frame detector and loop frame detection method
US7646772B2 (en) 2004-08-13 2010-01-12 Cisco Technology, Inc. Graceful shutdown of LDP on specific interfaces between label switched routers

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6374303B1 (en) * 1997-11-17 2002-04-16 Lucent Technologies, Inc. Explicit route and multicast tree setup using label distribution

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
C.-Y. Lee, L. Andersson (Nortel Networks), Y. Ohba (Toshiba): "Avoiding Loops in MPLS; draft-leecy-mpls-loop-avoidance-00.txt", IETF Internet-Draft, Internet Engineering Task Force, IETF, CH, 1 June 1999 (1999-06-01), XP015031452, ISSN: 0000-0004 *
B. Davie, J. Lawrence, K. McCloghrie, E. Rosen, G. Swallow (Cisco Systems) et al.: "MPLS using LDP and ATM VC Switching; rfc3035.txt", IETF Standard, Internet Engineering Task Force, IETF, CH, 1 January 2001 (2001-01-01), XP015008818, ISSN: 0000-0003 *
See also references of WO2007121016A2 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8644186B1 (en) 2008-10-03 2014-02-04 Cisco Technology, Inc. System and method for detecting loops for routing in a network environment

Also Published As

Publication number Publication date
EP1994666A4 (en) 2010-04-07
US7684350B2 (en) 2010-03-23
US20070217420A1 (en) 2007-09-20
WO2007121016A2 (en) 2007-10-25
WO2007121016A3 (en) 2008-07-10
EP1994666B1 (en) 2014-01-15

Similar Documents

Publication Publication Date Title
EP1994666B1 (en) A method and apparatus for distributing labels in a label distribution protocol multicast network
US8004960B2 (en) Method and apparatus for forwarding label distribution protocol multicast traffic during fast reroute
US9860163B2 (en) MPLS traffic engineering for point-to-multipoint label switched paths
US9832031B2 (en) Bit index explicit replication forwarding using replication cache
US7848224B2 (en) Method and apparatus for constructing a repair path for multicast data
US7675912B1 (en) Method and apparatus for border gateway protocol (BGP) auto discovery
US9998368B2 (en) Zone routing system
US7925778B1 (en) Method and apparatus for providing multicast messages across a data communication network
US7447225B2 (en) Multiple multicast forwarder prevention during NSF recovery of control failures in a router
EP2625829B1 (en) System and method for computing a backup egress of a point-to-multi-point label switched path
EP1913731B1 (en) Method and apparatus for enabling routing of label switched data packets
US7920566B2 (en) Forwarding data in a data communication network
US10021017B2 (en) X channel to zone in zone routing
US7835312B2 (en) Method and apparatus for updating label-switched paths
US11394578B2 (en) Supporting multicast over a network domain
US20070030852A1 (en) Method and apparatus for enabling routing of label switched data packets
EP2186270A1 (en) Forwarding data in a data communications network
CN113709033B (en) Segment traceroute for segment routing traffic engineering
CN113826362A (en) Transport endpoint segmentation for inter-domain segment routing
JP2010050719A (en) Communication system, control node, communication method, and program

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20080929

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC MT NL PL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL BA HR MK RS

A4 Supplementary search report drawn up and despatched

Effective date: 20100310

RIC1 Information provided on ipc code assigned before grant

Ipc: H04L 12/28 20060101ALI20100304BHEP

Ipc: H04L 1/00 20060101ALI20100304BHEP

Ipc: H04L 12/56 20060101AFI20100304BHEP

17Q First examination report despatched

Effective date: 20100715

DAX Request for extension of the european patent (deleted)
REG Reference to a national code

Ref country code: DE

Ref legal event code: R079

Ref document number: 602007034815

Country of ref document: DE

Free format text: PREVIOUS MAIN CLASS: H04L0001000000

Ipc: H04L0012751000

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

RIC1 Information provided on ipc code assigned before grant

Ipc: H04L 12/761 20130101ALI20130708BHEP

Ipc: H04L 12/723 20130101ALI20130708BHEP

Ipc: H04L 12/751 20130101AFI20130708BHEP

INTG Intention to grant announced

Effective date: 20130805

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC MT NL PL PT RO SE SI SK TR

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: AT

Ref legal event code: REF

Ref document number: 650249

Country of ref document: AT

Kind code of ref document: T

Effective date: 20140215

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602007034815

Country of ref document: DE

Effective date: 20140227

REG Reference to a national code

Ref country code: SE

Ref legal event code: TRGR

REG Reference to a national code

Ref country code: NL

Ref legal event code: VDEP

Effective date: 20140115

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 650249

Country of ref document: AT

Kind code of ref document: T

Effective date: 20140115

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG4D

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20140515

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20140115

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20140115

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20140115

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20140115

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20140115

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20140115

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20140515

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20140115

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20140115

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602007034815

Country of ref document: DE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20140315

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20140115

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20140115

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20140115

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20140115

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20140115

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20140115

26N No opposition filed

Effective date: 20141016

REG Reference to a national code

Ref country code: IE

Ref legal event code: MM4A

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20140331

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20140331

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20140315

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602007034815

Country of ref document: DE

Effective date: 20141016

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 9

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20140115

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20140115

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 10

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20140115

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20140115

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: SE

Payment date: 20160325

Year of fee payment: 10

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20140416

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20140115

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO

Effective date: 20070315

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20140115

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 11

REG Reference to a national code

Ref country code: SE

Ref legal event code: EUG

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20170316

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 12

REG Reference to a national code

Ref country code: DE

Ref legal event code: R079

Ref document number: 602007034815

Country of ref document: DE

Free format text: PREVIOUS MAIN CLASS: H04L0012751000

Ipc: H04L0045020000

P01 Opt-out of the competence of the unified patent court (upc) registered

Effective date: 20230525

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20240319

Year of fee payment: 18

Ref country code: GB

Payment date: 20240320

Year of fee payment: 18

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20240327

Year of fee payment: 18