WO2021035014A1 - Service group flooding in is-is networks - Google Patents


Info

Publication number
WO2021035014A1
Authority
WO
WIPO (PCT)
Prior art keywords
node
service group
service
path
link
Prior art date
Application number
PCT/US2020/047119
Other languages
French (fr)
Inventor
Uma S. Chunduri
Alvaro Retana
Renwei Li
Original Assignee
Futurewei Technologies, Inc.
Priority date
Filing date
Publication date
Application filed by Futurewei Technologies, Inc. filed Critical Futurewei Technologies, Inc.
Publication of WO2021035014A1 publication Critical patent/WO2021035014A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00 Routing or path finding of packets in data switching networks
    • H04L45/02 Topology update or discovery
    • H04L45/04 Interdomain routing, e.g. hierarchical routing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00 Routing or path finding of packets in data switching networks
    • H04L45/02 Topology update or discovery
    • H04L45/026 Details of "hello" or keep-alive messages
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00 Routing or path finding of packets in data switching networks
    • H04L45/32 Flooding

Definitions

  • the present disclosure relates to the field of routing in a network, and in particular, to path scope flooding in an Intermediate System to Intermediate System (IS-IS) network.
  • each node needs to be aware of the topological relationships (i.e., adjacencies) of all other nodes, such that all nodes may build a topological map (topology) of the AS.
  • Nodes may learn about one another's adjacencies by distributing (e.g., flooding) link-state information throughout the network according to an Interior Gateway Protocol (IGP), such as intermediate system (IS) to IS (IS-IS).
  • the IS-IS protocol uses several different types of protocol data units (PDUs).
  • a PDU may also be referred to herein as a packet or a message.
  • Hello PDUs are exchanged periodically between nodes (e.g., routers) to establish adjacency.
  • An IS-IS adjacency can be either point-to-point (P2P) or broadcast (also referred to herein as Local Area Network (LAN)).
  • a link-state PDU (LSP) contains the link-state information. LSPs are flooded periodically throughout the AS to distribute the link-state information.
  • a node that receives the LSPs constructs and maintains a link-state database (LSDB) based on the link-state information.
  • a complete sequence number PDU (CSNP) contains a complete list of all LSPs in the LSDB.
  • CSNPs are sent periodically by a node on all interfaces. The receiving nodes use the information in the CSNP to update and synchronize their LSDBs.
  • Partial sequence number PDUs (PSNPs) are sent by a receiver when it detects that it is missing a link-state PDU (e.g., when its LSDB is out of date). The receiver sends the PSNP to a node that transmitted the CSNP.
  • IS-IS is a link-state protocol that uses a reliable flooding mechanism to distribute the link-state information across the entire area or domain.
  • Each IS-IS router distributes information about its local state (e.g., usable interfaces and reachable neighbors, and the cost of using each interface) to other routers using an LSP.
  • Each router uses the received link-state information to build up identical link-state databases (LSDBs) that describes the topology of the AS. From its LSDB, each router calculates its own routing table using a Shortest Path First (SPF) or Dijkstra algorithm. This routing table contains all the destinations the routing protocol learns, associated with a next hop node. The protocol recalculates routes when the network topology changes, using the SPF or Dijkstra algorithm, and minimizes the routing protocol traffic that it generates.
  • Every node in the network is to have the same LSDB to have a consistent decision process.
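The SPF computation described above can be sketched as follows. This is an illustrative shortest-path-first pass over a toy LSDB, not the implementation the disclosure describes; the node names and link costs are invented for the example.

```python
import heapq

def spf(lsdb, source):
    """Dijkstra shortest-path-first over an LSDB given as
    {node: {neighbor: cost}}; returns {destination: next_hop}."""
    next_hop = {}
    visited = set()
    pq = [(0, source, None)]  # (cost so far, node, first hop on the path)
    while pq:
        cost, node, hop = heapq.heappop(pq)
        if node in visited:
            continue
        visited.add(node)
        if hop is not None:
            next_hop[node] = hop
        for nbr, link_cost in lsdb.get(node, {}).items():
            if nbr not in visited:
                # Destinations inherit the first hop taken out of the source.
                heapq.heappush(pq, (cost + link_cost, nbr,
                                    nbr if hop is None else hop))
    return next_hop

# Toy topology: an R1-R2-R3 chain plus a costly direct R1-R3 link.
lsdb = {"R1": {"R2": 1, "R3": 10},
        "R2": {"R1": 1, "R3": 1},
        "R3": {"R1": 10, "R2": 1}}
routes = spf(lsdb, "R1")
```

Here `routes` maps R3 to next hop R2, since the two-hop path (cost 2) beats the direct link (cost 10).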
  • the reliable flooding mechanism of the IS-IS protocol uses a simple rule to distribute the link-state information. Specifically, the reliable flooding mechanism sends or floods the link-state information on all the interfaces except the interface from where the link-state information was received. Though this is inefficient, it ensures the consistent decision process.
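The flood-on-all-but-incoming rule can be sketched as below; interface names are invented for illustration.

```python
def conventional_flood(interfaces, incoming, send):
    """Reliable-flooding rule: re-send the LSP on every interface
    except the one it arrived on."""
    for ifc in interfaces:
        if ifc != incoming:
            send(ifc)

sent = []
conventional_flood(["eth0", "eth1", "eth2"], "eth0", sent.append)
```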
  • a node for distributing link-state information over a network using protocol data units (PDUs).
  • the node comprises a plurality of network interfaces, and one or more processors in communication with the plurality of network interfaces.
  • the one or more processors are configured to cause the node to receive a link state PDU (LSP) on a first network interface of the plurality of network interfaces.
  • the LSP includes a flooding scope of service group and a service group identifier for a service group comprising a plurality of nodes.
  • the LSP further includes link-state information for the service group.
  • the one or more processors are configured to cause the node to select one or more of the plurality of network interfaces other than the first interface to distribute the LSP based on the service group.
  • the one or more processors are configured to cause the node to distribute the LSP on the selected one or more of the plurality of network interfaces.
  • the one or more processors are further configured to cause the node to select the one or more of the plurality of network interfaces that correspond to the nodes in the service group.
  • the one or more processors are further configured to cause the node to select only network interfaces that correspond to the nodes in the service group.
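A sketch of the interface selection the preceding paragraphs describe: assuming the node knows which neighbor sits on each interface, the scoped LSP is re-sent only toward service-group members. All names here are hypothetical.

```python
def select_scoped_interfaces(interfaces, incoming, neighbor_of, members):
    """Return the interfaces (other than the incoming one) whose attached
    neighbor belongs to the LSP's service group."""
    return [ifc for ifc in interfaces
            if ifc != incoming and neighbor_of[ifc] in members]

neighbor_of = {"if1": "R2", "if2": "R5", "if3": "R6"}
members = {"R1", "R2", "R6"}          # say, the nodes of one service group
chosen = select_scoped_interfaces(["if1", "if2", "if3"], "if1",
                                  neighbor_of, members)
```

The LSP arrived on `if1`, and `if2` leads outside the service group, so only `if3` is chosen.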
  • the one or more processors are further configured to cause the node to store the service group identifier and the link-state information for the service group in a service group link-state database (SG-LSDB).
  • the link-state information includes explicit path information for one or more explicit paths in the service group, wherein each explicit path comprises nodes in the service group.
  • the explicit path information includes a path identifier of the respective explicit path and sequentially ordered topological path description elements (PDEs) in the service group that describe the respective explicit path.
  • the one or more of the PDEs describe a preferred path routing (PPR) of a preferred path for routing packets through the service group.
  • the preferred path for routing packets represents a data path from a source node to a destination node in the service group, wherein the data path has at least one intermediate node between the source node and the destination node.
  • the preferred path for routing packets represents a data path from a plurality of source nodes in the network to at least one destination node in the service group, wherein the data path has at least one intermediate node between the plurality of source nodes and the destination node.
  • the preferred path for routing packets represents a data path from at least one source node in the network to a plurality of destination nodes in the service group, wherein the data path has at least one intermediate node between the source node and the plurality of destination nodes.
  • the one or more processors are further configured to cause the node to forward packets that contain a path identifier of any of the one or more explicit paths to a next PDE in the sequentially ordered topological PDEs identified by the path identifier.
  • the sequentially ordered topological PDEs identified by the path identifier comprise a loose path that does not contain all nodes on the explicit path on which a packet is forwarded.
  • the one or more of the PDEs describe a traffic engineering (TE) path through the service group.
  • the one or more processors are further configured to cause the node to advertise that the node supports the flooding scope of service group.
  • the one or more processors are further configured to cause the node to advertise identifiers of service groups of which the node is a member.
  • the one or more processors are further configured to cause the node to send an intermediate system (IS) to IS hello message that advertises that the node supports the flooding scope of service group and an identifier of a service group of which the node is a member.
  • the one or more processors are further configured to cause the node to send a router capability message that advertises that the node supports the flooding scope of service group and one or more service group identifiers indicating a corresponding one or more service groups of which the node is a member.
  • the one or more processors are further configured to cause the node to generate a packet that, for each node in a set of nodes, identifies all service groups in which the respective node is a member.
  • the one or more processors are further configured to cause the node to advertise the packet to all nodes in the network.
  • the flooding scope of service group is a level-1 service group flooding scope that includes only intermediate system (IS) to IS level-1 nodes.
  • the flooding scope of service group is a level-2 service group flooding scope that includes only intermediate system (IS) to IS level-2 nodes.
  • the flooding scope of service group is a level-1/2 service group flooding scope that includes both intermediate system (IS) to IS level-1 and IS-IS level-2 nodes.
  • the one or more processors are further configured to cause the node to participate in synchronization of the link-state information identified by the service group identifier and stored at the node with link-state information identified by the service group identifier and stored at an adjacent node in the network.
  • the node comprises a plurality of service group link-state databases (SG-LSDBs), wherein the one or more processors are further configured to cause the node to store the service group identifier and the link-state information for the service group in a first SG-LSDB of the plurality of SG-LSDBs.
  • the one or more processors are further configured to cause the node to store service group identifiers for other service groups with corresponding link-state information for the other service groups in other SG-LSDBs of the plurality of SG-LSDBs.
  • the one or more processors are further configured to cause the node to participate in synchronization of the link-state information in the plurality of SG-LSDBs for all of the service groups shared by the node and an adjacent node.
  • the node comprises a plurality of service group link-state databases (SG-LSDBs).
  • the one or more processors are further configured to cause the node to store the service group identifier and the link-state information for the service group in a first SG-LSDB of the plurality of SG-LSDBs.
  • the one or more processors are further configured to cause the node to store service group identifiers for other service groups with corresponding link-state information for the other service groups in other SG-LSDBs of the plurality of SG-LSDBs.
  • the one or more processors are further configured to cause the node to participate in synchronization of zero or more of the plurality of SG-LSDBs with a designated intermediate system (DIS).
  • the one or more processors are further configured to cause the node to advertise any SG-LSDBs that were not synchronized with the DIS.
  • the node comprises a plurality of service group link-state databases (SG-LSDBs).
  • the one or more processors are further configured to cause the node to store the service group identifier and the link-state information for the service group in a first SG-LSDB of the plurality of SG-LSDBs.
  • the one or more processors are further configured to cause the node to store service group identifiers for other service groups with corresponding link-state information for the other service groups in other SG-LSDBs of the plurality of SG-LSDBs.
  • the one or more processors are further configured to cause the node to participate in synchronization of zero or more of the plurality of SG-LSDBs with a designated intermediate system (DIS).
  • the one or more processors are further configured to cause the node to participate in synchronization, with other nodes, of any SG-LSDBs that were not synchronized with the DIS.
  • each of the SG-LSDBs includes explicit path information for one or more explicit paths in the service group.
  • a method for distributing link-state information over a network using protocol data units comprises receiving, at a first network interface of a plurality of network interfaces of a node in the network, a link-state PDU (LSP) that specifies a flooding scope of service group.
  • the LSP includes a service group identifier for a service group comprising a plurality of nodes and link-state information for the service group.
  • the method comprises selecting one or more of the plurality of network interfaces other than the first interface to distribute the LSP based on the service group.
  • the method comprises distributing the LSP on the selected one or more of the plurality of network interfaces.
  • a non-transitory computer-readable medium storing computer instructions for distributing link-state information over a network using protocol data units (PDUs), that when executed by one or more processors, cause the one or more processors to perform the steps of: receiving, at a first network interface of a plurality of network interfaces of a node in the network, a link-state PDU (LSP) that specifies a flooding scope of service group, the LSP includes a service group identifier for a service group comprising a plurality of nodes and link-state information for the service group; selecting one or more of the plurality of network interfaces other than the first interface to distribute the LSP based on the service group; and distributing the LSP on the selected one or more of the plurality of network interfaces.
  • FIG. 1A illustrates a network configured to implement an embodiment of service group flooding.
  • FIG. 1B depicts network 100 of FIG. 1A with various link-state databases maintained by the nodes.
  • FIG. 2A depicts a TLV for announcing scope flooding support.
  • FIG. 2B depicts a table that contains information for a number of different types of service group flooding, in accordance with one embodiment.
  • FIG. 2C is a flowchart of one embodiment of a process of nodes exchanging hello PDUs to advertise service group flooding capability.
  • FIG. 2D depicts one embodiment of a TLV that may be included in the hello message.
  • FIG. 3A depicts a format for a TLV that a node sends in an LSP, in one embodiment, to advertise service group membership.
  • FIG. 3B depicts a flowchart of one embodiment of a process of a node advertising its service group memberships.
  • FIG. 4A depicts one embodiment of a TLV that is sent by a node to dynamically advertise service groupings.
  • FIG. 4B depicts one embodiment of a flowchart of a node dynamically advertising service groupings.
  • FIG. 5 depicts a flowchart of one embodiment of a process of service scope flooding.
  • FIG. 6 depicts two nodes and link-state databases maintained by the nodes, in accordance with one embodiment.
  • FIG. 7A depicts a flowchart of one embodiment of a process of two nodes synchronizing SG-LSDBs.
  • FIG. 7B provides further details of one embodiment of synchronizing SG-LSDBs between two nodes.
  • FIG. 7C depicts a format for an embodiment of an FS-CSNP.
  • FIG. 7D depicts a format for one embodiment of an FS-PSNP.
  • FIG. 8A depicts nodes in a LAN, as well as link-state databases maintained by the nodes, in accordance with one embodiment.
  • FIG. 8B depicts a flowchart of one embodiment of a process of nodes synchronizing SG-LSDBs in a LAN.
  • FIG. 9A depicts one embodiment of a network showing a state of link-state databases immediately after a node restarts.
  • FIG. 9B depicts a flowchart of one embodiment of a process of a graceful restart of a node having a SG-LSDB.
  • FIG. 9C depicts a flowchart of one embodiment of a process of re-establishing a SG-LSDB during a graceful restart of a node.
  • FIG. 10 illustrates an embodiment of a node.
  • FIG. 11 illustrates a schematic diagram of a general-purpose network component or computer system.
  • FIG. 12 illustrates an embodiment of a TLV in an IS-IS LSP.
  • a service group of nodes in a network (or more briefly, a “service group”) is defined as a group of nodes that provide a level of service to a user, customer, or the like (or group of users, group of customers, etc.) of the network.
  • the level of service is provided as a part of a service-level-agreement (SLA). Examples of a level of service include, but are not limited to, network throughput, network bandwidth, downlink speed, uplink speed, etc.
  • the service group is a collection of nodes that provide the level of service.
  • the service group includes a subset of nodes in the network.
  • the phrase “a subset of nodes in a network” or the like means less than all nodes in the network.
  • link-state information associated with the service group is sent to nodes based on membership in the service group.
  • link-state information associated with the service group is sent to nodes that are part of the service group.
  • link-state information associated with the service group is sent only to nodes that are part of a service group. Hence, by distributing the link-state information associated with the service group based on membership in the service group, the link-state information need not be sent to all nodes in the network.
  • the link-state information associated with the service group contains information for an explicit path.
  • the explicit path is identified by a path identifier and may be described by sequentially ordered topological path description elements (PDEs).
  • Each PDE represents a segment of the explicit path.
  • a PDE can be, but is not limited to, a node, a backup node, a link, etc.
  • An explicit path comprises at least three nodes, but includes a subset of fewer than all of the nodes in the network. Distributing the link-state information to nodes based on membership in the service group allows the explicit path information to be distributed without sending the explicit path information to all nodes in the network.
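One plausible in-memory shape for the explicit-path information just described (a path identifier plus sequentially ordered PDEs) is sketched below. The class and field names are assumptions for illustration, not the disclosure's encoding.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PDE:
    """One segment of an explicit path."""
    kind: str    # e.g. "node", "backup_node", or "link"
    value: str   # the node or link identifier

@dataclass
class ExplicitPath:
    path_id: int       # unique path identifier
    pdes: List[PDE]    # sequentially ordered topological PDEs

# Path 108-1 (R1-R2-R3-R7) from FIG. 1A, with an invented path id.
path_108_1 = ExplicitPath(
    path_id=1081,
    pdes=[PDE("node", n) for n in ("R1", "R2", "R3", "R7")])
```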
  • FIG. 1A illustrates a network 100 configured to implement an embodiment of service group scope flooding.
  • link-state information associated with the service group is sent to nodes based on membership in the service group.
  • link-state information associated with the service group is sent only to nodes that are members of the service group.
  • the network 100 includes a number of nodes R1 - R15, which may also be referred to herein as “network nodes” or “network elements.”
  • the network 100 includes one or more areas, as the term is used in the IS-IS protocol.
  • a node could be an IS-IS level-1 router, an IS-IS level-2 router, or an IS-IS level 1/2 router.
  • the nodes R1 - R15 are connected by links 106 (the links between R3/R4, R4/R5 and R11/R12 are labeled with reference numeral 106).
  • Each of the nodes R1 - R15 may be a physical device, such as a router, a bridge, a virtual machine, a network switch, or a logical device configured to perform switching, routing, as well as distributing and maintaining link-state information as described herein.
  • any of nodes R1 - R15 may be headend nodes positioned at an edge of the network 100, an egress node from which traffic is transmitted, or an intermediate node or any other type of node.
  • the links 106 may be wired or wireless links or interfaces interconnecting respective pairs of the nodes together.
  • the network 100 may include any number of nodes.
  • the nodes are configured to implement various packet forwarding protocols, such as, but not limited to, MPLS, IPv4, IPv6, and Big Packet Protocol.
  • Service group 110-1 contains nodes R1, R2, R3, R4, R6, R7, R8, R10, and R11.
  • Service group 110-2 contains nodes R5, R6, R7, R8, R9, R10, R11, and R12.
  • a service group is a group of two or more nodes that provide a type of service. In one embodiment, the service group has two or more nodes that provide a type of service. In one embodiment, the service group has four or more nodes that provide a type of service. In one embodiment, the service group has five or more nodes that provide a type of service.
  • the service groups 110 are established in order to meet a level of service.
  • the level of service may be specified in an SLA.
  • a network operator may select nodes R1, R2, R3, R4, R6, R7, R8, R10, and R11 to meet a certain level of service (e.g., a certain minimum bandwidth, uplink speed, downlink speed, etc.).
  • Path 108-1 includes nodes R1-R2-R3-R7.
  • Path 108-2 includes nodes R1-R2-R6-R10-R11-R7-R8-R4.
  • Path 108-3 includes nodes R5-R9-R10-R11-R7-R8-R4.
  • Each explicit path has a sequence of nodes. In one embodiment, the sequential order is bi-directional.
  • an explicit path 108 includes at least three nodes.
  • an explicit path 108 includes at least four nodes.
  • an explicit path 108 includes at least five nodes.
  • a path (e.g., path 108-1, 108-2) is identified by a path identifier and is described by sequentially ordered topological path description elements (PDEs).
  • Each PDE represents a segment of the path.
  • a PDE can be, but is not limited to, a node, a backup node, a link 106, etc.
  • Each explicit path is assigned a unique path identifier.
  • Each explicit path is contained within at least one of the service groups. By being contained within a service group, it is meant that all of the nodes on the path are members of the service group.
  • Explicit path 108-1 is contained within service group 110-1.
  • Explicit path 108-2 is contained within service group 110-1.
  • Explicit path 108-3 is contained within service group 110-2.
  • the example paths in FIG. 1A are what are referred to herein as “strict paths.”
  • the description of a strict path explicitly lists all nodes from one end of the path to the other end of the path. This is in contrast to a “loose path”, whose description does not explicitly list all nodes from one end of the path to the other end of the path.
  • An example of a loose path is R1-R2-R7-R8-R4. This description of the loose path does not list all nodes on a path between R1 and R4.
  • One way to fulfil this loose path R1-R2-R7-R8-R4 is with path 108-2 (R1-R2-R6-R10-R11-R7-R8-R4).
  • the loose path R1-R2-R7-R8-R4 could be fulfilled in another way such as R1-R2-R6-R7-R8-R4, or R1-R2-R3-R7-R8-R4.
  • Another example of a loose path is R5-R10-R11-R8. This description of the loose path does not list all nodes on a path between R5 and R8.
  • One way to fulfil this loose path is with path 108-3 (R5-R9-R10-R11-R7-R8).
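A strict path fulfils a loose path when it shares the loose path's endpoints and contains the loose path's nodes in order. That check can be sketched as below (an illustrative test, not a mechanism the disclosure specifies).

```python
def fulfills(strict, loose):
    """True if the strict path has the same endpoints as the loose path
    and contains the loose path's nodes in order (as a subsequence)."""
    if not loose or strict[0] != loose[0] or strict[-1] != loose[-1]:
        return False
    it = iter(strict)
    # 'node in it' advances the iterator, so order is enforced.
    return all(node in it for node in loose)

# Path 108-2 fulfils the loose path R1-R2-R7-R8-R4 from the example above.
strict_108_2 = ["R1", "R2", "R6", "R10", "R11", "R7", "R8", "R4"]
loose = ["R1", "R2", "R7", "R8", "R4"]
```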
  • link-state information for such loose paths is distributed in the network 100 based on service group membership.
  • the link-state information for a loose path that is contained in a particular service group is distributed to nodes based on a node’s membership in the particular service group. In an embodiment, the link-state information for a loose path that is contained in a particular service group is distributed to only nodes in the particular service group. In an embodiment, the link-state information for a loose path that is contained in a particular service group is distributed to all nodes in the particular service group, and only to nodes in the particular service group.
  • Link-state information for strict paths contained within a service group may also be distributed based on membership in the service group.
  • the link-state information for a strict path that is contained in a particular service group is distributed to nodes based on a node’s membership in the particular service group.
  • the link-state information for a strict path that is contained in a particular service group is distributed to only nodes in the particular service group.
  • the link-state information for a strict path that is contained in a particular service group is distributed to all nodes in the particular service group, and only to nodes in the particular service group.
  • each node may have interfaces with many additional nodes.
  • a node could have interfaces with ten or even more nodes.
  • unnecessary link-state information need not be transferred in the network 100.
  • Because nodes outside of the service group need not have link-state information for paths contained within the service group, the fact that nodes outside the service group do not have the exact same link-state information does not present problems. In other words, a consistent decision process with respect to, for example, routing packets is still achieved.
  • the nodes maintain one or more databases of link-state information.
  • FIG. 1B depicts network 100 with various link-state databases maintained by the nodes R1 - R15.
  • each node maintains a conventional LSDB 102.
  • Conventional LSDBs 102 are depicted at each node R1 to R15.
  • Each node could be an IS-IS level-1 router, an IS-IS level-2 router, or an IS-IS level 1/2 router. If the node is an IS-IS level-1 router it may maintain a conventional IS-IS level-1 LSDB. If the node is an IS-IS level-2 router it may maintain a conventional IS-IS level-2 LSDB. If the node is an IS-IS level 1/2 router it may maintain a conventional IS-IS level-1 LSDB and a conventional IS-IS level-2 LSDB.
  • link-state information for a service group is maintained separately from the link-state information in the conventional level-1 LSDBs and the conventional level-2 LSDBs.
  • if a node is a member of a service group, it maintains a service group (SG) LSDB for that service group.
  • the SG-LSDB for a given service group stores link-state information for that service group.
  • the link-state information for that service group includes link-state information for one or more explicit paths within that service group.
  • the link-state information for a given explicit path describes the topology for that explicit path.
  • a node that is a member of service group 110-1 maintains a SG-LSDB 104-1 for service group 110-1.
  • SG-LSDB 104-1 contains link-state information for service group 110-1.
  • service group 110-1 may contain one or more explicit paths.
  • service group 110-1 may contain paths 108-1 and 108-2.
  • SG-LSDB 104-1 may, for example, contain link-state information for paths 108-1 and 108-2.
  • a node that is a member of service group 110-2 maintains SG-LSDB 104-2.
  • SG-LSDB 104-2 contains link-state information for service group 110-2.
  • service group 110-2 may contain path 108-3. Therefore, SG-LSDB 104-2 may contain, for example, link-state information for path 108-3.
  • the reference numeral 104 is used herein to refer to an SG-LSDB in general, without reference to a specific SG-LSDB.
  • the reference numeral 108 is used herein to refer to an explicit path in general, without reference to a specific explicit path.
  • a SG-LSDB 104 contains scope qualified flooding (SQF) LSPs for a service group 110.
  • scope qualified flooding refers to flooding of link-state information based on a service group.
  • the link-state information is flooded to only nodes in the service group.
  • Each SQF-LSP contains link-state information for a service group 110.
  • SG-LSDB 104-1 contains one or more SQF-LSPs for service group 110-1.
  • SG-LSDB 104-2 contains one or more SQF-LSPs for service group 110-2.
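A minimal sketch of this database layout, keeping the conventional LSDB separate from per-service-group SG-LSDBs; the class name, field names, and identifiers are invented for illustration.

```python
class NodeDatabases:
    """A node keeps its conventional LSDB separate from one SG-LSDB per
    service group it is a member of."""
    def __init__(self):
        self.lsdb = {}       # conventional LSDB: lsp_id -> link-state info
        self.sg_lsdbs = {}   # service group id -> {lsp_id -> link-state info}

    def store_scoped(self, sg_id, lsp_id, info):
        # SQF-LSPs for a service group go only into that group's SG-LSDB,
        # never into the conventional LSDB.
        self.sg_lsdbs.setdefault(sg_id, {})[lsp_id] = info

# Node R6 is a member of both service groups 110-1 and 110-2.
r6 = NodeDatabases()
r6.store_scoped("110-1", "lsp-a", {"path": "108-1"})
r6.store_scoped("110-2", "lsp-b", {"path": "108-3"})
```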
  • each service group has a priority.
  • the priority is used to determine an order in which the SG-LSDBs for the different service groups are synchronized. For example, service group 110-1 may have a higher priority than service group 110-2.
  • service group 110-1 may be assigned to a customer that is paying for a higher level of service than is a customer associated with service group 110-2.
  • if a node is a member of more than one service group, the node gives a higher priority to synchronizing the SG-LSDB 104 associated with the higher priority service group. For example, node R6 may synchronize SG-LSDB 104-1 prior to synchronizing SG-LSDB 104-2.
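The priority-ordered synchronization could be sketched as below; the priority values are invented for the example.

```python
def sync_order(sg_priorities):
    """Return service-group ids ordered highest priority first: the order
    in which their SG-LSDBs are synchronized with a neighbor."""
    return sorted(sg_priorities, key=sg_priorities.get, reverse=True)

# Service group 110-1 outranks 110-2, so its SG-LSDB is synchronized first.
order = sync_order({"110-1": 7, "110-2": 3})
```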
  • the link-state information in the SG-LSDB 104 for a given service group may be used to forward packets in the service group.
  • an explicit path describes a route for forwarding data packets that contain a path identifier for the path.
  • the sequentially ordered topological PDEs comprise a preferred path routing (PPR) of a preferred path for routing packets containing the PPR identifier (PPR-ID) through the network.
  • a PPR indicates a preferred path over which packets containing the PPR-ID should be forwarded.
  • a node updates a locally stored forwarding database to indicate that data packets including this particular PPR-ID should be routed along the path identified by the PPR information instead of the predetermined shortest path, calculated using SPF.
  • the PPR represents a data path in the service group 110 from a source node to a destination node, and includes at least one intermediate node. In one embodiment, the PPR represents a data path in the service group 110 from multiple source nodes to a single destination node, and includes at least one intermediate node. In one embodiment, the PPR represents a data path in the service group 110 from a single source node to multiple destination nodes, and includes at least one intermediate node. In one embodiment, the PPR represents a data path in the service group 110 from at least one source node to multiple destination nodes in the network 100, and includes at least one intermediate node. In one embodiment, the PPR represents a data path in the service group 110 from multiple source nodes to at least one destination node in the network 100, and includes at least one intermediate node.
  • when a node receives a data packet, the node inspects the data packet to determine whether a PPR-ID is included in the data packet.
  • the PPR-ID may be included in a header of the data packet. If a PPR-ID is included in the data packet, the node performs a lookup on the locally stored forwarding database to determine the next PDE associated with the PPR-ID identified in the data packet.
  • the PDE in the locally stored forwarding database indicates a next hop (another network element, link, or segment) by which to forward the data packet. The node forwards the data packet to the next hop based on the PDE indicated in the locally stored forwarding database.
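The forwarding step just described, looking up the packet's PPR-ID and sending toward the next PDE after this node, can be sketched as below. The forwarding-table contents and PPR-ID value are illustrative only.

```python
def next_hop(forwarding_db, ppr_id, this_node):
    """Find this node in the PPR's ordered PDE list and return the next
    PDE (the next hop); None if the PPR-ID or node is unknown, or if
    this node is the end of the path."""
    pdes = forwarding_db.get(ppr_id)
    if pdes is None or this_node not in pdes:
        return None
    i = pdes.index(this_node)
    return pdes[i + 1] if i + 1 < len(pdes) else None

# Invented PPR-ID 1081 mapped to the PDEs of path 108-1 (R1-R2-R3-R7).
fdb = {1081: ["R1", "R2", "R3", "R7"]}
```

For example, a packet carrying PPR-ID 1081 arriving at R2 is forwarded to R3; at R7 the path ends.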
  • the nodes in the service group 110 are configured to transmit data packets via the PPR instead of the shortest path.
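The PPR-ID lookup described above can be sketched as follows. This is a non-normative illustration only; the packet layout, the dictionary-based forwarding database, and all names are assumptions of the sketch, not part of the embodiments.

```python
# Sketch of the PPR-ID forwarding lookup (hypothetical data layout).

def forward(packet, ppr_fdb, spf_next_hop):
    """Return the next hop for a packet: the PPR path entry when the
    packet carries a PPR-ID known to this node, else the SPF next hop."""
    ppr_id = packet.get("ppr_id")          # PPR-ID may be in the header
    if ppr_id is not None and ppr_id in ppr_fdb:
        return ppr_fdb[ppr_id]             # next PDE: node, link, or segment
    return spf_next_hop                    # fall back to the shortest path

# Example: PPR-ID 7 is pinned through node R3; other traffic follows SPF.
fdb = {7: "R3"}
assert forward({"ppr_id": 7}, fdb, "R2") == "R3"
assert forward({}, fdb, "R2") == "R2"
```

A packet carrying an unknown PPR-ID likewise falls back to the SPF next hop in this sketch.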
  • Further details of PPR are described in the link state routing (LSR) Working Group Draft Document entitled “Preferred Path Routing (PPR) in IS-IS,” dated July 8, 2019, by U. Chunduri, et al. (hereinafter, “Chunduri PPR 2019”); link state routing (LSR) Working Group Draft Document entitled “Preferred Path Routing (PPR) in IS-IS,” dated March 8, 2020, by U. Chunduri, et al. (hereinafter, “Chunduri PPR 2020”); and link state routing (LSR) Working Group Draft Document entitled “Preferred Path Route Graph Structure,” dated March 8, 2020, by U. Chunduri, et al. (hereinafter, “Chunduri Graph”), all of which are incorporated by reference herein in their entirety.
  • a path describes a traffic engineering (TE) path.
  • the sequentially ordered topological PDEs describe a traffic engineering (TE) path.
  • the path may be used to distribute TE characteristics of the links and nodes.
  • the TE information is not required to be on all nodes in the network 100.
  • the path includes nodes that act as a TE path computation element (PCE).
  • the TE PCE may implement a decentralized path computation model used by a Resource Reservation Protocol (RSVP-TE). Further details of traffic engineering (TE) are described in the Internet Engineering Task Force (IETF) Request for Comments (RFC) 7810, entitled “IS-IS Traffic Engineering (TE) Metric Extensions,” dated May 2016, by S.
  • nodes may exchange a hello PDU periodically to establish adjacency.
  • a node sends a hello PDU in order to inform its neighbors that the node is configured to perform service group flooding of link-state information.
  • the node may also receive hello PDUs from its neighbors to learn which neighbors support service group flooding.
  • there are multiple types of service group flooding to cover different IS-IS levels (e.g., IS-IS level-1 router, IS-IS level-2 router, IS-IS level-1/2 router).
  • the hello PDU advertises the type of service group flooding supported by the node.
  • the node sends an IS-IS Hello Packet (IIH) to advertise the type of service group flooding supported by the node.
  • the hello message contains a Type Length Value (TLV) to indicate the supported flooding scope(s).
  • TLV is an extension of a TLV described in Internet Engineering Task Force (IETF), Request for Comments (RFC): 7356, entitled “IS-IS Flooding Scope Link State PDUs (LSPs),” by L. Ginsberg et al. (hereinafter RFC 7356), dated September 2014, which is incorporated herein in its entirety.
  • RFC 7356 describes a TLV (TLV type 243) for announcing scope flooding support.
  • FIG. 2A depicts a TLV that is described in RFC 7356 for announcing scope flooding support.
  • the TLV 200 has a number of fields 202a - 202n for specifying a supported flooding scope.
  • a service group flooding identifier may be placed into one of the fields 202 to indicate a type of service group flooding that is supported by a node.
  • FIG. 2B depicts a table 220 that contains information for a number of different types of service group flooding, in accordance with one embodiment.
  • Each row in table 220 is for a different type of service group flooding, in accordance with one embodiment.
  • Table 220 has an LSP Scope Identifier column 222.
  • the table 220 has example values for the entries in the LSP Scope Identifier column 222. However, other values may be used.
  • the values in column 222 may be assigned by the Internet Assigned Numbers Authority (IANA).
  • the LSP Scope Identifier is a unique identifier (e.g., a unique number) for the type of service group for that row. In one embodiment, the unique identifier is inserted in a supported scope field 202 in the TLV 200 depicted in FIG. 2A.
  • Table 220 contains a description column 224, which contains a description of the service group flooding for each row.
  • Standard Level-1 Service Group Scope Qualified LSP refers to service group flooding for IS-IS level 1.
  • Standard Level-2 Service Group Scope Qualified LSP refers to service group flooding for IS-IS level-2.
  • Standard Domain Service Group Scope Qualified LSP refers to service group flooding for IS-IS level 1/2.
  • Table 220 contains an LSPID Format/TLV Format column 226.
  • the LSPID Format for each entry is extended.
  • the TLV format for the top three table entries is Standard, whereas the TLV format for the bottom three table entries is Extended. The difference is the length of the TLV entry.
  • example values are provided for the LSP Scope Identifiers in table 220.
  • RFC 7356 indicates that Scope Identifiers between 1 - 63 are reserved for flooding scopes with standard TLVs.
  • RFC 7356 indicates that Scope Identifiers between 64 - 127 are reserved for flooding scopes with extended TLVs.
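The RFC 7356 identifier ranges cited above can be checked with a small helper. This is an illustrative sketch only; the six service-group scope identifiers themselves are left unassigned in table 220.

```python
def tlv_format_for_scope(scope_id):
    """Classify an LSP Scope Identifier per the RFC 7356 ranges cited
    above: 1-63 use standard TLVs, 64-127 use extended TLVs."""
    if 1 <= scope_id <= 63:
        return "standard"
    if 64 <= scope_id <= 127:
        return "extended"
    raise ValueError("scope identifier out of range: %d" % scope_id)

assert tlv_format_for_scope(10) == "standard"
assert tlv_format_for_scope(70) == "extended"
```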
  • FIG. 2C is a flowchart of one embodiment of a process 250 of nodes exchanging hello PDUs to advertise service group flooding capability.
  • the process 250 may be used to establish adjacencies, as well as to inform neighbors about support for service group flooding.
  • Step 252 includes a node sending a hello message to its neighbors.
  • the hello message advertises that the node supports service group flooding.
  • the hello message contains a TLV such as depicted in FIG. 2A.
  • the hello message is not limited to the TLV depicted in FIG. 2A.
  • the TLV is an extension of TLV 243 described in RFC 7356.
  • the TLV is not limited to being an extension of TLV 243 described in RFC 7356.
  • the node may specify more than one type of supported service group flooding.
  • the hello message specifies one or more of the types of service group flooding.
  • one or more of the supported scope fields 202 contains an LSP Scope Identifier that identifies a type of service group flooding that is supported by the node.
  • the hello message also advertises one or more service groups of which the node is a member.
  • FIG. 2D depicts one embodiment of a TLV that may be included in the hello message.
  • the TLV 280 contains a type field 282.
  • Length field 284 is used to indicate the length of the TLV 280.
  • Flags 286 may be used to indicate whether the optional sub-TLVs (see field 292) are used.
  • the Service Group Identifier 290 is used to specify a unique identifier for the service group.
  • the priority field 288 is used to specify the priority of the service group.
  • a sub-TLV field 292 is included in the TLV 280 to allow specifying additional information about the service group. As noted, sub-TLV field 292 is optional.
  • the TLV 280 is added to the IS-IS top-level TLV registry. However, the TLV 280 is not limited to being in the TLV registry of IS-IS.
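As a rough illustration of the TLV 280 layout (type 282, length 284, flags 286, priority 288, Service Group Identifier 290, optional sub-TLVs 292), the fields can be serialized as below. The field widths chosen here (one octet each for type, length, flags, and priority; four octets for the Service Group Identifier) and the type value are assumptions of the sketch, not values defined by the embodiments.

```python
import struct

def encode_sg_tlv(tlv_type, flags, priority, sg_id, sub_tlvs=b""):
    """Serialize a service-group TLV: type, length, flags, priority,
    Service Group Identifier, then optional sub-TLVs (assumed widths)."""
    value = struct.pack("!BBI", flags, priority, sg_id) + sub_tlvs
    return struct.pack("!BB", tlv_type, len(value)) + value

tlv = encode_sg_tlv(tlv_type=200, flags=0, priority=1, sg_id=3)
assert len(tlv) == 8          # 2-octet header + 6-octet value
assert tlv[1] == 6            # the length field covers the value only
```

The optional sub-TLV bytes simply extend the value portion, which is why the length field must be computed after they are appended.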
  • the node learns of the one or more service groups of which the node is a member by receiving configuration information from, for example, a control node or the like. For example, a network operator may select which nodes are to be part of a service group, and send messages to the nodes in the service group providing configuration information that informs the nodes that they are members of the service group.
  • Step 254 includes the node receiving Hello messages from neighbor nodes.
  • the Hello messages indicate what service group flooding is supported by each neighbor, as well as service group membership of the other nodes. These Hello messages may be similar to the Hello message sent by the node in step 252.
  • Step 256 includes the node building a table or a database that indicates service group membership of its neighbors. This table is based on the hello messages received in step 254. Other nodes also build such databases based on the hello message(s) sent by the node in step 252. As will be described below, this table may be updated from time to time.
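Steps 252-256 can be sketched as a neighbor table built from received hello messages. The message layout (sender, supported scopes, service-group memberships) is a hypothetical simplification for the sketch.

```python
def build_sg_table(hellos):
    """Build the per-neighbor service-group table of step 256 from the
    hello messages received in step 254.  Each hello is assumed to carry
    the sender, its supported flooding scopes, and its memberships."""
    table = {}
    for hello in hellos:
        table[hello["neighbor"]] = {
            "scopes": set(hello["supported_scopes"]),
            "groups": set(hello["service_groups"]),
        }
    return table

hellos = [
    {"neighbor": "Rb", "supported_scopes": [10], "service_groups": [3]},
    {"neighbor": "Rc", "supported_scopes": [10, 70], "service_groups": [3, 4]},
]
table = build_sg_table(hellos)
assert table["Rb"]["groups"] == {3}
assert table["Rc"]["scopes"] == {10, 70}
```

A later hello from the same neighbor simply overwrites its entry, which models the table being updated from time to time.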
  • nodes advertise service group capabilities and membership in hello messages, such as but not limited to IS-IS hello messages. Advertising service group capabilities and membership is not limited to hello messages.
  • nodes in the network advertise service group membership in LSPs.
  • FIG. 3A depicts a format for one embodiment of a TLV that a node sends in an LSP to advertise service group membership.
  • the TLV 300 is a sub-type TLV of TLV 242, which is referred to as “IS-IS router capability.” However, TLV 300 is not limited to being a sub-type TLV of TLV 242.
  • TLV 242 for IS-IS router capability is described in Internet Engineering Task Force (IETF), Request for Comments (RFC): 7981, entitled “IS-IS Extensions for Advertising Router Information,” by L. Ginsberg et al. (hereinafter, RFC 7981), dated October 2016, which is incorporated herein in its entirety.
  • the sub-type field 302 is used to specify the sub-type of TLV.
  • the length field 304 indicates the length of the TLV 300.
  • There are one or more fields 306a - 306n in which to specify the Service Group Identifier and the priority of the service group.
  • FIG. 3B depicts a flowchart of one embodiment of a process 350 of a node advertising its service group memberships.
  • Step 352 includes the node generating an LSP with router capability with service group related sub-TLVs that identify service groups in which the node is a member.
  • the LSP may also specify a priority for each service group.
  • the LSP is an IS-IS LSP.
  • the LSP contains a TLV 242, as defined in RFC 7981. However, the LSP is not limited to having a TLV 242, as defined in RFC 7981.
  • the LSP contains a TLV 300 as depicted in FIG. 3A.
  • Step 354 includes the node sending the LSP to other nodes in the network.
  • the node sends the LSP to all nodes in the network. In other words, the node does not consider whether a node is a member of any of the service groups listed in the packet to determine whether to send the LSP to the node.
  • Step 356 includes the nodes in the network 100 updating their service group tables, if necessary.
  • the node that sent the LSP in step 354 may also receive LSPs with router capability with service group related sub-TLVs that identify service groups in which the other nodes are members from other nodes.
  • the sending node may also update its service group table based on such received LSPs, if necessary.
  • a node dynamically advertises service groups.
  • the advertising node may be, for example, a central controller.
  • the central controller may also be referred to herein as a primary node. However, in some cases, if the primary node is down, another node could dynamically advertise the service groups.
  • the advertising node uses a southbound protocol (e.g., NETCONF/YANG (The Network Configuration Protocol / Yet Another Next Generation), PCEP (Path Computation Element Protocol), or BGP-LS (Border Gateway Protocol Link-State)).
  • FIG. 4A depicts one embodiment of a TLV 400 that is sent by a node to dynamically advertise service groupings.
  • the TLV 400 is sent in an IS-IS LSP.
  • the type field 402 is used to specify the type of TLV.
  • the TLV 400 is in the IS-IS top level registry.
  • the length field 404 specifies the length of the TLV 400.
  • a flags field 406 includes one or more flags. One possible flag may indicate whether this is the primary advertisement or a back-up advertisement. A backup advertisement can be made by any other node in the network, which can be used when the primary node is not in service.
  • the flag field 406 is optional.
  • the TLV 400 next has one or more rows, each of which has a node identifier 408a - 408n, a service group identifier 410a - 410n, and a priority 412a - 412n.
  • Each node identifier 408a - 408n identifies one of the nodes in the network.
  • the service group identifier 410 for a given row specifies a service group of which the node listed in that row is a member.
  • the priority 412 indicates the priority of the service group in that row.
  • FIG. 4B depicts one embodiment of a flowchart 450 of a node dynamically advertising service groupings.
  • Step 452 includes the node generating an LSP that, for each node, identifies one or more service groups of which the node is a member.
  • the node generates TLV 400 depicted in FIG. 4A to place in the LSP.
  • process 450 is not limited to the TLV depicted in FIG. 4A.
  • Step 454 includes the node advertising the LSP to nodes in the network.
  • the LSP may be advertised to all nodes in at least one level of the network.
  • the packet may be advertised to all IS-IS level-1 routers in the network.
  • the information in this TLV 400 should not be leaked to other levels in the network.
  • Step 456 includes the nodes that received the LSP updating their respective service group tables based on the LSP, if necessary.
  • nodes receiving the information in the LSP treat this as equivalent to the SG-ID and Priority information in the router capability TLV 300 (see FIG. 3A).
  • information received in the router capability TLV 300 overrides the information received via TLV 400.
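The precedence rule above (router capability TLV 300 overriding the dynamically advertised TLV 400) can be sketched as a merge of per-node (SG-ID, priority) maps. The map representation is an assumption of the sketch.

```python
def merge_sg_info(tlv400_info, tlv300_info):
    """Merge per-node (sg_id, priority) maps, letting information from
    the router capability TLV 300 override the TLV 400 advertisement."""
    merged = dict(tlv400_info)   # start with the dynamic advertisement
    merged.update(tlv300_info)   # TLV 300 entries take precedence
    return merged

dynamic = {"R1": (3, 1), "R2": (4, 2)}      # from TLV 400
capability = {"R2": (4, 7)}                 # from TLV 300 (wins for R2)
merged = merge_sg_info(dynamic, capability)
assert merged == {"R1": (3, 1), "R2": (4, 7)}
```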
  • Service group flooding, as described herein, floods link-state information based on the nodes in the service group.
  • FIG. 5 depicts a flowchart of one embodiment of a process 500 of service scope flooding. In some embodiments, the process 500 is performed in a network that uses the IS-IS protocol.
  • Step 510 includes a node receiving an LSP on a first network interface.
  • the LSP specifies a flooding scope of service group.
  • the flooding scope is one of the six service group flooding scopes depicted in table 220 of FIG. 2B.
  • the LSP also contains a service group identifier.
  • the LSP may also contain explicit path information for a particular path in the service group.
  • the explicit path information may include a path identifier of an explicit path in the service group and sequentially ordered topological path description elements (PDEs) that describe the explicit path.
  • the sequentially ordered topological path description elements (PDEs) include a list of nodes that describe the explicit path. However, the PDEs are not required to be nodes.
  • the LSP contains explicit path information for only one path. However, the LSP could contain explicit path information for more than one path in the service group. The path(s) could be strict paths or loose paths. In some embodiments, the LSP is referred to herein as an SQF-LSP.
  • Step 520 includes selecting one or more network interfaces to distribute the LSP on based on the service group identifier.
  • the one or more network interfaces that are selected do not include the network interface on which the LSP was received.
  • the one or more network interfaces that are selected include all network interfaces that connect to other nodes in the service group.
  • the one or more network interfaces that are selected do not include any network interfaces that connect to nodes outside of the service group.
  • the one or more network interfaces that are selected coincide exactly with the other nodes in the service group (with the exception of the node from which the LSP was received).
  • Step 530 includes the node distributing the LSP on the one or more network interfaces that were selected in step 520.
  • Step 530 includes flooding the link-state information received on the first network interface based on the nodes in the service group.
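Steps 520-530 can be sketched as an interface-selection function. The topology representation (a map of interface name to attached neighbor) is a hypothetical simplification.

```python
def select_flood_interfaces(rx_iface, iface_to_neighbor, sg_members):
    """Step 520: pick every interface that leads to another member of
    the service group, excluding the interface the LSP arrived on and
    any interface leading to a node outside the group."""
    return [
        iface
        for iface, neighbor in iface_to_neighbor.items()
        if iface != rx_iface and neighbor in sg_members
    ]

ifaces = {"eth0": "Ra", "eth1": "Rb", "eth2": "Rx"}   # Rx not in the group
selected = select_flood_interfaces("eth0", ifaces, {"Ra", "Rb"})
assert selected == ["eth1"]                           # step 530 floods here
```

Excluding the receiving interface prevents the LSP from being reflected back to its sender.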
  • a node that is a member of a service group maintains a table or database of link-state information for that service group.
  • a node maintains an SG-LSDB for each service group in which it is a member.
  • These SG-LSDBs are separate and independent from conventional LSDBs maintained by the nodes.
  • the SG-LSDBs are synchronized independently from the conventional LSDBs.
  • each SG-LSDB contains link state information for explicit paths in the service group associated with the SG-LSDB.
  • FIG. 6 depicts two nodes and link-state databases maintained by the nodes, in accordance with one embodiment.
  • the two nodes Ra, Rb have a point-to-point (P2P) adjacency (ADJ).
  • the nodes Ra, Rb could be two of the nodes in network 100 (see FIGs. 1A or 1 B), but are not limited to those examples.
  • Node Ra has a conventional IS-IS link-state database (LSDB) 102a, and two service group link-state databases (SG-LSDBs) 104-3a, 104-4a.
  • Node Rb has a conventional IS-IS link-state database (LSDB) 102b, and two SG-LSDBs 104-3b, 104-5b.
  • the notation being used for the SG-LSDBs is for the number following “104” to refer to a service group number, and for the letter to refer to the node.
  • Each SG-LSDB 104 contains one or more SQF-LSPs.
  • SG-LSDB 104-3a contains SQF-LSP 604-3
  • SG-LSDB 104-4a contains SQF-LSP 604-4
  • SG-LSDB 104-3b contains SQF-LSP 604-3
  • SG-LSDB 104-5b contains SQF-LSP 604-5.
  • Synchronizing the SG-LSDBs 104 will synchronize the link-state information of the SQF-LSPs 604.
  • Synchronizing the link-state information of the SQF-LSPs 604 means to harmonize the SQF-LSPs 604 in the respective SG-LSDBs 104.
  • SQF-LSP 604-3 in SG-LSDB 104-3a is harmonized with SQF-LSP 604-3 in SG-LSDB 104-3b. If one node has a newer version of an SQF-LSP 604, then the newer version will replace the older version.
  • the given node may receive an SQF-LSP 604 from another node, and use the received SQF-LSP 604 to update the given node’s version of the SQF-LSP 604.
  • the given node may send its version of the SQF-LSP 604 to the other node, such that the other node may replace its version of the SQF-LSP 604 with the received SQF-LSP 604.
  • node Rb sends the newer version to node Ra.
  • Ra then replaces its older version of SQF-LSP 604-3 in SG-LSDB 104-3a with the newer version from node Rb.
  • node Ra sends the newer version to node Rb.
  • Rb then replaces its older version of SQF-LSP 604-3 in SG-LSDB 104-3b with the newer version from node Ra.
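The harmonization rule above (a newer version of an SQF-LSP replaces the older one) can be sketched with sequence numbers. Representing each SQF-LSP as a (sequence number, body) pair is an assumption of the sketch.

```python
def harmonize(local_db, remote_db):
    """Replace any locally stored SQF-LSP whose remote copy carries a
    higher sequence number, as in the Ra/Rb example above.  SQF-LSPs
    the local node lacks entirely are taken from the remote copy."""
    for lsp_id, (remote_seq, remote_body) in remote_db.items():
        local = local_db.get(lsp_id)
        if local is None or remote_seq > local[0]:
            local_db[lsp_id] = (remote_seq, remote_body)

ra = {"604-3": (5, "old path data")}
rb = {"604-3": (6, "new path data")}
harmonize(ra, rb)                      # Rb's newer version wins
assert ra["604-3"] == (6, "new path data")
```

In the embodiments a node only installs SQF-LSPs for service groups of which it is a member; that membership check is omitted here for brevity.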
  • FIG. 7A depicts a flowchart of one embodiment of a process 700 of two nodes synchronizing SG-LSDBs.
  • Step 702 includes node Ra informing node Rb of all service groups that Ra has in common with Rb. Stated another way, node Ra informs node Rb of the common SQF-LSPs 604 that the two nodes maintain. With respect to the example of FIG. 6, node Ra and Rb have SQF-LSPs 604-3 in common.
  • node Ra sends node Rb one or more complete sequence number PDUs (CSNPs) to list the service groups (or SQF-LSPs 604) in common.
  • Step 704 includes node Rb accessing the identifier of the next service group (or SQF-LSP 604) supported by Ra.
  • step 706 the two nodes synchronize their respective SG-LSDBs for this service group.
  • explicit path information for any paths for this service group are synchronized in the respective SG-LSDBs.
  • SG-LSDB 104-3a is synchronized with SG-LSDB 104-3b.
  • Step 708 includes a determination if there are more service groups for which SG-LSDBs have not yet been synchronized. In the example from FIG. 6, all common SG-LSDBs between Ra and Rb have been synchronized. Hence, the process concludes. Process 700 may then be performed with other pairs of nodes.
  • FIG. 7B provides further details of one embodiment of synchronizing SG-LSDBs between two nodes. Again, an example with respect to nodes Ra and Rb in FIG. 6 will be referenced. FIG. 7B provides further details for one embodiment of the process 700 depicted in FIG. 7A.
  • Step 720 includes node Ra sending one or more flooding scope complete sequence number PDUs (FS-CSNPs) to node Rb.
  • the one or more FS-CSNPs contain a complete list of all SQF-LSPs 604 in all SG-LSDBs 104 maintained by node Ra.
  • the one or more FS-CSNPs may be sent periodically by node Ra to node Rb.
  • the FS-CSNP is an extension to a PDU as defined in RFC 7356.
  • FIG. 7C depicts a format for an embodiment of an FS-CSNP 740.
  • the format has a number of fields, which will now be briefly discussed.
  • the first field 742 may have a value of 0x83, as defined in ISO/IEC 10589:2002, Second Edition, "Information technology - Telecommunications and information exchange between systems - Intermediate System to Intermediate System intradomain routeing information exchange protocol for use in conjunction with the protocol for providing the connectionless-mode network service (ISO 8473)", 2002 (hereinafter, “ISO 8473”), which is hereby incorporated by reference.
  • the length indicator 744 is the length of the fixed header in octets.
  • the version/protocol ID Extension 746 may be set to 1.
  • the ID length 748 may be as defined in ISO 8473.
  • the PDU type 750 may be 11, as defined in ISO 8473.
  • the version 754 may be 1.
  • the scope 758 is used to specify the service group flooding scope. Referring back to FIG. 2B, an LSP Scope Identifier is placed in the scope field 758 of the FS-CSNP 740. Recall that the values for the LSP Scope Identifiers are left blank in FIG. 2B for generality.
  • the LSP Scope Identifier will identify one of the six service group types in table 220. There may be more or fewer than six service group types. Note that if a node supports more than one type of service group flooding, then the node may send a separate FS-CSNP 740 for each type of supported service group flooding.
  • the PDU length 762 indicates the entire length of the FS-CSNP 740 (including the header).
  • the Source ID 764 is the system ID of the node that generated and sent the FS-CSNP 740.
  • the variable-length fields 769 that are allowed in an FS-CSNP are limited to those TLVs that are supported by standard CSNP.
  • the SQF-LSP IDs 766, 768 are used to identify service groups of which the node is a member. Stated another way, the SQF-LSP IDs 766, 768 are used to identify service groups for which the node has link-state information.
  • the SQF-LSP IDs 766, 768 indicate what SQF-LSPs 604 are maintained in an SG-LSDB 104.
  • the SQF-LSP IDs 766, 768 specify a range of SQF-LSPs 604.
  • the Start SQF-LSP ID 766 is the SQF-LSP ID of the first SQF-LSP having the specified scope (in scope field 758) in the range covered by this FS-CSNP.
  • the End SQF-LSP ID 768 is the SQF-LSP ID of the last SQF-LSP having the specified scope in the range covered by this FS-CSNP.
  • node Ra supports SQF-LSPs 604-3 and 604-4. These SQF-LSPs 604-3 and 604-4 correspond to the two SG-LSDBs 104-3a, 104-4a.
  • the FS-CSNP 740 specifies a range of SQF-LSPs 604. Hence, if the SQF-LSPs 604 supported by node Ra have a gap in the ID numbers, then node Ra may send more than one FS-CSNP 740.
  • Step 722 in FIG. 7B includes node Rb setting a Send Receive Message (SRM) bit (also referred to as the SRM flag) for all SQF-LSPs 604 that it supports but that are absent from the FS-CSNP 740.
  • the SRM bit indicates what SQF-LSPs are to be transmitted, and on what interfaces.
  • node Rb checks the SQF-LSP IDs 766, 768 to determine if there are any SQF-LSPs 604 supported by node Rb that are not listed in the SQF-LSP IDs 766, 768. For each such SQF-LSP 604, node Rb sets the SRM bit.
  • node Rb determines that SQF- LSP 604-5 is absent from the FS-CSNP 740.
  • the SRM bit is set for SQF-LSP 604-5.
  • Step 722 also includes setting the SRM bit for each SQF-LSP 604 for which node Rb has a newer version. For example, if node Rb has a newer version of SQF-LSP 604-3, then node Rb sets the SRM bit for that SQF-LSP 604. For the purpose of illustration, it will be assumed that node Rb has a newer version of SQF-LSP 604-3. Hence, the SRM bit for SQF-LSP 604-3 is set.
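The SRM logic of step 722 can be sketched as follows. The FS-CSNP is modeled here as a (start, end) SQF-LSP ID range plus per-LSP sequence numbers, which is an assumed simplification of the format in FIG. 7C.

```python
def set_srm_bits(local_lsps, csnp_range, csnp_seqs):
    """Step 722: flag every locally held SQF-LSP that is absent from
    the advertised FS-CSNP range, or for which the local copy is newer
    than the advertised sequence number."""
    start, end = csnp_range
    srm = set()
    for lsp_id, local_seq in local_lsps.items():
        advertised = start <= lsp_id <= end and lsp_id in csnp_seqs
        if not advertised or local_seq > csnp_seqs[lsp_id]:
            srm.add(lsp_id)
    return srm

# Rb holds LSP 3 (newer) and LSP 5 (absent from Ra's FS-CSNP covering 3-4).
local = {3: 7, 5: 1}
srm = set_srm_bits(local, csnp_range=(3, 4), csnp_seqs={3: 6, 4: 2})
assert srm == {3, 5}
```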
  • Step 724 includes node Rb sending LSP updates to node Ra in accordance with the SRM bits that were set. In accordance with the example provided in step 722, node Rb would send LSP updates for SQF-LSP 604-3 and SQF-LSP 604-5.
  • Step 726 includes node Ra installing the SQF-LSPs 604 for the service groups of which it is a member. In the present example, node Ra would install the update for SQF-LSP 604-3. However, because node Ra is not a member of the service group associated with SQF-LSP 604-5, node Ra would not install SQF-LSP 604-5.
  • Step 728 includes node Ra setting the Send Sequence Number (SSN) bit (also referred to as the SSN flag).
  • Step 730 includes node Ra sending one or more partial sequence number PDUs (PSNPs) to acknowledge the SQF-LSPs that it received.
  • The PSNP will differ depending on whether node Ra is a member of the service group. In the present example, node Ra is a member of the service group associated with SQF-LSP 604-3, but is not a member of the service group associated with SQF-LSP 604-5.
  • FIG. 7D depicts a format for one embodiment of an FS-PSNP 770.
  • FS-PSNP 770 is an extension of an FS-PSNP described in RFC 7356.
  • the format of FS-PSNP 770 has a number of fields, which will now be briefly discussed.
  • the first field 772 may have a value of 0x83, as defined in ISO 8473.
  • the length indicator 774 is the length of the fixed header in octets.
  • the version/protocol ID Extension 776 may be set to 1.
  • the ID length 778 may be as defined in ISO 8473.
  • the PDU type 780 may be 12, as defined in ISO 8473.
  • the version 784 may be 1.
  • the SQF-LSP ID 788 is used to specify the service group ID.
  • the U bit 790 is used to indicate whether or not the node that sends the FS-PSNP 770 supports the SQF-LSP 604 that is specified by SQF-LSP ID 788.
  • a value of “0” for the U bit 790 indicates that the node supports the SQF-LSP that is identified in SQF-LSP ID 788.
  • a value of “1” indicates that the node does not support the SQF-LSP that is identified in SQF-LSP ID 788.
  • in some embodiments, the value for the U bit 790 is reversed such that a value of “1” for the U bit 790 indicates that the node supports the SQF-LSP that is identified in SQF-LSP ID 788, and a value of “0” indicates that the node does not support the SQF-LSP that is identified in SQF-LSP ID 788.
  • the PDU length 792 indicates the entire length of the FS-PSNP 770 (including the header).
  • the Source ID 794 is the system ID of the node that generated and sent the FS-PSNP 770.
  • the variable-length fields 796 that are allowed in an FS-PSNP are limited to those TLVs that are supported by a standard PSNP.
  • Step 732 includes node Rb removing the SRM bit upon receiving the PSNP ACK from node Ra, if the U bit 790 is set.
  • in the present example, node Ra is not a member of the service group associated with SQF-LSP 604-5.
  • the U bit would be set to 1 such that node Rb removes the SRM bit for SQF-LSP 604-5.
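The acknowledgment handling of steps 730-732 can be sketched as follows, using the U-bit convention of the first embodiment above (0 = supported, 1 = not supported). The list-of-pairs PSNP representation is an assumption of the sketch.

```python
def process_psnp_ack(srm_bits, acks):
    """Step 732: clear the SRM bit for each acknowledged SQF-LSP.  An
    ack with the U bit set means the neighbor is not a member of that
    service group and will not install the LSP, but the SRM bit is
    still cleared so the LSP is not retransmitted."""
    installed = []
    for lsp_id, u_bit in acks:
        srm_bits.discard(lsp_id)
        if not u_bit:                      # U=0: neighbor installed it
            installed.append(lsp_id)
    return installed

# Ra acknowledges 604-3 (member, U=0) and 604-5 (non-member, U=1).
srm = {"604-3", "604-5"}
installed = process_psnp_ack(srm, [("604-3", 0), ("604-5", 1)])
assert srm == set()
assert installed == ["604-3"]
```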
  • nodes have a P2P adjacency. However, in other embodiments, nodes have a broadcast (or LAN) adjacency.
  • FIG. 8 depicts nodes in a LAN 800, as well as link-state databases maintained by the nodes, in accordance with one embodiment.
  • Node Rc maintains a conventional IS-IS link-state database (LSDB) 102c, and two service group link-state databases (SG-LSDBs) 104-6c, 104-7c.
  • Node Rd maintains a conventional IS-IS link-state database (LSDB) 102d, and two SG-LSDBs 104-6d, 104-8d.
  • Node Re maintains a conventional IS-IS link-state database (LSDB) 102e, and one SG-LSDB 104-8e.
  • the various SG-LSDBs 104 maintain link-state information for service groups. For example, SG-LSDB 104-6c and 104-6d store SQF- LSP 604-6.
  • SG-LSDB 104-7c stores SQF-LSP 604-7.
  • SG-LSDB 104-8d and 104-8e store SQF-LSP 604-8.
  • one of the nodes serves as a designated intermediate system (DIS).
  • the DIS serves as a coordinator to synchronize all SG-LSDBs that the DIS has with at least one other node in the LAN 800.
  • an example in which node Rc is the DIS will be discussed.
  • node Rc would coordinate the synchronizing of SG-LSDB 104-6c with SG-LSDB 104-6d.
  • Node Rc would also coordinate the synchronizing of SG-LSDB 104-7c with SG-LSDB 104-7e.
  • any type of SG-LSDB in the LAN that node Rc does not maintain would at this point remain un-synchronized.
  • other nodes in the LAN 800 advertise their unsynchronized SG-LSDBs 104. Stated another way, the other nodes may announce the service group ID associated with the unsynchronized SG-LSDBs 104.
  • Another node that maintains the type of SG-LSDB for which such an advertisement is received may then synchronize its SG-LSDB with the advertiser.
  • FIG. 8B depicts a flowchart of one embodiment of a process 820 of synchronizing SG-LSDBs in a LAN.
  • the process 820 will be described with respect to the LAN 800 of FIG. 8A.
  • Step 822 includes synchronizing all SG-LSBDs that the DIS has in common with at least one other node in the LAN.
  • the DIS sends one or more FS-CSNPs to each node with which the DIS shares an SG-LSDB.
  • the node Rc may send one or more FS-CSNPs to node Rd for SG-LSDB 104-6 (which stores SQF-LSP 604-6).
  • the node Rc may send one or more FS-CSNPs to node Re for SG-LSDB 104-7 (which stores SQF-LSP 604-7). The other nodes then respond accordingly to synchronize the SG-LSDBs 104.
  • Step 824 includes the other nodes in the LAN advertising any SQF-LSPs 604 that were not synchronized with the DIS.
  • the node sends an FS-CSNP 740 having the format depicted in FIG. 7C.
  • the A bit 760 is used to advertise the SQF-LSPs that were not synchronized with the DIS.
  • node Rd has SG-LSDB 104-8d (corresponding to SQF-LSP 604-8), which was not synchronized with node Rc (the DIS in this example).
  • node Rd may send an FS-CSNP 740 with the A bit 760 set to “1”.
  • SQF-LSP 604-8 may be specified in field 766.
  • Node Re may also send one or more FS-CSNPs 740 with the A bit 760 set to advertise its unsynchronized SG-LSDB 104-8e.
  • Step 826 includes the other nodes in the LAN that received the advertisements in step 824 synchronizing any SG-LSDBs 104 that were not synchronized with the DIS. Thus, nodes Rd and Re would synchronize SG-LSDB 104-8d with SG-LSDB 104-8e.
  • FIG. 9A depicts one embodiment of a network 900 showing a state of link-state databases immediately after a node restarts.
  • node Rf is just being re-started after having gone down.
  • the link-state information from the databases of nodes Rg and Rh may be used for the rebuild.
  • Node Rg has conventional LSDB 102g and SG-LSDBs 104-9g and 104-10g.
  • Node Rh has conventional LSDB 102h and SG-LSDBs 104-9h and 104-10h. Note that the node being restarted may have a P2P connection or LAN connection with the other nodes.
  • node Rf has a P2P connection with node Rh and a LAN connection with node Rg.
  • FIG. 9B depicts a flowchart of one embodiment of a process 910 of a graceful restart of a node having an SG-LSDB.
  • Step 912 of process 910 includes the node re-starting.
  • Step 914 includes re-establishing a conventional LSDB 102 at the node.
  • the conventional LSDB 102 may be re-established by synchronizing the LSDB 102 with the other LSDBs 102 in the network 900.
  • Step 916 includes re-establishing each SG-LSDB 104 at the node being restarted. Further details of one embodiment of re-establishing one SG-LSDB 104 at the node are described in connection with FIG. 9C.
  • FIG. 9C depicts a flowchart of one embodiment of a process 920 of re-establishing an SG-LSDB 104 during a graceful restart of a node.
  • the process 920 will use node Rf as an example of the node being re-started.
  • Step 922 includes node Rf entering graceful restart (GR).
  • Step 923 includes the node synchronizing its conventional LSDB 102f with the LSDB 102 of other nodes.
  • Step 924 includes starting a T2 timer for this SG-LSDB.
  • Step 926 includes initiating synchronization of this SG-LSDB.
  • Steps 928-930 include a determination of whether the T2 timer has expired (step 928) prior to the synchronization of this SG-LSDB being complete (step 930). If the synchronization of this SG-LSDB completes prior to the T2 timer expiring, then a determination is made whether there is another SG-LSDB to synchronize (step 932). If there is another SG-LSDB to synchronize, then the process returns to step 924 upon the node entering GR for the next SG-LSDB to synchronize. If the T2 timer expires prior to an SG-LSDB completing synchronization (step 928 is yes), then the adjacency is refreshed, in step 934.
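The per-SG-LSDB restart loop of steps 924-934 can be sketched with an injected synchronization-time function standing in for the T2 timer race. The function, the SG-LSDB names, and the outcome representation are all hypothetical.

```python
def restart_sg_lsdbs(sg_lsdbs, sync_time, t2):
    """Steps 924-934 sketched: each SG-LSDB gets its own T2 window; a
    database whose synchronization would exceed T2 is recorded as
    needing an adjacency refresh instead of completing."""
    completed, refreshed = [], []
    for name in sg_lsdbs:
        if sync_time(name) <= t2:     # finished before T2 expired
            completed.append(name)
        else:                         # T2 expired first: refresh adjacency
            refreshed.append(name)
    return completed, refreshed

# Hypothetical databases for restarting node Rf, with simulated sync times.
times = {"104-9f": 2.0, "104-10f": 9.0}
done, refresh = restart_sg_lsdbs(["104-9f", "104-10f"], times.get, t2=5.0)
assert done == ["104-9f"]
assert refresh == ["104-10f"]
```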
  • FIG. 10 illustrates an embodiment of a node.
  • the node 1000 may be configured to implement and/or support the path scope flooding, SG-LSDB synchronization, etc. described herein.
  • the node 1000 may be implemented in a single node or the functionality of node 1000 may be implemented in a plurality of nodes.
  • One skilled in the art will recognize that the term NE encompasses a broad range of devices of which node 1000 is merely an example. While node 1000 is described as a physical device, such as a router or gateway, the node 1000 may also be a virtual device implemented as a router or gateway running on a server or on generic routing hardware (a whitebox).
  • the node 1000 may comprise a plurality of network interfaces 1010/1030.
  • the network interfaces 1010/1030 may be configured to receive/transmit wirelessly or via wirelines. Hence, the network interfaces 1010/1030 may connect to wired or wireless links.
  • the network interfaces 1010/1030 may include input/output ports.
  • the node 1000 may comprise receivers (Rx) 1012 and transmitters (Tx) 1032 for receiving and transmitting data from other nodes.
  • the node 1000 may comprise a processor 1020 to process data and determine which node to send the data to, and a memory 1022.
  • the node 1000 may also generate and distribute LSPs to describe and flood the various topologies and/or area of a network.
  • although illustrated as a single processor, the processor 1020 is not so limited and may comprise multiple processors.
  • the processor 1020 may be implemented as one or more central processing unit (CPU) chips, cores (e.g., a multi-core processor), field-programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), and/or digital signal processors (DSPs), and/or may be part of one or more ASICs.
  • the processor 1020 may be implemented using hardware, software, or both.
  • the processor 1020 includes a service group engine 1024, which may perform processing functions of the nodes.
  • the service group engine 1024 may also be configured to perform the steps of the methods discussed herein. As such, the inclusion of the service group engine 1024 and associated methods and systems provide improvements to the functionality of the node 1000. Further, the service group engine 1024 effects a transformation of a particular article (e.g., the network) to a different state.
  • service group engine 1024 may be implemented as instructions stored in the memory 1022, which may be executed by the processor 1020. Alternatively, the memory 1022 stores the service group engine 1024 as instructions, and the processor 1020 executes those instructions.
  • the memory 1022 may comprise a cache for temporarily storing content, e.g., a random-access memory (RAM). Additionally, the memory 1022 may comprise a long-term storage for storing content relatively longer, e.g., a read-only memory (ROM). For instance, the cache and the long-term storage may include dynamic RAMs (DRAMs), solid-state drives (SSDs), hard disks, or combinations thereof.
  • the memory 1022 may be configured to store one or more SG-LSDBs 104 and one or more LSDBs 102.
  • the memory 1022 may be configured to store a service group table 1050 that stores information about service group membership. For example, the table 1050 may store a list of nodes that are members of various service groups.
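As a rough illustration, the service group table 1050 could be modeled as a mapping from service group identifier (SG-ID) to the set of member nodes. All class and method names below are hypothetical; the disclosure does not prescribe a data structure.

```python
class ServiceGroupTable:
    """Illustrative sketch of a service group membership table:
    maps each SG-ID to the set of node identifiers in that group."""

    def __init__(self):
        self._members = {}  # SG-ID -> set of node identifiers

    def add_member(self, sg_id, node_id):
        """Record that node_id is a member of service group sg_id."""
        self._members.setdefault(sg_id, set()).add(node_id)

    def members(self, sg_id):
        """Return the set of nodes that are members of sg_id."""
        return self._members.get(sg_id, set())

    def groups_of(self, node_id):
        """Return all service groups in which node_id is a member."""
        return {sg for sg, nodes in self._members.items() if node_id in nodes}
```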
  • FIG. 11 illustrates a schematic diagram of a general-purpose network component or computer system.
  • the general-purpose network component or computer system 1100 includes a processor 1102 (which may be referred to as a central processor unit or CPU) that is in communication with memory devices including secondary storage 1104, and memory, such as ROM 1106 and RAM 1108, input/output (I/O) devices 1110, and a network 1112, such as the Internet or any other well-known type of network, that may include network connectivity devices, such as a network interface.
  • although illustrated as a single processor, the processor 1102 is not so limited and may comprise multiple processors.
  • the processor 1102 may be implemented as one or more CPU chips, cores (e.g., a multi-core processor), FPGAs, ASICs, and/or DSPs, and/or may be part of one or more ASICs.
  • the processor 1102 may be configured to implement any of the schemes described herein.
  • the processor 1102 may be implemented using hardware, software, or both.
  • the secondary storage 1104 is typically comprised of one or more disk drives or tape drives and is used for non-volatile storage of data and as an overflow data storage device if the RAM 1108 is not large enough to hold all working data.
  • the secondary storage 1104 may be used to store programs that are loaded into the RAM 1108 when such programs are selected for execution.
  • the ROM 1106 is used to store instructions and perhaps data that are read during program execution.
  • the ROM 1106 is a non-volatile memory device that typically has a small memory capacity relative to the larger memory capacity of the secondary storage 1104.
  • the RAM 1108 is used to store volatile data and perhaps to store instructions. Access to both the ROM 1106 and the RAM 1108 is typically faster than to the secondary storage 1104.
  • At least one of the secondary storage 1104 or RAM 1108 may be configured to store routing tables, forwarding tables, or other tables or information disclosed herein.
  • FIG. 12 illustrates an embodiment of a sub-TLV that may be incorporated within an SGF-LSP 604. It is appreciated that while the disclosed embodiment specifically refers to IS-IS and LSPs, any link state protocol may be applied to flood the network using link state packets.
  • the format of the sub-TLV 1200 includes a type field 1202, a length field 1204, and a Value field 1206.
  • the type field 1202 carries a value assigned by the Internet Assigned Numbers Authority (IANA).
  • the length field 1204 includes the total length of the value field in bytes.
  • the value field 1206 has the identifier of the service group (i.e., SG-ID). In one embodiment, an SG-ID of zero indicates that no service group is being specified. An SG-ID of zero can be used to indicate that a path or graph is not part of any specified service group.
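The type/length/value layout of sub-TLV 1200 can be sketched as a wire encoding. The exact field widths are assumptions for illustration: the real type code is IANA-assigned, and the SG-ID width is not fixed by the description above; a 1-byte type, a 1-byte length, and a 4-byte SG-ID are assumed here.

```python
import struct

def encode_sg_sub_tlv(sg_id, tlv_type=1):
    """Sketch of the SG-ID sub-TLV 1200: type (1202), length (1204),
    value (1206). The type code 1 and 4-byte SG-ID are assumptions."""
    value = struct.pack("!I", sg_id)                      # value field: the SG-ID
    return struct.pack("!BB", tlv_type, len(value)) + value

def decode_sg_sub_tlv(data):
    """Inverse of encode_sg_sub_tlv; returns (type, SG-ID)."""
    tlv_type, length = struct.unpack("!BB", data[:2])
    (sg_id,) = struct.unpack("!I", data[2:2 + length])
    return tlv_type, sg_id
```

Per the description above, an SG-ID of zero would indicate that no service group is being specified.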
  • the sub-TLV 1200 is included in a PPR-Attribute Sub- TLV field of a PPR TLV (a PPR-Attribute Sub-TLV field of a PPR TLV is described in Chunduri PPR 2020). In one embodiment, the sub-TLV 1200 is included in a PPR- Attribute Sub-TLV field of a PPR Tree TLV (a PPR-Attribute Sub-TLV field of a PPR Tree TLV is described in Chunduri graph). However, the sub-TLV 1200 is not limited to these examples.
  • it is not required that the SG-ID be encoded in a sub-TLV.
  • the SG-ID is encoded as part of a main header field of the PPR TLV described in Chunduri PPR 2020.
  • the SG-ID is encoded as part of a main header field of a PPR Tree TLV described in Chunduri graph. For example, an SG-ID field may be added as a new field in the header of the PPR TLV or the PPR Tree TLV.
  • each process associated with the disclosed technology may be performed continuously and by one or more computing devices.
  • Each step in a process may be performed by the same or different computing devices as those used in other steps, and each step need not necessarily be performed by a single computing device.


Abstract

The disclosure relates to distributing link-state information over a network. According to one aspect, one or more processors of a node are configured to receive a link state packet (LSP) on a first network interface. The LSP includes a flooding scope of service group, a service group identifier for a service group comprising nodes, and explicit path information for the service group. The explicit path information includes a path identifier and sequentially ordered topological path description elements (PDEs) in the service group. The one or more processors are configured to select one or more network interfaces other than the first interface to distribute the LSP based on the service group. The one or more processors are configured to distribute the LSP on the selected one or more network interfaces.

Description

SERVICE GROUP FLOODING IN IS-IS NETWORKS
CLAIM FOR PRIORITY
[0001] This application claims the benefit of priority to U.S. Provisional App. 62/890,431, filed August 22, 2019, the entire contents of which are hereby incorporated by reference.
FIELD
[0002] The present disclosure relates to the field of routing in a network, and in particular, to path scope flooding in an Intermediate System to Intermediate System (IS-IS).
BACKGROUND
[0003] In a network comprising a single autonomous system (AS), each node needs to be aware of the topological relationships (i.e., adjacencies) of all other nodes, such that all nodes may build a topological map (topology) of the AS. Nodes may learn about one another's adjacencies by distributing (e.g., flooding) link-state information throughout the network according to an Interior Gateway Protocol (IGP), such as intermediate system (IS) to IS (IS-IS).
[0004] The IS-IS protocol uses several different types of protocol data units (PDUs). A PDU may also be referred to herein as a packet or a message. Hello PDUs are exchanged periodically between nodes (e.g., routers) to establish adjacency. An IS-IS adjacency can be either point-to-point (P2P) or broadcast (also referred to herein as Local Area Network (LAN)). A link-state PDU (LSP) contains the link-state information. LSPs are flooded periodically throughout the AS to distribute the link-state information. A node that receives the LSPs constructs and maintains a link-state database (LSDB) based on the link-state information. A complete sequence number PDU (CSNP) contains a complete list of all LSPs in the LSDB. CSNPs are sent periodically by a node on all interfaces. The receiving nodes use the information in the CSNP to update and synchronize their LSDBs. Partial sequence number PDUs (PSNPs) are sent by a receiver when it detects that it is missing a link-state PDU (e.g., when its LSDB is out of date). The receiver sends the PSNP to a node that transmitted the CSNP.
[0005] IS-IS is a link-state protocol that uses a reliable flooding mechanism to distribute the link-state information across the entire area or domain. Each IS-IS router distributes information about its local state (e.g., usable interfaces and reachable neighbors, and the cost of using each interface) to other routers using an LSP. Each router uses the received link-state information to build up identical link-state databases (LSDBs) that describe the topology of the AS. From its LSDB, each router calculates its own routing table using a Shortest Path First (SPF) or Dijkstra algorithm. This routing table contains all the destinations the routing protocol learns, associated with a next hop node. The protocol recalculates routes when the network topology changes, using the SPF or Dijkstra algorithm, and minimizes the routing protocol traffic that it generates.
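The SPF computation described above can be sketched with a minimal Dijkstra run over an LSDB. The data layout (an adjacency map of `{node: {neighbor: cost}}`) and the returned routing-table shape are assumptions made purely for illustration.

```python
import heapq

def shortest_path_first(lsdb, source):
    """Minimal Dijkstra/SPF sketch over an LSDB modeled as
    {node: {neighbor: cost}}. Returns {destination: (cost, next_hop)},
    i.e. the destinations associated with a next-hop node."""
    routes = {}
    heap = [(0, source, None)]  # (cost so far, node, first hop off the source)
    while heap:
        cost, node, next_hop = heapq.heappop(heap)
        if node in routes:
            continue  # already settled with a shorter (or equal) cost
        routes[node] = (cost, next_hop)
        for neighbor, link_cost in lsdb.get(node, {}).items():
            if neighbor not in routes:
                # The next hop is inherited from the first hop off the source.
                heapq.heappush(heap, (cost + link_cost, neighbor,
                                      neighbor if node == source else next_hop))
    return routes
```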
[0006] Thus, every node in the network is to have the same LSDB to have a consistent decision process. The reliable flooding mechanism of the IS-IS protocol uses a simple rule to distribute the link-state information. Specifically, the reliable flooding mechanism sends or floods the link-state information on all the interfaces except the interface from where the link-state information was received. Though this is inefficient, it ensures the consistent decision process.
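The conventional reliable-flooding rule is simple enough to state in one line; the interface representation below is an illustrative assumption.

```python
def conventional_flood_interfaces(interfaces, received_on):
    """The conventional IS-IS reliable-flooding rule: re-send the LSP on
    every interface except the one on which it was received."""
    return [ifc for ifc in interfaces if ifc != received_on]
```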
BRIEF SUMMARY
[0007] According to one aspect of the present disclosure, there is provided a node for distributing link-state information over a network using protocol data units (PDUs). The node comprises a plurality of network interfaces, and one or more processors in communication with the plurality of network interfaces. The one or more processors are configured to cause the node to receive a link state PDU (LSP) on a first network interface of the plurality of network interfaces. The LSP includes a flooding scope of service group and a service group identifier for a service group comprising a plurality of nodes. The LSP further includes link-state information for the service group. The one or more processors are configured to cause the node to select one or more of the plurality of network interfaces other than the first interface to distribute the LSP based on the service group. The one or more processors are configured to cause the node to distribute the LSP on the selected one or more of the plurality of network interfaces.
[0008] Optionally, in any of the preceding aspects, the one or more processors are further configured to cause the node to select the one or more of the plurality of network interfaces that correspond to the nodes in the service group.
[0009] Optionally, in any of the preceding aspects, the one or more processors are further configured to cause the node to select only network interfaces that correspond to the nodes in the service group.
[0010] Optionally, in any of the preceding aspects, the one or more processors are further configured to cause the node to store the service group identifier and the link-state information for the service group in a service group link-state database (SG-LSDB).
[0011] Optionally, in any of the preceding aspects, the link-state information includes explicit path information for one or more explicit paths in the service group, wherein each explicit path comprises nodes in the service group. For each respective explicit path the explicit path information includes a path identifier of the respective explicit path and sequentially ordered topological path description elements (PDEs) in the service group that describe the respective explicit path.
[0012] Optionally, in any of the preceding aspects, the one or more of the PDEs describe a preferred path routing (PPR) of a preferred path for routing packets through the service group. [0013] Optionally, in any of the preceding aspects, the preferred path for routing packets represents a data path from a source node to a destination node in the service group, wherein the data path has at least one intermediate node between the source node and the destination node.
[0014] Optionally, in any of the preceding aspects, the preferred path for routing packets represents a data path from a plurality of source nodes in the network to at least one destination node in the service group, wherein the data path has at least one intermediate node between the plurality of source nodes and the destination node. [0015] Optionally, in any of the preceding aspects, the preferred path for routing packets represents a data path from at least one source node in the network to a plurality of destination nodes in the service group, wherein the data path has at least one intermediate node between the source node and the plurality of destination nodes. [0016] Optionally, in any of the preceding aspects, the one or more processors are further configured to cause the node to forward packets that contain a path identifier of any of the one or more explicit paths to a next PDE in the sequentially ordered topological PDEs identified by the path identifier.
[0017] Optionally, in any of the preceding aspects, the sequentially ordered topological PDEs identified by the path identifier comprises a loose path that does not contain all nodes on the explicit path on which a packet is forwarded.
[0018] Optionally, in any of the preceding aspects, the one or more of the PDEs describe a traffic engineering (TE) path through the service group.
[0019] Optionally, in any of the preceding aspects, the one or more processors are further configured to cause the node to advertise that the node supports the flooding scope of service group.
[0020] Optionally, in any of the preceding aspects, the one or more processors are further configured to cause the node to advertise identifiers of service groups of which the node is a member.
[0021] Optionally, in any of the preceding aspects, the one or more processors are further configured to cause the node to send an intermediate system (IS) to IS hello message that advertises that the node supports the flooding scope of service group and an identifier of a service group of which the node is a member. [0022] Optionally, in any of the preceding aspects, the one or more processors are further configured to cause the node to send a router capability message that advertises that the node supports the flooding scope of service group and one or more service group identifiers indicating a corresponding one or more service groups of which the node is a member.
[0023] Optionally, in any of the preceding aspects, the one or more processors are further configured to cause the node to generate a packet that, for each node in a set of nodes, identifies all service groups in which the respective node is a member. The one or more processors are further configured to cause the node to advertise the packet to all nodes in the network.
[0024] Optionally, in any of the preceding aspects, the flooding scope of service group is a level-1 service group flooding scope that includes only intermediate system (IS) to IS level-1 nodes.
[0025] Optionally, in any of the preceding aspects, the flooding scope of service group is a level-1 service group flooding scope that includes only intermediate system (IS) to IS level-2 nodes.
[0026] Optionally, in any of the preceding aspects, the flooding scope of service group is a level-1 service group flooding scope that includes both intermediate system (IS) to IS level-1 and IS-IS level-2 nodes.
[0027] Optionally, in any of the preceding aspects, the one or more processors are further configured to cause the node to participate in synchronization of the link-state information identified by the service group identifier and stored at the node with link- state information identified by the service group identifier and stored at an adjacent node in the network.
[0023] Optionally, in any of the preceding aspects, the one or more processors are further configured to cause the node to generate a packet that, for each node in a set of nodes, identifies all service groups in which the respective node is a member. The one or more processors are further configured to cause the node to advertise the packet to all nodes in the network.
[0024] Optionally, in any of the preceding aspects, the flooding scope of service group is a level-1 service group flooding scope that includes only intermediate system (IS) to IS level-1 nodes.
[0025] Optionally, in any of the preceding aspects, the flooding scope of service group is a level-1 service group flooding scope that includes only intermediate system (IS) to IS level-2 nodes.
[0026] Optionally, in any of the preceding aspects, the flooding scope of service group is a level-1 service group flooding scope that includes both intermediate system (IS) to IS level-1 and IS-IS level-2 nodes.
[0027] Optionally, in any of the preceding aspects, the one or more processors are further configured to cause the node to participate in synchronization of the link-state information identified by the service group identifier and stored at the node with link-state information identified by the service group identifier and stored at an adjacent node in the network.
[0028] Optionally, in any of the preceding aspects, the node comprises a plurality of service group link-state databases (SG-LSDBs), wherein the one or more processors are further configured to cause the node to store the service group identifier and the link-state information for the service group in a first SG-LSDB of the plurality of SG-LSDBs. The one or more processors are further configured to cause the node to store service group identifiers for other service groups with corresponding link-state information for the other service groups in other SG-LSDBs of the plurality of SG-LSDBs. The one or more processors are further configured to cause the node to participate in synchronization of the link-state information in the plurality of SG-LSDBs for all of the service groups shared by the node and an adjacent node.
[0029] Optionally, in any of the preceding aspects, the node comprises a plurality of service group link-state databases (SG-LSDBs). The one or more processors are further configured to cause the node to store the service group identifier and the link-state information for the service group in a first SG-LSDB of the plurality of SG-LSDBs. The one or more processors are further configured to cause the node to store service group identifiers for other service groups with corresponding link-state information for the other service groups in other SG-LSDBs of the plurality of SG-LSDBs. The one or more processors are further configured to cause the node to participate in synchronization of zero or more of the plurality of SG-LSDBs with a designated intermediate system (DIS). The one or more processors are further configured to cause the node to advertise any SG-LSDBs that were not synchronized with the DIS.
[0030] Optionally, in any of the preceding aspects, the node comprises a plurality of service group link-state databases (SG-LSDBs). The one or more processors are further configured to cause the node to store the service group identifier and the link-state information for the service group in a first SG-LSDB of the plurality of SG-LSDBs. The one or more processors are further configured to cause the node to store service group identifiers for other service groups with corresponding link-state information for the other service groups in other SG-LSDBs of the plurality of SG-LSDBs. The one or more processors are further configured to cause the node to participate in synchronization of zero or more of the plurality of SG-LSDBs with a designated intermediate system (DIS). The one or more processors are further configured to cause the node to participate in synchronization, with other nodes, of any SG-LSDBs that were not synchronized with the DIS.
[0031] Optionally, in any of the preceding aspects each of the SG-LSDBs includes explicit path information for one or more explicit paths in the service group. For each respective explicit path there is a path identifier of the respective explicit path and sequentially ordered topological path description elements (PDEs) in the service group that describe the respective explicit path.
[0032] According to yet another aspect of the disclosure, there is provided a method for distributing link-state information over a network using protocol data units (PDUs). The method comprises receiving, at a first network interface of a plurality of network interfaces of a node in the network, a link-state PDU (LSP) that specifies a flooding scope of service group. The LSP includes a service group identifier for a service group comprising a plurality of nodes and link-state information for the service group. The method comprises selecting one or more of the plurality of network interfaces other than the first interface to distribute the LSP based on the service group. The method comprises distributing the LSP on the selected one or more of the plurality of network interfaces.
[0033] According to still one other aspect of the disclosure, there is a non-transitory computer-readable medium storing computer instructions for distributing link-state information over a network using protocol data units (PDUs), that when executed by one or more processors, cause the one or more processors to perform the steps of: receiving, at a first network interface of a plurality of network interfaces of a node in the network, a link-state PDU (LSP) that specifies a flooding scope of service group, the LSP includes a service group identifier for a service group comprising a plurality of nodes and link-state information for the service group; selecting one or more of the plurality of network interfaces other than the first interface to distribute the LSP based on the service group; and distributing the LSP on the selected one or more of the plurality of network interfaces.
[0034] This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter. The claimed subject matter is not limited to implementations that solve any or all disadvantages noted in the Background.
BRIEF DESCRIPTION OF THE DRAWINGS
[0035] Aspects of the present disclosure are illustrated by way of example and are not limited by the accompanying figures, in which like references indicate like elements.
[0036] FIG. 1A illustrates a network configured to implement an embodiment of service group flooding.
[0037] FIG. 1B depicts network 100 of FIG. 1A with various link-state databases maintained by the nodes.
[0038] FIG. 2A depicts a TLV for announcing scope flooding support.
[0039] FIG. 2B depicts a table that contains information for a number of different types of service group flooding, in accordance with one embodiment.
[0040] FIG. 2C is a flowchart of one embodiment of a process of nodes exchanging hello PDUs to advertise service group flooding capability.
[0041] FIG. 2D depicts one embodiment of a TLV that may be included in the hello message.
[0042] FIG. 3A depicts a format for a TLV that a node sends in an LSP, in one embodiment, to advertise service group membership.
[0043] FIG. 3B depicts a flowchart of one embodiment of a process of a node advertising its service group memberships.
[0044] FIG. 4A depicts one embodiment of a TLV that is sent by a node to dynamically advertise service groupings.
[0045] FIG. 4B depicts one embodiment of a flowchart of a node dynamically advertising service groupings.
[0046] FIG. 5 depicts a flowchart of one embodiment of a process of service scope flooding.
[0047] FIG. 6 depicts two nodes and link-state databases maintained by the nodes, in accordance with one embodiment.
[0048] FIG. 7A depicts a flowchart of one embodiment of a process of two nodes synchronizing SG-LSDBs.
[0049] FIG. 7B provides further details of one embodiment of synchronizing SG-LSDBs between two nodes.
[0050] FIG. 7C depicts a format for an embodiment of an FS-CSNP.
[0051] FIG. 7D depicts a format for one embodiment of an FS-PSNP.
[0052] FIG. 8A depicts nodes in a LAN, as well as link-state databases maintained by the nodes, in accordance with one embodiment.
[0053] FIG. 8B depicts a flowchart of one embodiment of a process of nodes synchronizing SG-LSDBs in a LAN.
[0054] FIG. 9A depicts one embodiment of a network showing a state of link-state databases immediately after a node restarts.
[0055] FIG. 9B depicts a flowchart of one embodiment of a process of a graceful restart of a node having a SG-LSDB.
[0056] FIG. 9C depicts a flowchart of one embodiment of a process of re-establishing a SG-LSDB during a graceful restart of a node.
[0057] FIG. 10 illustrates an embodiment of a node.
[0058] FIG. 11 illustrates a schematic diagram of a general-purpose network component or computer system.
[0059] FIG. 12 illustrates an embodiment of a TLV in an IS-IS LSP.
DETAILED DESCRIPTION
[0060] The present disclosure will now be described with reference to the figures, which generally relate to distributing link-state information in a network. In an embodiment of service group scope flooding, link-state information associated with the service group is sent to nodes that are part of a service group of nodes in the network. A service group of nodes in a network (or more briefly, a “service group”) is defined as a group of nodes that provide a level of service to a user, customer, or the like (or group of users, group of customers, etc.) of the network. In one embodiment, the level of service is provided as a part of a service-level agreement (SLA). Examples of a level of service include, but are not limited to, network throughput, network bandwidth, downlink speed, uplink speed, etc. The service group is a collection of nodes that provide the level of service. By definition the service group includes a subset of nodes in the network. Herein, the phrase “a subset of nodes in a network” or the like means less than all nodes in the network. In one embodiment, link-state information associated with the service group is sent to nodes based on membership in the service group. In one embodiment, link-state information associated with the service group is sent to nodes that are part of the service group. In one embodiment, link-state information associated with the service group is sent only to nodes that are part of the service group. Hence, by distributing the link-state information associated with the service group based on membership in the service group, the link-state information need not be sent to all nodes in the network.
[0061] In some embodiments, the link-state information associated with the service group contains information for an explicit path. The explicit path is identified by a path identifier and may be described by sequentially ordered topological path description elements (PDEs). A PDE represents a segment of the explicit path.
A PDE can be, but is not limited to, a node, a backup node, a link, etc. An explicit path comprises at least three nodes, but includes a subset of fewer than all of the nodes in the network. Distributing the link-state information to nodes based on membership in the service group allows the explicit path information to be distributed without sending the explicit path information to all nodes in the network.
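The path-identifier-plus-PDE description above can be sketched as a small data structure. The class names, the `kind` labels, and the example values are illustrative assumptions, not formats defined by the disclosure.

```python
from dataclasses import dataclass

@dataclass
class PDE:
    """A path description element: one segment of an explicit path.
    `kind` may be e.g. "node", "backup_node", or "link" (labels illustrative)."""
    kind: str
    identifier: str

@dataclass
class ExplicitPath:
    """An explicit path: a unique path identifier plus sequentially
    ordered PDEs from one end of the path to the other."""
    path_id: str
    pdes: list

# Illustrative encoding of path 108-1 from FIG. 1A (nodes R1-R2-R3-R7).
path_108_1 = ExplicitPath(
    path_id="108-1",
    pdes=[PDE("node", n) for n in ("R1", "R2", "R3", "R7")],
)
```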
[0062] It is understood that the present embodiments of the disclosure may be implemented in many different forms and that claim scope should not be construed as being limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the inventive embodiment concepts to those skilled in the art. Indeed, the disclosure is intended to cover alternatives, modifications and equivalents of these embodiments, which are included within the scope and spirit of the disclosure as defined by the appended claims. Furthermore, in the following detailed description of the present embodiments of the disclosure, numerous specific details are set forth in order to provide a thorough understanding. However, it will be clear to those of ordinary skill in the art that the present embodiments of the disclosure may be practiced without such specific details.
[0063] FIG. 1A illustrates a network 100 configured to implement an embodiment of service group scope flooding. In an embodiment of service group flooding, link- state information associated with the service group is sent to nodes based on membership in the service group. In an embodiment of service group scope flooding, link-state information associated with the service group is sent only to nodes that are members of the service group.
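Membership-based interface selection can be sketched as follows. Unlike conventional flooding (every interface except the receiving one), only interfaces whose adjacent node is a member of the service group are chosen. The `neighbor_of` mapping from interface to adjacent node is an illustrative assumption.

```python
def select_sg_flood_interfaces(interfaces, received_on, neighbor_of, sg_members):
    """Sketch of service-group-scoped flooding: select interfaces other than
    the receiving one whose neighbor is a member of the service group."""
    return [ifc for ifc in interfaces
            if ifc != received_on and neighbor_of[ifc] in sg_members]
```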
[0064] The network 100 includes a number of nodes R1-R15, which may also be referred to herein as “network nodes” or “network elements.” In one embodiment, the network 100 includes one or more areas, as the term is used in the IS-IS protocol. In one embodiment, a node could be an IS-IS level-1 router, an IS-IS level-2 router, or an IS-IS level-1/2 router.
[0065] The nodes R1-R15 are connected by links 106 (the links between R3/R4, R4/R5, and R11/R12 are labeled with reference numeral 106). Each of the nodes R1-R15 may be a physical device, such as a router, a bridge, a virtual machine, a network switch, or a logical device configured to perform switching, routing, as well as distributing and maintaining link-state information as described herein. In an embodiment, any of nodes R1-R15 may be a headend node positioned at an edge of the network 100, an egress node from which traffic is transmitted, an intermediate node, or any other type of node. The links 106 may be wired or wireless links or interfaces interconnecting respective pairs of the nodes together. The network 100 may include any number of nodes. In an embodiment, the nodes are configured to implement various packet forwarding protocols, such as, but not limited to, MPLS, IPv4, IPv6, and Big Packet Protocol.
[0066] Two service groups 110-1, 110-2 are indicated by the dashed lines. Service group 110-1 contains nodes R1, R2, R3, R4, R6, R7, R8, R10, and R11. Service group 110-2 contains nodes R5, R6, R7, R8, R9, R10, R11, and R12. A service group, as the term is defined herein, is a group of two or more nodes that provide a type of service. In one embodiment, the service group has two or more nodes that provide a type of service. In one embodiment, the service group has four or more nodes that provide a type of service. In one embodiment, the service group has five or more nodes that provide a type of service. In one embodiment, the service groups 110 are established in order to meet a level of service. The level of service may be specified in an SLA. For example, a network operator may select nodes R1, R2, R3, R4, R6, R7, R8, R10, and R11 to meet a certain level of service (e.g., a certain minimum bandwidth, uplink speed, downlink speed, etc.).
[0067] There are also three paths depicted in network 100. Path 108-1 includes nodes R1-R2-R3-R7. Path 108-2 includes nodes R1-R2-R6-R10-R11-R7-R8-R4. Path 108-3 includes nodes R5-R9-R10-R11-R7-R8. Each explicit path has a sequence of nodes. In one embodiment, the sequential order is bi-directional. In one embodiment, an explicit path 108 includes at least three nodes. In one embodiment, an explicit path 108 includes at least four nodes. In one embodiment, an explicit path 108 includes at least five nodes. In one embodiment, a path (e.g., path 108-1, 108-2) is identified by a path identifier and is described by sequentially ordered topological path description elements (PDEs). A PDE represents a segment of the path. A PDE can be, but is not limited to, a node, a backup node, a link 106, etc. Each explicit path is assigned a unique path identifier.
[0068] Each explicit path is contained within at least one of the service groups. By being contained within a service group, it is meant that all of the nodes on the path are members of the service group. Explicit path 108-1 is contained within service group 110-1. Explicit path 108-2 is contained within service group 110-1. Explicit path 108-3 is contained within service group 110-2.
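The containment rule above can be illustrated with a short sketch (hypothetical code, not part of the disclosed protocol; node and group names follow FIG. 1A):

```python
def path_contained_in_group(path_nodes, group_members):
    """A path is contained within a service group when every node on the
    path is a member of that group."""
    return set(path_nodes).issubset(group_members)

# Service group memberships from FIG. 1A.
SG_110_1 = {"R1", "R2", "R3", "R4", "R6", "R7", "R8", "R10", "R11"}
SG_110_2 = {"R5", "R6", "R7", "R8", "R9", "R10", "R11", "R12"}

PATH_108_1 = ["R1", "R2", "R3", "R7"]
PATH_108_2 = ["R1", "R2", "R6", "R10", "R11", "R7", "R8", "R4"]

print(path_contained_in_group(PATH_108_1, SG_110_1))  # True
print(path_contained_in_group(PATH_108_1, SG_110_2))  # False: R1, R2, R3 are not members
```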
[0069] The example paths in FIG. 1A are what are referred to herein as “strict paths.” The description of a strict path explicitly lists all nodes from one end of the path to the other end of the path. This is in contrast to a “loose path”, whose description does not explicitly list all nodes from one end of the path to the other end of the path. An example of a loose path is R1-R2-R7-R8-R4. This description of the loose path does not list all nodes on a path between R1 and R4. One way to fulfil this loose path R1-R2-R7-R8-R4 is with path 108-2 (R1-R2-R6-R10-R11-R7-R8-R4). However, the loose path R1-R2-R7-R8-R4 could be fulfilled in another way such as R1-R2-R6-R7-R8-R4, or R1-R2-R3-R7-R8-R4. Another example of a loose path is R5-R10-R11-R8. This description of the loose path does not list all nodes on a path between R5 and R8. One way to fulfil this loose path is with path 108-3 (R5-R9-R10-R11-R7-R8). However, there are other options to fulfil loose path R5-R10-R11-R8. In some embodiments, link-state information for such loose paths is distributed in the network 100 based on service group membership.
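The relationship between a loose path and the strict paths that fulfil it amounts to an order-preserving subsequence check, which can be sketched as follows (an illustrative sketch only, not protocol machinery):

```python
def fulfills_loose_path(strict_path, loose_path):
    """True if the loose path's nodes appear in the strict path in the
    same sequential order (intervening nodes are allowed)."""
    remaining = iter(strict_path)  # consumed left to right as nodes match
    return all(node in remaining for node in loose_path)

strict = ["R1", "R2", "R6", "R10", "R11", "R7", "R8", "R4"]  # path 108-2
print(fulfills_loose_path(strict, ["R1", "R2", "R7", "R8", "R4"]))  # True
print(fulfills_loose_path(["R1", "R2", "R3", "R7"], ["R1", "R2", "R7", "R8", "R4"]))  # False
```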
[0070] In an embodiment, the link-state information for a loose path that is contained in a particular service group is distributed to nodes based on a node’s membership in the particular service group. In an embodiment, the link-state information for a loose path that is contained in a particular service group is distributed to only nodes in the particular service group. In an embodiment, the link-state information for a loose path that is contained in a particular service group is distributed to all nodes in the particular service group, and only to nodes in the particular service group.
[0071] Link-state information for strict paths contained within a service group may also be distributed based on membership in the service group. In an embodiment, the link-state information for a strict path that is contained in a particular service group is distributed to nodes based on a node’s membership in the particular service group. In an embodiment, the link-state information for a strict path that is contained in a particular service group is distributed to only nodes in the particular service group. In an embodiment, the link-state information for a strict path that is contained in a particular service group is distributed to all nodes in the particular service group, and only to nodes in the particular service group.
[0072] Note that there may be many more nodes in the network 100 than are depicted in FIG. 1A. Also, each node may have interfaces with many additional nodes. For example, a node could have interfaces with ten or even more nodes. Hence, by distributing the LSPs having link-state information for a service group based on membership in the service group, unnecessary link-state information need not be transferred in the network 100. For example, by limiting the sending of the LSPs having link-state information for a service group to only nodes in that service group, unnecessary link-state information need not be transferred in the network 100. Moreover, since nodes outside of the service group need not have link-state information for paths contained within the service group, the fact that nodes outside the service group do not have the exact same link-state information does not present problems. In other words, a consistent decision process with respect to, for example, routing packets is still achieved.
[0073] In some embodiments, the nodes maintain one or more databases of link-state information. FIG. 1B depicts network 100 with various link-state databases maintained by the nodes R1 - R15. In one embodiment, each node maintains a conventional LSDB 102. Conventional LSDBs 102 are depicted at each node R1 to R15. Each node could be an IS-IS level-1 router, an IS-IS level-2 router, or an IS-IS level 1/2 router. If the node is an IS-IS level-1 router it may maintain a conventional IS-IS level-1 LSDB. If the node is an IS-IS level-2 router it may maintain a conventional IS-IS level-2 LSDB. If the node is an IS-IS level 1/2 router it may maintain a conventional IS-IS level-1 LSDB and a conventional IS-IS level-2 LSDB.
[0074] In one embodiment, link-state information for a service group is maintained separately from the link-state information in the conventional level-1 LSDBs and the conventional level-2 LSDBs. In one embodiment, if a node is a member of a service group, it maintains a service group (SG) LSDB for that service group. The SG-LSDB for a given service group stores link-state information for that service group. In one embodiment, the link-state information for that service group includes link-state information for one or more explicit paths within that service group. In one embodiment, the link-state information for a given explicit path describes the topology for that explicit path. For example, a node that is a member of service group 110-1 maintains a SG-LSDB 104-1 for service group 110-1. Thus, SG-LSDB 104-1 contains link-state information for service group 110-1. Recall from FIG. 1A that service group 110-1 may contain one or more explicit paths. For example, service group 110-1 may contain paths 108-1 and 108-2. Hence, SG-LSDB 104-1 may, for example, contain link-state information for paths 108-1 and 108-2. A node that is a member of service group 110-2 maintains SG-LSDB 104-2. Thus, SG-LSDB 104-2 contains link-state information for service group 110-2. As one example, service group 110-2 may contain path 108-3. Therefore, SG-LSDB 104-2 may contain, for example, link-state information for path 108-3.
[0075] The reference numeral 104 is used herein to refer to an SG-LSDB in general, without reference to a specific SG-LSDB. The reference numeral 108 is used herein to refer to an explicit path in general, without reference to a specific explicit path. In some embodiments, a SG-LSDB 104 contains scope qualified flooding (SQF) LSPs for a service group 110. Scope qualified flooding, as the term is used herein, refers to flooding of link-state information based on a service group. In one embodiment, the link-state information is flooded to only nodes in the service group. Each SQF-LSP contains link-state information for a service group 110. For example, SG-LSDB 104-1 contains one or more SQF-LSPs for service group 110-1, and SG-LSDB 104-2 contains one or more SQF-LSPs for service group 110-2.
[0076] In some embodiments, from time to time the SG-LSDBs of a service group are synchronized with each other. For example, the nodes in service group 110-1 may synchronize their respective SG-LSDBs 104-1. Synchronizing the respective SG-LSDBs means to harmonize the SQF-LSPs in the respective SG-LSDBs. In some embodiments, each service group has a priority. In one embodiment, the priority is used to determine an order in which the SG-LSDBs for the different service groups are synchronized. For example, service group 110-1 may have a higher priority than service group 110-2. A possible reason for this higher priority is that service group 110-1 may be assigned to a customer that is paying for a higher level of service than is a customer associated with service group 110-2. In one embodiment, if a node is a member of more than one service group, the node gives a higher priority to synchronizing the SG-LSDBs 104 associated with the higher priority service group. For example, node R6 may synchronize SG-LSDBs 104-1 prior to synchronizing SG-LSDBs 104-2.
[0077] The link-state information in the SG-LSDB 104 for a given service group may be used to forward packets in the service group. In one embodiment, an explicit path describes a route for forwarding data packets that contain a path identifier for the path. In one embodiment, the sequentially ordered topological PDEs comprise a preferred path routing (PPR) description of a preferred path for routing packets containing the PPR-ID through the network. A PPR indicates a preferred path over which packets containing the PPR-ID should be forwarded. In one embodiment, a node updates a locally stored forwarding database to indicate that data packets including this particular PPR-ID should be routed along the path identified by the PPR information instead of the predetermined shortest path, calculated using SPF. In one embodiment, the PPR represents a data path in the service group 110 from a source node to a destination node, and includes at least one intermediate node. In one embodiment, the PPR represents a data path in the service group 110 from multiple source nodes to a single destination node, and includes at least one intermediate node. In one embodiment, the PPR represents a data path in the service group 110 from a single source node to multiple destination nodes, and includes at least one intermediate node. In one embodiment, the PPR represents a data path in the service group 110 from at least one source node to multiple destination nodes in the network 100, and includes at least one intermediate node. In one embodiment, the PPR represents a data path in the service group 110 from multiple source nodes to at least one destination node in the network 100, and includes at least one intermediate node.
[0078] In one embodiment, when a node receives a data packet, the node inspects the data packet to determine whether a PPR-ID is included in the data packet. In an embodiment, the PPR-ID may be included in a header of the data packet. If a PPR-ID is included in the data packet, the node performs a lookup on the locally stored forwarding database to determine the next PDE associated with the PPR-ID identified in the data packet. The PDE in the locally stored forwarding database indicates a next hop (another network element, link, or segment) by which to forward the data packet. The node forwards the data packet to the next hop based on the PDE indicated in the locally stored forwarding database. In this way, the nodes in the service group 110 are configured to transmit data packets via the PPR instead of the shortest path. Further details of PPR are described in the link state routing (LSR) Working Group Draft Document entitled “Preferred Path Routing (PPR) in IS-IS,” dated July 8, 2019, by U. Chunduri, et al. (hereinafter, “Chunduri PPR 2019”); the link state routing (LSR) Working Group Draft Document entitled “Preferred Path Routing (PPR) in IS-IS,” dated March 8, 2020, by U. Chunduri, et al. (hereinafter, “Chunduri PPR 2020”); and the link state routing (LSR) Working Group Draft Document entitled “Preferred Path Route Graph Structure,” dated March 8, 2020, by U. Chunduri, et al. (hereinafter, “Chunduri Graph”), all of which are incorporated by reference herein in their entirety.
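The lookup-and-forward behavior described above can be sketched as follows. This is a hypothetical illustration: the packet is modeled as a dict, and the forwarding-database layout, field names, and PPR-ID value are assumptions, not taken from the PPR drafts.

```python
def build_fdb(ppr_id, pdes):
    """Per-PPR forwarding entries: at each node on the path, the next PDE
    in sequential order is the next hop for packets carrying that PPR-ID."""
    return {(ppr_id, pdes[i]): pdes[i + 1] for i in range(len(pdes) - 1)}

def next_hop(fdb, node, packet, spf_next_hop):
    """Prefer the PPR next hop when the packet carries a known PPR-ID;
    otherwise fall back to the SPF-computed shortest path."""
    ppr_id = packet.get("ppr_id")
    return fdb.get((ppr_id, node), spf_next_hop)

# Path 108-2 from FIG. 1A, installed under a hypothetical PPR-ID.
fdb = build_fdb("PPR-7", ["R1", "R2", "R6", "R10", "R11", "R7", "R8", "R4"])
print(next_hop(fdb, "R6", {"ppr_id": "PPR-7"}, "R7"))  # R10: preferred path
print(next_hop(fdb, "R6", {}, "R7"))                   # R7: shortest-path fallback
```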
[0079] In one embodiment, a path describes a traffic engineering (TE) path. Thus, in one embodiment, the sequentially ordered topological PDEs describe a traffic engineering (TE) path. The path may thus be used to distribute TE characteristics of the links and nodes. The TE information is not required to be on all nodes in the network 100. In one embodiment, the path includes nodes that act as a TE path computation element (PCE). The TE PCE may implement a decentralized path computation model used by a Resource Reservation Protocol (RSVP-TE). Further details of traffic engineering (TE) are described in the Internet Engineering Task Force (IETF) Request for Comments (RFC) 7810, entitled “IS-IS Traffic Engineering (TE) Metric Extensions,” dated May 2016, by S. Previdi, et al.; and the Internet Engineering Task Force (IETF) Request for Comments (RFC) 8570, entitled “IS-IS Traffic Engineering (TE) Metric Extensions,” dated March 2019, by L. Ginsberg, et al., both of which are incorporated by reference herein in their entirety.
[0080] As noted above, in the IS-IS protocol, nodes may exchange a hello PDU periodically to establish adjacency. In one embodiment, a node sends a hello PDU in order to inform its neighbors that the node is configured to perform service group flooding of link-state information. The node may also receive hello PDUs from its neighbors to learn which neighbors support service group flooding. As will be described more fully below, in some embodiments, there are multiple types of service group flooding to cover different IS-IS levels (e.g., IS-IS level-1 router, IS-IS level-2 router, IS-IS level 1/2 router), as well as to cover standard Type Length Value (TLV) versus extended TLV. Thus, in some embodiments, the hello PDU advertises the type of service group flooding supported by the node.
[0081] In one embodiment, the node sends an IS-IS Hello Packet (IIH) to advertise the type of service group flooding supported by the node. In one embodiment, the hello message contains a Type Length Value (TLV) to indicate the supported flooding scope(s). In one embodiment, the TLV is an extension of a TLV described in Internet Engineering Task Force (IETF), Request for Comments (RFC): 7356, entitled “IS-IS Flooding Scope Link State PDUs (LSPs),” by L. Ginsberg et al. (hereinafter RFC 7356), dated September 2014, which is incorporated herein in its entirety. RFC 7356 describes a TLV (TLV type 243) for announcing scope flooding support.
[0082] FIG. 2A depicts a TLV that is described in RFC 7356 for announcing scope flooding support. The TLV 200 has a number of fields 202a - 202n for specifying a supported flooding scope. A service group flooding identifier may be placed into one of the fields 202 to indicate a type of service group flooding that is supported by a node. There is a reserved field (R) 204a - 204n associated with each corresponding flooding scope field 202a - 202n.
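An encoding of this TLV can be sketched as follows, assuming the RFC 7356 layout of one octet per supported scope (reserved high bit plus a 7-bit scope identifier). The scope identifier values used here are hypothetical, since the service-group scope values are left unassigned in the text.

```python
import struct

SCOPE_FLOODING_SUPPORT = 243  # TLV type from RFC 7356

def encode_scope_flooding_support(scope_ids):
    """Encode the Scope Flooding Support TLV: one octet per supported
    scope, with the reserved (R) bit zero and the scope identifier in
    the low 7 bits."""
    for scope in scope_ids:
        if not 1 <= scope <= 127:
            raise ValueError("scope identifiers occupy 7 bits (1-127)")
    value = bytes(scope_ids)
    return struct.pack("!BB", SCOPE_FLOODING_SUPPORT, len(value)) + value

# A node announcing two hypothetical service-group flooding scopes.
tlv = encode_scope_flooding_support([5, 70])
print(tlv.hex())  # f3020546
```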
[0083] FIG. 2B depicts a table 220 that contains information for a number of different types of service group flooding, in accordance with one embodiment. Each row in table 220 is for a different type of service group flooding, in accordance with one embodiment. In one embodiment, there are six different types of service group flooding, as depicted in table 220. In one embodiment, there are six types of service group flooding in order to cover each of IS-IS level-1, IS-IS level-2, and IS-IS level 1/2 for both a standard length TLV and an extended length TLV.
[0084] Table 220 has an LSP Scope Identifier column 222. The table 220 has example values for the entries in the LSP Scope Identifier column 222. However, other values may be used. The values in column 222 may be assigned by the Internet Assigned Numbers Authority (IANA). The LSP Scope Identifier is a unique identifier (e.g., a unique number) for the type of service group flooding for that row. In one embodiment, the unique identifier is inserted in a supported scope field 202 in the TLV 200 depicted in FIG. 2A.
[0085] Table 220 contains a description column 224, which contains a description of the service group flooding for each row. Standard Level-1 Service Group Scope Qualified LSP refers to service group flooding for IS-IS level 1. Standard Level-2 Service Group Scope Qualified LSP refers to service group flooding for IS-IS level-2. Standard Domain Service Group Scope Qualified LSP refers to service group flooding for IS-IS level 1/2.
[0086] Table 220 contains an LSPID Format/TLV Format column 226. The LSPID Format for each entry is extended. The TLV format for the top three table entries is Standard, whereas the TLV format for the bottom three table entries is Extended. The difference is the length of the TLV entry. As noted above, example values are provided for the LSP Scope Identifiers in table 220. RFC 7356 indicates that Scope Identifiers between 1 - 63 are reserved for flooding scopes with standard TLVs. RFC 7356 indicates that Scope Identifiers between 64 - 127 are reserved for flooding scopes with extended TLVs.
[0087] FIG. 2C is a flowchart of one embodiment of a process 250 of nodes exchanging hello PDUs to advertise service group flooding capability. The process 250 may be used to establish adjacencies, as well as to inform neighbors about support for service group flooding. Step 252 includes a node sending a hello message to its neighbors. The hello message advertises that the node supports service group flooding. In one embodiment, the hello message contains a TLV such as depicted in FIG. 2A. However, the hello message is not limited to the TLV depicted in FIG. 2A. In one embodiment, the TLV is an extension of TLV 243 described in RFC 7356. However, the TLV is not limited to being an extension of TLV 243 described in RFC 7356. Note that the node may specify more than one type of supported service group flooding. With reference to table 220 in FIG. 2B, in one embodiment, the hello message specifies one or more of the types of service group flooding. Thus, with reference to FIG. 2A, one or more of the supported scope fields 202 contains an LSP Scope Identifier that identifies a type of service group flooding that is supported by the node. [0088] The hello message also advertises one or more service groups of which the node is a member. FIG. 2D depicts one embodiment of a TLV that may be included in the hello message. The TLV 280 contains a type field 282. Length field 284 is used to indicate the length of the TLV 280. Flags 286 may be used to indicate whether the optional sub-TLVs (see field 292) are used. The Service Group Identifier 290 is used to specify a unique identifier for the service group. The priority field 288 is used to specify the priority of the service group. In one embodiment, a sub-TLV field 292 is included in the TLV 280 to allow specifying additional information about the service group. As noted, sub-TLV field 292 is optional. In one embodiment, the TLV 280 is added to the IS-IS top-level TLV registry.
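An encoding of the TLV 280 fields can be sketched as follows. The field order follows the description of FIG. 2D (flags, priority, Service Group Identifier, optional sub-TLVs), but the field widths (1-byte flags and priority, 4-byte identifier), the flag semantics, and the TLV type value are all assumptions, since the document does not specify them.

```python
import struct

def encode_service_group_tlv(tlv_type, sg_id, priority, flags=0, sub_tlvs=b""):
    """Sketch of the TLV 280 layout: type, length, flags, priority,
    Service Group Identifier, then any optional sub-TLVs."""
    if sub_tlvs:
        flags |= 0x01  # hypothetical "sub-TLVs present" flag
    value = struct.pack("!BBI", flags, priority, sg_id) + sub_tlvs
    return struct.pack("!BB", tlv_type, len(value)) + value

# Advertise membership in one service group with a hypothetical type value.
tlv = encode_service_group_tlv(tlv_type=200, sg_id=0x1101, priority=7)
print(tlv.hex())  # c806000700001101
```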
However, the TLV 280 is not limited to being in the TLV registry of IS-IS. In one embodiment, the node learns of the one or more service groups of which the node is a member by receiving configuration information from, for example, a control node or the like. For example, a network operator may select which nodes are to be part of a service group, and send messages to the nodes in the service group providing configuration information that informs the nodes that they are members of the service group.
[0089] Step 254 includes the node receiving Hello messages from neighbor nodes. The Hello messages indicate what service group flooding is supported by each neighbor, as well as service group membership of the other nodes. These Hello messages may be similar to the Hello message sent by the node in step 252.
[0090] Step 256 includes the node building a table or a database that indicates service group membership of its neighbors. This table is based on the hello messages received in step 254. Other nodes also build such databases based on the hello message(s) sent by the node in step 252. As will be described below, this table may be updated from time to time.
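Steps 254 and 256 can be sketched as follows. The hello is modeled as a plain dict whose key names are illustrative only; the document does not define a concrete data structure for the neighbor table.

```python
def update_neighbor_table(table, neighbor, hello):
    """Record each neighbor's supported service-group flooding scopes and
    its service-group memberships (with priorities), as learned from a
    received hello message."""
    table[neighbor] = {
        "scopes": set(hello.get("supported_scopes", ())),
        "groups": {g["sg_id"]: g["priority"] for g in hello.get("service_groups", ())},
    }

table = {}
update_neighbor_table(table, "R2", {
    "supported_scopes": [5, 70],
    "service_groups": [{"sg_id": 1, "priority": 10}, {"sg_id": 2, "priority": 3}],
})
print(table["R2"]["groups"])  # {1: 10, 2: 3}
```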
[0091] In embodiments of FIG. 2A - 2D, nodes advertise service group capabilities and membership in hello messages, such as but not limited to IS-IS hello messages. Advertising service group capabilities and membership is not limited to hello messages. In one embodiment, nodes in the network advertise service group membership in LSPs. FIG. 3A depicts a format for one embodiment of a TLV that a node sends in an LSP to advertise service group membership. In one embodiment, the TLV 300 is a sub-type TLV of TLV 242, which is referred to as “IS-IS router capability.” However, TLV 300 is not limited to being a sub-type TLV of TLV 242. TLV 242 for IS-IS router capability is described in Internet Engineering Task Force (IETF), Request for Comments (RFC): 7981, entitled “IS-IS Extensions for Advertising Router Information,” by L. Ginsberg et al. (hereinafter, RFC 7981), dated October 2016, which is incorporated herein in its entirety. The sub-type field 302 is used to specify the sub-type of TLV. The length field 304 indicates the length of the TLV 300. There are one or more fields 306a - 306n in which to specify the Service Group Identifier and the priority of the service group.
[0092] FIG. 3B depicts a flowchart of one embodiment of a process 350 of a node advertising its service group memberships. Step 352 includes the node generating an LSP with router capability with service group related sub-TLVs that identify service groups in which the node is a member. The LSP may also specify a priority for each service group. In one embodiment, the LSP is an IS-IS LSP. In one embodiment, the LSP contains a TLV 242, as defined in RFC 7981. However, the LSP is not limited to having a TLV 242, as defined in RFC 7981. In one embodiment, the LSP contains a TLV 300 as depicted in FIG. 3A.
[0093] Step 354 includes the node sending the LSP to other nodes in the network. In one embodiment, the node sends the LSP to all nodes in the network. In other words, the node does not consider whether a node is a member of any of the service groups listed in the packet to determine whether to send the LSP to the node.
[0094] Step 356 includes the nodes in the network 100 updating their service group tables, if necessary. The node that sent the LSP in step 354 may also receive, from other nodes, LSPs with router capability service group related sub-TLVs that identify the service groups in which those nodes are members. Thus, the sending node may also update its service group table based on such received LSPs, if necessary.
[0095] In one embodiment, a node dynamically advertises service groups. The advertising node may be, for example, a central controller. The central controller may also be referred to herein as a primary node. However, in some cases, if the primary node is down, another node could dynamically advertise the service groups. In one embodiment, the advertising node uses a southbound protocol (e.g., NETCONF/YANG (The Network Configuration Protocol / Yet Another Next Generation), PCEP (Path Computation Element Protocol), or BGP-LS (Border Gateway Protocol Link-State)). FIG. 4A depicts one embodiment of a TLV 400 that is sent by a node to dynamically advertise service groupings. In one embodiment, the TLV 400 is sent in an IS-IS LSP. The type field 402 is used to specify the type of TLV. In one embodiment, the TLV 400 is in the IS-IS top level registry. The length field 404 specifies the length of the TLV 400. A flags field 406 includes one or more flags. One possible flag may indicate if this is the primary advertisement or a back-up advertisement. Backup advertisement can be done by any other node in the network, which can be used when the primary node is not in service. The flag field 406 is optional. The TLV 400 next has one or more rows, each of which has a node identifier 408a - 408n, a service group identifier 410a - 410n, and a priority 412a - 412n. Each node identifier 408a - 408n identifies one of the nodes in the network. The service group identifier 410 for a given row specifies a service group of which the node listed in that row is a member. The priority 412 indicates the priority of the service group in that row.
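The row structure of TLV 400 can be sketched as follows. The per-row field widths (4-byte node and service group identifiers, 1-byte priority), the 1-byte flags field, and the TLV type value are assumptions for illustration; FIG. 4A does not fix them.

```python
import struct

def encode_sg_mapping_tlv(tlv_type, rows, flags=0):
    """Sketch of TLV 400: a flags octet followed by one row per
    (node identifier, service group identifier, priority) triple."""
    value = struct.pack("!B", flags)
    for node_id, sg_id, priority in rows:
        value += struct.pack("!IIB", node_id, sg_id, priority)
    return struct.pack("!BB", tlv_type, len(value)) + value

rows = [(1, 0x1101, 10),   # e.g., R1 is in service group 110-1, priority 10
        (5, 0x1102, 3)]    # e.g., R5 is in service group 110-2, priority 3
tlv = encode_sg_mapping_tlv(tlv_type=201, rows=rows)
print(len(tlv))  # 21: 2 header bytes + 1 flags byte + 2 rows of 9 bytes
```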
[0096] FIG. 4B depicts one embodiment of a flowchart 450 of a node dynamically advertising service groupings. Step 452 includes the node generating an LSP that, for each node, identifies one or more service groups of which the node is a member. In one embodiment, the node generates TLV 400 depicted in FIG. 4A to place in the LSP. However, process 450 is not limited to the TLV depicted in FIG. 4A.
[0097] Step 454 includes the node advertising the LSP to nodes in the network. The LSP may be advertised to all nodes in at least one level of the network. For example, the packet may be advertised to all IS-IS level-1 routers in the network. In one embodiment, the information in this TLV 400 should not be leaked to other levels in the network.
[0098] Step 456 includes the nodes that received the LSP updating their respective service group tables based on the LSP, if necessary. In one embodiment, nodes receiving the information in the LSP treat this as equivalent to the SG-ID and Priority information in the router capability TLV 300 (see FIG. 3A). In one embodiment, information received in the router capability TLV 300 overrides the information received via TLV 400. [0099] Service group flooding, as described herein, floods link-state information based on nodes in the service group. FIG. 5 depicts a flowchart of one embodiment of a process 500 of service scope flooding. In some embodiments, the process 500 is performed in a network that uses the IS-IS protocol.
[0100] Step 510 includes a node receiving an LSP on a first network interface. The LSP specifies a flooding scope of service group. In one embodiment, the flooding scope is one of the six service group flooding scopes depicted in table 220 of FIG. 2B. The LSP also contains a service group identifier. The LSP may also contain explicit path information for a particular path in the service group. The explicit path information may include a path identifier of an explicit path in the service group and sequentially ordered topological path description elements (PDEs) that describe the explicit path. In one embodiment, the sequentially ordered topological path description elements (PDEs) include a list of nodes that describe the explicit path. However, the PDEs are not required to be nodes. In one embodiment, the LSP contains explicit path information for only one path. However, the LSP could contain explicit path information for more than one path in the service group. The path(s) could be strict paths or loose paths. In some embodiments, the LSP is referred to herein as an SQF-LSP.
[0101] Step 520 includes selecting one or more network interfaces on which to distribute the LSP, based on the service group identifier. In an embodiment, the one or more network interfaces that are selected do not include the network interface on which the LSP was received. In an embodiment, the one or more network interfaces that are selected include all network interfaces that connect to other nodes in the service group. In an embodiment, the one or more network interfaces that are selected do not include any network interfaces that connect to nodes outside of the service group. In an embodiment, the one or more network interfaces that are selected correspond exactly to the other nodes in the service group (with the exception of the node from which the LSP was received).
[0102] Step 530 includes the node distributing the LSP on the one or more network interfaces that were selected in step 520. Step 530 includes flooding the link-state information received on the first network interface based on the nodes in the service group. [0103] As noted above, a node that is a member of a service group maintains a table or database of link-state information for that service group. In some embodiments, a node maintains an SG-LSDB for each service group in which it is a member. These SG-LSDBs are separate and independent from conventional LSDBs maintained by the nodes. In some embodiments, the SG-LSDBs are synchronized independently from the conventional LSDBs. In one embodiment, each SG-LSDB contains link state information for explicit paths in the service group associated with the SG-LSDB.
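Steps 520 and 530 of process 500 can be sketched as an interface-selection rule (an illustrative sketch; the interface names and the neighbor topology assumed for node R7 are hypothetical):

```python
def select_flood_interfaces(interfaces, sg_members, received_on):
    """Forward the SQF-LSP on every interface whose neighbor is in the
    service group, except the interface the LSP arrived on."""
    return [
        ifname for ifname, neighbor in sorted(interfaces.items())
        if neighbor in sg_members and ifname != received_on
    ]

# A hypothetical view from one node, flooding for service group 110-1.
interfaces = {"eth0": "R3", "eth1": "R6", "eth2": "R8", "eth3": "R11", "eth4": "R12"}
sg_110_1 = {"R1", "R2", "R3", "R4", "R6", "R7", "R8", "R10", "R11"}
print(select_flood_interfaces(interfaces, sg_110_1, received_on="eth0"))
# ['eth1', 'eth2', 'eth3']: eth0 received the LSP; R12 is outside the group
```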
[0104] FIG. 6 depicts two nodes and link-state databases maintained by the nodes, in accordance with one embodiment. The two nodes Ra, Rb have a point-to-point (P2P) adjacency (ADJ). The nodes Ra, Rb could be two of the nodes in network 100 (see FIGs. 1A or 1B), but are not limited to those examples. Node Ra has a conventional IS-IS link-state database (LSDB) 102a, and two service group link-state databases (SG-LSDB) 104-3a, 104-4a. Node Rb has a conventional IS-IS link-state database (LSDB) 102b, and two SG-LSDBs 104-3b, 104-5b. The notation being used for the SG-LSDBs is for the number following “104” to refer to a service group number, and for the letter to refer to the node.
[0105] Each SG-LSDB 104 contains one or more SQF-LSPs. For example, SG-LSDB 104-3a contains SQF-LSP 604-3, SG-LSDB 104-4a contains SQF-LSP 604-4, SG-LSDB 104-3b contains SQF-LSP 604-3, and SG-LSDB 104-5b contains SQF-LSP 604-5. When nodes Ra and Rb synchronize their conventional link-state databases, LSDB 102a and LSDB 102b are synchronized with each other. When nodes Ra and Rb synchronize their service group link-state databases, SG-LSDB 104-3a and SG-LSDB 104-3b are synchronized with each other. Synchronizing the SG-LSDBs 104 will synchronize the link-state information of the SQF-LSPs 604. Synchronizing the link-state information of the SQF-LSPs 604 means to harmonize the SQF-LSPs 604 in the respective SG-LSDBs 104. Thus, SQF-LSP 604-3 in SG-LSDB 104-3a is harmonized with SQF-LSP 604-3 in SG-LSDB 104-3b. If one node has a newer version of an SQF-LSP 604, then the newer version will replace the older version. For any given node to participate in the synchronization of its SQF-LSP 604 with that of another node, the given node may receive an SQF-LSP 604 from another node, and use the received SQF-LSP 604 to update the given node’s version of the SQF-LSP 604. Alternatively, the given node may send its version of the SQF-LSP 604 to the other node, such that the other node may replace its version of the SQF-LSP 604 with the received SQF-LSP 604. For example, if the version of SQF-LSP 604-3 in SG-LSDB 104-3a is older than the version of SQF-LSP 604-3 in SG-LSDB 104-3b, then node Rb sends the newer version to node Ra. Ra then replaces its older version of SQF-LSP 604-3 in SG-LSDB 104-3a with the newer version from node Rb. Alternatively, if the version of SQF-LSP 604-3 in SG-LSDB 104-3b is older than the version of SQF-LSP 604-3 in SG-LSDB 104-3a, then node Ra sends the newer version to node Rb. Rb then replaces its older version of SQF-LSP 604-3 in SG-LSDB 104-3b with the newer version from node Ra.
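The newer-version-wins rule above can be sketched as follows. This is an illustration only: the SQF-LSP is modeled as a dict, and using a sequence number to decide which copy is newer is an assumption (the document says only "newer version").

```python
def synchronize_sqf_lsp(db_a, db_b, lsp_id):
    """Whichever node holds the newer version of an SQF-LSP (modeled here
    by a sequence number) sends it to the other, which replaces its copy."""
    a, b = db_a.get(lsp_id), db_b.get(lsp_id)
    if b is not None and (a is None or b["seq"] > a["seq"]):
        db_a[lsp_id] = dict(b)   # Rb's newer copy replaces Ra's
    elif a is not None and (b is None or a["seq"] > b["seq"]):
        db_b[lsp_id] = dict(a)   # Ra's newer copy replaces Rb's

ra = {"604-3": {"seq": 4, "paths": ["108-1"]}}
rb = {"604-3": {"seq": 7, "paths": ["108-1", "108-2"]}}
synchronize_sqf_lsp(ra, rb, "604-3")
print(ra["604-3"]["seq"])  # 7: Ra replaced its older copy with Rb's
```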
[0106] FIG. 7A depicts a flowchart of one embodiment of a process 700 of two nodes synchronizing SG-LSDBs. The process is described with respect to the example of FIG. 6 for purpose of illustration. Step 702 includes node Ra informing node Rb of all service groups that Ra has in common with Rb. Stated another way, node Ra informs node Rb of the common SQF-LSPs 604 that the two nodes maintain. With respect to the example of FIG. 6, nodes Ra and Rb have SQF-LSP 604-3 in common. In one embodiment, node Ra sends node Rb one or more complete sequence number PDUs (CSNPs) to list the service groups (or SQF-LSPs 604) in common. An example CSNP that may be sent in step 702, to be discussed below, is depicted in FIG. 7C.
[0107] Step 704 includes node Rb accessing the identifier of the next service group (or SQF-LSP 604) supported by Ra.
[0108] In step 706, the two nodes synchronize their respective SG-LSDBs for this service group. In one embodiment, explicit path information for any paths for this service group is synchronized in the respective SG-LSDBs. For example, SG-LSDB 104-3a is synchronized with SG-LSDB 104-3b. Step 708 includes a determination of whether there are more service groups for which SG-LSDBs have not yet been synchronized. In the example from FIG. 6, all common SG-LSDBs between Ra and Rb have been synchronized. Hence, the process concludes. Process 700 may then be performed with other pairs of nodes.
[0109] FIG. 7B provides further details of one embodiment of synchronizing SG-LSDBs between two nodes. Again, an example with respect to nodes Ra and Rb in FIG. 6 will be referenced. FIG. 7B provides further details for one embodiment of the process 700 depicted in FIG. 7A. Step 720 includes node Ra sending one or more flooding scope complete sequence number PDUs (FS-CSNPs) to node Rb. The one or more FS-CSNPs contain a complete list of all SQF-LSPs 604 in all SG-LSDBs 104 maintained by node Ra. The one or more FS-CSNPs may be sent periodically by node Ra to node Rb. In one embodiment, the FS-CSNP is an extension to a PDU as defined in RFC 7356. FIG. 7C depicts a format for an embodiment of an FS-CSNP 740. The format has a number of fields, which will now be briefly discussed. The first field 742 may have a value of 0x83, as defined in ISO/IEC 10589:2002, Second Edition, "Information technology - Telecommunications and information exchange between systems - Intermediate System to Intermediate System intradomain routeing information exchange protocol for use in conjunction with the protocol for providing the connectionless-mode network service (ISO 8473)", 2002 (hereinafter, “ISO 8473”), which is hereby incorporated by reference.
[0110] The length indicator 744 is the length of the fixed header in octets. The version/protocol ID Extension 746 may be set to 1. The ID length 748 may be as defined in ISO 8473. The PDU type 750 may be 11, as defined in ISO 8473. There are also several reserved bits 752. The version 754 may be 1. Next is a reserved field 756. The scope 758 is used to specify the service group flooding scope. Referring back to FIG. 2B, an LSP Scope Identifier is placed in the scope field 758 of the FS-CSNP 740. Recall that the values for the LSP Scope Identifiers are left blank in FIG. 2B for generality. However, in one embodiment, the LSP Scope Identifier will identify one of the six service group types in table 220. There may be more or fewer than six service group types. Note that if a node supports more than one type of service group flooding, then the node may send a separate FS-CSNP 740 for each type of supported service group flooding.
[0111] There is a reserved bit “A” 760. The use of reserved bit “A” 760, in one embodiment, will be discussed below. The PDU length 762 indicates the entire length of the FS-CSNP 740 (including the header). The Source ID 764 is the system ID of the node that generated and sent the FS-CSNP 740. The variable-length fields 769 that are allowed in an FS-CSNP are limited to those TLVs that are supported by a standard CSNP.

[0112] The SQF-LSP IDs 766, 768 are used to identify service groups of which the node is a member. Stated another way, the SQF-LSP IDs 766, 768 are used to identify service groups for which the node has link-state information. Stated still another way, the SQF-LSP IDs 766, 768 indicate what SQF-LSPs 604 are maintained in an SG-LSDB 104. In an embodiment, the SQF-LSP IDs 766, 768 specify a range of SQF-LSPs 604. The Start SQF-LSP ID 766 is the SQF-LSP ID of the first SQF-LSP having the specified scope (in scope field 758) in the range covered by this FS-CSNP. The End SQF-LSP ID 768 is the SQF-LSP ID of the last SQF-LSP having the specified scope in the range covered by this FS-CSNP. With reference to the example of FIG. 6, node Ra supports SQF-LSPs 604-3 and 604-4. These SQF-LSPs 604-3 and 604-4 correspond to the two SG-LSDBs 104-3a, 104-4a. In one embodiment, the FS-CSNP 740 specifies a range of SQF-LSPs 604. Hence, if the SQF-LSPs 604 supported by node Ra have a gap in the ID numbers, then node Ra may send more than one FS-CSNP 740.
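The rule that each FS-CSNP 740 carries one contiguous Start/End range of SQF-LSP IDs, so that a node with gaps in its supported IDs sends more than one FS-CSNP, can be illustrated with a small sketch. The function name and the integer-ID model are illustrative assumptions, not part of the disclosed format.

```python
def fs_csnp_ranges(sqf_lsp_ids):
    """Return the (start, end) ranges to advertise in FS-CSNPs.

    Simplified model of the Start/End SQF-LSP ID fields 766, 768:
    each FS-CSNP covers one contiguous range of SQF-LSP IDs, so a
    node whose supported IDs have gaps produces several ranges
    (one FS-CSNP per range).
    """
    ranges = []
    for lsp_id in sorted(sqf_lsp_ids):
        if ranges and lsp_id == ranges[-1][1] + 1:
            ranges[-1] = (ranges[-1][0], lsp_id)   # extend current range
        else:
            ranges.append((lsp_id, lsp_id))        # start a new range
    return ranges
```

For the FIG. 6 example, a node supporting IDs 3 and 4 advertises the single range (3, 4); a node supporting 3, 4, and 6 would need two FS-CSNPs, for the ranges (3, 4) and (6, 6).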
[0113] Step 722 in FIG. 7B includes node Rb setting a Send Routing Message (SRM) bit (also referred to as the SRM flag) for all SQF-LSPs 604 that it supports but that are absent in the FS-CSNP 740. The SRM bit indicates what SQF-LSPs are to be transmitted, and on what interfaces. For example, node Rb checks the SQF-LSP IDs 766, 768 to determine if there are any SQF-LSPs 604 supported by node Rb that are not listed in the SQF-LSP IDs 766, 768. For each such SQF-LSP 604, node Rb sets the SRM bit. With reference to the example of FIG. 6, node Rb determines that SQF-LSP 604-5 is absent from the FS-CSNP 740. Hence, the SRM bit is set for SQF-LSP 604-5.
[0114] Step 722 also includes setting the SRM bit for each SQF-LSP 604 for which node Rb has a newer version. For example, if node Rb has a newer version of SQF-LSP 604-3, then node Rb sets the SRM bit for that SQF-LSP 604. For the purpose of illustration, it will be assumed that node Rb has a newer version of SQF-LSP 604-3. Hence, the SRM bit for SQF-LSP 604-3 is set.
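The two conditions of step 722 (absent from the received FS-CSNP, or locally newer) can be combined into one sketch. The function and its dictionary-based inputs are hypothetical simplifications; in particular, the sketch assumes the FS-CSNP reports a sequence number per listed SQF-LSP, which is a modeling convenience rather than a stated field of FIG. 7C.

```python
def srm_flags(local_db, csnp_ids, csnp_seq_nums):
    """Return the set of SQF-LSP IDs whose SRM flag should be set (step 722).

    local_db:      dict of lsp_id -> local sequence number.
    csnp_ids:      SQF-LSP IDs listed in the received FS-CSNP.
    csnp_seq_nums: dict of lsp_id -> sequence number reported for it.
    """
    srm = set()
    for lsp_id, local_seq in local_db.items():
        if lsp_id not in csnp_ids:
            srm.add(lsp_id)                        # absent from the FS-CSNP
        elif local_seq > csnp_seq_nums.get(lsp_id, 0):
            srm.add(lsp_id)                        # local copy is newer
    return srm
```

In the running example, node Rb supports SQF-LSP 604-5 (absent from Ra's FS-CSNP) and holds a newer SQF-LSP 604-3, so both would come back flagged for transmission.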
[0115] Step 724 includes node Rb sending LSP updates to node Ra in accordance with the SRM bits that were set. In accordance with the example provided in step 722, node Rb would send LSP updates for SQF-LSP 604-3 and SQF-LSP 604-5.

[0116] Step 726 includes node Ra installing the SQF-LSPs 604 of which it is a member. In the present example, node Ra would install the updates to the LSP for SQF-LSP 604-3. However, because node Ra is not a member of the service group associated with SQF-LSP 604-5, node Ra would not install SQF-LSP 604-5.
[0117] Step 728 includes node Ra setting the Send Sequence Number (SSN) bit (also referred to as the SSN flag). The SSN bit is set to acknowledge the SQF-LSP. Step 730 includes node Ra sending one or more partial sequence number PDUs (PSNPs) to acknowledge the SQF-LSPs that it received. The PSNP will differ depending on whether node Ra is a member of the service group. In the present example, node Ra is a member of the service group associated with SQF-LSP 604-3, but is not a member of the service group associated with SQF-LSP 604-5.
[0118] FIG. 7D depicts a format for one embodiment of an FS-PSNP 770. In one embodiment, FS-PSNP 770 is an extension of an FS-PSNP described in RFC 7356. The format of FS-PSNP 770 has a number of fields, which will now be briefly discussed. The first field 772 may have a value of 0x83, as defined in ISO 8473. The length indicator 774 is the length of the fixed header in octets. The version/protocol ID Extension 776 may be set to 1. The ID length 778 may be as defined in ISO 8473. The PDU type 780 may be 12, as defined in ISO 8473. There are also several reserved bits 782. The version 784 may be 1. Next is a reserved field 786. The SQF-LSP ID 788 is used to specify the service group ID.
[0119] The U bit 790 is used to indicate whether or not the node that sends the FS-PSNP 770 supports the SQF-LSP 604 that is specified by SQF-LSP ID 788. In one embodiment, a value of “0” for the U bit 790 indicates that the node supports the SQF-LSP that is identified in the SQF-LSP ID, and a value of “1” indicates that the node does not support the SQF-LSP that is identified in the SQF-LSP ID. In another embodiment, the value for the U bit 790 is reversed, such that a value of “1” for the U bit 790 indicates that the node supports the SQF-LSP that is identified in the SQF-LSP ID, and a value of “0” indicates that the node does not support the SQF-LSP that is identified in the SQF-LSP ID.
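Because the two embodiments above use opposite polarities for the U bit 790, a sketch of the encoding makes the polarity explicit as a parameter. The function name is hypothetical; it is a minimal model of the rule, not a disclosed API.

```python
def encode_u_bit(supports_lsp, zero_means_supported=True):
    """Compute the U bit 790 for an FS-PSNP acknowledgement.

    supports_lsp: whether the sending node is a member of the service
    group identified by the SQF-LSP ID 788.
    zero_means_supported: selects between the two polarities described
    above (first embodiment: 0 = supports; second embodiment: reversed).
    """
    if zero_means_supported:
        return 0 if supports_lsp else 1
    return 1 if supports_lsp else 0
```

Under the first polarity, node Ra's acknowledgement for SQF-LSP 604-3 (of which it is a member) would carry U = 0, while its acknowledgement for SQF-LSP 604-5 would carry U = 1.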
[0120] The PDU length 792 indicates the entire length of the FS-PSNP 770 (including the header). The Source ID 794 is the system ID of the node that generated and sent the FS-PSNP 770. The variable-length fields 796 that are allowed in an FS-PSNP are limited to those TLVs that are supported by a standard PSNP.
[0121] Step 732 includes node Rb removing the SRM bit upon receiving the PSNP acknowledgement (ACK) from node Ra, if the U bit 790 is set. Thus, if node Ra is not a member of the service group associated with an SQF-LSP (e.g., SQF-LSP 604-5), the U bit would be set to 1 such that node Rb removes the SRM bit for SQF-LSP 604-5.
[0122] As noted above in connection with the discussion of FIG. 6, in some embodiments nodes have a P2P adjacency. However, in other embodiments, nodes have a broadcast (or LAN) adjacency. FIG. 8A depicts nodes in a LAN 800, as well as link-state databases maintained by the nodes, in accordance with one embodiment.

[0123] Node Rc maintains a conventional IS-IS link-state database (LSDB) 102c, and two service group link-state databases (SG-LSDBs) 104-6c, 104-7c. Node Rd maintains a conventional IS-IS link-state database (LSDB) 102d, and two SG-LSDBs 104-6d, 104-8d. Node Re maintains a conventional IS-IS link-state database (LSDB) 102e, and two SG-LSDBs 104-7e, 104-8e. The various SG-LSDBs 104 maintain link-state information for service groups. For example, SG-LSDBs 104-6c and 104-6d store SQF-LSP 604-6. SG-LSDBs 104-7c and 104-7e store SQF-LSP 604-7. SG-LSDBs 104-8d and 104-8e store SQF-LSP 604-8.
[0124] In one embodiment of synchronizing SG-LSDBs of nodes on a LAN, one of the nodes serves as a designated intermediate system (DIS). In one embodiment, the DIS serves as a coordinator to synchronize all SG-LSDBs that the DIS has in common with at least one other node in the LAN 800. For purposes of discussion, an example in which node Rc is the DIS will be discussed. Hence, node Rc would coordinate the synchronizing of SG-LSDB 104-6c with SG-LSDB 104-6d. Node Rc would also coordinate the synchronizing of SG-LSDB 104-7c with SG-LSDB 104-7e. However, this would leave two of the SG-LSDBs (SG-LSDB 104-8d, SG-LSDB 104-8e) of the other nodes unsynchronized, in this example. In particular, any type of SG-LSDB in the LAN that node Rc does not maintain would at this point remain unsynchronized. Hence, in one embodiment, other nodes in the LAN 800 advertise their unsynchronized SG-LSDBs 104. Stated another way, the other nodes may announce the service group ID associated with the unsynchronized SG-LSDBs 104. Another node that maintains the type of SG-LSDB for which such an advertisement is received may then synchronize its SG-LSDB with the advertiser.
[0125] FIG. 8B depicts a flowchart of one embodiment of a process 820 of synchronizing SG-LSDBs in a LAN. The process 820 will be described with respect to the LAN 800 of FIG. 8A. Step 822 includes synchronizing all SG-LSDBs that the DIS has in common with at least one other node in the LAN. In one embodiment, the DIS sends one or more FS-CSNPs to each node with which the DIS shares an SG-LSDB. For example, node Rc may send one or more FS-CSNPs to node Rd for SG-LSDB 104-6 (which stores SQF-LSP 604-6). Also, node Rc may send one or more FS-CSNPs to node Re for SG-LSDB 104-7 (which stores SQF-LSP 604-7). The other nodes then respond accordingly to synchronize the SG-LSDBs 104.
[0126] Step 824 includes the other nodes in the LAN advertising any SQF-LSPs 604 that were not synchronized with the DIS. In one embodiment, the node sends an FS-CSNP 740 having the format depicted in FIG. 7C. In one embodiment, the A bit 760 is used to advertise the SQF-LSPs that were not synchronized with the DIS. For example, node Rd has SG-LSDB 104-8d (corresponding to SQF-LSP 604-8), which was not synchronized with node Rc (the DIS in this example). Hence, node Rd may send an FS-CSNP 740 with the A bit 760 set to “1”. SQF-LSP 604-8 may be specified in field 766. Node Re may also send one or more FS-CSNPs 740 with the A bit 760 set to advertise its unsynchronized SG-LSDB 104-8e.
[0127] Step 826 includes the other nodes in the LAN that received the advertisements in step 824 synchronizing any SG-LSDBs 104 that were not synchronized with the DIS. Thus, nodes Rd and Re would synchronize SG-LSDB 104-8d with SG-LSDB 104-8e.
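The division of labor in process 820 (step 822 versus step 824) can be sketched as a planning function over the service group memberships on the LAN. The function and its inputs are hypothetical simplifications for illustration; service groups are modeled as bare integer IDs per node.

```python
def lan_sync_plan(dis, nodes):
    """Sketch of process 820 for a LAN.

    dis:   name of the designated intermediate system.
    nodes: dict of node name -> set of service group IDs it maintains.
    Returns (dis_synced, advertised): the groups the DIS can coordinate
    (step 822), and, per non-DIS node, the groups it must advertise with
    the A bit 760 set because the DIS does not maintain them (step 824).
    """
    dis_groups = nodes[dis]
    dis_synced = set()
    advertised = {}
    for name, groups in nodes.items():
        if name == dis:
            continue
        dis_synced |= groups & dis_groups       # DIS coordinates these
        leftover = groups - dis_groups          # unknown to the DIS
        if leftover:
            advertised[name] = leftover
    return dis_synced, advertised
```

Running this on the FIG. 8A memberships reproduces the example: with Rc as DIS, service groups 6 and 7 are DIS-coordinated, while Rd and Re each advertise service group 8 and then synchronize it between themselves (step 826).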
[0128] From time to time, a node may go down. Some embodiments include performing a graceful restart after a node restarts. FIG. 9A depicts one embodiment of a network 900 showing a state of link-state databases immediately after a node restarts. In this example, node Rf is just being re-started after having gone down. As such, the conventional LSDB 102f and the SG-LSDBs 104-9f and 104-10f all need to be re-built. The link-state information from the databases of nodes Rg and Rh may be used for the re-build. Node Rg has conventional LSDB 102g and SG-LSDBs 104-9g and 104-10g. Node Rh has conventional LSDB 102h and SG-LSDBs 104-9h and 104-10h. Note that the node being re-started may have a P2P connection or LAN connection with the other nodes. In FIG. 9A, node Rf has a P2P connection with node Rh and a LAN connection with node Rg.
[0129] FIG. 9B depicts a flowchart of one embodiment of a process 910 of a graceful restart of a node having a SG-LSDB. For purpose of illustration, reference will be made to FIG. 9A. Step 912 of process 910 includes the node re-starting. Step 914 includes re-establishing a conventional LSDB 102 at the node. The conventional LSDB 102 may be re-established by synchronizing the LSDB 102 with the other LSDBs 102 in the network 900.
[0130] Step 916 includes re-establishing each SG-LSDB 104 at the node being re-started. Further details of one embodiment of re-establishing one SG-LSDB 104 at the node are described in connection with FIG. 9C.
[0131] FIG. 9C depicts a flowchart of one embodiment of a process 920 of re-establishing an SG-LSDB 104 during a graceful restart (GR) of a node. The process 920 will use node Rf as an example of the node being re-started. Step 922 includes node Rf entering GR. Step 923 includes the node synchronizing its conventional LSDB 102f with the LSDBs 102 of other nodes.
[0132] Step 924 includes starting a T2 timer for this SG-LSDB. Step 926 includes initiating synchronization of this SG-LSDB. Steps 928-930 include a determination of whether the T2 timer has expired (step 928) prior to the synchronization of this SG-LSDB being complete (step 930). If the synchronization of this SG-LSDB completes prior to the T2 timer expiring, then a determination is made whether there is another SG-LSDB to synchronize (step 932). If there is another SG-LSDB to synchronize, then the process returns to step 924 upon the node entering GR for the next SG-LSDB to synchronize. If the T2 timer expires prior to an SG-LSDB completing synchronization (step 928 is yes), then the adjacency is refreshed, in step 934.
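The per-SG-LSDB T2 timer loop of process 920 can be sketched as follows. The sketch is illustrative only: the function names, the callback-based interface, and the polling model are assumptions, not the disclosed mechanism.

```python
import time

def graceful_restart_sync(sg_lsdbs, sync_one, t2_seconds, refresh_adjacency):
    """Sketch of process 920: synchronize each SG-LSDB under a T2 timer.

    sync_one(db) is assumed to advance the synchronization of one
    SG-LSDB and return True once it has completed; if the T2 timer
    expires before completion, the adjacency is refreshed (step 934).
    """
    for db in sg_lsdbs:
        deadline = time.monotonic() + t2_seconds   # step 924: start T2
        done = sync_one(db)                        # step 926: initiate sync
        while not done and time.monotonic() < deadline:
            done = sync_one(db)                    # steps 928-930: poll
        if not done:
            refresh_adjacency(db)                  # step 934: T2 expired
```

The outer loop corresponds to step 932: after one SG-LSDB completes within its T2 window, the next SG-LSDB gets a fresh timer.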
[0133] FIG. 10 illustrates an embodiment of a node. The node 1000 may be configured to implement and/or support the path scope flooding, SG-LSDB synchronization, etc. described herein. The node 1000 may be implemented in a single node, or the functionality of node 1000 may be implemented in a plurality of nodes. One skilled in the art will recognize that the term network element (NE) encompasses a broad range of devices, of which node 1000 is merely an example. While node 1000 is described as a physical device, such as a router or gateway, the node 1000 may also be a virtual device implemented as a router or gateway running on a server or on generic routing hardware (whitebox).
[0134] The node 1000 may comprise a plurality of network interfaces 1010/1030. The network interfaces 1010/1030 may be configured to receive/transmit wirelessly or via wirelines. Hence, the network interfaces 1010/1030 may connect to wired or wireless links. The network interfaces 1010/1030 may include input/output ports. The node 1000 may comprise receivers (Rx) 1012 and transmitters (Tx) 1032 for receiving and transmitting data from other nodes. The node 1000 may comprise a processor 1020 to process data and determine which node to send the data to, and a memory 1022. The node 1000 may also generate and distribute LSPs to describe and flood the various topologies and/or areas of a network. Although illustrated as a single processor, the processor 1020 is not so limited and may comprise multiple processors. The processor 1020 may be implemented as one or more central processing unit (CPU) chips, cores (e.g., a multi-core processor), field-programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), and/or digital signal processors (DSPs), and/or may be part of one or more ASICs. Moreover, the processor 1020 may be implemented using hardware, software, or both.
[0135] The processor 1020 includes a service group engine 1024, which may perform the processing functions of the node. The service group engine 1024 may also be configured to perform the steps of the methods discussed herein. As such, the inclusion of the service group engine 1024 and associated methods and systems provides improvements to the functionality of the node 1000. Further, the service group engine 1024 effects a transformation of a particular article (e.g., the network) to a different state. In an alternative embodiment, the memory 1022 stores the service group engine 1024 as instructions, and the processor 1020 executes those instructions.
[0136] The memory 1022 may comprise a cache for temporarily storing content, e.g., a random-access memory (RAM). Additionally, the memory 1022 may comprise a long-term storage for storing content relatively longer, e.g., a read-only memory (ROM). For instance, the cache and the long-term storage may include dynamic RAMs (DRAMs), solid-state drives (SSDs), hard disks, or combinations thereof. The memory 1022 may be configured to store one or more SG-LSDBs 104 and one or more LSDBs 102. The memory 1022 may be configured to store a service group table 1050 that stores information about service group membership. For example, the table 1050 may store a list of nodes that are members of various service groups.
[0137] The schemes described above may be implemented on any general- purpose network component, such as a computer or network component with sufficient processing power, memory resources, and network throughput capability to handle the necessary workload placed upon it.
[0138] FIG. 11 illustrates a schematic diagram of a general-purpose network component or computer system. The general-purpose network component or computer system 1100 includes a processor 1102 (which may be referred to as a central processor unit or CPU) that is in communication with memory devices including secondary storage 1104, and memory, such as ROM 1106 and RAM 1108, input/output (I/O) devices 1110, and a network 1112, such as the Internet or any other well-known type of network, that may include network connectivity devices, such as a network interface. Although illustrated as a single processor, the processor 1102 is not so limited and may comprise multiple processors. The processor 1102 may be implemented as one or more CPU chips, cores (e.g., a multi-core processor), FPGAs, ASICs, and/or DSPs, and/or may be part of one or more ASICs. The processor 1102 may be configured to implement any of the schemes described herein. The processor 1102 may be implemented using hardware, software, or both.
[0139] The secondary storage 1104 is typically comprised of one or more disk drives or tape drives and is used for non-volatile storage of data and as an over-flow data storage device if the RAM 1108 is not large enough to hold all working data. The secondary storage 1104 may be used to store programs that are loaded into the RAM 1108 when such programs are selected for execution. The ROM 1106 is used to store instructions and perhaps data that are read during program execution. The ROM 1106 is a non-volatile memory device that typically has a small memory capacity relative to the larger memory capacity of the secondary storage 1104. The RAM 1108 is used to store volatile data and perhaps to store instructions. Access to both the ROM 1106 and the RAM 1108 is typically faster than to the secondary storage 1104. At least one of the secondary storage 1104 or RAM 1108 may be configured to store routing tables, forwarding tables, or other tables or information disclosed herein.
[0140] It is understood that by programming and/or loading executable instructions onto the node 1000, at least one of the processor 1020 or the memory 1022 are changed, transforming the node 1000 in part into a particular machine or apparatus, e.g., a router, having the novel functionality taught by the present disclosure. Similarly, it is understood that by programming and/or loading executable instructions onto the computer system 1100, at least one of the processor 1102, the ROM 1106, and the RAM 1108 are changed, transforming the computer system 1100 in part into a particular machine or apparatus, e.g., a router, having the novel functionality taught by the present disclosure. It is fundamental to the electrical engineering and software engineering arts that functionality that can be implemented by loading executable software into a computer can be converted to a hardware implementation by well-known design rules. Decisions between implementing a concept in software versus hardware typically hinge on considerations of stability of the design and numbers of units to be produced rather than any issues involved in translating from the software domain to the hardware domain. Generally, a design that is still subject to frequent change may be preferred to be implemented in software, because re-spinning a hardware implementation is more expensive than re-spinning a software design. Generally, a design that is stable that will be produced in large volume may be preferred to be implemented in hardware, for example in an ASIC, because for large production runs the hardware implementation may be less expensive than the software implementation. Often a design may be developed and tested in a software form and later transformed, by well-known design rules, to an equivalent hardware implementation in an application specific integrated circuit that hardwires the instructions of the software.
In the same manner as a machine controlled by a new ASIC is a particular machine or apparatus, likewise a computer that has been programmed and/or loaded with executable instructions may be viewed as a particular machine or apparatus.
[0141] FIG. 12 illustrates an embodiment of a sub-TLV that may be incorporated within an SQF-LSP 604. It is appreciated that while the disclosed embodiment specifically refers to IS-IS and LSPs, any link state protocol may be applied to flood the network using link state packets. As shown in FIG. 12, the format of the sub-TLV 1200 includes a type field 1202, a length field 1204, and a Value field 1206. The type field 1202 carries a value assigned by the Internet Assigned Numbers Authority (IANA). The length field 1204 includes the total length of the value field in bytes. The value field 1206 has the identifier of the service group (i.e., SG-ID). In one embodiment, an SG-ID of zero indicates that no service group is being specified. An SG-ID of zero can be used to indicate that a path or graph is not part of any specified service group.
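The type/length/value layout of the sub-TLV 1200 can be sketched with a simple encoder and decoder. The field sizes here are assumptions for illustration: a 1-octet type, a 1-octet length, and a 4-octet SG-ID value; the actual type value is assigned by IANA, so a placeholder is used.

```python
import struct

def encode_sg_sub_tlv(sg_id, tlv_type=0):
    """Encode an SG-ID sub-TLV in the spirit of FIG. 12 (a sketch).

    Assumed layout: 1-octet type, 1-octet length, 4-octet SG-ID value.
    tlv_type defaults to a placeholder, since the real value is assigned
    by IANA. An SG-ID of zero means no service group is specified.
    """
    value = struct.pack("!I", sg_id)                     # 4-octet SG-ID
    return struct.pack("!BB", tlv_type, len(value)) + value

def decode_sg_sub_tlv(data):
    """Decode the sketch encoding; returns (tlv_type, sg_id)."""
    tlv_type, length = struct.unpack("!BB", data[:2])
    (sg_id,) = struct.unpack("!I", data[2:2 + length])
    return tlv_type, sg_id
```

Network byte order (`!`) is used, as is conventional for on-the-wire protocol fields.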
[0142] In one embodiment, the sub-TLV 1200 is included in a PPR-Attribute Sub- TLV field of a PPR TLV (a PPR-Attribute Sub-TLV field of a PPR TLV is described in Chunduri PPR 2020). In one embodiment, the sub-TLV 1200 is included in a PPR- Attribute Sub-TLV field of a PPR Tree TLV (a PPR-Attribute Sub-TLV field of a PPR Tree TLV is described in Chunduri graph). However, the sub-TLV 1200 is not limited to these examples.
[0143] Moreover, it is not required that the SG-ID be encoded in a sub-TLV. In another embodiment, the SG-ID is encoded as part of a main header field of the PPR TLV described in Chunduri PPR 2020. In one embodiment, the SG-ID is encoded as part of a main header field of a PPR Tree TLV described in Chunduri graph. For example, an SG-ID field may be added as a new field in the header of the PPR TLV or the PPR Tree TLV.
[0144] The description of the present disclosure has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the disclosure in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the disclosure. The aspects of the disclosure herein were chosen and described in order to best explain the principles of the disclosure and the practical application, and to enable others of ordinary skill in the art to understand the disclosure with various modifications as are suited to the particular use contemplated.
[0145] For purposes of this document, each process associated with the disclosed technology may be performed continuously and by one or more computing devices. Each step in a process may be performed by the same or different computing devices as those used in other steps, and each step need not necessarily be performed by a single computing device.
[0146] Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims

What is claimed is:
1. A node for distributing link-state information over a network using protocol data units (PDUs), comprising: a plurality of network interfaces; and one or more processors in communication with the plurality of network interfaces, wherein the one or more processors are configured to cause the node to: receive a link state PDU (LSP) on a first network interface of the plurality of network interfaces, wherein the LSP includes a flooding scope of service group and a service group identifier for a service group comprising a plurality of nodes, and wherein the LSP further includes link-state information for the service group; select one or more of the plurality of network interfaces other than the first network interface to distribute the LSP based on the service group; and distribute the LSP on the selected one or more of the plurality of network interfaces.
2. The node of claim 1, wherein the one or more processors are further configured to cause the node to select the one or more of the plurality of network interfaces that correspond to the nodes in the service group.
3. The node of claim 2, wherein the one or more processors are further configured to cause the node to select only network interfaces that correspond to the nodes in the service group.
4. The node of any of claims 1 to 3, wherein the one or more processors are further configured to cause the node to: store the service group identifier and the link-state information in a service group link-state database (SG-LSDB).
5. The node of any of claims 1 to 4, wherein the link-state information includes explicit path information for one or more explicit paths in the service group, wherein each explicit path comprises nodes in the service group, wherein for each respective explicit path the explicit path information includes a path identifier of the respective explicit path and sequentially ordered topological path description elements (PDEs) in the service group that describe the respective explicit path.
6. The node of claim 5, wherein one or more of the PDEs describe: a preferred path routing (PPR) of a preferred path for routing packets through the service group.
7. The node of claim 6, wherein the preferred path for routing packets represents a data path from a source node to a destination node in the explicit path in the service group, wherein the data path has at least one intermediate node between the source node and the destination node.
8. The node of claim 7, wherein the preferred path for routing packets represents a data path from a plurality of source nodes in the network to at least one destination node in the service group, wherein the data path has at least one intermediate node between the plurality of source nodes and the destination node.
9. The node of any of claims 7 to 8, wherein the preferred path for routing packets represents a data path from at least one source node in the network to a plurality of destination nodes in the service group, wherein the data path has at least one intermediate node between the source node and the plurality of destination nodes.
10. The node of any of claims 5 to 9, wherein the one or more processors are further configured to cause the node to forward packets that contain a path identifier of any of the one or more explicit paths to a next PDE in the sequentially ordered topological PDEs identified by the path identifier.
11. The node of any of claims 5 to 10, wherein the sequentially ordered topological PDEs identified by the path identifier comprise a loose path that does not contain all nodes on the explicit path on which a packet is forwarded.
12. The node of claim 5, wherein the one or more of the PDEs describe: a traffic engineering (TE) path through the service group.
13. The node of any of claims 1 to 12, wherein the one or more processors are further configured to cause the node to: advertise that the node supports the flooding scope of service group.
14. The node of any of claims 1 to 13, wherein the one or more processors are further configured to cause the node to: advertise identifiers of service groups of which the node is a member.
15. The node of any of claims 1 to 14, wherein the one or more processors are further configured to cause the node to: send an intermediate system (IS) to IS hello message that advertises that the node supports the flooding scope of service group and an identifier of a service group of which the node is a member.
16. The node of any of claims 1 to 14, wherein the one or more processors are further configured to cause the node to: send a router capability message that advertises that the node supports the flooding scope of service group and one or more service group identifiers indicating a corresponding one or more service groups of which the node is a member.
17. The node of any of claims 1 to 16, wherein the one or more processors are further configured to cause the node to: generate a packet that, for each node in a set of nodes, identifies all service groups in which the respective node is a member; and advertise the packet to all nodes in the network.
18. The node of any of claims 1 to 17, wherein the flooding scope of service group is a level-1 service group flooding scope that includes only intermediate system (IS) to IS level-1 nodes.
19. The node of any of claims 1 to 17, wherein the flooding scope of service group is a level-2 service group flooding scope that includes only intermediate system (IS) to IS level-2 nodes.
20. The node of any of claims 1 to 17, wherein the flooding scope of service group is a level-1-2 service group flooding scope that includes both intermediate system (IS) to IS level-1 and IS-IS level-2 nodes.
21. The node of any of claims 1 to 20, wherein the one or more processors are further configured to cause the node to: participate in synchronization of the link-state information identified by the service group identifier and stored at the node with link-state information identified by the service group identifier and stored at an adjacent node in the network.
22. The node of any of claims 1 to 3 or 5 to 21, wherein the node comprises a plurality of service group link-state databases (SG-LSDBs), wherein the one or more processors are further configured to cause the node to: store the service group identifier and the link-state information in a first SG-LSDB of the plurality of SG-LSDBs; store service group identifiers for other service groups with corresponding link-state information for the other service groups in other SG-LSDBs of the plurality of SG-LSDBs; and participate in synchronization of the link-state information in the plurality of SG-LSDBs for all of the service groups shared by the node and an adjacent node.
23. The node of any of claims 1 to 3 or 5 to 21, wherein the node comprises a plurality of service group link-state databases (SG-LSDBs), wherein the one or more processors are further configured to cause the node to: store the service group identifier and the link-state information in a first SG-LSDB of the plurality of SG-LSDBs; store service group identifiers for other service groups with corresponding link-state information for the other service groups in other SG-LSDBs of the plurality of SG-LSDBs; participate in synchronization of zero or more of the plurality of SG-LSDBs with a designated intermediate system (DIS); and advertise any SG-LSDBs that were not synchronized with the DIS.
24. The node of any of claims 1 to 3 or 5 to 21, wherein the node comprises a plurality of service group link-state databases (SG-LSDBs), wherein the one or more processors are further configured to cause the node to: store the service group identifier and the link-state information in a first SG-LSDB of the plurality of SG-LSDBs; store service group identifiers for other service groups with corresponding link-state information for the other service groups in other SG-LSDBs of the plurality of SG-LSDBs; participate in synchronization of zero or more of the plurality of SG-LSDBs with a designated intermediate system (DIS); and participate, with other nodes, in synchronization of any SG-LSDBs that were not synchronized with the DIS.
25. The node of any of claims 22 to 24, wherein each of the SG-LSDBs includes explicit path information for one or more explicit paths in the service group, wherein for each respective explicit path there is a path identifier of the respective explicit path and sequentially ordered topological path description elements (PDEs) in the service group that describe the respective explicit path.
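The structure recited in claim 25 (a path identifier plus sequentially ordered PDEs per explicit path in an SG-LSDB) can be illustrated with a minimal sketch. This is not an implementation from the application; all names (`sg_lsdb`, `next_hop`, the node identifiers, and the use of plain node IDs as PDEs) are illustrative assumptions.

```python
# Illustrative-only sketch of an SG-LSDB entry holding explicit path
# information (cf. claim 25): each explicit path has a path identifier
# and a sequentially ordered list of path description elements (PDEs).
# Here PDEs are modeled as bare node identifiers; this simplification,
# and every name below, is an assumption, not the patent's design.

# SG-LSDB keyed by service group identifier; each entry maps a
# path identifier to its ordered PDEs.
sg_lsdb = {
    100: {                              # service group identifier
        "path-7": ["R1", "R3", "R5"],   # PDEs, in topological order
    },
}

def next_hop(sg_id, path_id, current_node):
    """Return the node after current_node on the explicit path,
    or None when current_node terminates the path."""
    pdes = sg_lsdb[sg_id][path_id]
    idx = pdes.index(current_node)
    return pdes[idx + 1] if idx + 1 < len(pdes) else None

print(next_hop(100, "path-7", "R3"))  # R5
print(next_hop(100, "path-7", "R5"))  # None (end of path)
```

A node forwarding a packet carrying "path-7" would look up the packet's path identifier in the SG-LSDB for the packet's service group and send the packet toward the PDE that follows its own position in the ordered list.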
26. A method for distributing link-state information over a network using protocol data units (PDUs), comprising: receiving, at a first network interface of a plurality of network interfaces of a node in the network, a link-state PDU (LSP) that indicates a flooding scope of service group, wherein the LSP includes a service group identifier for a service group comprising a plurality of nodes, and wherein the LSP further includes link-state information for the service group; selecting one or more of the plurality of network interfaces other than the first interface to distribute the LSP based on the service group; and distributing the LSP on the selected one or more of the plurality of network interfaces.
27. The method of claim 26, wherein selecting one or more of the plurality of network interfaces other than the first interface to distribute the LSP based on the service group comprises: selecting one or more of the plurality of network interfaces that correspond to the nodes in the service group.
28. The method of claim 27, wherein selecting one or more of the plurality of network interfaces other than the first interface to distribute the LSP based on the service group comprises: selecting only the one or more of the plurality of network interfaces that correspond to the nodes in the service group.
29. The method of any of claims 26 to 28, further comprising: storing the service group identifier and the link-state information in a service group link-state database (SG-LSDB).
30. The method of any of claims 26 to 29, wherein the LSP further includes explicit path information for one or more explicit paths in the service group, wherein each explicit path comprises nodes in the service group, wherein for each respective explicit path the explicit path information includes a path identifier of the respective explicit path and sequentially ordered topological path description elements (PDEs) in the service group that describe the respective explicit path.
31. The method of any of claims 26 to 30, further comprising: forwarding packets that contain a path identifier of any of the one or more explicit paths from the node to a next node in the service group identified by the explicit path information in the service group.
32. The method of any of claims 30 to 31, further comprising: forwarding a packet that contains a path identifier of any of the one or more explicit paths from the node to a node on a loose path in the service group that is not expressly listed in the explicit path information for the service group in order to forward the packet to a node at an end of the sequentially ordered topological PDEs for the service group.
33. The method of any of claims 26 to 32, further comprising: advertising that the node supports the flooding scope of service group.
34. The method of any of claims 26 to 33, further comprising: advertising the service groups in which the node is a member.
35. The method of any of claims 30 to 34, further comprising: participating in synchronization of the explicit path information stored at the node with explicit path information for the service group stored at an adjacent node in the network.
36. The method of any of claims 30 to 35, further comprising: storing the service group identifier and the explicit path information in a first of a plurality of service group link-state databases (SG-LSDBs); storing service group identifiers for other service groups with corresponding explicit path information for the other service groups in other SG-LSDBs of the plurality of SG-LSDBs; and participating in synchronization of the explicit path information in the plurality of SG-LSDBs for all of the service groups shared by the node and an adjacent node.
37. The method of any of claims 30 to 35, further comprising: storing the service group identifier and the explicit path information in a first of a plurality of service group link-state databases (SG-LSDBs); storing service group identifiers for other service groups with corresponding explicit path information for the other service groups in other SG-LSDBs of the plurality of SG-LSDBs; participating in synchronization of zero or more of the SG-LSDBs with a designated intermediate system (DIS); and advertising any service groups that were not synchronized with the DIS.
38. The method of any of claims 30 to 35, further comprising: storing the service group identifier and the explicit path information for the service group in a first of a plurality of service group link-state databases (SG-LSDBs); storing service group identifiers for other service groups with corresponding explicit path information for the other service groups in other SG-LSDBs of the plurality of SG-LSDBs; participating in synchronization of zero or more of the SG-LSDBs with a designated intermediate system (DIS); and participating, with other nodes, in synchronization of any SG-LSDBs that were not synchronized with the DIS.
39. A non-transitory computer-readable medium storing computer instructions for distributing link-state information over a network using protocol data units (PDUs), that when executed by one or more processors, cause the one or more processors to perform the steps of: receiving, at a first network interface of a plurality of network interfaces of a node in the network, a link-state PDU (LSP) that indicates a flooding scope of service group, wherein the LSP includes a service group identifier for a service group comprising a plurality of nodes, and wherein the LSP further includes link-state information for the service group; selecting one or more of the plurality of network interfaces other than the first interface to distribute the LSP based on the service group; and distributing the LSP on the selected one or more of the plurality of network interfaces.
40. The non-transitory computer-readable medium of claim 39, wherein the instructions, that when executed by one or more processors, further cause the one or more processors to perform the step of: selecting one or more of the plurality of network interfaces that correspond to the nodes in the service group.
41. The non-transitory computer-readable medium of claim 40, wherein the instructions, that when executed by one or more processors, further cause the one or more processors to perform the step of: selecting only the one or more of the plurality of network interfaces that correspond to the nodes in the service group.
42. The non-transitory computer-readable medium of any of claims 39 to 41, wherein the instructions, that when executed by one or more processors, further cause the one or more processors to perform the step of: storing the service group identifier and the link-state information for the service group in a service group link-state database (SG-LSDB).
43. The non-transitory computer-readable medium of any of claims 39 to 42, wherein the LSP further includes explicit path information for one or more explicit paths in the service group, wherein each explicit path comprises nodes in the service group, wherein for each respective explicit path the explicit path information includes a path identifier of the respective explicit path and sequentially ordered topological path description elements (PDEs) in the service group that describe the respective explicit path.
44. The non-transitory computer-readable medium of claim 43, wherein the instructions, that when executed by one or more processors, further cause the one or more processors to perform the step of: forwarding a packet that contains a path identifier of any of the one or more explicit paths from the node to a node on a loose path in the service group that is not expressly listed in the explicit path information for the service group in order to forward the packet to a node at an end of the sequentially ordered topological PDEs for the service group.
45. The non-transitory computer-readable medium of any of claims 39 to 44, wherein the instructions, that when executed by one or more processors, further cause the one or more processors to perform the step of: advertising that the node supports the flooding scope of service group.
46. The non-transitory computer-readable medium of any of claims 39 to 45, wherein the instructions, that when executed by one or more processors, further cause the one or more processors to perform the step of: advertising the service groups in which the node is a member.
47. The non-transitory computer-readable medium of any of claims 39 to 46, wherein the instructions, that when executed by one or more processors, further cause the one or more processors to perform the step of: participating in synchronization of the explicit path information stored at the node with explicit path information for the service group stored at an adjacent node in the network.
48. The non-transitory computer-readable medium of any of claims 43 to 47, wherein the instructions, that when executed by one or more processors, further cause the one or more processors to perform the steps of: storing the service group identifier and the explicit path information in a first of a plurality of service group link-state databases (SG-LSDBs); storing service group identifiers for other service groups with corresponding explicit path information identified by other service group identifiers in other SG-LSDBs of the plurality of SG-LSDBs; and participating in synchronization of the explicit path information in the plurality of SG-LSDBs for all of the service groups shared by the node and an adjacent node.
49. The non-transitory computer-readable medium of any of claims 43 to 47, wherein the instructions, that when executed by one or more processors, further cause the one or more processors to perform the steps of: storing the service group identifier and the explicit path information for the service group in a first of a plurality of service group link-state databases (SG-LSDBs); storing service group identifiers for other service groups with corresponding explicit path information for the other service groups in other SG-LSDBs of the plurality of SG-LSDBs; participating in synchronization of zero or more of the SG-LSDBs with a designated intermediate system (DIS); and advertising any service groups that were not synchronized with the DIS.
50. The non-transitory computer-readable medium of any of claims 43 to 47, wherein the instructions, that when executed by one or more processors, further cause the one or more processors to perform the steps of: storing the service group identifier and the explicit path information in a first of a plurality of service group link-state databases (SG-LSDBs); storing service group identifiers for other service groups with corresponding explicit path information for the other service groups in other SG-LSDBs of the plurality of SG-LSDBs; participating in synchronization of zero or more of the SG-LSDBs with a designated intermediate system (DIS); and participating, with other nodes, in synchronization of any SG-LSDBs that were not synchronized with the DIS.
PCT/US2020/047119 2019-08-22 2020-08-20 Service group flooding in is-is networks WO2021035014A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201962890431P 2019-08-22 2019-08-22
US62/890,431 2019-08-22

Publications (1)

Publication Number Publication Date
WO2021035014A1 true WO2021035014A1 (en) 2021-02-25

Family

ID=72340423

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2020/047119 WO2021035014A1 (en) 2019-08-22 2020-08-20 Service group flooding in is-is networks

Country Status (1)

Country Link
WO (1) WO2021035014A1 (en)

Non-Patent Citations (10)

* Cited by examiner, † Cited by third party
Title
ISO/IEC 10589:2002, "Information technology -- Telecommunications and information exchange between systems -- Intermediate System to Intermediate System intra-domain routeing information exchange protocol for use in conjunction with the protocol for providing the connectionless-mode network service (ISO 8473)", 2002
U. CHUNDURI, R. LI (Huawei USA), R. WHITE (Juniper Networks), J. TANTSURA (Apstra Inc.), L. CONTRERAS (Telefonica), Y. QU (Huawei USA): "Preferred Path Routing (PPR) in IS-IS; draft-chunduri-lsr-isis-preferred-path-routing-02.txt", 15 February 2019 (2019-02-15), pages 1 - 26, XP015131102, Retrieved from the Internet <URL:https://tools.ietf.org/html/draft-chunduri-lsr-isis-preferred-path-routing-02> [retrieved on 20190215] *
L. GINSBERG, S. PREVIDI, Y. YANG (Cisco Systems): "IS-IS Flooding Scope LSPs; draft-ietf-isis-fs-lsp-02.txt", Internet Engineering Task Force (IETF) Standard Working Draft, Internet Society (ISOC), 4, rue des Falaises, CH-1205 Geneva, Switzerland, 4 June 2014 (2014-06-04), pages 1 - 22, XP015099404 *
L. GINSBERG ET AL.: "IS-IS Extensions for Advertising Router Information", INTERNET ENGINEERING TASK FORCE (IETF), REQUEST FOR COMMENTS (RFC): 7981, October 2016 (2016-10-01)
L. GINSBERG ET AL.: "IS-IS Flooding Scope Link State PDUs (LSPs", INTERNET ENGINEERING TASK FORCE (IETF), REQUEST FOR COMMENTS (RFC): 7356, September 2014 (2014-09-01)
L. GINSBERG: "IS-IS Traffic Engineering (TE) Metric Extensions", THE INTERNET ENGINEERING TASK FORCE (IETF) REQUEST FOR COMMENTS (RFC) 8570, March 2019 (2019-03-01)
S. PREVIDI: "IS-IS Traffic Engineering (TE) Metric Extensions", THE INTERNET ENGINEERING TASK FORCE (IETF) REQUEST FOR COMMENTS (RFC) 7810, May 2016 (2016-05-01)
U. CHUNDURI ET AL., PREFERRED PATH ROUTE GRAPH STRUCTURE, 8 March 2020 (2020-03-08)
U. CHUNDURI ET AL., PREFERRED PATH ROUTING (PPR) IN IS-IS, 8 July 2019 (2019-07-08)
U. CHUNDURI ET AL., PREFERRED PATH ROUTING (PPR) IN IS-IS, 8 March 2020 (2020-03-08)

Similar Documents

Publication Publication Date Title
US10541905B2 (en) Automatic optimal route reflector root address assignment to route reflector clients and fast failover in a network environment
US10476793B2 (en) Multicast flow overlay using registration over a reliable transport
US8228786B2 (en) Dynamic shared risk node group (SRNG) membership discovery
CN113261245B (en) Recovery system and method for network link or node failure
US9043487B2 (en) Dynamically configuring and verifying routing information of broadcast networks using link state protocols in a computer network
US9288067B2 (en) Adjacency server for virtual private networks
EP2282459A1 (en) Link state routing protocols for database synchronisation in GMPLS networks
JP5625121B2 (en) Prioritizing routing information updates
US8667174B2 (en) Method and system for survival of data plane through a total control plane failure
US20230116548A1 (en) Route Processing Method and Related Device
EP3975511B1 (en) Neighbor discovery for border gateway protocol in a multi-access network
WO2018177273A1 (en) Method and apparatus for processing based on bier information
SE1750815A1 (en) Methods, System and Apparatuses for Routing Data Packets in a Network Topology
WO2021035014A1 (en) Service group flooding in is-is networks
WO2021035013A1 (en) Path scope flooding in is-is networks
US11888596B2 (en) System and method for network reliability
WO2022174789A1 (en) Information publishing method and apparatus, and computer readable storage medium
Nasir et al. A Comparative Study of Routing Protocols Including RIP, OSPF and BGP

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20765415

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20765415

Country of ref document: EP

Kind code of ref document: A1