WO2020243465A1 - Databases dedicated to an Open Shortest Path First (OSPF) service group - Google Patents

Databases dedicated to an Open Shortest Path First (OSPF) service group

Info

Publication number
WO2020243465A1
WO2020243465A1 (PCT/US2020/035179)
Authority
WO
WIPO (PCT)
Prior art keywords
service group
neighboring
database
ospf
service
Application number
PCT/US2020/035179
Other languages
English (en)
Inventor
Padmadevi Pillay-Esnault
Uma S. Chunduri
Alvaro Retana
Original Assignee
Futurewei Technologies, Inc.
Application filed by Futurewei Technologies, Inc. filed Critical Futurewei Technologies, Inc.
Publication of WO2020243465A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00Routing or path finding of packets in data switching networks
    • H04L45/02Topology update or discovery
    • H04L45/021Ensuring consistency of routing table updates, e.g. by using epoch numbers

Definitions

  • the present disclosure pertains to the field of data transmission in a network implementing an Interior Gateway Protocol (IGP), such as Open Shortest Path First version 2 (OSPFv2) or OSPF version 3 (OSPFv3).
  • An IGP is a type of protocol used for exchanging information among network elements (NEs), such as routers, switches, gateways, etc., within a network (also referred to herein as an “autonomous system (AS)” or a “domain”).
  • the information exchanged using IGP may include routing information and/or state information.
  • the information can be used to route data using network-layer protocols, such as Internet Protocol (IP).
  • IGPs can be divided into two categories: distance-vector routing protocols and link-state routing protocols.
  • each NE in the network does not possess information about the full network topology. Instead, each NE advertises the distance values it has calculated to other routers and receives similar advertisements from other routers. Each NE in the network uses the advertisements to populate a local routing table.
  • each NE stores network topology information about the complete network topology.
  • Each NE then independently calculates the next best hop from the NE for every possible destination in the network using the network topology information.
  • the NE then stores a routing table including the collection of next best hops to every possible destination.
  • Each NE in the network forwards the information encoded according to an IGP to adjacent NEs, thereby flooding the network with the information that is saved at each of the NEs in the network.
  • Examples of link-state routing protocols include Intermediate System to Intermediate System (IS-IS), OSPFv2, and OSPFv3.
  • OSPFv2, OSPFv3, and similar protocols are dynamic routing protocols that quickly detect topological changes and calculate new loop-free routes after a period of convergence.
  • Each NE in the network implementing an OSPF protocol includes a link-state database (LSDB) and a routing table.
  • the LSDB describes a topology of the network, and each NE in the network maintains an identical LSDB.
  • Each entry in the LSDB describes a particular NE’s local state (e.g., usable interfaces and reachable neighbors).
  • Each NE constructs a tree of shortest paths with the respective NE as the root using the LSDB. This shortest path tree indicates the route from the respective NE to each destination in the network and is used to construct the routing table maintained by the respective NE.
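The shortest-path-tree construction described above is conventionally done with Dijkstra's algorithm. A minimal sketch, assuming a hypothetical in-memory LSDB represented as a nested dict of link costs (not a structure from the publication):

```python
import heapq

def shortest_path_tree(lsdb, root):
    """Compute the next hop from `root` to every reachable destination.

    `lsdb` is a stand-in for an OSPF link-state database:
    {node: {neighbor: link_cost, ...}}.  Returns {destination: next_hop},
    i.e. the routing-table entries the root NE would install.
    """
    dist = {root: 0}
    next_hop = {}
    # Heap entries carry the first hop taken when leaving the root.
    heap = [(0, root, None)]
    visited = set()
    while heap:
        cost, node, hop = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if hop is not None:
            next_hop[node] = hop
        for nbr, link_cost in lsdb.get(node, {}).items():
            new_cost = cost + link_cost
            if new_cost < dist.get(nbr, float("inf")):
                dist[nbr] = new_cost
                # Leaving the root, the first hop is the neighbor itself.
                heapq.heappush(heap, (new_cost, nbr, nbr if hop is None else hop))
    return next_hop
```

Because every NE holds an identical LSDB, each NE running this computation with itself as `root` yields a consistent, loop-free set of routing tables.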
  • a method performed by an NE in a network comprising storing, in a memory of the NE, a service group database, wherein the service group database only includes data associated with a service group, and wherein the service group includes the NE, receiving a packet comprising a service group identifier (ID) identifying the service group from a neighboring NE, and updating the service group database to include data from the packet.
  • the packet is at least one of a database description packet, a link state update, a link state request, or a link state acknowledgement.
  • the service group ID is carried in a header of the packet.
  • the packet comprises an OSPF header, and wherein the service group ID is carried in an instance ID field of the OSPF header.
  • the service group ID is carried in four bits of the instance ID field.
  • the packet comprises an OSPF header, and wherein the OSPF packet comprises a flag indicating that the service group ID is carried in an instance ID field of the OSPF header.
  • the packet comprises an OSPF header, and wherein the OSPF packet comprises a flag indicating that the service group ID is carried in an area ID field of the OSPF header.
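The four-bit encoding described above can be sketched as simple bit packing over the one-octet instance ID field. Which nibble carries the service group ID is an assumption here; the claims state only that four bits of the field are used:

```python
SG_SHIFT = 4   # assumption: service group ID in the high nibble
SG_MASK = 0x0F

def pack_instance_id(base_instance, service_group_id):
    """Pack a 4-bit service group ID alongside a 4-bit base instance ID
    into the 8-bit instance ID octet of the OSPF header (illustrative)."""
    if not 0 <= service_group_id <= SG_MASK:
        raise ValueError("service group ID must fit in four bits")
    if not 0 <= base_instance <= SG_MASK:
        raise ValueError("base instance ID must fit in four bits")
    return (service_group_id << SG_SHIFT) | base_instance

def unpack_instance_id(octet):
    """Return (base_instance, service_group_id) from the instance ID octet."""
    return octet & SG_MASK, (octet >> SG_SHIFT) & SG_MASK
```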
  • the method further comprises storing a plurality of different service group databases respectively associated with a plurality of different service groups, wherein the NE is a member of the plurality of different service groups.
  • receiving an advertisement comprising the service group ID and forwarding the advertisement to only a second neighboring NE that is also a member of the service group.
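The scoped flooding described above (forwarding an advertisement only to neighbors that are members of its service group) can be sketched as a membership filter. The data structures and field names below are illustrative, not taken from the publication:

```python
def flood_advertisement(advertisement, neighbors, memberships):
    """Return the subset of `neighbors` that should receive the advertisement.

    `advertisement` carries a "service_group_id"; `memberships` maps each
    neighbor name to the set of service group IDs it belongs to.  Neighbors
    outside the group never see the group's data, which keeps per-group
    flooding scoped instead of network-wide.
    """
    sg_id = advertisement["service_group_id"]
    return [n for n in neighbors if sg_id in memberships.get(n, set())]
```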
  • the method further comprises exchanging OSPF hello messages with a neighboring NE that is also a member of the service group, negotiating a master/slave relationship between the NE and the neighboring NE that is also a member of the service group, transmitting a base database of the NE to the neighboring NE that is also a member of the service group, wherein the base database comprises data stored in a link-state database (LSDB) and a routing table of the NE, determining a priority of the service group, and transmitting the service group database to the neighboring NE that is also a member of the service group based on the priority of the service group after transmitting the base database of the NE.
  • the method further comprises exchanging OSPF hello messages with a neighboring NE that is also a member of the service group, negotiating a master/slave relationship between the NE and the neighboring NE that is also a member of the service group, transmitting a base database of the NE to the neighboring NE that is also a member of the service group until the base database has been fully transmitted to the neighboring NE that is also a member of the service group, wherein the base database comprises data stored in a link-state database (LSDB) and a routing table of the NE, determining a priority of the service group, and transmitting the service group database to the neighboring NE that is also a member of the service group based on the priority of the service group after fully transmitting the base database of the NE.
  • the method further comprises exchanging a first set of OSPF hello messages with a neighboring NE that is also a member of the service group, negotiating a first master/slave relationship between the NE and the neighboring NE that is also a member of the service group in response to exchanging the first set of OSPF hello messages, determining a priority of the service group, transmitting the service group database to the neighboring NE that is also a member of the service group based on the priority of the service group, exchanging a second set of OSPF hello messages with the neighboring NE that is also a member of the service group after transmitting the service group database to the neighboring NE that is also a member of the service group, negotiating a second master/slave relationship between the NE and the neighboring NE that is also a member of the service group in response to exchanging the second set of OSPF hello messages, and transmitting a base database of the NE to the neighboring NE that is also a member of the service group.
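The database-exchange orderings in the variants above (base database fully transmitted first, or a priority service group database transmitted before the base database) can be sketched as a transmission schedule. Higher priority transmitting first is an assumption; the claims say only that transmission is based on the priority:

```python
def transmission_order(base_db, service_group_dbs, priorities, sg_first=None):
    """Order database transmissions toward a newly adjacent neighbor.

    `service_group_dbs` maps service group ID -> database; `priorities`
    maps service group ID -> priority value.  If `sg_first` names a
    service group, its database is scheduled before the base database
    (the third variant above); otherwise the base database goes first.
    """
    ordered_sgs = sorted(service_group_dbs, key=lambda sg: -priorities.get(sg, 0))
    sequence = []
    if sg_first is not None and sg_first in service_group_dbs:
        sequence.append(("service_group", sg_first, service_group_dbs[sg_first]))
        ordered_sgs = [sg for sg in ordered_sgs if sg != sg_first]
    sequence.append(("base", None, base_db))
    sequence.extend(("service_group", sg, service_group_dbs[sg])
                    for sg in ordered_sgs)
    return sequence
```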
  • the NE and the neighboring NE are in a network implementing Open Shortest Path First (OSPF).
  • a method performed by an NE in a network comprising receiving information about a first service group and information about a second service group from a first neighboring NE, updating a first service group database to include the information about the first service group, updating a second service group database to include the information about the second service group, determining that a second neighboring NE is included in the first service group and is not included in the second service group, and transmitting a packet comprising the information about the first service group to the second neighboring NE, wherein the information about the second service group is not transmitted to the second neighboring NE.
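The selective transmission in this second aspect — sending a neighbor only the information for service groups it belongs to — can be sketched as follows. The packet structure and field names are assumptions made for illustration:

```python
def packets_for_neighbor(neighbor, memberships, sg_databases):
    """Build the per-service-group payloads to send to one neighbor.

    Only databases for groups the neighbor belongs to are included, and
    each group's information is carried independently, so information
    about a group never reaches a non-member neighbor.
    """
    return [
        {"service_group_id": sg, "payload": db}
        for sg, db in sorted(sg_databases.items())
        if sg in memberships.get(neighbor, set())
    ]
```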
  • the packet is at least one of a database description packet, a link state update, a link state request, or a link state acknowledgement.
  • a service group identifier (ID) identifying the first service group is carried in a header of the packet.
  • the packet comprises an OSPF header, and wherein a service group identifier (ID) identifying the first service group is carried in an instance ID field of the OSPF header.
  • the service group ID is carried in four bits of the instance ID field.
  • the packet comprises an OSPF header, and wherein the OSPF packet comprises a flag indicating that a service group identifier (ID) identifying the first service group is carried in an instance ID field of the OSPF header.
  • the packet comprises an OSPF header, and wherein the OSPF packet comprises a flag indicating that a service group identifier (ID) identifying the first service group is carried in an area ID field of the OSPF header.
  • the method further comprises exchanging OSPF hello messages with a neighboring NE that is also a member of the first service group, negotiating a master/slave relationship between the NE and the neighboring NE that is also a member of the first service group, transmitting a base database of the NE to the neighboring NE that is also a member of the first service group, wherein the base database comprises data stored in a link-state database (LSDB) and a routing table of the NE, determining a priority of the first service group, and transmitting the first service group database to the neighboring NE that is also a member of the first service group based on the priority of the first service group after transmitting the base database of the NE.
  • the method further comprises exchanging OSPF hello messages with a neighboring NE that is also a member of the first service group, negotiating a master/slave relationship between the NE and the neighboring NE that is also a member of the first service group, transmitting a base database of the NE to the neighboring NE that is also a member of the first service group until the base database has been fully transmitted to the neighboring NE that is also a member of the first service group, wherein the base database comprises data stored in a link-state database (LSDB) and a routing table of the NE, determining a priority of the first service group, and transmitting the first service group database to the neighboring NE that is also a member of the first service group based on the priority of the first service group after fully transmitting the base database of the NE.
  • the method further comprises exchanging a first set of OSPF hello messages with a neighboring NE that is also a member of the first service group, negotiating a first master/slave relationship between the NE and the neighboring NE that is also a member of the first service group in response to exchanging the first set of OSPF hello messages, determining a priority of the first service group, transmitting the first service group database to the neighboring NE that is also a member of the first service group based on the priority of the first service group, exchanging a second set of OSPF hello messages with the neighboring NE that is also a member of the first service group after transmitting the first service group database to the neighboring NE that is also a member of the first service group, negotiating a second master/slave relationship between the NE and the neighboring NE that is also a member of the first service group in response to exchanging the second set of OSPF hello messages, and transmitting a base database of the NE to the neighboring NE that is also a member of the first service group.
  • the NE and the neighboring NE are in a network implementing Open Shortest Path First (OSPF).
  • the method further comprises determining that a third neighboring NE is included in the first service group and the second service group, and transmitting the information about the first service group and the information about the second service group to the third neighboring NE independently of each other.
  • an NE in a network comprising a memory storing instructions and a processor.
  • the memory stores a service group database only including data associated with a service group, and wherein the service group includes the NE.
  • the processor is configured to execute the instructions, which cause the processor to be configured to receive a packet comprising a service group identifier (ID) identifying the service group from a neighboring NE, and update the service group database to include data from the packet.
  • the packet is at least one of a database description packet, a link state update, a link state request, or a link state acknowledgement.
  • the service group ID is carried in a header of the packet.
  • the packet comprises an OSPF header, and wherein the service group ID is carried in an instance ID field of the OSPF header.
  • the service group ID is carried in four bits of the instance ID field.
  • the packet comprises an OSPF header, and wherein the OSPF packet comprises a flag indicating that the service group ID is carried in an instance ID field of the OSPF header.
  • the packet comprises an OSPF header, and wherein the OSPF packet comprises a flag indicating that the service group ID is carried in an area ID field of the OSPF header.
  • the neighboring NE is included in the service group.
  • the memory is further configured to store a plurality of different service group databases respectively associated with a plurality of different service groups, wherein each service group database only stores data for one of the different service groups.
  • the instructions further cause the processor to be configured to receive an advertisement comprising the service group ID, and forward the advertisement to only a second neighboring NE that is also a member of the service group.
  • the instructions further cause the processor to be configured to exchange OSPF hello messages with a neighboring NE that is also a member of the service group, negotiate a master/slave relationship between the NE and the neighboring NE that is also a member of the service group, transmit a base database of the NE to the neighboring NE that is also a member of the service group, wherein the base database comprises data stored in a link-state database (LSDB) and a routing table of the NE, determine a priority of the service group, and transmit the service group database to the neighboring NE that is also a member of the service group based on the priority of the service group after transmitting the base database of the NE.
  • the instructions further cause the processor to be configured to exchange OSPF hello messages with a neighboring NE that is also a member of the service group, negotiate a master/slave relationship between the NE and the neighboring NE that is also a member of the service group, transmit a base database of the NE to the neighboring NE that is also a member of the service group until the base database has been fully transmitted to the neighboring NE that is also a member of the service group, wherein the base database comprises data stored in a link-state database (LSDB) and a routing table of the NE, determine a priority of the service group, and transmit the service group database to the neighboring NE that is also a member of the service group based on the priority of the service group after fully transmitting the base database of the NE.
  • the instructions further cause the processor to be configured to exchange a first set of OSPF hello messages with a neighboring NE that is also a member of the service group, negotiate a first master/slave relationship between the NE and the neighboring NE that is also a member of the service group in response to exchanging the first set of OSPF hello messages, determine a priority of the service group, transmit the service group database to the neighboring NE that is also a member of the service group based on the priority of the service group, exchange a second set of OSPF hello messages with the neighboring NE that is also a member of the service group after transmitting the service group database to the neighboring NE that is also a member of the service group, negotiate a second master/slave relationship between the NE and the neighboring NE that is also a member of the service group in response to exchanging the second set of OSPF hello messages, and transmit a base database of the NE to the neighboring NE that is also a member of the service group.
  • an NE in a network comprising a memory storing instructions and a processor configured to execute the instructions, which cause the processor to be configured to receive information about a first service group and information about a second service group from a first neighboring NE, update a first service group database to include the information about the first service group, update a second service group database to include the information about the second service group, determine that a second neighboring NE is included in the first service group and is not included in the second service group, and transmit a packet comprising the information about the first service group to the second neighboring NE, wherein the information about the second service group is not transmitted to the second neighboring NE.
  • the packet is at least one of a database description packet, a link state update, a link state request, or a link state acknowledgement.
  • a service group identifier (ID) identifying the first service group is carried in a header of the packet.
  • the packet comprises an OSPF header, and wherein a service group identifier (ID) identifying the first service group is carried in an instance ID field of the OSPF header.
  • the service group ID is carried in four bits of the instance ID field.
  • the packet comprises an OSPF header, and wherein the OSPF packet comprises a flag indicating that a service group identifier (ID) identifying the first service group is carried in an instance ID field of the OSPF header.
  • the packet comprises an OSPF header, and wherein the OSPF packet comprises a flag indicating that a service group identifier (ID) identifying the first service group is carried in an area ID field of the OSPF header.
  • the instructions further cause the processor to be configured to exchange OSPF hello messages with a neighboring NE that is also a member of the first service group, negotiate a master/slave relationship between the NE and the neighboring NE that is also a member of the first service group, transmit a base database of the NE to the neighboring NE that is also a member of the first service group, wherein the base database comprises data stored in a link-state database (LSDB) and a routing table of the NE, determine a priority of the first service group, and transmit the first service group database to the neighboring NE that is also a member of the first service group based on the priority of the first service group after transmitting the base database of the NE.
  • the instructions further cause the processor to be configured to exchange OSPF hello messages with a neighboring NE that is also a member of the first service group, negotiate a master/slave relationship between the NE and the neighboring NE that is also a member of the first service group, transmit a base database of the NE to the neighboring NE that is also a member of the first service group until the base database has been fully transmitted to the neighboring NE that is also a member of the first service group, wherein the base database comprises data stored in a link-state database (LSDB) and a routing table of the NE, determine a priority of the first service group, and transmit the first service group database to the neighboring NE that is also a member of the first service group based on the priority of the first service group after fully transmitting the base database of the NE.
  • the instructions further cause the processor to be configured to exchange a first set of OSPF hello messages with a neighboring NE that is also a member of the first service group, negotiate a first master/slave relationship between the NE and the neighboring NE that is also a member of the first service group in response to exchanging the first set of OSPF hello messages, determine a priority of the first service group, transmit the first service group database to the neighboring NE that is also a member of the first service group based on the priority of the first service group, exchange a second set of OSPF hello messages with the neighboring NE that is also a member of the first service group after transmitting the first service group database to the neighboring NE that is also a member of the first service group, negotiate a second master/slave relationship between the NE and the neighboring NE that is also a member of the first service group in response to exchanging the second set of OSPF hello messages, and transmit a base database of the NE to the neighboring NE that is also a member of the first service group.
  • the NE and the neighboring NE are in a network implementing Open Shortest Path First (OSPF).
  • the instructions further cause the processor to be configured to determine that a third neighboring NE is included in the first service group and the second service group, and transmit the information about the first service group and the information about the second service group to the third neighboring NE independently of each other.
  • a non-transitory computer-readable medium configured to store a computer program product comprising computer executable instructions that, when executed by a processor of an NE implemented in a network, cause the processor to implement the method according to the first or second aspect or any other implementation of the first or second aspect.
  • any one of the foregoing embodiments may be combined with any one or more of the other foregoing embodiments to create a new embodiment within the scope of the present disclosure.
  • FIG. 1 is a schematic diagram illustrating a network configured to implement service groups and maintain service group databases according to various embodiments of the disclosure.
  • FIG. 2 is a schematic diagram of an NE suitable to implement service groups and maintain service group databases according to various embodiments of the disclosure.
  • FIG. 3 is a schematic diagram illustrating databases stored at NEs in the network according to various embodiments of the disclosure.
  • FIG. 4 is a schematic diagram illustrating the forwarding of databases in OSPF packets across NEs in a service group according to various embodiments of the disclosure.
  • FIGS. 5A-C are examples of headers or portions of headers included in the OSPF packets according to various embodiments of the disclosure.
  • FIGS. 6A-C are message sequence diagrams illustrating a process of creating service group neighbors and exchanging service group databases according to various embodiments of the disclosure.
  • FIG. 7 is a flowchart illustrating a method for implementing service groups and maintaining service group databases according to various embodiments of the disclosure.
  • FIG. 8 is a flowchart illustrating another method for implementing service groups and maintaining service group databases according to various embodiments of the disclosure.
  • FIG. 9 is a schematic diagram illustrating an apparatus for implementing service groups and maintaining service group databases according to various embodiments of the disclosure.
  • FIG. 10 is a schematic diagram illustrating an apparatus for implementing service groups and maintaining service group databases according to various embodiments of the disclosure.
  • FIG. 1 is a schematic diagram illustrating a network 100 (also referred to herein as an “area,” “autonomous system (AS),” or “domain”) configured to implement service groups and maintain service group databases using an OSPF protocol according to various embodiments of the disclosure.
  • OSPF may refer to a routing protocol, such as, for example, OSPFv2, OSPFv3, or any other IGP that implements a flooding mechanism similar to OSPFv2 or OSPFv3.
  • Network 100 comprises a central entity 103 (also referred to herein as a“controller”) and multiple NEs 104-112. The NEs 104-112 are interconnected by links 123.
  • the central entity 103 is coupled to a single NE 109 in the network 100 by central entity-to-NE link 125.
  • the central entity 103 may be coupled to each of the NEs 104-112 via central entity- to-NE links 125.
  • the central entity 103 may be substantially similar to a Path Computation Element (PCE), which is further described in Internet Engineering Task Force (IETF) Request for Comments (RFC) 8281, entitled “Path Computation Element Communication Protocol (PCEP) Extensions for PCE-Initiated LSP Setup in a Stateful PCE Model,” by E. Crabbe, dated December 2017, which is incorporated by reference herein in its entirety.
  • the central entity 103 may be substantially similar to a Software Defined Network Controller (SDNC), which is further described in the IETF RFC 8402, entitled “Segment Routing Architecture,” by C. Filsfils, dated July 2018, which is incorporated by reference herein in its entirety.
  • the central entity 103 may be substantially similar to an Application Layer Traffic Optimization (ALTO) server, which is further described in the IETF RFC 7285, entitled “Application Layer Traffic Optimization (ALTO) Protocol,” by R. Alimi, dated September 2014, which is incorporated by reference herein in its entirety.
  • NEs 104-112 may each be a physical device, such as a router, a bridge, a network switch, or a logical device, such as a virtual machine, configured to forward data across the network 100 by encoding the data according to an OSPF protocol.
  • the NEs 104-112 are headend nodes or edge nodes positioned at an edge of the network 100.
  • one or more of NEs 104-112 may be an ingress node at which traffic (e.g., control packets and data packets) is received, and one or more of NEs 104-112 may be an egress node from which traffic is transmitted.
  • the central entity-to-NE links 125 may be wired links, wireless links, or interfaces interconnecting each of the NEs 104-112 with the central entity 103.
  • the links 123 may be wired links, wireless links, or interfaces interconnecting each of the NEs 104-112.
  • the network 100 shown in FIG. 1 may include any number of NEs, such as at least nine, more than nine, or more than 100.
  • the central entity 103 and NEs 104-112 are configured to implement various packet forwarding protocols, such as, but not limited to, Multi-protocol Label Switching (MPLS), IP version 4 (IPv4), IP version 6 (IPv6), and Big Packet Protocol.
  • Each of the NEs 104-112 may receive an advertisement 165 including information related to the network 100 using an OSPF protocol. The information may be received from the central entity 103, another NE 104-112 in the network 100, or another NE or entity external to the network 100. An NE 104-112 may also generate an advertisement 165 including information related to the NE 104-112 or network 100.
  • the advertisements 165 may be link state advertisements (LSAs) pursuant to the OSPF protocol, and the LSAs may each carry link state information, routing information, traffic engineering information, security information, or any other information relevant to the NEs 104-112. Additional details regarding the contents of the LSAs are described in Network Working Group RFC 2328, entitled “OSPF Version 2,” dated April 1998, by J. Moy, and Network Working Group RFC 5340, entitled “OSPF for IPv6,” dated July 2008, by R. Coltun, et al., which are both incorporated by reference herein in their entirety.
  • the link-state information describes a state of a respective NE’s interfaces and adjacencies, such as, for example, prefixes, security identifiers (SIDs), traffic engineering (TE) information, identifiers (IDs) of adjacent NEs, links, interfaces, ports, and routes.
  • the link-state information may include, for example, local/remote IP addresses, local/remote interface identifiers, link metrics and TE metrics, link bandwidth, reservable bandwidth, per Class-of-Service (CoS) class reservation state, preemption, and Shared Risk Link Groups (SRLGs).
  • the link-state information received in an advertisement 165 may be stored in the LSDB of each NE 104-112. Each NE 104-112 may use the information stored in the LSDB to obtain a topology of the network 100.
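Installing advertised link-state information into the LSDB can be sketched as a newest-instance-wins update keyed by originating router. This is a simplified assumption: real OSPF compares sequence number, checksum, and age, whereas the sketch below uses the sequence number alone:

```python
def install_lsa(lsdb, lsa):
    """Install an LSA into the LSDB if it is newer than the stored copy.

    `lsdb` is a dict keyed by (advertising router, link-state ID);
    `lsa` is an illustrative dict with those fields plus "seq".
    Returns True when the LSDB was updated (the LSA should then be
    flooded onward), False when the stored copy is at least as recent.
    """
    key = (lsa["advertising_router"], lsa["ls_id"])
    stored = lsdb.get(key)
    if stored is None or lsa["seq"] > stored["seq"]:
        lsdb[key] = lsa
        return True
    return False
```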
  • the routing information may include information describing one or more elements on a path between a source (first NE) and a destination (second NE) in the network 100.
  • the routing information may include an ID of a path and a label, address, or ID of one or more elements (e.g., NEs 104-112 or links 123) on the path.
  • the term “path” may refer to the shortest path, preferred path routing (PPR), or PPR graphs.
  • PPR (also referred to herein as a “Non-Shortest Path (NSP)”) refers to a custom path or any other path that may deviate from the shortest path computed between two NEs or between a source and destination.
  • a PPR may also be the same as the shortest path.
  • the PPRs are determined based on an application or server request for a path between two NEs 104-112 or between a source and destination that satisfies one or more network characteristics (such as TE) or service requirements.
  • PPRs are further defined in International Patent Publication No. WO/2019/164637, filed on January 28, 2019, which is incorporated by reference in its entirety.
  • PPRs are also further described in the LSR Working Group Internet-Draft Document, entitled “Preferred Path Routing (PPR) in OSPF,” by U. Chunduri, dated March 8, 2020, which is incorporated by reference in its entirety.
  • a PPR graph refers to a collection of multiple PPRs between one or more ingress NEs 104-112 (also referred to herein as “sources”) and one or more egress NEs 104-112 (also referred to herein as “destinations”).
  • a PPR graph may include a single source and multiple destinations, multiple destinations and a single source, or multiple sources and multiple destinations. PPR graphs are further defined in International Patent Publication No. WO/2019/236221, filed on May 2, 2019, which is incorporated by reference in its entirety.
  • the routing information includes information describing each of these types of paths that have been provisioned in the network 100.
  • the routing information received in an advertisement 165 may be stored in the routing table of each NE 104-112.
  • Each NE 104-112 uses the routing table to determine next hops by which to forward advertisements 165 or other types of OSPF packets.
  • the advertisements 165 may also contain any information related to a service or application that uses one or more NEs 104-112 in the network 100.
  • the advertisements 165 may include security information, authentication information, identification information, operations, administration, and maintenance (OAM) information, etc. for a relevant service or application.
  • Specific types of information that are flooded in advertisements 165 are further described in IETF RFC 7471, entitled “OSPF Traffic Engineering (TE) Metric Extensions,” by S. Giacalone, et al., dated March 2015, which is incorporated by reference herein in its entirety, and IETF RFC 8330, entitled “OSPF Traffic Engineering (OSPF-TE) Link Availability Extension for Links with Variable Discrete Bandwidth,” by H. Long, et al., dated February 2018, which is incorporated by reference herein in its entirety.
  • NE 104-112 receives an advertisement 165 or generates an advertisement 165
  • NE 104-112 is configured to initiate OSPF flooding of the advertisement 165 through the network 100.
  • each NE 104-112 forwards the advertisement 165 including the information to neighboring NEs 104-112 in the network 100.
  • neighboring NEs 104-112 refer to two adjacent NEs each having interfaces that can directly communicate with one another, or two adjacent NEs, each having interfaces to a common network.
  • NE 109 is configured to update a local database with information from the advertisement 165, and then forward the advertisement 165 to neighboring NEs 104, 108, and 110.
  • Upon receiving the advertisement 165, NEs 104, 108, and 110 also update their local databases and then forward the advertisement 165 to neighboring NEs 105, 107, and 111. That is, NE 104 forwards the advertisement 165 to NE 105, NE 108 forwards the advertisement 165 to NE 107, and NE 110 forwards the advertisement 165 to NE 111.
  • NEs 105, 107, and 111 are configured to update their local databases and forward the advertisement 165 to neighboring NEs 106 and 112.
  • the OSPF protocol allows networks 100 to implement a reliable flooding mechanism by which all the NEs 104-112 in the network 100 maintain an identical and synchronized view of the network 100.
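  • The reliable flooding behavior described above can be sketched as a breadth-first traversal in which every NE stores an advertisement once and forwards it to neighbors that have not yet seen it. The topology keys and NE names below are illustrative, not drawn from the disclosure.

```python
from collections import deque

def flood(topology, origin, advertisement):
    """Reliable flooding sketch: each NE stores the advertisement in its
    local database exactly once and forwards it to every neighbor that
    has not yet received it. `topology` maps an NE ID to the set of its
    directly attached neighbors (illustrative representation)."""
    databases = {ne: [] for ne in topology}   # one local database per NE
    seen = {origin}
    queue = deque([origin])
    databases[origin].append(advertisement)
    while queue:
        ne = queue.popleft()
        for neighbor in topology[ne]:
            if neighbor not in seen:          # avoid re-flooding loops
                seen.add(neighbor)
                databases[neighbor].append(advertisement)
                queue.append(neighbor)
    return databases
```

After flooding completes, every NE holds an identical copy of the advertisement, which is the synchronized-view property the OSPF protocol relies on.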
  • the information that is flooded through the network 100 is completely irrelevant to some of the NEs 104-112 that receive the information. In these cases, each of the NEs 104-112 nevertheless processes and stores this information even though the NEs 104-112 may never use the information. Further, the overall amount of information that needs to be flooded through a network 100 is continuously growing, which results in an inefficient use of the resources within a network 100. For this reason, network characteristics, such as bandwidth, throughput, latency, error rate, etc., can be significantly affected when data is unnecessarily flooded through the network 100.
  • service groups 130A-B may be provisioned within a network 100.
  • a service group 130A-B includes NEs 104-112 in a network 100, or area, that are associated with an application or a service.
  • a service group 130A-B can include a single NE 104-112 or multiple NEs 104-112 in a network 100.
  • a single NE 104-112 may belong to zero or more service groups 130A-B.
  • a single NE 104-112 may belong to a single service group 130A-B.
  • a single NE 104-112 may belong to more than one service group 130A-B.
  • single NE 104-112 does not necessarily have to belong to a service group 130A-B.
  • the service group 130A includes NEs 104, 108, 109, 110, and 111.
  • Service group 130B includes NEs 105, 106, 107, and 112. It should be appreciated that the service group 130A may include other NEs not shown in FIG. 1, and service group 130B may include other NEs not shown in FIG. 1. Similarly, NEs 104-112 may be members of other service groups not illustrated by FIG. 1.
  • Each service group 130A-B may be associated with a different application or service.
  • Service group 130A may be associated with a first service
  • service group 130B may be associated with a second service.
  • the first service may be a security service
  • the second service may be an operations, administration, and maintenance (OAM) service.
  • a service group 130A-B may be identified by a service group ID 140.
  • service groups 130A and 130B may be grouped together in a service group set 135.
  • a service group set 135 may be identified by a service group set ID 145.
  • the service group set ID 145 identifies the service group set 135 that the service groups 130A-130B belong to. While the service group set 135 in FIG. 1 includes two service groups (e.g., service groups 130A-130B), the service group set 135 may include a different number of service groups in practical applications. In some cases, each service group set 135 includes at least two service groups 130A-B.
  • the central entity 103 stores a database including service group mappings between applications/services, service groups 130A-B, and service group set 135.
  • the service group mappings include mappings between applications/services, the service group set 135, service groups 130A-B in the service group set 135, and NEs 104-112 in each of the service groups 130A-B.
  • a mapping for one or more applications or services may include one or more service group IDs 140 and one or more labels, addresses, or IDs of the NEs 104-112 in the corresponding service group 130A-B.
  • the mapping may also include a service group set ID 145 when the service groups 130A-B are part of a service group set 135.
  • the central entity 103 has knowledge of the service groups 130A-B and the service group set 135 in which each of the NEs 104-112 belongs.
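  • The service group mappings held by the central entity 103 can be pictured as a table keyed by service, as sketched below. The dictionary keys, service names, and numeric ID values are illustrative assumptions; the disclosure does not fix a concrete data layout.

```python
# Hypothetical service-group mapping table as the central entity might hold it.
# Service names, field names, and ID values are illustrative only.
SERVICE_GROUP_MAPPINGS = {
    "security-service": {
        "service_group_set_id": 135,
        "service_group_id": 0x0A,      # identifies service group "A"
        "ne_ids": {"NE104", "NE108", "NE109", "NE110", "NE111"},
        "priority": 2,
    },
    "oam-service": {
        "service_group_set_id": 135,
        "service_group_id": 0x0B,      # identifies service group "B"
        "ne_ids": {"NE105", "NE106", "NE107", "NE112"},
        "priority": 1,
    },
}

def groups_for_ne(mappings, ne_id):
    """Return the service-group IDs a given NE belongs to (zero or more)."""
    return {m["service_group_id"] for m in mappings.values()
            if ne_id in m["ne_ids"]}
```

A lookup such as `groups_for_ne(SERVICE_GROUP_MAPPINGS, "NE109")` reflects the rule that a single NE may belong to zero, one, or more service groups.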
  • the central entity 103 may transmit service group capability information 160 to one or more of the NEs 104-112 in the network 100 via the central entity-to-NE links 125.
  • the service group capability information 160 indicates whether each of the NEs 104-112 is capable of implementing service groups 130A-B.
  • the service group capability information 160 may also indicate which service groups 130A-B and/or service group set 135 each of the NEs 104-112 belongs to.
  • the service group capability information 160 includes one or more of the service group IDs 140, service group set IDs 145, NE IDs 147, and a priority 188.
  • the service group ID 140 identifies the service group 130A-B that an NE 104-112 receiving the service group capability information 160 (“receiving NE 104-112”) belongs to.
  • the service group set ID 145 identifies the service group set 135 that the receiving NE 104-112 belongs to, when the receiving NE 104-112 is part of a service group set 135.
  • the NE IDs 147 include labels, addresses, or IDs describing each of the NEs 104-112 included in the service group 130A-B or the service group set 135.
  • the priority 188 refers to a value reflecting a priority of the service group 130A-B or service group set 135 based on an importance of the service group 130A-B or service group set 135. For example, service groups 130A-B implementing high security functionalities or forwarding high security data may be given a higher priority than service groups 130A-B implementing low security functionalities or forwarding low security data.
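  • Taken together, the fields of the service group capability information 160 can be grouped into a single record, sketched below. The field names are illustrative; the disclosure names the fields but does not fix an on-the-wire format at this point.

```python
from dataclasses import dataclass
from typing import FrozenSet, Optional

@dataclass(frozen=True)
class ServiceGroupCapability:
    """One record of service group capability information 160 (sketch).
    Field names are illustrative assumptions, not a defined wire format."""
    service_group_id: int                 # service group ID 140
    service_group_set_id: Optional[int]   # service group set ID 145, if any
    ne_ids: FrozenSet[str]                # NE IDs 147 of the group's members
    priority: int                         # priority 188; higher = more important
```

A record for service group 130A, for example, would carry the set ID, the group ID, the IDs of NEs 104, 108, 109, 110, and 111, and the group's priority.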
  • When the central entity 103 is only connected to a single NE 109 in the network 100, as shown in FIG. 1, the central entity 103 sends service group capability information 160 to NE 109 via central entity-to-NE link 125.
  • the service group capability information 160 includes information describing all of the service groups 130A-B provisioned in the network 100.
  • the service group capability information 160 includes {the service group set ID 145 identifying the service group set 135; the service group ID 140 identifying the service group 130A; NE IDs 147 describing NEs 104, 108, 109, 110, and 111 in service group 130A; a priority 188 of service group 130A; the service group ID 140 identifying the service group 130B; NE IDs 147 describing NEs 105, 106, 107, and 112 in service group 130B; and a priority 188 of service group 130B}.
  • NE 109 generates an advertisement 165 carrying at least a part of this service group capability information 160 and floods the advertisement 165 to all of the other NEs 104-108 and 110-112 in network 100.
  • the advertisement 165 may also include a capability flag indicating whether each of NEs 104-112 is capable of implementing service groups 130A-B in the network 100.
  • When the central entity 103 is separately connected to each of the NEs 104-112 in the network 100 (not shown in FIG. 1), the central entity 103 sends service group capability information 160 to each of the NEs 104-112 in the network 100. However, the central entity 103 only sends the service group capability information 160 relevant to the NE 104-112 receiving the service group capability information 160. For example, central entity 103 sends service group capability information 160, which includes a service group set ID 145 identifying the service group set 135, a service group ID 140 identifying the service group 130A, and the priority 188 of the service group 130A, to NE 109.
  • the central entity 103 sends service group capability information 160, which includes a service group set ID 145 identifying the service group set 135, a service group ID 140 identifying the service group 130B, and the priority 188 of the service group 130B, to NE 106.
  • each of the NEs 104-112 floods the network 100 with an advertisement 165 including the service group capability information 160 related to the respective NE 104-112.
  • each advertisement 165 includes the service group capability information 160 of a single NE 104-112 in the network 100.
  • the advertisement 165 may also include a capability flag indicating whether an NE 104-112 is capable of implementing service groups 130A- B and service group flooding in the network 100.
  • each of the NEs 104-112 updates a local database to include the information contained within the advertisement 165 upon receiving the advertisements 165.
  • the NEs 104-112 implementing an OSPF protocol are able to detect changes in a network topology, such as link failures, to converge a new loop-free routing structure within seconds.
  • the implementation of service groups 130A-B affects the ability of the NEs 104-112 in the network 100 to maintain the same information across all the NEs 104-112 in the network 100. This is because certain information pertaining to a particular service group 130A-B is only sent to NEs 104-112 that are members of the particular service group 130A-B. In this way, the implementation of service groups 130A-B changes one of the fundamental properties of the OSPF protocol.
  • NEs 104-112 that are members of the service group 130A-B should each store the same information related to a service group 130A-B in a database dedicated to that service group 130A-B. This way, the NEs 104-112 can still provide all the functionalities under the OSPF protocol.
  • NEs 104-112 store service group databases 180A-B for each service group 130A-B in which the NE 104-112 is a member.
  • each of the NEs 104, 108, 109, 110, and 111 in the service group 130A maintains a service group database 180A.
  • the service group database 180A only includes data relevant to service group 130A.
  • each of the NEs 105, 106, 107, and 112 in the service group 130B maintains a service group database 180B.
  • the service group database 180B only includes data relevant to service group 130B.
  • the service group databases 180A-B may also be referred to as“link state service group databases 180A-B.”
  • each of the NEs 104-112 stores a base database, which includes the information that has been flooded to all of the NEs 104-112 in the network 100.
  • the base database may include the link state database (LSDB), which indicates link-state information that can be used to deduce a topology of the network 100.
  • the base database may also include a routing table, which includes path information indicating a path by which to reach one or more of the other NEs 104-112 in the network 100.
  • the service group database 180A-B may also include link-state information and path information, but the link-state information and path information is only relevant for the service group 130A-B associated with the service group database 180A-B.
  • the base database may not include the information stored in the service group database 180A.
  • information from the base database may be forwarded to all neighboring NEs 104-112, but information from the service group database 180A-B may only be flooded to other NEs 104-112 that are members of the service group 130A-B.
  • the service group database 180A-B may be flooded through other NEs 104-112 in the network 100 that are not members of the service group 130A-B to reach an NE 104-112 that is a member of the service group 130A-B.
  • the non-member NEs 104-112 may forward the service group database 180A-B along without storing the service group database 180A-B locally at the non-member NE 104-112.
  • NE 108 may flood the service group database 180A through NEs 107, 106, and 105 to reach NE 104, which is a member of service group 130A.
  • NE 104 receives and stores the service group database 180A since NE 104 is a member of the service group 130A, but NEs 106 and 105 may not store the service group database 180A since NEs 106 and 105 are not members of the service group 130A.
  • information from the service group database 180A-B may only be flooded to service group neighbors.
  • a“service group neighbor” refers to one or more neighboring NEs 104-112 of a respective NE 104-112 that is also a member of a common service group 130A-B.
  • NEs 109 and 110 are service group neighbors.
  • NEs 106 and 112 are service group neighbors.
  • each NE 104-112 identifies the service group neighbors for each service group 130A-B in which the NE 104-112 is a member. After detecting the service group neighbors, the NE 104-112 determines to forward the service group database 180A-B only to members of a corresponding service group 130A-B.
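  • The service-group-neighbor detection described above amounts to intersecting an NE's own group memberships with the memberships its neighbors report in their capability information. The function and argument names below are illustrative.

```python
def service_group_neighbors(my_groups, neighbor_memberships):
    """For each service group this NE belongs to, return the directly
    attached neighbors that are members of the same group (the 'service
    group neighbors'). `neighbor_memberships` maps a neighbor NE ID to
    its set of group IDs, as learned from received service group
    capability information (illustrative sketch)."""
    return {
        group: {ne for ne, groups in neighbor_memberships.items()
                if group in groups}
        for group in my_groups
    }
```

In the FIG. 4 example, NE 403 (a member of groups A and B) would detect NE 406 as its service group neighbor for group A and NE 409 as its service group neighbor for group B.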
  • the embodiments disclosed herein are advantageous in that the network overhead can be significantly reduced by reducing the amount of data that is flooded through the network 100 and reducing the amount of data that has to be processed by each of NEs 104-112 in the network 100.
  • the network 100 By forwarding advertisements 165 only to the NEs 104-112 in a particular service group 130A-B, the network 100 inherently will have more bandwidth by which to transmit additional data, and throughput of the network 100 can be significantly increased.
  • latency can be reduced due to the higher availability of network resources within the network 100.
  • the delay occurring between receiving packets/messages at each of the NEs 104-112 and being processed at each of the NEs 104-112 can also be greatly reduced. Accordingly, the embodiments disclosed herein enhance the OSPF protocols to provide a more efficient and resource effective manner by which to flood the network 100 with necessary information.
  • FIG. 2 is a schematic diagram of an NE 200 suitable to implement service groups 130A-B and service group databases 180A-B according to various embodiments of the disclosure.
  • the NE 200 may be implemented as any one of NEs 104-112 or the central entity 103.
  • the NE 200 comprises ports 220, transceiver units (Tx/Rx) 210, a processor 230, and a memory 260.
  • the processor 230 comprises a service group module 235. Ports 220 are coupled to Tx/Rx 210, which may be transmitters, receivers, or combinations thereof.
  • the Tx/Rx 210 may transmit and receive data via the ports 220.
  • Processor 230 is configured to process data.
  • Memory 260 is configured to store data and instructions for implementing embodiments described herein.
  • the NE 200 may also comprise electrical-to-optical (EO) components and optical-to-electrical (OE) components coupled to the ports 220 and Tx/Rx 210 for receiving and transmitting electrical signals and optical signals.
  • the processor 230 may be implemented by hardware and software.
  • the processor 230 may be implemented as one or more central processing unit (CPU) and/or graphics processing unit (GPU) chips, logic units, cores (e.g., as a multi-core processor), field-programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), and digital signal processors (DSPs).
  • the processor 230 is in communication with the ports 220, Tx/Rx 210, and memory 260.
  • the service group module 235 is implemented by the processor 230 to execute the steps of methods 700 and 800 and the instructions for implementing various embodiments discussed herein.
  • the NE 200 includes a non-transitory computer-readable medium configured to store a computer program product comprising computer-executable instructions that, when executed by the processor 230, cause the processor 230 to implement the steps of methods 700 and 800.
  • the service group module 235 is configured to forward advertisements 165 to only NEs 104-112 in a service group 130A-B identified in the advertisements 165.
  • the inclusion of the service group module 235 provides an improvement to the functionality of the NE 200.
  • the service group module 235 also effects a transformation of NE 200 to a different state.
  • the service group module 235 is implemented as instructions stored in the memory 260.
  • the memory 260 comprises one or more of disks, tape drives, or solid-state drives and may be used as an over-flow data storage device, to store programs when such programs are selected for execution, and to store instructions and data that are read during program execution.
  • the memory 260 may be volatile and/or non-volatile and may be read-only memory (ROM), random-access memory (RAM), ternary content-addressable memory (TCAM), and static random-access memory (SRAM).
  • the memory 260 is configured to store service group capability information 160, service group ID 140, service group set IDs 145, service group neighbors 280, LSDB 273 (shown in FIG. 2 as the“link-state database 273”), routing table 276, service group mappings 279, a base database 270, and one or more service group databases 180A-B (hereinafter referred to as a“service group database 180”).
  • the service group capability information 160 indicates information regarding the service groups 130A-B (referred to hereinafter as a“service group 130”) in a network 100 and the capability of NEs 104-112 in the network 100 to implement the service group 130.
  • the service group ID 140 is a value uniquely identifying a service group 130.
  • the service group ID 140 may be a 4 bit value, a 6 bit value, a 32 bit value, or other values, depending on the application and use of the service group 130, or the network constraints.
  • a service group set 135 includes one or more service groups 130.
  • a service group set ID 145 is a value uniquely identifying a service group set 135.
  • the service group set ID 145 may also be a 4 bit value, a 6 bit value, a 32 bit value, or other values, depending on the application and use of the service group 130, or the network constraints.
  • a service group neighbor 280 refers to one or more NEs 104-112 that neighbor a respective NE 104-112 and that are members of a common service group 130.
  • the service group mappings 279 may include mappings between an application or services/applications 277, zero or more service group set IDs 145 (shown in FIG. 2 as“SGS ID 145”), one or more service group IDs 140 (shown in FIG. 2 as“SG ID 140”), one or more NE IDs 147, and the priority 188 of the service group 130.
  • the memory 260 of the central entity 103 stores the service group mappings 279.
  • the base database 270 includes the information that has been flooded to all of the NEs 104-112 in the network 100.
  • the base database 270 includes the LSDB 273 and the routing table 276.
  • the LSDB 273 stores information describing a topology of network 100.
  • the routing table 276 includes routing information describing a next hop to every destination in the network 100 from the NE 200.
  • the base database 270 may be the same across all of the NEs 104-112 in the network 100.
  • Each of the one or more service group databases 180 stores information pertaining to a single service group 130.
  • NE 200 only stores service group databases 180 with information regarding service groups 130 in which NE 200 is a member.
  • the service group database 180 may store link-state information, routing information, security information, operations, administration, and maintenance (OAM) information, and/or any other type of information relevant to a service group 130.
  • the base database 270 does not store the information in the service group databases 180.
  • a design that is still subject to frequent change may be preferred to be implemented in software, because re-spinning a hardware implementation is more expensive than re-spinning a software design.
  • a design that is stable that will be produced in large volume may be preferred to be implemented in hardware, for example in an ASIC, because for large production runs the hardware implementation may be less expensive than the software implementation.
  • a design may be developed and tested in a software form and later transformed, by well- known design rules, to an equivalent hardware implementation in an ASIC that hardwires the instructions of the software.
  • a machine controlled by a new ASIC is a particular machine or apparatus, likewise a computer that has been programmed and/or loaded with executable instructions may be viewed as a particular machine or apparatus.
  • FIG. 3 is a schematic diagram 300 illustrating base database 270 and service group databases 180A-B stored at one or more NEs 104-112 in the network 100 according to various embodiments of the disclosure.
  • An NE 104-112 stores the base database 270 and one or more of service group databases 180A-B, depending on the service group 130 in which the NE 104-112 is a member. From the example shown in FIG. 1, NEs 104, 108, 109, 110, and 111 may store the base database 270 and the service group database 180A since NEs 104, 108, 109, 110, and 111 are members of the service group 130A associated with service group database 180A.
  • NEs 105, 106, 107, and 112 may store the base database 270 and the service group database 180B since NEs 105, 106, 107, and 112 are members of the service group 130B associated with service group database 180B.
  • the base database 270 includes multiple databases, including the area X database 303, area Y database 306, AS extended LSAs database 309, and the opaque database 312. It should be appreciated that other databases that are not shown in FIG. 3 may also be stored in the base database 270.
  • the area X database 303 and the area Y database 306 correspond to data relevant to different areas in which the NE 104-112 is included.
  • the network 100 can be divided into multiple areas, or sub-domains within the network 100.
  • An area refers to a logical collection of NEs 104-112 and links 123 that have the same area identification.
  • An area can, for example, correspond to a geographical area.
  • the NE 104-112 in an area receives advertisements from other NEs in the area, and stores the advertisements to maintain an LSDB 273 and a routing table 276 for the area.
  • area X database 303 includes at least one of advertisements 304 with information pertaining to area X, the LSDB 273 describing link-state information of area X, and the routing table 276 describing routing information for paths provisioned in area X.
  • Area X database 303 may include other databases and data not otherwise shown in FIG. 3.
  • area Y database 306 includes at least one of advertisements 307 with information pertaining to area Y, the LSDB 273 describing link-state information of area Y, and the routing table 276 describing routing information for paths provisioned in area Y.
  • Area Y database 306 may include other databases and data not otherwise shown in FIG. 3.
  • the AS extended LSAs database 309 stores information received from areas that the NE 104-112 is not part of.
  • AS extended LSAs database 309 stores advertisements 310 and 311 received from NEs that are not part of area X or area Y. Advertisements 310 and 311 may be, for example, LSAs.
  • the AS extended LSAs database 309 may also store an LSDB 273 and/or a routing table 276 relevant to the areas not associated with the NE 104-112.
  • the AS extended LSAs database 309 may include other databases and data not otherwise shown in FIG. 3.
  • the opaque database 312 stores, for example, TE parameters 315.
  • the TE parameters 315 may include network constraints, such as, for example, maximum bandwidth, maximum reservable bandwidth, unreserved bandwidth, latency, etc., which should be provisioned for the network 100 or areas within the network 100.
  • the opaque database 312 may include other databases and data not otherwise shown in FIG. 3.
  • the service group databases 180A-B each store data that is relevant only to the corresponding service group 130A-B.
  • the service group databases 180A-B store advertisements 320A-N that include information relevant to the corresponding service group 130.
  • service group database 180A includes advertisements 320A-N pertaining to service group 130A
  • service group database 180B includes advertisements 320A-N pertaining to service group 130B.
  • the advertisements 320 A-N may be LSAs.
  • the service group databases 180A-B may store advertisements 320A-N without regard to the area from which the advertisements 320A-N were received.
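  • The layered database organization of FIG. 3 can be pictured as a nested structure: a base database shared by all NEs, plus one dedicated database per service group the NE is a member of. The key names below are illustrative labels for the numbered elements, not names from the disclosure.

```python
# Illustrative per-NE database layout mirroring FIG. 3 (names are assumptions).
ne_databases = {
    "base": {                                 # base database 270, same on all NEs
        "area_X": {"advertisements": [], "lsdb": {}, "routing_table": {}},
        "area_Y": {"advertisements": [], "lsdb": {}, "routing_table": {}},
        "as_extended_lsas": [],               # LSAs from areas the NE is not in
        "opaque": {"te_parameters": {}},      # e.g., bandwidth/latency constraints
    },
    "service_groups": {                       # one entry per group membership
        0x0A: {"advertisements": []},         # service group database 180A
    },
}
```

An NE that is a member of both service groups would carry two entries under `"service_groups"`, while a non-member NE would carry none; the `"base"` portion is identical across all NEs.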
  • FIG. 4 is a schematic diagram 400 illustrating the forwarding of base database 270 and service group databases 180A-B in OSPF packets 415A-B across NEs 403, 406, and 409 in a service group 130A-B according to various embodiments of the disclosure.
  • Schematic diagram 400 shows NEs 403, 406, and 409 that are each similar to NEs 104-112.
  • NEs 403, 406, 409 are interconnected by links 123.
  • NEs 403, 406, and 409 and links 123 may be part of network 100 of FIG. 1.
  • NE 403 is a member of service groups 130A and 130B
  • NE 406 is a member of service group 130A
  • NE 409 is a member of service group 130B.
  • NE 403 stores and maintains the base database 270.
  • NE 403 stores and maintains the service group database 180A and the service group database 180B since NE 403 is a member of both service group 130A and service group 130B.
  • NEs 403 and 406 are service group neighbors 280 for service group 130A, and NEs 403 and 409 are service group neighbors 280 for service group 130B.
  • NE 403 receives service group capability information 160 from both NEs 406 and 409.
  • the service group capability information 160 from NE 406 indicates that NE 406 is a member of service group 130A.
  • NE 403 determines that NEs 403 and 406 are service group neighbors 280 for service group 130A.
  • the service group capability information 160 from NE 409 indicates that NE 409 is a member of service group 130B.
  • NE 403 determines that NEs 403 and 409 are service group neighbors 280 for service group 130B.
  • There are five types of OSPF packets 415A-B: the hello packet, the database description packet, the link state request packet, the link state update packet, and the link state acknowledgement packet.
  • the hello packet is sent over a period of time on all interfaces to establish and maintain neighbor NE 403, 406, and 409 relationships in the network 100.
  • the database description packet is exchanged at the time that adjacencies between neighboring NEs 403, 406, and 409 are being initialized.
  • the database description packet includes descriptions of topological database contents, which can be split into multiple database description packets.
  • the database description packet is also used to establish a master/slave relationship between neighboring NEs 403, 406, and 409.
  • the link state request packet is sent by an NE 403, 406, or 409 to retrieve a database, or to send an update to a database, from a neighboring NE 403, 406, or 409.
  • the link state update packet implements the flooding of LSAs, such as advertisement 165, 304, 307, 310, 311, and 320A-N, through the network 100.
  • the link state acknowledgement packet includes an acknowledgement indicating whether a link state update packet was successfully received by an NE 403, 406, and 409, thereby indicating a reliability of flooding the network 100 with the link state update.
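  • The five packet types above correspond to the type values carried in the OSPF packet header's type field. The numeric values shown below are the standard assignments from RFC 2328 for OSPFv2.

```python
from enum import IntEnum

class OspfPacketType(IntEnum):
    """The five OSPF packet types; values per RFC 2328, section A.3."""
    HELLO = 1                   # establishes/maintains neighbor relationships
    DATABASE_DESCRIPTION = 2    # summarizes database contents during adjacency setup
    LINK_STATE_REQUEST = 3      # requests specific database pieces from a neighbor
    LINK_STATE_UPDATE = 4       # carries the flooded LSAs
    LINK_STATE_ACK = 5          # acknowledges received link state updates
```

The type value is what a receiving NE inspects (via the type field 502 described below) to decide how to process an incoming OSPF packet 415.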
  • NE 403 floods the network 100 with the information stored at NE 403 by forwarding information describing all databases to all neighboring NEs 406 and 409, for example, via a database description packet.
  • the actual databases are exchanged among neighboring NEs 403, 406, and 409 using, for example, link state update packets.
  • NE 403 populates all OSPF packets 415A-B to include the base database 270, since the information included in the base database 270 is relevant to all NEs 403, 406, and 409 in the network 100.
  • NE 403 determines how to populate the OSPF packet 415A-B based on whether a neighboring NE 406 or 409 is a service group neighbor 280.
  • NE 403 only populates the OSPF packets 415A-B with service group databases 180A-B when the neighboring NE 406 or 409 receiving the OSPF packet 415A-B is a member of service group 130A or 130B. Since NE 406 is a member of service group 130A, NE 403 includes the service group database 180A in the OSPF packet 415A sent to NE 406. Since NE 409 is a member of service group 130B, NE 403 includes the service group database 180B in the OSPF packet 415B sent to NE 409.
  • NE 403 sends the OSPF packet 415A including the base database 270 and service group database 180A to NE 406 via link 123.
  • NE 403 sends the OSPF packet 415B including the base database 270 and service group database 180B to NE 409 via link 123.
  • NE 403 may send OSPF packet 415A to NE 406 and OSPF packet 415B to NE 409 simultaneously.
  • NE 403 may send the OSPF packet 415A to NE 406 and the OSPF packet 415B to NE 409 according to a priority 188 of the corresponding service group 130A-B. For example, when service group 130A has a higher priority 188 than service group 130B, NE 403 transmits the OSPF packet 415A to NE 406 first, and then subsequently transmits the OSPF packet 415B to NE 409.
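  • The per-neighbor packet population described above (base database always; a service group database only for a shared group; higher-priority groups first) can be sketched as follows. The function and variable names are illustrative.

```python
def build_updates(base_db, sg_dbs, neighbor_groups, priorities):
    """For one neighbor, choose which databases to send in OSPF packets:
    the base database always, plus the database of each service group
    shared with that neighbor, ordered so higher-priority groups go
    first (illustrative sketch; not a defined packet format)."""
    shared = sorted(neighbor_groups & sg_dbs.keys(),
                    key=lambda g: priorities[g], reverse=True)
    return [base_db] + [sg_dbs[g] for g in shared]
```

For NE 403 this yields the base database 270 plus service group database 180A toward NE 406, and the base database 270 plus service group database 180B toward NE 409; a neighbor sharing no service group receives only the base database.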
  • FIGS. 5A-C are examples of headers or portions of headers included in OSPF packets 415A-B (hereinafter referred to as an “OSPF packet 415”) according to various embodiments of the disclosure.
  • FIG. 5A shows a header 500 of an OSPF packet 415 according to a first embodiment
  • FIG. 5B shows a header 525 of an OSPF packet 415 according to a second embodiment
  • FIG. 5C shows a header 550 of an OSPF packet 415 according to various other embodiments.
  • the header 500 includes various fields, such as a version field 501, a type field 502, a packet length field 503, a router ID field 504, an area ID field 505, a checksum field 506, a service group ID/service group set ID field 507, an instance ID field 508, an AuType field 509, and an authentication field 510.
  • the header 500 may contain other fields or information in other embodiments.
  • the version field 501 indicates an OSPF version number (e.g., OSPFv2 or OSPFv3) implemented by the NE 104-112 sending the OSPF packet 415.
  • the type field 502 indicates a value corresponding to the type of the OSPF packet 415 (e.g., a hello packet, a data description packet, a link state request packet, a link state update, or a link state acknowledgement).
  • the packet length field 503 indicates a length of the OSPF packet 415 in bytes, including the header 500.
  • the router ID field 504 includes a router ID identifying a source of the OSPF packet 415.
  • the area ID field 505 indicates a 32 bit number identifying the area to which the OSPF packet 415 belongs.
  • the checksum field 506 indicates a standard IP checksum of the contents of the OSPF packet 415, including the header 500, but excluding the 64 bit authentication field 510.
  • AuType field 509 indicates an authentication procedure to be used for the OSPF packet 415.
  • the authentication field 510 is a 64 bit field for use by an authentication scheme to authenticate the OSPF packet 415.
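  • the byte layout of the fields listed above can be sketched with Python's struct module. This is an illustrative sketch of the classic OSPFv2 header only; the per-field widths (one octet each for version and type, two octets for packet length and checksum, four for router ID and area ID, two for AuType, eight for authentication) follow the standard OSPFv2 header and are assumptions here, and the service group ID/service group set ID field 507 and instance ID field 508 (which share the 8 bit instance ID field discussed below) are omitted from this layout:

```python
import struct

def pack_ospf_header(version, pkt_type, length, router_id, area_id,
                     checksum, au_type, auth):
    # "!" selects network (big-endian) byte order; B=1, H=2, I=4, Q=8 octets
    return struct.pack("!BBHIIHHQ", version, pkt_type, length,
                       router_id, area_id, checksum, au_type, auth)

hdr = pack_ospf_header(2, 1, 44, 0x0A000001, 0, 0, 0, 0)
assert len(hdr) == 24  # the classic OSPFv2 header occupies 24 octets
```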
  • a standard instance ID field is an 8 bit field enabling multiple instances of OSPF to be used on a single interface. Each instance is assigned a separate instance ID. The instance ID is carried in the standard instance ID field of the OSPF protocol.
  • the following OSPF instance IDs have been defined: 0 (base IP version 4 (IPv4) instance), 1 (base IPv4 multicast instance), and 2 (base IPv4 in-band management instance).
  • the instance IDs 3-127 are reserved for private use by a local network administrator.
  • the instance IDs 128-255 are reserved and unused. Additional details regarding the standard OSPF instance ID field and the OSPF instance IDs are defined in IETF RFC 6549, entitled “OSPFv2 Multi-Instance Extensions,” by A. Lindem, et al., dated March 2012, which is incorporated by reference herein in its entirety.
  • the header 500 splits the standard OSPF instance ID field into the service group ID/service group set ID field 507 and the instance ID field 508.
  • the first four high order bits of the standard OSPF instance ID field are the service group ID/service group set ID field 507.
  • the service group ID/service group set ID field 507 indicates the service group ID 140 or the service group set ID 145.
  • the instance ID field 508 indicates the instance ID.
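  • the split of the 8 bit instance ID field into fields 507 and 508 amounts to a pair of bit operations, sketched below (the helper names are illustrative, not from the disclosure):

```python
# Sketch of the header 500 split: the four high order bits of the 8-bit
# instance ID field carry the service group ID/service group set ID field
# 507, and the four low order bits carry the instance ID field 508.

def pack_sg_instance(service_group_id, instance_id):
    assert 0 <= service_group_id < 16 and 0 <= instance_id < 16
    return (service_group_id << 4) | instance_id

def unpack_sg_instance(octet):
    return octet >> 4, octet & 0x0F

assert pack_sg_instance(0x5, 0x3) == 0x53
assert unpack_sg_instance(0x53) == (0x5, 0x3)
```

One consequence of the split, under this reading, is that only sixteen service group IDs (or service group set IDs) and sixteen instance IDs are representable per interface.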
  • Header 525 is similar to header 500, in that header 525 includes the version field 501, type field 502, packet length field 503, router ID field 504, area ID field 505, checksum field 506, AuType field 509, and authentication field 510.
  • the header 525 may contain other fields or information in other embodiments.
  • header 525 includes only the instance ID field 526.
  • the instance ID field 526 is similar to the standard OSPF instance ID field, which is an 8 bit field set to carry an instance ID.
  • the allocated instance IDs are repurposed to account for the service group IDs 140 or the service group set IDs 145.
  • the instance IDs from 0-31 may be reserved for the standard, and the instance IDs 32-127 may be reserved for private use by the local network administrator.
  • the instance IDs 128-255 may be reserved to be service group IDs 140 or service group set IDs 145.
  • service groups 130A-B or service group set 135 may be identified only by IDs within the range of 128- 255.
  • when an ID in the range 128-255 is indicated in the instance ID field 526, the NE 104-112 receiving the OSPF packet 415 recognizes the corresponding service group 130A-B or service group set 135 identified in the instance ID field 526.
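  • under the header 525 embodiment, a receiver can classify the 8 bit instance ID by range alone; a minimal sketch (the function name is illustrative):

```python
# Range-based repurposing of the instance ID space described above:
# 0-31 reserved for the standard, 32-127 for private use by the local
# network administrator, 128-255 for service group IDs 140 or service
# group set IDs 145.

def classify_instance_id(iid):
    if not 0 <= iid <= 255:
        raise ValueError("instance ID is an 8-bit value")
    if iid <= 31:
        return "standard"
    if iid <= 127:
        return "private"
    return "service-group"
```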
  • FIG. 5C shows the header 550 included in the OSPF packet 415 according to the various other embodiments.
  • Header 550 is similar to header 525, in that header 550 includes the version field 501, type field 502, packet length field 503, router ID field 504, checksum field 506, AuType field 509, and authentication field 510.
  • the header 550 also includes the instance ID field 526, as described above with reference to FIG. 5B, that may indicate a service group ID 140 or a service group set ID 145.
  • the area ID field 551 may be repurposed to indicate the service group ID 140 or the service group set ID 145.
  • the header 550 may contain other fields or information in other embodiments.
  • the header 550 shown in FIG. 5C represents a header of a data description packet (e.g., OSPF packet 415 encoded as a data description packet)
  • the header 550 further includes an interface maximum transmission unit (MTU) field 511, options field 512, flags 513, and a database description sequence number field 516 (shown in FIG. 5C as“DD sequence number 516”).
  • the interface MTU field 511 indicates a size in bytes of the largest IPv6 datagram that can be sent out to the associated link 123 without fragmentation.
  • the options field 512 indicates optional capabilities supported by the NE 104-112.
  • the database description sequence number field 516 indicates a sequence of the collection of database description packets being exchanged between NEs 104-112.
  • Flags 513 include one or more flags or bits that, when set, indicate a capability, functionality, or setting of the OSPF packet 415 or the NE 104-112.
  • the flags 513 include an I bit, an M bit, and an MS bit.
  • the I bit, when set to 1, indicates that the OSPF packet 415 is the first in a sequence of data description packets.
  • the M bit, when set to 1, indicates that more data description packets are to follow.
  • the MS bit, when set to 1, indicates that the NE 104-112 is the master during the database exchange process.
  • the flags 513 include a service group bit 514 as one of the bits.
  • the service group bit 514 indicates whether the service group ID 140 or service group set ID 145 is carried in the instance ID field 526 or the area ID field 551.
  • the service group ID 140 or service group set ID 145 is included in the instance ID field 526 when the service group bit 514 indicates that the instance ID field 526 carries the service group ID 140 or service group set ID 145.
  • when the service group bit 514 is set to 1, the service group ID 140 or service group set ID 145 is indicated in the instance ID field 526.
  • when the service group bit 514 is set to 0, the instance ID field 526 is used as the standard instance ID field 526. In this case, header 550 does not carry a service group ID 140 or service group set ID 145.
  • the service group bit 514 may otherwise be set to 0 to indicate that the instance ID field 526 carries the service group ID 140 or service group set ID 145, and may otherwise be set to 1 to indicate that the instance ID field 526 is used as the standard instance ID field 526.
  • the service group ID 140 or service group set ID 145 is included in the area ID field 551 when the service group bit 514 indicates that the area ID field 551 carries the service group ID 140 or service group set ID 145.
  • the service group bit 514 indicates that the area ID field 551 carries the service group ID 140 or service group set ID 145.
  • when the service group bit 514 is set to 1, the service group ID 140 or service group set ID 145 is indicated in the area ID field 551.
  • when the service group bit 514 is set to 0, the area ID field 551 is used as the standard area ID field 551. In this case, the area ID field 551 does not carry a service group ID 140 or service group set ID 145.
  • the service group bit 514 may otherwise be set to 0 to indicate that the area ID field 551 carries the service group ID 140 or service group set ID 145, and may otherwise be set to 1 to indicate that the area ID field 551 is used as the standard area ID field 551.
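  • the service group bit 514 thus acts as a selector for how a receiver interprets the instance ID field 526 (or, in the FIG. 5C variant, the area ID field 551). The sketch below assumes the bit-set-to-1 polarity; the bit position within flags 513 is illustrative, as the text leaves it open:

```python
SG_BIT = 0x08  # illustrative position of the service group bit 514

def field_meaning(flags, field_name="instance-id"):
    # polarity assumed here: bit set -> the field carries a service group
    # ID 140 or service group set ID 145; bit clear -> standard meaning
    if flags & SG_BIT:
        return "service-group-id"
    return "standard-" + field_name

assert field_meaning(0x08, "area-id") == "service-group-id"
assert field_meaning(0x00, "area-id") == "standard-area-id"
```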
  • an OSPF adjacency is formed between NEs 403, 406, and 409.
  • NEs 403, 406, and 409 determine whether they are service group neighbors 280. During this process, the NEs 403, 406, and 409 go through several state changes before becoming fully adjacent with the neighboring NE 403, 406, or 409. In an OSPF protocol, the process of these changes is described as the “finite state machine” in Network Working Group RFC 2328, entitled “OSPF Version 2,” dated April 1998, by J. Moy, which is incorporated by reference in its entirety.
  • FIGS. 6A-C are message sequence diagrams illustrating a process of creating service group neighbors 280 and exchanging service group databases 180A-B during the OSPF finite state machine according to various embodiments of the disclosure. Specifically, FIGS. 6A-C are message sequence diagrams illustrating different embodiments by which NEs 403, 406, and 409 implement the stages of becoming fully adjacent while accounting for the service groups 130A-B and service group set 135.
  • FIG. 6A shows a message sequence diagram 600 illustrating a first embodiment by which NEs 403, 406, and 409 become fully adjacent while accounting for the service groups 130A-B and service group set 135.
  • the message sequence diagram 600 is implemented when NE 403 already stores and maintains the base database 270, the service group database 180 A, and the service group database 180B (see FIG. 4).
  • the message sequence diagram 600 is also implemented before NE 403 fully establishes an adjacency with NEs 406 and 409.
  • the term “establishing an adjacency” as used herein refers to the process of NEs 403, 406, and 409 determining neighboring NEs 403, 406, and 409 and fully exchanging databases with the neighboring NEs 403, 406, and 409.
  • the NEs 403, 406, and 409 perform the initialization stage 604 (also referred to as “init”), in which NEs 406 and 409 receive hello packets 605A and 605B, respectively, from NE 403.
  • the hello packet 605 A sent from NE 403 to NE 406 identifies NE 403, but does not identify NE 406.
  • the hello packet 605B sent from NE 403 to NE 409 identifies NE 403, but does not identify NE 409.
  • the NEs 403, 406, and 409 perform the bi-directional communication stages 606- 607 (also referred to as“2-way”).
  • NE 406 transmits a hello packet 605C to NE 403 identifying both NEs 403 and 406 (where NE 403 is labeled as the adjacent node (AN)).
  • NE 409 transmits a hello packet 605D to NE 403 identifying both NEs 403 and 409.
  • NE 403 transmits a hello packet 605E to NE 406 identifying both NEs 403 and 406, and NE 403 transmits a hello packet 605F to NE 409 identifying both NEs 403 and 409.
  • NEs 406 and 409 have received hello packets 605E and 605F identifying NEs 406 and 409 in, for example, the neighbor field of the hello packet 605E and 605F.
  • NEs 406 and 409 now determine to become adjacent with NE 403.
  • NEs 403, 406, and 409 perform the exchanging of information stage 607 (also part of the bi-directional communication stage), which includes the exstart stage and the exchange stage.
  • NEs 403, 406, and 409 establish a master-slave relationship between each other (e.g., one NE is a master and one NE is a slave) and choose an initial sequence number for adjacency formation.
  • NEs 403 and 406 establish the master-slave relationship at arrow 608A
  • NEs 403 and 409 establish the master-slave relationship at arrow 608B.
  • NEs 403, 406, and 409 exchange database description packets (e.g., an OSPF packet 415 encoded as a database description packet).
  • the database description packets contain LSA headers describing contents of the databases stored at the respective NEs 403, 406, and 409.
  • NEs 403, 406, and 409 perform the loading stage 609. During the loading stage 609, NEs 403 and 406 exchange the base databases 270 relevant to the respective NEs 403 and 406. At arrow 610A, NEs 403 and 406 exchange base databases 270 with each other. At arrow 610B, NEs 403 and 409 exchange base databases 270 with each other.
  • NEs 403, 406, and 409 also exchange their respective service group capability information 160, which indicates the service groups 130A-B in which they are members.
  • NEs 403 and 406 exchange their respective service group capability information 160, which indicates that NE 403 is a member of service group 130A and 130B, and NE 406 is a member of service group 130 A.
  • NEs 403 and 409 exchange their respective service group capability information 160, which indicates that NE 403 is a member of service group 130 A and 130B, and NE 409 is a member of service group 130B.
  • NE 403 determines an order in which to transmit the service group databases 180A-B based on the service group capability information 160 and a priority 188 of each service group 130A-B.
  • NE 406 is a member of service group 130A, corresponding to the service group database 180A
  • NE 409 is a member of service group 130B, corresponding to service group database 180B, as indicated by the service group capability information 160.
  • the service group 130A has a higher priority 188 than the service group 130B, as indicated by the service group capability information 160.
  • NE 403 transmits the service group database 180A to NE 406 first at arrow 617, before transmitting the service group database 180B to NE 409 at arrow 618.
  • NEs 406 and 409 transmit link state request packets to NE 403, in which the link state request packets specify a list of data (e.g., LSAs) that the NEs 406 and 409 wish to receive.
  • the NE 403, 406, or 409 receiving the link state request packets should have the data indicated in the link state request packet stored locally.
  • NE 403, 406, or 409 responds to the link state request packet with one or more link state update packets, in which the link state update packets carry the data specified in the link state request packet.
  • NE 406 sends one or more link state request packets to NE 403, in which the link state request packets indicate the data included in the base database 270 of NE 403.
  • NE 403 responds to the link state request packets by sending one or more link state update packets back to NE 406, in which the link state update packets carry the data from the base database 270.
  • NE 409 sends one or more link state request packets to NE 403, in which the link state request packets indicate the data included in the base database 270 of NE 403.
  • NE 403 responds to the link state request packets by sending one or more link state update packets back to NE 409, in which the link state update packets carry the data from the base database 270.
  • the NEs 403, 406, and 409 exchange similar link state request packets and link state update packets to synchronize the service group databases 180A-B.
  • NE 406 sends one or more link state request packets to NE 403, in which the link state request packets indicate the data included in the service group database 180A of NE 403.
  • NE 403 responds to the link state request packets by sending one or more link state update packets back to NE 406, in which the link state update packets carry the data from the service group database 180 A.
  • NE 409 sends one or more link state request packets to NE 403, in which the link state request packets indicate the data included in the service group database 180B of NE 403.
  • NE 403 responds to the link state request packets by sending one or more link state update packets back to NE 409, in which the link state update packets carry the data from the service group database 180B.
  • NEs 403, 406, and 409 reach the full stage 619, which occurs when the databases across neighboring NEs 403, 406, 409 are fully exchanged and consistent across all neighboring NEs 403, 406, and 409.
  • the adjacency and consistency of the base database 270 between NEs 403 and 406 are considered full, as indicated by arrow 620A. That is, NEs 403 and 406 maintain the same base database 270.
  • the adjacency and consistency of the base database 270 between NEs 403 and 409 are considered full, as indicated by arrow 620B. That is, NEs 403 and 409 maintain the same base database 270.
  • the term “full” refers to the stage when adjacent NEs 403, 406, and 409 have completed exchanging a particular database such that the database is fully synchronized between the adjacent NEs 403, 406, and 409.
  • the adjacency and consistency of the service group database 180A between NEs 403 and 406 are considered full, as indicated by arrow 621.
  • subsequent OSPF packets 415, such as link state update packets, may be sent between NEs 403 and 406 at arrow 625A.
  • the subsequent OSPF packets 415 may include updates to the service group database 180A.
  • NEs 403 and 406 use the adjacency neighbor relationship to flood the updates to one another to maintain the same service group database 180A across both NEs 403 and 406.
  • the base databases 270 and service group databases 180A-B are exchanged (e.g., flooded) in parallel.
  • the flooding of the service group databases 180A-B is performed according to a priority 188 of the service group database 180A-B, such that higher priority service group databases 180A-B are flooded first.
  • the base databases 270 and service group databases 180A-B are flooded independently of each other and reach the full stage 619 independent of each other.
  • the databases are transmitted in separate packets or separate sets of packets considered part of a single transmission.
  • FIG. 6B shows a message sequence diagram 650 illustrating a second embodiment by which NEs 403, 406, and 409 become fully adjacent while accounting for the service groups 130A-B and service group set 135.
  • the message sequence diagram 650 is similar to message sequence diagram 600, except that in the embodiment shown in message sequence diagram 650, the base databases 270 and the service group databases 180A-B are not flooded independently of each other. Specifically, the service group databases 180A-B may not be exchanged until the base databases 270 have been fully exchanged in the message sequence diagram 650.
  • NEs 403, 406, and 409 first exchange hello packets 605A-F to perform the initialization stage 604 and the bi-directional communication stage 606. NEs 403, 406, and 409 then establish the master-slave relationship at the exchanging of information stage 607 (arrows 608A-B) before beginning the first loading stage 609A.
  • NEs 403 and 406 exchange base databases 270 at arrow 615A
  • NEs 403 and 409 exchange base databases 270 at arrow 615B.
  • the service group capability information 160 is also exchanged between NEs 403, 406, and 409.
  • NEs 403 and 406 exchange service group capability information 160 at arrow 610A
  • NEs 403 and 409 exchange service group capability information 160 at arrow 610B.
  • NEs 406 and 409 transmit link state request packets to NE 403, in which the link state request packets specify a list of data from the base database 270 of NE 403 that NEs 406 and 409 wish to receive.
  • NE 403 transmits link state update packets back to NEs 406 and 409, in which the link state update packets include the data from the base database 270.
  • the base database 270 is considered fully synchronized.
  • NEs 403, 406, and 409 reach a base database 270 full stage 619A, which occurs when the base databases 270 across neighboring NEs 403, 406, 409 are fully exchanged and consistent across all neighboring NEs 403, 406, and 409.
  • the adjacency and consistency of the base database 270 between NEs 403 and 406 are considered full, as indicated by arrow 620A.
  • the adjacency and consistency of the base database 270 between NEs 403 and 409 are considered full, as indicated by arrow 620B.
  • the service group database loading stage 609B occurs after the base databases 270 have been fully exchanged between NEs 403, 406, and 409. During this loading stage 609B, the service group databases 180A-B may be exchanged between NEs 403, 406, and 409 according to a priority 188 of the service group 130.
  • NE 403 determines an order in which to transmit the service group databases 180A-B based on the service group capability information 160 and a priority 188 of each service group 130A-B.
  • the service group 130A has a higher priority 188 than the service group 130B, as indicated by the service group capability information 160.
  • NE 403 transmits the service group database 180A to NE 406 first at arrow 617, before transmitting the service group database 180B to NE 409 at arrow 618.
  • NEs 406 and 409 transmit link state request packets to NE 403, in which the link state request packets specify a list of data from the service group database 180A-B of NE 403 that NEs 406 and 409 wish to receive.
  • NE 406 transmits link state request packets listing data from the service group database 180A to NE 403
  • NE 409 transmits link state request packets listing data from the service group database 180B to NE 403.
  • NE 403 transmits link state update packets back to NEs 406 and 409, in which the link state update packets include the data from service group databases 180A-B.
  • NE 403 transmits data from the service group database 180A in a link state update packet to NE 406, and NE 403 transmits data from the service group database 180B in a link state update packet to NE 409.
  • the service group databases 180A-B are considered fully synchronized.
  • NEs 403, 406, and 409 reach a service group database 180A-B full stage 619B, which occurs when the service group databases 180A-B across neighboring NEs 403, 406, 409 are fully exchanged and consistent across all neighboring NEs 403, 406, and 409.
  • the adjacency and consistency of the service group database 180A between NEs 403 and 406 are considered full, as indicated by arrow 621.
  • NEs 403, 406, and 409 continue to flood service group databases 180A- B to each other according to a priority 188 of the service group database 180A-B in a similar fashion as described above.
  • the base databases 270 and service group databases 180A-B are not exchanged (e.g., flooded) in parallel. Instead, the base databases 270 are exchanged in full before service group databases 180A-B can begin to be exchanged.
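  • the sequencing constraint of FIG. 6B (base database 270 fully synchronized before any service group database 180A-B, then service group databases in descending priority 188) can be sketched as an ordering function; the names and the numeric priority convention are illustrative assumptions:

```python
def exchange_order(group_priority):
    """Yield database labels in the order FIG. 6B exchanges them."""
    yield "base"  # the base database 270 must reach the full stage first
    # service group databases follow, highest priority 188 first
    for group in sorted(group_priority, key=group_priority.get, reverse=True):
        yield "service-group-" + group

assert list(exchange_order({"130A": 2, "130B": 1})) == [
    "base", "service-group-130A", "service-group-130B"]
```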
  • FIG. 6C shows a message sequence diagram 675 illustrating a third embodiment by which NEs 403, 406, and 409 become fully adjacent while accounting for the service groups 130A-B and service group set 135.
  • the message sequence diagram 675 is similar to message sequence diagram 600, except that in the embodiment shown in message sequence diagram 675, NEs 403, 406, and 409 exchange service group capability information 160 and service group databases 180A-B before exchanging base databases 270. This may occur, for example, when the priority 188 of the service group databases 180A-B indicates that the service group databases 180A-B include information of a higher priority 188 than the information included in the base databases 270.
  • NEs 403, 406, and 409 perform the initialization stage 604, bidirectional communication stage 606, and exchange information stage 607 twice. In this way, NEs 403, 406, and 409 perform the adjacency process twice: a first time to exchange the service group capability information 160 and service group databases 180A-B, and a second time to exchange the base databases 270.
  • NEs 403, 406, and 409 first exchange hello packets 605A-F to perform the initialization stage 604 and the bi-directional communication stage 606 to perform the first adjacency. NEs 403, 406, and 409 then establish the master-slave relationship at the exchanging of information stage 607 (arrows 608A-B) before beginning the first loading stage 609A. During the first loading stage 609A, NEs 403, 406, and 409 exchange the service group capability information 160 and service group databases 180A-B. At arrow 610A, NE 403 and NE 406 exchange the service group capability information 160, and at arrow 610B, NE 403 and NE 409 exchange the service group capability information 160.
  • NE 403 transmits the service group database 180A of service group 130A to NE 406 since the service group 130A is associated with a higher priority 188 than the service group 130B.
  • NE 403 transmits the service group database 180B of the service group 130B to NE 409.
  • NEs 406 and 409 transmit link state request packets to NE 403, in which the link state request packets specify a list of data from the service group database 180A-B of NE 403 that NEs 406 and 409 wish to receive.
  • NE 403 transmits link state update packets back to NEs 406 and 409, in which the link state update packets include the data from service group databases 180A-B.
  • the service group databases 180A-B may be fully exchanged to reach the full stage 619B before moving on to performing the second adjacency.
  • the adjacency and consistency of the service group database 180A between NEs 403 and 406 are considered full.
  • the adjacency and consistency of the service group database 180B between NEs 403 and 409 are considered full. That is, NEs 403 and 406 maintain the same service group database 180A, and NEs 403 and 409 maintain the same service group database 180B.
  • NEs 403, 406, and 409 again exchange hello packets 605A-F to perform the initialization stage 604 and the bi-directional communication stage 606.
  • NEs 403, 406, and 409 also again establish the master-slave relationship at the exchanging of information stage 607 (arrows 608A-B) before beginning the loading stage 609B to exchange base databases 270.
  • NEs 403 and 406 exchange the base databases 270 at arrow 615 A
  • NEs 403 and 409 exchange the base databases 270 at arrow 615B.
  • NEs 406 and 409 transmit link state request packets to NE 403, in which the link state request packets specify a list of data from the base database 270 of NE 403 that NEs 406 and 409 wish to receive.
  • NE 403 transmits link state update packets back to NEs 406 and 409, in which the link state update packets include the data from the base database 270.
  • the base database 270 is considered fully synchronized
  • the base databases 270 may be fully exchanged to reach the full stage 619A. After completing exchange of the base databases 270 between NEs 403, 406, and 409, the adjacency and consistency of the base databases 270 between NEs 403, 406, and 409 are considered full. Any subsequent changes to the base databases 270 are flooded to neighboring NEs 403, 406, and 409.
  • NEs 403, 406, and 409 can be easily restored upon restarting or reloading using the databases stored at the neighboring NEs 403, 406, and 409.
  • the Network Working Group RFC 3623 entitled “Graceful OSPF Restart,” by P. Pillay-Esnault, dated November 2003, incorporated by reference herein in its entirety, describes an OSPF mechanism by which NEs 403, 406, and 409 use neighboring NEs 403, 406, and 409 to retrieve databases upon restarting or restoring the NE 403, 406, and 409.
  • NE 403 may retrieve the databases from neighboring NEs 406 and 409. NE 403 may retrieve the base database 270 from either NE 406 or 409. NE 403 may retrieve the service group database 180A from NE 406, and NE 403 may retrieve the service group database 180B from NE 409.
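  • the restart recovery described above can be sketched as a lookup of eligible sources: any neighbor holds the base database 270, but a service group database 180A-B can only come from a neighbor that is a member of that group. The helper and its names are illustrative, not from the disclosure:

```python
def recovery_sources(neighbor_groups, needed_groups):
    """Map each database a restarting NE needs to its eligible neighbors."""
    # the base database 270 is maintained by every neighboring NE
    sources = {"base": sorted(neighbor_groups)}
    for g in needed_groups:
        # a service group database is only held by members of that group
        sources[g] = sorted(n for n, gs in neighbor_groups.items() if g in gs)
    return sources

src = recovery_sources({"ne406": {"130A"}, "ne409": {"130B"}},
                       ["130A", "130B"])
assert src["base"] == ["ne406", "ne409"]
assert src["130A"] == ["ne406"]
assert src["130B"] == ["ne409"]
```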
  • the embodiments disclosed herein describe an enhancement to the OSPF protocols such that networks 100 implementing service groups 130A-B may reduce the amount of information flooded through the network 100 and processed by each NE 403, 406, and 409.
  • the embodiments disclosed herein also enable the NEs 403, 406, and 409 in the network 100 to maintain consistent databases across only the relevant NEs in the network 100. In this way, standard OSPF mechanisms of flooding and restarting can still be implemented successfully.
  • FIG. 7 is a flowchart illustrating a method 700 for implementing service groups 130A-B (hereinafter referred to as “service group 130”) and maintaining service group databases 180A-B (hereinafter referred to as “service group database 180”) according to various embodiments of the disclosure.
  • Method 700 is implemented after two NEs 104-112, 200, or NEs 403, 406, or 409 (hereinafter referred to as“NEs”) store the base database 270 locally in a memory 260 of the NE.
  • the NE generates a service group database 180 in a memory 260 of the NE.
  • the NE receives advertisements 320A-N containing information pertaining to a service group 130, and the information or the advertisement itself may be stored in the service group database 180.
  • the service group database 180 is associated with a single service group 130 or a single service group set 135.
  • the NE is a member of the service group 130 or the service group set 135.
  • the NE receives an OSPF packet 415 comprising a service group ID 140 from a neighboring NE.
  • the OSPF packet 415 may also comprise a service group set ID 145.
  • the NE determines the service group 130 based on the service group ID 140 or the service group set 135 based on the service group set ID 145.
  • the NE updates the service group database 180 to include data from the OSPF packet 415.
  • when the OSPF packet 415 is a link state update packet including another advertisement 320A-N pertaining to a service group 130, the NE adds the advertisement 320A-N to the service group database 180.
  • FIG. 8 is a flowchart illustrating a method 800 for implementing service groups 130A-B (hereinafter referred to as “service group 130”) and maintaining service group databases 180A-B (hereinafter referred to as “service group database 180”) according to various embodiments of the disclosure.
  • Method 800 is implemented after two NEs 104-112, 200, or NEs 403, 406, or 409 (hereinafter referred to as “NEs”) store the base database 270 locally in a memory 260 of the NE.
  • an NE receives information about a first service group 130 and information about a second service group 130 from a neighboring NE.
  • the neighboring NE may be an adjacent NE or may be an NE that is more than one hop away from the NE.
  • the NE updates a first service group database 180 to include the information about the first service group 130.
  • the NE updates a second service group database 180 to include the information about the second service group 130.
  • the NE determines that a second neighboring NE is included in the first service group 130, but is not included in the second service group 130. The second neighboring NE is different from the first neighboring NE.
  • the NE transmits a packet comprising the information about the first service group 130 to the second neighboring NE.
  • the information about the second service group 130 is not transmitted to the second neighboring NE since the second neighboring NE is not a member of (e.g., is not included in) the second service group 130.
  • the packet may be similar to the OSPF packet 415.
  • FIG. 9 is a schematic diagram illustrating an apparatus 900 for implementing service groups 130 and maintaining service group databases 180 according to various embodiments of the disclosure.
  • the apparatus 900 includes a means for generating 903, a means for receiving 906, and a means for updating 909.
  • the means for generating 903 includes a means for generating a service group database 180 in a memory 260 of the NE.
  • the NE receives advertisements 320A-N containing information pertaining to a service group 130, and the information or the advertisement itself may be stored in the service group database 180.
  • the service group database 180 is associated with a single service group 130 or a single service group set 135.
  • the NE is a member of the service group 130 or the service group set 135.
  • the means for receiving 906 includes a means for receiving an OSPF packet 415 comprising a service group ID 140 from a neighboring NE.
  • the OSPF packet 415 may also comprise a service group set ID 145.
  • the NE determines the service group 130 based on the service group ID 140 or the service group set 135 based on the service group set ID 145.
  • the means for updating 909 includes a means for updating the service group database 180 to include data from the OSPF packet 415.
  • the OSPF packet 415 is a link state update packet including another advertisement 320A-N pertaining to a service group 130.
  • the NE adds the advertisement 320A-N to the service group database 180.
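The receive-and-update path of apparatus 900 (the means for receiving 906 and the means for updating 909) can be illustrated with a minimal sketch; the packet field names here are assumptions, not taken from this disclosure:

```python
# Sketch of apparatus 900: an incoming OSPF link state update carrying
# a service group ID is stored in the database dedicated to that
# service group, and only if the local NE is a member of the group.

def update_service_group_db(databases, packet, local_memberships):
    group_id = packet["service_group_id"]
    # An NE maintains dedicated databases only for groups it belongs to.
    if group_id not in local_memberships:
        return False
    # Create the dedicated database on first use, then append the
    # advertisement carried in the packet.
    databases.setdefault(group_id, []).append(packet["advertisement"])
    return True

dbs = {}
pkt = {"service_group_id": 140, "advertisement": "lsa-320A"}
updated = update_service_group_db(dbs, pkt, local_memberships={140})
```

A packet whose service group ID the local NE does not belong to is simply not stored, which is what keeps each service group database limited to data for a single group.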
  • FIG. 10 is a schematic diagram illustrating an apparatus 1000 for implementing service groups 130 and maintaining service group databases 180 according to various embodiments of the disclosure.
  • the apparatus 1000 includes a means for receiving 1003, a means for updating 1006, a means for determining 1009, and a means for transmitting 1012.
  • the means for receiving 1003 comprises a means for receiving information about a first service group 130 and information about a second service group 130 from a first neighboring NE.
  • the means for updating 1006 comprises a means for updating a first service group database 180 to include the information about the first service group 130 and a means for updating a second service group database 180 to include the information about the second service group 130.
  • the means for determining 1009 comprises a means for determining that a second neighboring NE is included in the first service group 130 and is not included in the second service group 130.
  • the means for transmitting 1012 comprises a means for transmitting a packet comprising the information about the first service group 130 to the second neighboring NE, wherein the information about the second service group 130 is not transmitted to the second neighboring NE.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

A method implemented by a network element (NE) comprises storing, in a memory of the NE, a service group database, wherein the service group database comprises only data associated with a service group, and wherein the service group comprises the NE; receiving, from a neighboring NE, a packet comprising a service group identifier (ID) identifying the service group; and updating the service group database to include data from the packet.
PCT/US2020/035179 2019-05-31 2020-05-29 Databases dedicated to an Open Shortest Path First (OSPF) service group WO2020243465A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201962855632P 2019-05-31 2019-05-31
US62/855,632 2019-05-31

Publications (1)

Publication Number Publication Date
WO2020243465A1 true WO2020243465A1 (fr) 2020-12-03

Family

ID=71787073

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2020/035179 WO2020243465A1 (fr) 2019-05-31 2020-05-29 Databases dedicated to an Open Shortest Path First (OSPF) service group

Country Status (1)

Country Link
WO (1) WO2020243465A1 (fr)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019164637A1 (fr) 2018-02-23 2019-08-29 Futurewei Technologies, Inc. Advertising and programming of path routes using interior gateway protocols
WO2019236221A1 (fr) 2018-06-04 2019-12-12 Futurewei Technologies, Inc. Preferred path route graphs in a network

Non-Patent Citations (12)

Title
"Converged Personal Network Service Core Technical Specification", OMA-TS-CPNS_Core-V1_1-20121115-D, version 1.1, 15 November 2012, pages 1-339, XP064137294 (cited by examiner)
A. Lindem, "OSPFv2 Multi-Instance Extensions", RFC 6549, IETF, March 2012
C. Filsfils, "Segment Routing Architecture", RFC 8402, IETF, July 2018
E. Crabbe, "Path Computation Element Communication Protocol (PCEP) Extensions for PCE-Initiated LSP Setup in a Stateful PCE Model", RFC 8281, IETF, December 2017
H. Long, "OSPF Traffic Engineering (OSPF-TE) Link Availability Extension for Links with Variable Discrete Bandwidth", RFC 8330, IETF, February 2018
J. Moy, "OSPF Version 2", RFC 2328, Network Working Group, April 1998
Jihye Lee et al., "SG Member Update Procedure and Message", OMA-CD-CPNS-2010-0331R01-CR_SG_Member_Update_Message, 3 January 2011, pages 1-6, XP064035978 (cited by examiner)
Jihye Lee et al., "SG Member Update Procedure and Message", OMA-CD-CPNS-2010-0331R02-CR_SG_Member_Update_Message, 3 January 2011, pages 1-6, XP064036024 (cited by examiner)
R. Alimi, "Application Layer Traffic Optimization (ALTO) Protocol", RFC 7285, IETF, September 2014
R. Colton, "OSPF for IPv6", RFC 5340, Network Working Group, July 2008
S. Giacalone, "OSPF Traffic Engineering (TE) Metric Extensions", RFC 7471, IETF, March 2015
U. Chunduri, "Preferred Path Routing (PPR) in OSPF", Internet-Draft, LSR Working Group, 8 March 2020

Similar Documents

Publication Publication Date Title
USRE49108E1 (en) Simple topology transparent zoning in network communications
US11943136B2 (en) Advanced preferred path route graph features in a network
US9716648B2 (en) System and method for computing point-to-point label switched path crossing multiple domains
EP3414874B1 (fr) Protocole de passerelle frontière destiné à la communication parmi des unités de commande de réseau défini de logiciel
US20200396162A1 (en) Service function chain sfc-based communication method, and apparatus
CN109314663B (zh) Pcep扩展用于支持分布式计算、多项服务和域间路由的pcecc
US11909596B2 (en) Connections and accesses for hierarchical path computation element (PCE)
US11431630B2 (en) Method and apparatus for preferred path route information distribution and maintenance
US11632322B2 (en) Preferred path route graphs in a network
US11671517B2 (en) Compressed data transmissions in networks implementing interior gateway protocol
US11502940B2 (en) Explicit backups and fast re-route mechanisms for preferred path routes in a network
US20230010837A1 (en) Fault diagnosis method and apparatus thereof
WO2020243465A1 (fr) Databases dedicated to an Open Shortest Path First (OSPF) service group
CN112055954B (zh) 网络中优选路径路由的资源预留和维护
WO2020231740A1 (fr) Open Shortest Path First (OSPF) service grouping capability, membership, and flooding
WO2020227412A1 (fr) Path-aware flooding of an Open Shortest Path First (OSPF) protocol
US11888596B2 (en) System and method for network reliability
WO2020021558A1 (fr) Procédés, appareil et supports lisibles par machine se rapportant à un calcul de chemin dans un réseau de communication
US20220393936A1 (en) System and Method for Border Gateway Protocol (BGP) Controlled Network Reliability
US20230179515A1 (en) Routing protocol broadcast link extensions
Vistro et al. A Review and Comparative Analysis of Routing Protocols in Network
WO2020247742A1 (fr) Vérification et négociation de connectivité de réseau
CN117749700A (zh) 对应关系的获取方法、参数通告方法、装置、设备及介质

Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 20746374; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: PCT application non-entry in European phase (Ref document number: 20746374; Country of ref document: EP; Kind code of ref document: A1)