US12267231B2 - Dynamic path computation in networks based on automatically detected unavoidable risks - Google Patents


Info

Publication number
US12267231B2
US12267231B2
Authority
US
United States
Prior art keywords
shared risks
remote
risks
ignore list
path
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US17/897,675
Other versions
US20240073125A1 (en)
Inventor
Bhupendra Yadav
Prabhu Vaithilingam
Gerald Smallegange
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ciena Corp
Original Assignee
Ciena Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ciena Corp filed Critical Ciena Corp
Priority to US17/897,675 priority Critical patent/US12267231B2/en
Assigned to CIENA CORPORATION reassignment CIENA CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: SMALLEGANGE, GERALD, VAITHILINGAM, PRABHU, YADAV, BHUPENDRA
Priority to CN202380063094.4A priority patent/CN119856474A/en
Priority to PCT/US2023/030834 priority patent/WO2024049678A1/en
Priority to EP23772637.7A priority patent/EP4581810A1/en
Publication of US20240073125A1 publication Critical patent/US20240073125A1/en
Application granted granted Critical
Publication of US12267231B2 publication Critical patent/US12267231B2/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00 Routing or path finding of packets in data switching networks
    • H04L45/12 Shortest path evaluation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00 Routing or path finding of packets in data switching networks
    • H04L45/02 Topology update or discovery
    • H04L45/036 Updating the topology between route computation elements, e.g. between OpenFlow controllers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00 Routing or path finding of packets in data switching networks
    • H04L45/22 Alternate routing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00 Routing or path finding of packets in data switching networks
    • H04L45/28 Routing or path finding of packets in data switching networks using route fault recovery

Definitions

  • the present disclosure relates generally to networking and computing. More particularly, the present disclosure relates to systems and methods for dynamic path computation in networks based on automatically detected unavoidable risks.
  • SRG Shared Risk Group
  • SRG is a concept in network routing that different connections may suffer from a common failure if they share a common risk or a common SRG.
  • SRG can be used with optical networks, Ethernet networks, Multiprotocol Label Switching (MPLS) networks including the Generalized Multiprotocol Label Switching (GMPLS) networks, Internet Protocol (IP) networks, and the like as well as multi-layer networks.
  • MPLS Multiprotocol Label Switching
  • GMPLS Generalized Multiprotocol Label Switching
  • IP Internet Protocol
  • An SRG failure makes multiple connections go down because of the failure of a common resource those connections share.
  • SRGs include Shared Risk Link Group (SRLG), Shared Risk Node Group (SRNG), Shared Risk Equipment Group (SREG), etc.
  • SRLG is a risk on a cable or the like
  • an SRNG is a risk associated with a node or network element
  • an SREG is a risk that extends within the node or network element itself, e.g., down to a module or other type of equipment.
  • the descriptions herein may reference SRLGs for illustration purposes, but those skilled in the art will recognize any and all types of SRG risk representation are contemplated herein.
  • SRLGs refer to situations where links in a network share a common fiber (or a common physical attribute such as fiber conduit or the like). If one link fails, other links in the group may fail too, i.e., links in the group have a shared risk which is represented by the SRLG.
  • SRLGs are used in optical, Ethernet, MPLS, GMPLS, and/or IP networks and used for route computation for diversity.
  • a link at an upper layer has a connection at a lower layer, and thus any network resources (links, nodes, line cards, and the like) used by the lower layer connection can be represented as SRLGs on the upper layer links. That is, MPLS tunnels, OTN connections, IP routes, etc. all operate on a lower layer optical network (Layer 0).
  • Layer 0 a lower layer optical network
  • an MPLS link at an MPLS layer may have an SRLG to represent a connection at Layer 0 and thus any optical nodes, amplifiers, and multiplexing components, as well as fiber cables and conduits used by the Layer 0 connection, are accounted for in SRLGs on the MPLS link.
  • the SRLGs are used in the MPLS route computation to ensure the protected tunnels share no common risks in the optical network. That is, route or path computation can compare SRLGs of links between two paths to determine if they are disjoint or not. If two paths have a common risk, i.e., share an SRLG, there is a possibility of a common fault taking both paths down. Of course, this defeats the purpose of protection and is to be avoided.
  • SRLGs in MPLS Traffic Engineering include associated links that share the same resources, i.e., all links will fail if that resource fails.
  • SRLG can be represented by a 32-bit number and is unique in the Interior Gateway Protocol (IGP) (e.g., Intermediate System-Intermediate System (ISIS) or Open Shortest Path First (OSPF)) domain.
  • IGP Interior Gateway Protocol
  • ISIS Intermediate System-Intermediate System
  • OSPF Open Shortest Path First
  • LSP Label Switched Path
  • a backup path can be made completely diverse from the primary path by excluding all SRLGs used by the primary path from the calculation of the backup path. This ensures that the backup path is not affected by the failure of any resource used by the primary path.
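The exclusion described above can be sketched in Python (a minimal illustration; the topology data, link names, and function names are assumptions, not from the patent). Links carrying any SRLG of the primary path are pruned before the backup computation runs:

```python
def exclude_primary_risks(links, link_srlgs, primary_srlgs):
    """Keep only links that carry none of the primary path's SRLGs,
    so any path over the remaining links is SRLG-diverse from the primary."""
    return [link for link in links
            if link_srlgs[link].isdisjoint(primary_srlgs)]

# Illustrative topology data (assumed).
link_srlgs = {
    "A-B": {4211, 6789},
    "A-C": {4011},
    "B-D": {6789, 6123},
    "C-D": {9876},
}
primary_srlgs = {4211, 6789}  # SRLGs traversed by the primary path
usable = exclude_primary_risks(link_srlgs.keys(), link_srlgs, primary_srlgs)
print(usable)  # ['A-C', 'C-D']
```

A shortest-path computation over the surviving links then yields a backup that cannot share a failure with the primary.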
  • Unavoidable SRLGs are ones which physically cannot be avoided. An example of such can be an optical risk at a source or destination node.
  • There are existing approaches to deal with such unavoidable risks, including not using SRLGs on contested resources; using loose SRLGs, i.e., SRLGs that can be ignored in calculations; using weighted SRLGs; and manually configuring unavoidable SRLGs on a case-by-case basis.
  • all these existing approaches are configuration intensive, i.e., there is no automation and no way for the network to learn. Further, this creates intensive configuration changes whenever the network changes.
  • the present disclosure relates to systems and methods for dynamic path computation in networks based on automatically detected unavoidable risks.
  • the present disclosure includes an adjustment to path computation to automatically detect and address unavoidable SRLGs.
  • SRLGs the shared risks are referred to as SRLGs, but those skilled in the art will recognize these can be any types of risks, i.e., also SRNG, SREG, and the like.
  • Unavoidable SRLGs can be incorporated in ignore lists of varying scopes, newly discovered unavoidable SRLGs can be automatically flooded in the network, unavoidable SRLG lists can be automatically generated from an IGP Shortest Path First (SPF) tree, and the unavoidable SRLGs are automatically accounted for in a Constrained SPF (CSPF) computation. This minimizes configuration and provides dynamic capability for path compute and network events.
  • SPF IGP Shortest Path First
  • CSPF Constrained SPF
  • the present disclosure includes a method having steps, an apparatus with a processor configured to implement the steps, and a non-transitory computer-readable medium with instructions that, when executed, cause one or more processors to perform the steps.
  • the steps include receiving a plurality of shared risks associated with any of one or more network layers, network links, and network equipment; automatically creating a local ignore list for a source node and a remote ignore list for a destination node, based on the plurality of shared risks; and utilizing the plurality of shared risks in a path computation for a path between the source node and the destination node and ignoring any of the plurality of shared risks in the local ignore list and the remote ignore list.
  • the local ignore list can include local shared risks of the plurality of shared risks that the path cannot egress the source node without traversing the local shared risks
  • the remote ignore list can include remote shared risks of the plurality of shared risks that the path cannot ingress the destination node without traversing the remote shared risks.
  • the automatically creating the local ignore list can include steps of determining all egress interfaces at the source node through which the destination node is reachable; performing an intersection of all shared risks of the plurality of shared risks on the egress interfaces; and providing the intersection as the local ignore list.
  • the automatically creating the remote ignore list can include steps of computing all possible paths to the destination to determine all ingress interfaces for the destination; performing an intersection of all shared risks of the plurality of shared risks on the ingress interfaces; and providing the intersection as the remote ignore list.
  • the automatically creating the remote ignore list can include steps of determining all egress interfaces at the destination node through which the source node is reachable; performing an intersection of all shared risks of the plurality of shared risks on the egress interfaces; and providing the intersection as the remote ignore list.
  • the local ignore list can be a first set of the plurality of shared risks denoted as L, wherein the remote ignore list can be a second set of the plurality of shared risks denoted as R, wherein a third set of the plurality of shared risks associated with the path can be denoted as S, and wherein the steps can further include pruning a source set of the plurality of shared risks, SS, as S ⁇ L; pruning a destination set of the plurality of shared risks, SD, as S ⁇ R; and utilizing the source set and the destination set in the path computation.
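The pruning above can be illustrated with a small worked example (all SRLG values here are hypothetical, chosen only to show the set operations):

```python
S = {4211, 6789, 4011, 9876}  # shared risks on the candidate path
L = {4211}                    # local ignore list (unavoidable at the source)
R = {9876}                    # remote ignore list (unavoidable at the destination)

SS = S - L  # source set: path risks minus risks unavoidable at the source
SD = S - R  # destination set: path risks minus risks unavoidable at the destination

print(sorted(SS))  # [4011, 6789, 9876]
print(sorted(SD))  # [4011, 4211, 6789]
```

The diversity comparison then uses SS and SD instead of S, so an SRLG that no path could avoid anyway never causes a spurious computation failure.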
  • the automatically creating can include a k-shortest path computation and taking an intersection of the plurality of shared risks at the source and the destination on all k shortest paths.
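The k-shortest-path variant can be sketched as follows for the source side (the destination side is symmetric). This is an illustrative assumption of the approach, not the patent's implementation: a risk present on the source-egress link of every one of the k shortest paths cannot be avoided by choosing a different path:

```python
from functools import reduce

def unavoidable_at_source(k_paths, link_srlgs):
    """Intersect the SRLGs of the source-egress link of each of the
    k shortest paths; a risk shared by every alternative is unavoidable."""
    first_hop_risks = [link_srlgs[path[0]] for path in k_paths]
    return reduce(set.intersection, first_hop_risks)

# Hypothetical data: two shortest paths, both leaving the source via
# links that share SRLG 100001 (e.g., a common ROADM at the source site).
link_srlgs = {"L1": {100001, 4211}, "L2": {100001, 4011}, "L3": {9876}}
k_paths = [["L1", "L3"], ["L2", "L3"]]
print(unavoidable_at_source(k_paths, link_srlgs))  # {100001}
```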
  • the path computation can be one of a diverse path, Topology-Independent Loop-Free Alternate (TI-LFA) protection of links, and TI-LFA protection of a node.
  • the network can include an optical topology and a packet topology sharing a common control plane. The automatically creating can be performed at runtime of the path computation.
  • FIG. 1 is a network diagram of a network of network elements interconnected by links.
  • FIG. 2 is a block diagram of an example network element (node) for use with the systems and methods described herein.
  • FIG. 3 is a block diagram of a controller which can form a controller for the network element, a PCE, an SDN controller, a management system, or the like.
  • FIG. 4 is an example of a network with an optical topology.
  • FIG. 5 is an example of the network with a packet topology.
  • FIG. 6 is an example of a SRLG configuration.
  • FIG. 7 is an example bitmask for flooding SRLG information.
  • FIG. 8 is a flowchart of a process for dynamic path computation in networks based on automatically detected unavoidable risks.
  • the present disclosure relates to systems and methods for dynamic path computation in networks based on automatically detected unavoidable risks.
  • the present disclosure includes an adjustment to path computation to automatically detect and address unavoidable SRLGs.
  • SRLGs the shared risks are referred to as SRLGs, but those skilled in the art will recognize these can be any types of risks, i.e., also SRNG, SREG, and the like.
  • Unavoidable SRLGs can be incorporated in ignore lists of varying scopes, newly discovered unavoidable SRLGs can be automatically flooded in the network, unavoidable SRLG lists can be automatically generated from an IGP Shortest Path First (SPF) tree, and the unavoidable SRLGs are automatically accounted for in a Constrained SPF (CSPF) computation. This minimizes configuration and provides dynamic capability for path compute and network events.
  • SPF IGP Shortest Path First
  • CSPF Constrained SPF
  • FIG. 1 is a network diagram of a network 10 of network elements 12 (labeled as network elements 12A-12G) interconnected by links 14 (labeled as links 14A-14I).
  • the network elements 12 communicate with one another over the links 14 through Layer 0 (L0) such as optical wavelengths (Dense Wave Division Multiplexing (DWDM)), Layer 1 (L1) such as OTN, Layer 2 (L2) such as Ethernet, MPLS, etc., Layer 3 (L3) protocols, and/or combinations thereof.
  • L0 such as optical wavelengths (Dense Wave Division Multiplexing (DWDM)
  • Layer 1 (L1) such as OTN
  • Layer 2 (L2) such as Ethernet
  • the network elements 12 can be network elements which include a plurality of ingress and egress ports forming the links 14 .
  • the network elements 12 can be switches, routers, cross-connects, etc. operating at one or more layers.
  • An example network element 12 implementation
  • the network 10 can include various services or calls between the network elements 12 .
  • Each service can be at any of the L0, L1, L2, and/or L3 protocols, such as a wavelength, a Subnetwork Connection (SNC), an LSP, a tunnel, a connection, etc., and each service is an end-to-end path and from the view of the client signal contained therein, it is seen as a single network segment.
  • the network 10 is illustrated, for example, as an interconnected mesh network, and those of ordinary skill in the art will recognize the network 10 can include other architectures, with additional network elements 12 or with fewer network elements 12 , etc. as well as with various different interconnection topologies and architectures.
  • the network 10 can include a control plane operating on and/or between the network elements 12 .
  • the control plane includes software, processes, algorithms, etc. that control configurable features of the network 10 , such as automating discovery of the network elements 12 , capacity on the links 14 , port availability on the network elements 12 , connectivity between ports; dissemination of topology and bandwidth information between the network elements 12 ; calculation and creation of paths for calls or services; network-level protection and restoration; and the like.
  • control plane can utilize Automatically Switched Optical Network (ASON) as defined in G.8080/Y.1304, Architecture for the automatically switched optical network (ASON) (February 2005), the contents of which are herein incorporated by reference; Generalized Multi-Protocol Label Switching (GMPLS) Architecture as defined in Request for Comments (RFC): 3945 (October 2004) and the like, the contents of which are herein incorporated by reference; Optical Signaling and Routing Protocol (OSRP) which is an optical signaling and routing protocol similar to PNNI (Private Network-to-Network Interface) and MPLS; or any other type control plane for controlling network elements at multiple layers, and establishing and maintaining connections between nodes.
  • ASON Automatically Switched Optical Network
  • G.8080/Y.1304 Architecture for the automatically switched optical network (ASON) (February 2005), the contents of which are herein incorporated by reference
  • GMPLS Generalized Multi-Protocol Label Switching
  • RFC Request for Comments
  • OSRP Optical Signaling and Routing Protocol
  • the network 10 and the control plane can utilize any type of control plane for controlling the network elements 12 and establishing, maintaining, and restoring calls or services between the nodes 12 .
  • the network 10 can include a Software-Defined Networking (SDN) controller for centralized control.
  • the network 10 can include hybrid control between the control plane and the SDN controller.
  • the network 10 can include a Network Management System (NMS), Element Management System (EMS), Path Computation Element (PCE), etc. That is, the present disclosure contemplates any type of controller for path computation utilizing the unavoidable network risks described herein. That is, the present disclosure is not limited to a control plane, SDN, PCE, etc. based path computation technique.
  • SRLGs are risks that are compared between two potential paths to ensure diversity between them.
  • the risks can include, without limitation, fibers, fiber conduits, physical junctions, bridges, Reconfigurable Optical Add/Drop Multiplexer (ROADM) degree, network element 12 , a module in the network element 12 , or any physical construct associated with the link 14 physically.
  • ROADM Reconfigurable Optical Add/Drop Multiplexer
  • the objective of SRLGs is to model various risks to enable comparison during route computation.
  • each link 14 is assigned associated SRLGs 20 for risks, and each is a unique value.
  • each node 12 is assigned associated SRNGs and/or SREGs 22 , again each is a unique value representing a specified risk.
  • the SRNGs and/or SREGs 22 just show the reference numeral of the network element, e.g., 12 A.
  • FIG. 1 lists each SRLG 20 as a four-digit number, but those skilled in the art will recognize these SRLGs, SRNGs, and SREGs can be a 32-bit value or the like.
  • the link 14A has SRLGs 4211, 6789, 4011 and the link 14B has SRLGs 4011, 6789, 6123, 2102, 4021.
  • the link 14H has SRLGs 4212, 4051, 9876, and when compared to the link 14A, there are no common SRLGs, and thus these two links 14A, 14H are diverse, i.e., no common risk.
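Using the SRLG values above, the diversity comparison reduces to a set-intersection test. A minimal Python sketch (the variable names are illustrative; only the SRLG values come from the example):

```python
link_14A = {4211, 6789, 4011}
link_14B = {4011, 6789, 6123, 2102, 4021}
link_14H = {4212, 4051, 9876}

print(link_14A & link_14H)          # set() -> empty, links 14A and 14H are diverse
print(sorted(link_14A & link_14B))  # [4011, 6789] -> shared risks, not diverse
```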
  • the SRLGs 20 and the SRNGs and/or SREGs 22 can be flooded (in a control plane), managed (in an SDN controller, NMS, EMS, PCE, etc.), or the like.
  • connection 30 can be a primary tunnel (LSP)
  • connection 32 can be a backup tunnel (LSP).
  • LSP primary tunnel
  • the connection 30 and the connection 32 can be disjoint, i.e., they do not share a network risk.
  • the connection 30 has a path over links 14 H, 14 I, 14 G.
  • the path for the connection 32 is calculated, and then all of the network risks on the calculated path are compared to the network risks on the path for the connection 30 . Assume the only viable path for the connection 32 is through the network element 12 E.
  • this path would fail as the connection 32 would share a network risk, namely the network element 12 E, with the connection 30.
  • these paths do not share a link 14 .
  • the network element 12 E is a “permitted network risk.” With the present disclosure, this permitted network risk is allowed, such that the connections 30 , 32 can share the network element 12 E, if required for the connection 32 .
  • FIG. 2 is a block diagram of an example network element 12 (node) for use with the systems and methods described herein.
  • the network element 12 can be a device that may consolidate the functionality of a Multi-Service Provisioning Platform (MSPP), Digital Cross-Connect (DCS), Ethernet and/or Optical Transport Network (OTN) switch, Wave Division Multiplexed (WDM)/DWDM platform, Packet Optical Transport System (POTS), etc. into a single, high-capacity intelligent switching system providing Layer 0, 1, 2, and/or 3 consolidation.
  • MSPP Multi-Service Provisioning Platform
  • DCS Digital Cross-Connect
  • OTN Optical Transport Network
  • WDM Wave Division Multiplexed
  • DWDM Dense Wave Division Multiplexed
  • POTS Packet Optical Transport System
  • the network element 12 can be any of an OTN Add/Drop Multiplexer (ADM), a Multi-Service Provisioning Platform (MSPP), a Digital Cross-Connect (DCS), an optical cross-connect, a POTS, an optical switch, a router, a switch, a WDM/DWDM terminal, an access/aggregation device, etc. That is, the network element 12 can be any digital and/or optical system with ingress and egress digital and/or optical signals and switching of channels, timeslots, tributary units, wavelengths, etc.
  • ADM OTN Add/Drop Multiplexer
  • MSPP Multi-Service Provisioning Platform
  • DCS Digital Cross-Connect
  • the network element 12 can be any digital and/or optical system with ingress and egress digital and/or optical signals and switching of channels, timeslots, tributary units, wavelengths, etc.
  • the network element 12 includes common equipment 102 , one or more line modules 104 , and one or more switch modules 106 .
  • the common equipment 102 can include power; a control module; Operations, Administration, Maintenance, and Provisioning (OAM&P) access; user interface ports; and the like.
  • the common equipment 102 can connect to a management system 108 through a data communication network 110 (as well as a PCE, an SDN controller, etc.).
  • the common equipment 102 can include a control plane processor, such as a controller 200 illustrated in FIG. 3 configured to operate the control plane as described herein.
  • the network element 12 can include an interface 112 for communicatively coupling the common equipment 102 , the line modules 104 , and the switch modules 106 to one another.
  • the interface 112 can be a backplane, midplane, a bus, optical and/or electrical connectors, or the like.
  • the line modules 104 are configured to provide ingress and egress to the switch modules 106 and to external connections on the links to/from the network element 12 .
  • the line modules 104 can form ingress and egress switches with the switch modules 106 as center stage switches for a three-stage switch, e.g., a three-stage Clos switch. Other configurations and/or architectures are also contemplated.
  • the line modules 104 can include a plurality of optical connections per module, and each module may include a flexible rate support for any type of connection.
  • the line modules 104 can include WDM interfaces, short-reach interfaces, and the like, and can connect to other line modules 104 on remote network elements, end clients, edge routers, and the like, e.g., forming connections on the links in the network 10 .
  • the line modules 104 provide ingress and egress ports to the network element 12 , and each line module 104 can include one or more physical ports.
  • the switch modules 106 are configured to switch channels, timeslots, tributary units, packets, etc. between the line modules 104 .
  • the switch modules 106 can provide wavelength granularity (Layer 0 switching); OTN granularity; Ethernet granularity; and the like.
  • the switch modules 106 can include Time Division Multiplexed (TDM) (i.e., circuit switching) and/or packet switching engines.
  • TDM Time Division Multiplexed
  • the switch modules 106 can include redundancy as well, such as 1:1, 1:N, etc.
  • the network element 12 can include other components which are omitted for illustration purposes, and that the systems and methods described herein are contemplated for use with a plurality of different network elements with the network element 12 presented as an example type of network element.
  • the network element 12 may not include the switch modules 106 , but rather have the corresponding functionality in the line modules 104 (or some equivalent) in a distributed fashion.
  • the network element 12 may omit the switch modules 106 and that functionality, such as in a DWDM terminal.
  • other architectures providing ingress, egress, and switching are also contemplated for the systems and methods described herein.
  • the systems and methods described herein contemplate use with any network element, and the network element 12 is merely presented as an example for the systems and methods described herein.
  • FIG. 3 is a block diagram of a controller 200 which can form a controller for the network element 12 , a PCE, an SDN controller, a management system, or the like.
  • the controller 200 can be part of the common equipment, such as common equipment 102 in the network element 12, or a stand-alone device communicatively coupled to the network element 12 via the data communication network 110. In a stand-alone configuration, the controller 200 can be the management system 108, a PCE, etc.
  • the controller 200 can include a processor 202 which is a hardware device for executing software instructions such as operating the control plane.
  • the processor 202 can be any custom made or commercially available processor, a central processing unit (CPU), an auxiliary processor among several processors associated with the controller 200 , a semiconductor-based microprocessor (in the form of a microchip or chipset), or generally any device for executing software instructions.
  • the processor 202 is configured to execute software stored within the memory, to communicate data to and from the memory, and to generally control operations of the controller 200 pursuant to the software instructions.
  • the controller 200 can also include a network interface 204 , a data store 206 , memory 208 , an I/O interface 210 , and the like, all of which are communicatively coupled to one another and to the processor 202 .
  • the network interface 204 can be used to enable the controller 200 to communicate on a Data Communication Network (DCN), such as to communicate control plane information to other controllers, to a management system, to the network elements 12 , and the like.
  • DCN Data Communication Network
  • the network interface 204 can include, for example, an Ethernet module.
  • the network interface 204 can include address, control, and/or data connections to enable appropriate communications on the network.
  • the data store 206 can be used to store data, such as control plane information, provisioning data, OAM&P data, etc.
  • the data store 206 can include any of volatile memory elements (e.g., random access memory (RAM, such as DRAM, SRAM, SDRAM, and the like)), nonvolatile memory elements (e.g., ROM, hard drive, flash drive, CDROM, and the like), and combinations thereof. Moreover, the data store 206 can incorporate electronic, magnetic, optical, and/or other types of storage media.
  • the memory 208 can include any of volatile memory elements (e.g., random access memory (RAM, such as DRAM, SRAM, SDRAM, etc.)), nonvolatile memory elements (e.g., ROM, hard drive, flash drive, CDROM, etc.), and combinations thereof. Moreover, the memory 208 may incorporate electronic, magnetic, optical, and/or other types of storage media.
  • the memory 208 can have a distributed architecture, where various components are situated remotely from one another, but may be accessed by the processor 202 .
  • the I/O interface 210 includes components for the controller 200 to communicate with other devices. Further, the I/O interface 210 includes components for the controller 200 to communicate with the other nodes, such as using overhead associated with OTN signals.
  • the controller 200 is configured to implement software, processes, algorithms, etc. that can control configurable features of the network 10 , such as automating discovery of the network elements 12 , capacity on the links 14 , port availability on the network elements 12 , connectivity between ports; dissemination of topology and bandwidth information between the network elements 12 ; path computation and creation for connections; network-level protection and restoration; and the like.
  • the controller 200 can include a topology database that maintains the current topology of the network 10 , such as based on control plane signaling and a connection database that maintains available bandwidth on the links again based on the control plane signaling as well as management of the network risks for diverse path computation.
  • the present disclosure contemplates path computation via the controller 200 in a network element 12 , via a PCE, NMS, EMS, SDN controller, and the like, etc.
  • the network topology view is very different for packet and optical layers.
  • the packet topology presents the logical view of the network, whereas the optical topology presents the physical layout of the network.
  • FIG. 4 is an example of a network 300 with an optical topology 302 .
  • FIG. 5 is an example of the network 300 with a packet topology 304 .
  • the optical topology 302 can host the packet topology 304, and different logical packet links may traverse the same optical-topology link.
  • FIG. 6 is an example of a SRLG configuration 400 . As shown, all packet interfaces from site Y have to go through ROADM A, and those from site Z have to go through ROADM B. All diverse path computations out of the sites Y, Z will share the ROADM SRLG for the ROADMs A, B, respectively, and hence will not be diverse.
  • the SRLG 100001 is common to all links and thus is unavoidable. Also, the other network elements have similar unavoidable SRLGs.
  • An unavoidable SRLG would be any SRLG assigned to the site (e.g., Point of Presence (POP)) where the source or destination node resides, e.g., the building, chassis, or the node itself. If there is only one line card facing the destination, then the SRLG associated with that line card is also unavoidable. All such SRLGs will qualify as unavoidable SRLGs.
  • POP Point of Presence
  • a Shared SRLG concept was introduced to exclude certain SRLGs which are unavoidable for a given calculation because of topology constraints, e.g., a single ROADM node through which all ports are connected.
  • the shared SRLG concept introduced Command Line Interfaces (CLIs) to specifically call out SRLGs that are shared (or should be ignored).
  • CLIs Command Line Interfaces
  • Every node must be configured with shared-srlg, and every technology type has its own configuration for this information, even though its only use is diverse path and/or protection compute.
  • By default, an SRLG is mandatory to include in the computation, i.e., all SRLGs are mandatory.
  • Loose means the SRLG is optional in the computation and can be ignored. Every node has its own configuration for SRLGs, and when a diverse path computation fails, a recomputation is done after ignoring the SRLGs that are marked as loose.
  • this approach is also configuration intensive, with the added burden of at least one path computation failure.
  • the present disclosure removes the configuration intensive approach to identifying unavoidable SRLGs and automatically detects them as part of path computation as well as flooding them in the network 10 .
  • an SRLG S is unavoidable if a path cannot egress the source node, or ingress the destination node, without traversing S.
  • a collection of the unavoidable SRLGs may be included in an unavoidable SRLG list on the headend and tailend, and the two lists may or may not have common elements. This list of unavoidable SRLGs can be called an ignore list.
  • a global ignore list can be provided at configuration time. This global configuration will need to be configured on every node in the administrative domain that will be a head-end to a path. Every newly added node will require this configuration, and all the nodes will require an update to their ignore list with the unavoidable SRLGs from the newly added node.
  • all unavoidable SRLGs can be configured with a specific bit set, and all the nodes in the administrative domain can be configured with a bitmask that enables them to test whether an SRLG is unavoidable or not. Every new node will need to be configured with this mask. Existing nodes in the network will not require an update to their configuration.
  • IGP Interior Gateway Protocol
  • this information can be flooded with a new sub-TLV type, added to the extended reachability TLV type, specifically defined to carry SRLG bitmasks. This will require automatically flooding the bitmask for unavoidable SRLGs.
  • Bit position 1 can be reserved for unavoidable SRLG.
  • An IGP CLI can be introduced to configure this bitmask which can then be flooded in the network.
  • nodes can build their global ignore list by first testing every configured and learnt SRLG against the bitmask and adding matches to the ignore set.
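The bitmask test and the resulting global ignore list can be sketched as follows. The mask value (bit position 1 set) and the SRLG values are illustrative assumptions, not the patent's concrete encoding:

```python
UNAVOIDABLE_MASK = 0x00000002  # hypothetical: bit position 1 reserved

def is_unavoidable(srlg, mask=UNAVOIDABLE_MASK):
    """An SRLG is treated as unavoidable when all masked bits are set."""
    return (srlg & mask) == mask

def build_global_ignore_list(srlgs):
    """Filter configured/learnt SRLGs through the bitmask into an ignore set."""
    return {s for s in srlgs if is_unavoidable(s)}

print(sorted(build_global_ignore_list([0x40000003, 0x00001000, 0x00000012])))
# -> [18, 1073741827]
```

Because the test is a pure function of the mask, existing nodes need no per-SRLG configuration update when a new unavoidable SRLG appears, matching the advantage noted above.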
  • An unavoidable SRLG must be ignored because, on the source and destination of a path, such SRLGs cover all paths, and without ignoring them no path can be calculated. If we take the intersection of all locally configured SRLGs on all interfaces, we get a set L; this set gives us the unavoidable SRLGs for the local node. Similarly, we can compute the destination's unavoidable SRLG set by isolating only the SRLGs advertised by the destination on a link-by-link basis and then taking their intersection. If there is more than one ROADM node in the middle of the path, only interfaces through which there is reachability to the destination should be considered. This step can become part of path compute to make sure compute can keep up with network changes.
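The intersection producing the local set L can be sketched as follows; interface names and SRLG values are illustrative:

```python
from functools import reduce

def local_unavoidable_set(interface_srlgs):
    """interface_srlgs: mapping of egress interface -> set of SRLGs on it.

    The intersection over all egress interfaces through which the destination
    is reachable yields the local unavoidable-SRLG set L."""
    if not interface_srlgs:
        return set()
    return reduce(set.intersection, interface_srlgs.values())

egress = {
    "ge0/1": {100001, 4211, 6789},
    "ge0/2": {100001, 4011, 6123},
}
print(sorted(local_unavoidable_set(egress)))  # -> [100001]
```

Any SRLG present on every usable egress interface (here the shared ROADM SRLG 100001) cannot be routed around and so belongs in the ignore list.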
  • Remote list calculation can be done in multiple ways.
  • the ignore lists can be automatically created before path computation as well as determined during path computation.
  • the unavoidable SRLG ignore lists (local and remote) are automatically determined in the present disclosure, removing the need for complex manual configuration. Referring to FIG. 5 , assume we want to compute a path from node 1 to node 9. Consider the following configuration, which would be applied on node 1 in the topology shown above. This shows a standard SRLG configuration:
  • The first step in path computation is to build the local and remote ignore lists.
  • a path is computed from the source node 1 to the destination node 9 via intermediate node 5.
  • Dijkstra's algorithm is a common SPF calculation, and those skilled in the art will appreciate the present disclosure contemplates any path computation algorithm and is not limited to Dijkstra's algorithm.
  • L local ignore list for source 1 and destination 9
  • R remote ignore list for source 1 and destination 9
  • L and R are sets of SRLGs that can be ignored.
  • SRLG set S will be used for pruning.
  • Dijkstra's algorithm can be used on it. Also, this pruning can be done at runtime.
  • Preparation for computing the TI-LFA path is the same as described above for the diverse path. Once the pruned tree is ready, the TI-LFA calculation can be run on it.
  • node 5 is pruned out of the tree.
  • alternate path calculation process as described above can be used to calculate TI-LFA post convergence path.
  • FIG. 8 is a flowchart of a process 400 for dynamic path computation in networks based on automatically detected unavoidable risks.
  • the process 400 contemplates implementation as a method, execution via a processing device such as the controller 200 , the network element 12 , a management system, a PCE, an SDN controller, etc., and as instructions stored in a non-transitory computer-readable medium that, when executed, cause one or more processors to perform the process 400 .
  • the process 400 includes receiving a plurality of shared risks associated with any of one or more network layers, network links, and network equipment (step 402 ); automatically creating a local ignore list for a source node and a remote ignore list for a destination node, based on the plurality of shared risks (step 404 ); and utilizing the plurality of shared risks in a path computation for a path between the source node and the destination node and ignoring any of the plurality of shared risks in the local ignore list and the remote ignore list (step 406 ).
  • the local ignore list can include local shared risks of the plurality of shared risks that the path cannot egress the source node without traversing the local shared risks.
  • the remote ignore list can include remote shared risks of the plurality of shared risks that the path cannot ingress the destination node without traversing the remote shared risks.
  • the automatically creating the local ignore list can include steps of determining all egress interfaces at the source node through which the destination node is reachable; performing an intersection of all shared risks of the plurality of shared risks on the egress interfaces; and providing the intersection as the local ignore list.
  • the automatically creating the remote ignore list can include steps of computing all possible paths to the destination to determine all ingress interfaces for the destination; performing an intersection of all shared risks of the plurality of shared risks on the ingress interfaces; and providing the intersection as the remote ignore list.
  • the automatically creating the remote ignore list can also include steps of determining all egress interfaces at the destination node through which the source node is reachable; performing an intersection of all shared risks of the plurality of shared risks on the egress interfaces; and providing the intersection as the remote ignore list. Note, this approach assumes symmetric connectivity between the source and destination.
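The remote ignore list built from all candidate paths (the all-possible-paths variant above) can be sketched as follows; the paths, link names, and SRLG values are illustrative:

```python
from functools import reduce

def remote_ignore_list(candidate_paths, link_srlgs):
    """Intersect the SRLGs on the destination's ingress link of every
    candidate path (e.g., the k shortest paths); each path's last element
    is the ingress link into the destination."""
    ingress_sets = [link_srlgs[path[-1]] for path in candidate_paths]
    if not ingress_sets:
        return set()
    return reduce(set.intersection, ingress_sets)

link_srlgs = {"to-dst-1": {200001, 7001}, "to-dst-2": {200001, 7002}}
paths = [["a", "to-dst-1"], ["b", "to-dst-2"]]
print(sorted(remote_ignore_list(paths, link_srlgs)))  # -> [200001]
```

An SRLG common to every ingress link of the destination (here 200001) cannot be avoided by any path and is therefore safe to ignore in the diversity check.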
  • the local ignore list is a first set of the plurality of shared risks denoted as L
  • the remote ignore list is a second set of the plurality of shared risks denoted as R
  • a third set of the plurality of shared risks associated with the path is denoted as S
  • the process 400 can further include pruning a source set of the plurality of shared risks, SS, as S−L; pruning a destination set of the plurality of shared risks, SD, as S−R; and utilizing the source set and the destination set in the path computation.
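The pruning step above is plain set difference; with L and R as the local and remote ignore sets, the path's SRLG set S is pruned before use. All values here are illustrative:

```python
S = {100001, 4211, 6789, 9876}  # SRLGs collected for the path
L = {100001}                    # local ignore list (source side)
R = {9876}                      # remote ignore list (destination side)

SS = S - L  # source set of shared risks used in the computation
SD = S - R  # destination set of shared risks used in the computation
print(sorted(SS), sorted(SD))  # -> [4211, 6789, 9876] [4211, 6789, 100001]
```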
  • the automatically creating can include a k-shortest path computation and taking an intersection of the plurality of shared risks at the source and the destination on all k shortest paths.
  • the path computation can be one of a diverse path, Topology-Independent Loop-Free Alternate (TI-LFA) protection of links, and TI-LFA protection of a node.
  • the network can include an optical topology and a packet topology sharing a common control plane. The automatically creating can be performed at runtime of the path computation.
  • processors such as microprocessors; central processing units (CPUs); digital signal processors (DSPs); customized processors such as network processors (NPs) or network processing units (NPUs), graphics processing units (GPUs), or the like; field programmable gate arrays (FPGAs); and the like along with unique stored program instructions (including both software and firmware) for control thereof to implement, in conjunction with certain non-processor circuits, some, most, or all of the functions of the methods and/or systems described herein.
  • circuitry configured or adapted to
  • logic configured or adapted to
  • some embodiments may include a non-transitory computer-readable storage medium having computer-readable code stored thereon for programming a computer, server, appliance, device, processor, circuit, etc. each of which may include a processor to perform functions as described and claimed herein.
  • Examples of such computer-readable storage mediums include, but are not limited to, a hard disk, an optical storage device, a magnetic storage device, a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), Flash memory, and the like.
  • software can include instructions executable by a processor or device (e.g., any type of programmable circuitry or logic) that, in response to such execution, cause a processor or the device to perform a set of operations, steps, methods, processes, algorithms, functions, techniques, etc. as described herein for the various embodiments.
  • a processor or device e.g., any type of programmable circuitry or logic


Abstract

Systems and methods for dynamic path computation in networks based on automatically detected unavoidable risks include receiving a plurality of shared risks associated with any of one or more network layers, network links, and network equipment; automatically creating a local ignore list for a source node and a remote ignore list for a destination node, based on the plurality of shared risks; and utilizing the plurality of shared risks in a path computation for a path between the source node and the destination node and ignoring any of the plurality of shared risks in the local ignore list and the remote ignore list.

Description

FIELD OF THE DISCLOSURE
The present disclosure relates generally to networking and computing. More particularly, the present disclosure relates to systems and methods for dynamic path computation in networks based on automatically detected unavoidable risks.
BACKGROUND OF THE DISCLOSURE
Shared Risk Group (SRG) is a concept in network routing whereby different connections may suffer from a common failure if they share a common risk or a common SRG. SRGs can be used with optical networks, Ethernet networks, Multiprotocol Label Switching (MPLS) networks including Generalized Multiprotocol Label Switching (GMPLS) networks, Internet Protocol (IP) networks, and the like, as well as multi-layer networks. An SRG failure makes multiple connections go down because of the failure of a common resource those connections share. Examples of SRGs include Shared Risk Link Group (SRLG), Shared Risk Node Group (SRNG), Shared Risk Equipment Group (SREG), etc. An SRLG is a risk on a cable or the like, an SRNG is a risk associated with a node or network element, and an SREG is a risk that extends within the node or network element itself, e.g., down to a module or other type of equipment. The descriptions herein may reference SRLGs for illustration purposes, but those skilled in the art will recognize any and all types of SRG risk representation are contemplated herein. SRLGs refer to situations where links in a network share a common fiber (or a common physical attribute such as a fiber conduit or the like). If one link fails, other links in the group may fail too, i.e., links in the group have a shared risk, which is represented by the SRLG. SRLGs are used in optical, Ethernet, MPLS, GMPLS, and/or IP networks and are used for route computation for diversity.
In multi-layer networks, a link at an upper layer has a connection at a lower layer, and thus any network resources (links, nodes, line cards, and the like) used by the lower layer connection can be represented as SRLGs on the upper layer links. That is, MPLS tunnels, OTN connections, IP routes, etc. all operate on a lower layer optical network (Layer 0). For example, an MPLS link at an MPLS layer may have an SRLG to represent a connection at Layer 0 and thus any optical nodes, amplifiers, and multiplexing components, as well as fiber cables and conduits used by the Layer 0 connection, are accounted for in SRLGs on the MPLS link. As an example, one would not want to protect MPLS tunnels where the protected tunnels share a risk in an optical network. The SRLGs are used in the MPLS route computation to ensure the protected tunnels share no common risks in the optical network. That is, route or path computation can compare SRLGs of links between two paths to determine if they are disjoint or not. If two paths have a common risk, i.e., share an SRLG, there is a possibility of a common fault taking both paths down. Of course, this defeats the purpose of protection and is to be avoided.
For example, SRLGs in MPLS Traffic Engineering (MPLS-TE) include associated links that share the same resource, i.e., all links will fail if that resource fails. An SRLG can be represented by a 32-bit number and is unique in the Interior Gateway Protocol (IGP) (e.g., Intermediate System-Intermediate System (ISIS) or Open Shortest Path First (OSPF)) domain. For a given Label Switched Path (LSP), its SRLGs are a union of all the resources used by the LSP from source to destination. When SRLGs are used, a backup path can be made completely diverse from the primary path by excluding all SRLGs used by the primary path from the calculation of the backup path. This makes sure the backup path is not affected by the failure of any resource used by the primary path.
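The union-and-exclude step described above can be sketched as follows; the link names and SRLG values are illustrative:

```python
def lsp_srlgs(path_links, link_srlgs):
    """Union of the SRLGs of every link the LSP traverses."""
    out = set()
    for link in path_links:
        out |= link_srlgs[link]
    return out

link_srlgs = {"L1": {4211, 6789}, "L2": {4011}, "L3": {4051}}
primary = ["L1", "L2"]
exclude = lsp_srlgs(primary, link_srlgs)
# Candidate backup links must avoid every SRLG used by the primary:
backup_ok = [l for l in link_srlgs if not (link_srlgs[l] & exclude)]
print(sorted(exclude), backup_ok)  # -> [4011, 4211, 6789] ['L3']
```

Pruning every link that intersects the primary's SRLG union before running the backup compute is what guarantees the diversity claimed above.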
Unavoidable SRLGs are ones which physically cannot be avoided. An example of such is an optical risk at a source or destination node. There are existing approaches to deal with such unavoidable risks, including not using SRLGs on contested resources; using loose SRLGs, i.e., SRLGs that are ignored from calculations; weighted SRLGs; and manually configured unavoidable SRLGs, used on a case-by-case basis. Disadvantageously, all these existing approaches are configuration intensive, i.e., there is no automation or allowing the network to learn. Further, this creates intensive configuration changes when there are network changes.
BRIEF SUMMARY OF THE DISCLOSURE
The present disclosure relates to systems and methods for dynamic path computation in networks based on automatically detected unavoidable risks. In particular, the present disclosure includes an adjustment to path computation to automatically detect and address unavoidable SRLGs. Of note, as described herein, the shared risks are referred to as SRLGs, but those skilled in the art will recognize these can be any types of risks, i.e., also SRNG, SREG, and the like. By automating this in path computation, there is no need for manual configuration. Unavoidable SRLGs can be incorporated in ignore lists of varying scopes, newly discovered unavoidable SRLGs can be automatically flooded in the network, unavoidable SRLG lists can be automatically generated from an IGP Shortest Path First (SPF) tree, and the unavoidable SRLGs are automatically accounted for in a Constrained SPF (CSPF) computation. This minimizes configuration and provides dynamic capability for path compute and network events.
In an embodiment, the present disclosure includes a method having steps, an apparatus with a processor configured to implement the steps, and a non-transitory computer-readable medium with instructions that, when executed, cause one or more processors to perform the steps. The steps include receiving a plurality of shared risks associated with any of one or more network layers, network links, and network equipment; automatically creating a local ignore list for a source node and a remote ignore list for a destination node, based on the plurality of shared risks; and utilizing the plurality of shared risks in a path computation for a path between the source node and the destination node and ignoring any of the plurality of shared risks in the local ignore list and the remote ignore list. The local ignore list can include local shared risks of the plurality of shared risks that the path cannot egress the source node without traversing the local shared risks, and the remote ignore list can include remote shared risks of the plurality of shared risks that the path cannot ingress the destination node without traversing the remote shared risks.
The automatically creating the local ignore list can include steps of determining all egress interfaces at the source node through which the destination node is reachable; performing an intersection of all shared risks of the plurality of shared risks on the egress interfaces; and providing the intersection as the local ignore list. The automatically creating the remote ignore list can include steps of computing all possible paths to the destination to determine all ingress interfaces for the destination; performing an intersection of all shared risks of the plurality of shared risks on the ingress interfaces; and providing the intersection as the remote ignore list. The automatically creating the remote ignore list can include steps of determining all egress interfaces at the destination node through which the source node is reachable; performing an intersection of all shared risks of the plurality of shared risks on the egress interfaces; and providing the intersection as the remote ignore list.
The local ignore list can be a first set of the plurality of shared risks denoted as L, wherein the remote ignore list can be a second set of the plurality of shared risks denoted as R, wherein a third set of the plurality of shared risks associated with the path can be denoted as S, and wherein the steps can further include pruning a source set of the plurality of shared risks, SS, as S−L; pruning a destination set of the plurality of shared risks, SD, as S−R; and utilizing the source set and the destination set in the path computation.
The automatically creating can include a k-shortest path computation and taking an intersection of the plurality of shared risks at the source and the destination on all k shortest paths. The path computation can be one of a diverse path, Topology-Independent Loop-Free Alternate (TI-LFA) protection of links, and TI-LFA protection of a node. The network can include an optical topology and a packet topology sharing a common control plane. The automatically creating can be performed at runtime of the path computation.
BRIEF DESCRIPTION OF THE DRAWINGS
The present disclosure is illustrated and described herein with reference to the various drawings, in which like reference numbers are used to denote like system components/method steps, as appropriate, and in which:
FIG. 1 is a network diagram of a network of network elements interconnected by links.
FIG. 2 is a block diagram of an example network element (node) for use with the systems and methods described herein.
FIG. 3 is a block diagram of a controller which can form a controller for the network element, a PCE, an SDN controller, a management system, or the like.
FIG. 4 is an example of a network with an optical topology.
FIG. 5 is an example of the network with a packet topology.
FIG. 6 is an example of an SRLG configuration.
FIG. 7 is an example bitmask for flooding SRLG information.
FIG. 8 is a flowchart of a process for dynamic path computation in networks based on automatically detected unavoidable risks.
DETAILED DESCRIPTION OF THE DISCLOSURE
Again, the present disclosure relates to systems and methods for dynamic path computation in networks based on automatically detected unavoidable risks. In particular, the present disclosure includes an adjustment to path computation to automatically detect and address unavoidable SRLGs. Of note, as described herein, the shared risks are referred to as SRLGs, but those skilled in the art will recognize these can be any types of risks, i.e., also SRNG, SREG, and the like. By automating this in path computation, there is no need for manual configuration. Unavoidable SRLGs can be incorporated in ignore lists of varying scopes, newly discovered unavoidable SRLGs can be automatically flooded in the network, unavoidable SRLG lists can be automatically generated from an IGP Shortest Path First (SPF) tree, and the unavoidable SRLGs are automatically accounted for in a Constrained SPF (CSPF) computation. This minimizes configuration and provides dynamic capability for path compute and network events.
Example Network
FIG. 1 is a network diagram of a network 10 of network elements 12 (labeled as network elements 12A-12G) interconnected by links 14 (labeled as links 14A-14I). The network elements 12 communicate with one another over the links 14 through Layer 0 (L0) such as optical wavelengths (Dense Wave Division Multiplexing (DWDM)), Layer 1 (L1) such as OTN, Layer 2 (L2) such as Ethernet, MPLS, etc., Layer 3 (L3) protocols, and/or combinations thereof. The network elements 12 can be network elements which include a plurality of ingress and egress ports forming the links 14. The network elements 12 can be switches, routers, cross-connects, etc. operating at one or more layers. An example network element 12 implementation is illustrated in FIG. 2 . The network 10 can include various services or calls between the network elements 12. Each service can be at any of the L0, L1, L2, and/or L3 protocols, such as a wavelength, a Subnetwork Connection (SNC), an LSP, a tunnel, a connection, etc., and each service is an end-to-end path and from the view of the client signal contained therein, it is seen as a single network segment. The network 10 is illustrated, for example, as an interconnected mesh network, and those of ordinary skill in the art will recognize the network 10 can include other architectures, with additional network elements 12 or with fewer network elements 12, etc. as well as with various different interconnection topologies and architectures.
The network 10 can include a control plane operating on and/or between the network elements 12. The control plane includes software, processes, algorithms, etc. that control configurable features of the network 10, such as automating discovery of the network elements 12, capacity on the links 14, port availability on the network elements 12, connectivity between ports; dissemination of topology and bandwidth information between the network elements 12; calculation and creation of paths for calls or services; network-level protection and restoration; and the like. In an embodiment, the control plane can utilize Automatically Switched Optical Network (ASON) as defined in G.8080/Y.1304, Architecture for the automatically switched optical network (ASON) (February 2005), the contents of which are herein incorporated by reference; Generalized Multi-Protocol Label Switching (GMPLS) Architecture as defined in Request for Comments (RFC): 3945 (October 2004) and the like, the contents of which are herein incorporated by reference; Optical Signaling and Routing Protocol (OSRP) which is an optical signaling and routing protocol similar to PNNI (Private Network-to-Network Interface) and MPLS; or any other type of control plane for controlling network elements at multiple layers, and establishing and maintaining connections between nodes. Those of ordinary skill in the art will recognize the network 10 and the control plane can utilize any type of control plane for controlling the network elements 12 and establishing, maintaining, and restoring calls or services between the nodes 12. In another embodiment, the network 10 can include a Software-Defined Networking (SDN) controller for centralized control. In a further embodiment, the network 10 can include hybrid control between the control plane and the SDN controller. In yet a further embodiment, the network 10 can include a Network Management System (NMS), Element Management System (EMS), Path Computation Element (PCE), etc.
That is, the present disclosure contemplates any type of controller for path computation utilizing the unavoidable network risks described herein. That is, the present disclosure is not limited to a control plane, SDN, PCE, etc. based path computation technique.
Again, SRLGs are risks that are compared between two potential paths to ensure diversity between them. The risks can include, without limitation, fibers, fiber conduits, physical junctions, bridges, Reconfigurable Optical Add/Drop Multiplexer (ROADM) degree, network element 12, a module in the network element 12, or any physical construct associated with the link 14 physically. For diversity, the SRLGs between two connections are compared, and any shared risk indicates a diversity concern or single point of failure for both connections. The objective of SRLGs is to model various risks to enable comparison during route computation.
In FIG. 1 , each link 14 is assigned associated SRLGs 20 for risks, and each is a unique value. Also, each node 12 is assigned associated SRNGs and/or SREGs 22, again each is a unique value representing a specified risk. Note, for illustration purposes, the SRNGs and/or SREGs 22 just show the reference numeral of the network element, e.g., 12A. Also, for illustration purposes, FIG. 1 lists each SRLG 20 as a four-digit number, but those skilled in the art will recognize these SRLGs, SRNGs, and SREGs can be a 32-bit value or the like. For example, the link 14A has SRLGs 4211, 6789, 4011 and the link 14B has SRLGs 4011, 6789, 6123, 2102, 4021. In path computation, the fact these two links 14A, 14B have the same SRLGs 6789, 4011 indicates these links 14A, 14B have a common risk and are not diverse/disjoint. The link 14H has SRLGs 4212, 4051, 9876, and when compared to the link 14A, there are no common SRLGs, and thus these two links 14A, 14H are diverse, i.e., no common risk. Depending on the network 10 implementation, the SRLGs 20 and the SRNGs and/or SREGs 22 can be flooded (in a control plane), managed (in an SDN controller, NMS, EMS, PCE, etc.), or the like.
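The link-diversity comparison just described, using the four-digit SRLG values listed above for links 14A, 14B, and 14H, reduces to a set intersection:

```python
link_srlgs = {
    "14A": {4211, 6789, 4011},
    "14B": {4011, 6789, 6123, 2102, 4021},
    "14H": {4212, 4051, 9876},
}

def shared_risks(a, b):
    """Common SRLGs between two links; an empty set means they are diverse."""
    return link_srlgs[a] & link_srlgs[b]

print(sorted(shared_risks("14A", "14B")))  # -> [4011, 6789] (not diverse)
print(sorted(shared_risks("14A", "14H")))  # -> [] (diverse)
```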
As an example, assume there are two connections 30, 32 between the network elements 12A, 12F, e.g., the connection 30 can be a primary tunnel (LSP), and the connection 32 can be a backup tunnel (LSP). Thus, there is a requirement for the connection 30 and the connection 32 to be disjoint, i.e., that they do not share a network risk. The connection 30 has a path over links 14H, 14I, 14G. The path for the connection 32 is calculated, and then all of the network risks on the calculated path are compared to the network risks on the path for the connection 30. Assume the only viable path for the connection 32 is through the network element 12E. With conventional approaches, this path would fail, as the connection 32 would share the same network risk, namely the network element 12E, as the connection 30. However, these paths do not share a link 14. The network element 12E is a "permitted network risk." With the present disclosure, this permitted network risk is allowed, such that the connections 30, 32 can share the network element 12E, if required for the connection 32.
Example Network Element/Node
FIG. 2 is a block diagram of an example network element 12 (node) for use with the systems and methods described herein. In an embodiment, the network element 12 can be a device that may consolidate the functionality of a Multi-Service Provisioning Platform (MSPP), Digital Cross-Connect (DCS), Ethernet and/or Optical Transport Network (OTN) switch, Wave Division Multiplexed (WDM)/DWDM platform, Packet Optical Transport System (POTS), etc. into a single, high-capacity intelligent switching system providing Layer 0, 1, 2, and/or 3 consolidation. In another embodiment, the network element 12 can be any of an OTN Add/Drop Multiplexer (ADM), a Multi-Service Provisioning Platform (MSPP), a Digital Cross-Connect (DCS), an optical cross-connect, a POTS, an optical switch, a router, a switch, a WDM/DWDM terminal, an access/aggregation device, etc. That is, the network element 12 can be any digital and/or optical system with ingress and egress digital and/or optical signals and switching of channels, timeslots, tributary units, wavelengths, etc.
In an embodiment, the network element 12 includes common equipment 102, one or more line modules 104, and one or more switch modules 106. The common equipment 102 can include power; a control module; Operations, Administration, Maintenance, and Provisioning (OAM&P) access; user interface ports; and the like. The common equipment 102 can connect to a management system 108 through a data communication network 110 (as well as a PCE, an SDN controller, etc.). Additionally, the common equipment 102 can include a control plane processor, such as a controller 200 illustrated in FIG. 3 configured to operate the control plane as described herein. The network element 12 can include an interface 112 for communicatively coupling the common equipment 102, the line modules 104, and the switch modules 106 to one another. For example, the interface 112 can be a backplane, midplane, a bus, optical and/or electrical connectors, or the like. The line modules 104 are configured to provide ingress and egress to the switch modules 106 and to external connections on the links to/from the network element 12. In an embodiment, the line modules 104 can form ingress and egress switches with the switch modules 106 as center stage switches for a three-stage switch, e.g., a three-stage Clos switch. Other configurations and/or architectures are also contemplated.
Further, the line modules 104 can include a plurality of optical connections per module, and each module may include a flexible rate support for any type of connection. The line modules 104 can include WDM interfaces, short-reach interfaces, and the like, and can connect to other line modules 104 on remote network elements, end clients, edge routers, and the like, e.g., forming connections on the links in the network 10. From a logical perspective, the line modules 104 provide ingress and egress ports to the network element 12, and each line module 104 can include one or more physical ports. The switch modules 106 are configured to switch channels, timeslots, tributary units, packets, etc. between the line modules 104. For example, the switch modules 106 can provide wavelength granularity (Layer 0 switching); OTN granularity; Ethernet granularity; and the like. Specifically, the switch modules 106 can include Time Division Multiplexed (TDM) (i.e., circuit switching) and/or packet switching engines. The switch modules 106 can include redundancy as well, such as 1:1, 1:N, etc.
Those of ordinary skill in the art will recognize the network element 12 can include other components which are omitted for illustration purposes, and that the systems and methods described herein are contemplated for use with a plurality of different network elements with the network element 12 presented as an example type of network element. For example, in another embodiment, the network element 12 may not include the switch modules 106, but rather have the corresponding functionality in the line modules 104 (or some equivalent) in a distributed fashion. Also, the network element 12 may omit the switch modules 106 and that functionality, such as in a DWDM terminal. For the network element 12, other architectures providing ingress, egress, and switching are also contemplated for the systems and methods described herein. In general, the systems and methods described herein contemplate use with any network element, and the network element 12 is merely presented as an example for the systems and methods described herein.
Example Controller
FIG. 3 is a block diagram of a controller 200 which can form a controller for the network element 12, a PCE, an SDN controller, a management system, or the like. The controller 200 can be part of the common equipment, such as the common equipment 102 in the network element 12, or a stand-alone device communicatively coupled to the network element 12 via the data communication network 110. In a stand-alone configuration, the controller 200 can be the management system 108, a PCE, etc. The controller 200 can include a processor 202 which is a hardware device for executing software instructions such as operating the control plane. The processor 202 can be any custom made or commercially available processor, a central processing unit (CPU), an auxiliary processor among several processors associated with the controller 200, a semiconductor-based microprocessor (in the form of a microchip or chipset), or generally any device for executing software instructions. When the controller 200 is in operation, the processor 202 is configured to execute software stored within the memory, to communicate data to and from the memory, and to generally control operations of the controller 200 pursuant to the software instructions. The controller 200 can also include a network interface 204, a data store 206, memory 208, an I/O interface 210, and the like, all of which are communicatively coupled to one another and to the processor 202.
The network interface 204 can be used to enable the controller 200 to communicate on a Data Communication Network (DCN), such as to communicate control plane information to other controllers, to a management system, to the network elements 12, and the like. The network interface 204 can include, for example, an Ethernet module. The network interface 204 can include address, control, and/or data connections to enable appropriate communications on the network. The data store 206 can be used to store data, such as control plane information, provisioning data, OAM&P data, etc. The data store 206 can include any of volatile memory elements (e.g., random access memory (RAM, such as DRAM, SRAM, SDRAM, and the like)), nonvolatile memory elements (e.g., ROM, hard drive, flash drive, CDROM, and the like), and combinations thereof. Moreover, the data store 206 can incorporate electronic, magnetic, optical, and/or other types of storage media. The memory 208 can include any of volatile memory elements (e.g., random access memory (RAM, such as DRAM, SRAM, SDRAM, etc.)), nonvolatile memory elements (e.g., ROM, hard drive, flash drive, CDROM, etc.), and combinations thereof. Moreover, the memory 208 may incorporate electronic, magnetic, optical, and/or other types of storage media. Note that the memory 208 can have a distributed architecture, where various components are situated remotely from one another, but may be accessed by the processor 202. The I/O interface 210 includes components for the controller 200 to communicate with other devices. Further, the I/O interface 210 includes components for the controller 200 to communicate with the other nodes, such as using overhead associated with OTN signals.
The controller 200 is configured to implement software, processes, algorithms, etc. that can control configurable features of the network 10, such as automating discovery of the network elements 12, capacity on the links 14, port availability on the network elements 12, connectivity between ports; dissemination of topology and bandwidth information between the network elements 12; path computation and creation for connections; network-level protection and restoration; and the like. As part of these functions, the controller 200 can include a topology database that maintains the current topology of the network 10 (e.g., based on control plane signaling), a connection database that maintains available bandwidth on the links (again based on control plane signaling), as well as management of the network risks for diverse path computation.
The present disclosure contemplates path computation via the controller 200 in a network element 12, or via a PCE, NMS, EMS, SDN controller, and the like.
Problem Statement
The network topology view is very different for the packet and optical layers. The packet topology presents the logical view of the network, whereas the optical topology presents the physical layout of the network.
FIG. 4 is an example of a network 300 with an optical topology 302. FIG. 5 is an example of the network 300 with a packet topology 304. The optical topology 302 can host the packet topology 304, and different logical packet links may traverse the same optical topology link.
As control planes for the packet and optical layers merge to provide a converged view of the topology, diverse path computation becomes a challenge. In a merged control plane, all SRLGs from the optical topology are leaked into the packet control plane. This exposes the fact that all packet interfaces may be relying on the same ROADM node, which makes diverse path computation impossible because of common SRLGs. This is also a challenge for Topology-Independent Loop-Free Alternate (TI-LFA) protection calculation.
FIG. 6 is an example of an SRLG configuration 400. As shown, all packet interfaces from site Y have to go through ROADM A, and all packet interfaces from site Z have to go through ROADM B. All diverse path computations out of the sites Y, Z will share the ROADM SRLG for the ROADMs A, B, respectively, and hence will not be diverse.
Referring back to FIG. 5, of note, at the network element 1, the SRLG 100001 is common to all links and thus is unavoidable. Also, the other network elements have similar unavoidable SRLGs.
Another example of an unavoidable SRLG would be any SRLG assigned to the site (e.g., Point of Presence (POP)) where the source and destination nodes are located, e.g., the building, chassis, or the node itself. If there is only one line card facing towards the destination, then the SRLG associated with that line card is also unavoidable. All such SRLGs will qualify as unavoidable SRLGs.
Existing Solutions
A Shared SRLG concept was introduced to exclude certain SRLGs which are unavoidable for a given calculation because of topology constraints, e.g., a single ROADM node through which all ports are connected. The shared SRLG concept introduced Command Line Interfaces (CLIs) to specifically call out SRLGs that are shared (or should be ignored).
For Fast Reroute (FRR), a new CLI was added, e.g.,
    • mpls tunnel-auto-fb-profile set auto-fb-profile if-5-8-fb-profile share-srlg-node 21,22 share-srlg-link 21,22
For Dynamic Co-Routed Tunnels (DCRT) tunnels, new CLI options were introduced as part of backup path command, e.g.,
    • gmpls tp-tunnel create rsvp-ingress-corout prot gmpls tp node1 dest-ip 1.1.1.1 setup-priority 2 hold-priority 2 sticky-lsp on auto-backup on path-diverse srlg-and-link strict bfd-monitor enable bfd-profile BFD-10 ms backup-resource-include-all blue share-srlg 21,22 resource-include-all red lsp-reopt enable increment-bandwidth 10000 auto-size enable auto-size-interval 30
This is a configuration intensive process. Every node must be configured with shared-srlg, and every technology type has its own configuration for this information even though it is used only for diverse path and/or protection computation.
For strict and loose SRLGs, this concept is very similar to the shared SRLG concept. Strict means the SRLG is mandatory to include in the computation. By default, all SRLGs are mandatory. Loose means the SRLG is optional in the computation and can be ignored. Every node has its own configuration for SRLGs, and when a diverse path computation fails, a recompute is done after ignoring the SRLGs that are marked as loose. Like the shared SRLG concept, this approach is also configuration intensive, with the added burden of at least one path computation failure.
Definitions
The present disclosure removes the configuration intensive approach to identifying unavoidable SRLGs and automatically detects them as part of path computation as well as flooding them in the network 10.
As described herein, an SRLG, S, is unavoidable if:
    • (1) a path cannot egress the headend (source) node without traversing the resource marked with S, and
    • (2) the path cannot ingress the tailend (destination) node without traversing the resource marked with S.
A collection of the unavoidable SRLGs may be included in an unavoidable SRLG list on the headend and tailend, and the two lists may or may not have common elements. This list of unavoidable SRLGs can be called an ignore list.
Local Ignore List—A local ignore list is a collection of unavoidable SRLGs that affect a path computation because they represent a resource directly connected to or on the source node. SRLGs in this list need to be ignored on the source to reach a given prefix. For example, the local ignore list on node 1 to get to any node would be: Local ignore list = {100001}.
Remote Ignore List—A remote ignore list is a collection of unavoidable SRLGs that affect a path computation because they represent a resource directly connected to or on the destination node. This is a per-prefix list of SRLGs on the destination node that must be ignored on the source node during path computation. For example: Remote ignore list = {100009}.
Global Ignore List—A global ignore list is a collection of unavoidable SRLGs with global scope. It is the union of all local and remote ignore lists configured or learnt in the network. Global ignore list = Local ignore list ∪ Remote ignore list = {100001, 100009}.
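The set relationships among the three lists can be sketched as follows (a minimal illustration in Python; the SRLG values are the example values above, and the variable names are illustrative, not from the disclosure):

```python
# Illustrative sketch of the ignore-list definitions above.
# SRLG values follow the FIG. 5 example; variable names are not from the patent.
local_ignore = {100001}   # unavoidable SRLGs at the source node
remote_ignore = {100009}  # unavoidable SRLGs at the destination node

# The global ignore list is the union of all local and remote ignore lists.
global_ignore = local_ignore | remote_ignore
print(sorted(global_ignore))  # [100001, 100009]
```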
Knowledge of the local and remote ignore lists is a must for diverse or protection path computation. The following are proposed approaches for learning the local and remote ignore lists.
    • (1) Configuration of local and remote ignore lists on every node.
    • (2) Configuration of local ignore list on every node and flooded to the network via IGP.
    • (3) Configuration of global ignore list on every node, local and remote ignore lists are derived at path compute time.
    • (4) Encoding unavoidable nature of SRLG in the SRLG value and flooding this mask in the network via IGP.
    • (5) Auto generate local and remote ignore lists before path compute.
      Configuration Based Solution
A global ignore list can be provided at configuration time. This global configuration will need to be configured on every node in the administrative domain that will be a head-end to a path. Every newly added node will require this configuration, and all the nodes will require an update to their ignore lists with the unavoidable SRLGs from the newly added node.
    • mpls traffic-eng set global unavoidable-srlg 100001, 100002, 100003, 100006, 100005
      Configuring a Mask to Identify Unavoidable SRLG
Alternatively, all unavoidable SRLGs can be configured with a specific bit set, and all the nodes in the administrative domain can be configured with a bitmask that enables them to test whether an SRLG is unavoidable or not. Every new node will need to be configured with this mask. Existing nodes in the network will not require an update to their configuration. As an extension to IGP (Interior Gateway Protocol), e.g., ISIS, this information can be flooded with a new sub-TLV type, added to the extended reachability TLV, specifically defined to carry SRLG bitmasks. This will require automatically flooding the bitmask for unavoidable SRLGs.
Consider the following bitmask in FIG. 7. Bit position 1 can be reserved for unavoidable SRLGs. An IGP CLI can be introduced to configure this bitmask, which can then be flooded in the network. Using this bitmask, nodes can build their global ignore list by testing every configured and learnt SRLG against the bitmask and adding matches to the ignore set.
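As a sketch of the test, assuming (purely for illustration) that the reserved bit is the lowest-order bit of the SRLG value; the actual mask would come from the IGP configuration:

```python
# Hypothetical mask with bit position 1 (the lowest-order bit) reserved
# for "unavoidable"; the real mask value would be flooded via IGP.
UNAVOIDABLE_MASK = 0b1

def is_unavoidable(srlg: int, mask: int = UNAVOIDABLE_MASK) -> bool:
    # An SRLG is classified as unavoidable when all mask bits are set in it.
    return (srlg & mask) == mask

# Build the global ignore list by testing every configured and learnt SRLG.
learnt_srlgs = [100001, 100002, 200105]  # illustrative values
global_ignore = {s for s in learnt_srlgs if is_unavoidable(s)}
print(sorted(global_ignore))  # [100001, 200105]
```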
Note, all the approaches mentioned above require some form of configuration: either an explicit configuration or a bitmask to determine the SRLG type.
Automatic Building of Ignore List
An unavoidable SRLG must be ignored because, at the source and destination of a path, it covers all paths; without ignoring it, no path can be calculated. If we take the intersection of all locally configured SRLGs on all interfaces, we get a set L; this set gives us the unavoidable SRLGs for the local node. Similarly, we can compute the destination's unavoidable SRLG set by isolating only the SRLGs advertised by the destination on a link-by-link basis and then taking their intersection. If there is more than one ROADM node in the middle of the path, only the interfaces through which there is reachability to the destination should be considered. This step can become part of path computation to make sure the computation can keep up with network changes.
Local Unavoidable SRLG (Local Ignore List)
The following steps can be used to compute the local ignore list for a path calculation to destination prefix D.
    • (1) Find all interfaces through which prefix D is reachable.
    • (2) Take the intersection of all SRLGs configured on those interfaces.
    • (3) The resulting set will be the local ignore list for that prefix.
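Steps (1) through (3) amount to a set intersection over the egress interfaces. A minimal sketch, using the node 1 interface SRLGs from the FIG. 5 example configuration (the helper name is illustrative):

```python
# Sketch of the local ignore list computation: intersect the SRLG sets of
# all interfaces through which prefix D is reachable.
from functools import reduce

def local_ignore_list(interface_srlgs):
    """interface_srlgs: one SRLG set per interface through which D is reachable."""
    if not interface_srlgs:
        return set()
    return reduce(set.intersection, interface_srlgs)

# Node 1's four interfaces from the FIG. 5 example configuration:
interfaces = [
    {100001, 200105, 100005},
    {100001, 200102, 100002, 200203, 100003, 200306, 100006, 100005},
    {100001, 200105, 100005, 200506, 100006},
    {100001, 200102, 100002, 100003, 200306, 100006},
]
print(local_ignore_list(interfaces))  # {100001}
```

Only SRLG 100001 appears on every interface, matching the local ignore list given in the definitions above.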
      Remote Unavoidable SRLG (Remote Ignore List)
The remote ignore list calculation can be done in multiple ways.
    • (1) The source can compute all possible paths to the destination to find all the ingress interfaces for the destination. The intersection of the SRLGs from this interface list will give us the remote ignore list.
    • (2) Compute the local ignore list from the destination's perspective with the source node as the destination. This will be faster but may not be accurate as reachability may not be symmetric.
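Both approaches reduce to the same intersection, differing only in how the interface set is obtained (all ingress interfaces found via path computation, versus the destination's egress interfaces under a symmetric-reachability assumption). A sketch with illustrative SRLG values for destination node 9:

```python
# Sketch of the remote ignore list: intersect the SRLG sets of the
# destination's ingress interfaces (approach 1), or of its egress
# interfaces toward the source (approach 2, assumes symmetric reachability).
from functools import reduce

def remote_ignore_list(interface_srlgs):
    if not interface_srlgs:
        return set()
    return reduce(set.intersection, interface_srlgs)

# Illustrative ingress interfaces on destination node 9:
ingress = [{100009, 200709, 100007}, {100009, 200809, 100008}]
print(remote_ignore_list(ingress))  # {100009}
```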
      Path Computation with Automatic Identification of Unavoidable SRLGs
The ignore lists can be automatically created before path computation as well as determined during path computation. Of note, the unavoidable SRLG ignore lists (local and remote) are automatically determined in the present disclosure, removing the need for complex manual configuration. Referring to FIG. 5, assume we want to compute a path from node 1 to node 9. Let us consider the following configuration, which would be applied on node 1 in the topology shown above. This shows the standard configuration for the SRLGs configured:
    • mpls traffic-eng set ip-interface if-1-5-1 srlg 100001, 200105, 100005
    • mpls traffic-eng set ip-interface if-1-5-2 srlg 100001, 200102, 100002, 200203, 100003, 200306, 100006, 100005
    • mpls traffic-eng set ip-interface if-1-6-1 srlg 100001, 200105, 100005, 200506, 100006
    • mpls traffic-eng set ip-interface if-1-6-2 srlg 100001, 200102, 100002, 100003, 200306, 100006
The first step in path computation is to build the local and remote ignore lists. Alternatively, it is possible to build the local and remote ignore lists at path computation runtime; namely, in a k-shortest path computation, if all k paths have the same SRLGs at the source and destination, these SRLGs can be automatically added to (or used to create) the local and remote ignore lists.
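The runtime variant can be sketched as follows: if all k candidate paths share SRLGs on their first hop (or last hop), those SRLGs are unavoidable at the source (or destination). The path representation and names here are illustrative assumptions:

```python
# Sketch of deriving ignore lists during a k-shortest path computation.
# Each path is modeled as a list of per-link SRLG sets (illustrative).
from functools import reduce

def runtime_ignore_lists(k_paths):
    local = reduce(set.intersection, [p[0] for p in k_paths])    # first links
    remote = reduce(set.intersection, [p[-1] for p in k_paths])  # last links
    return local, remote

p1 = [{100001, 200105}, {100005}, {100009, 200709}]
p2 = [{100001, 200102}, {100006}, {100009, 200809}]
print(runtime_ignore_lists([p1, p2]))  # ({100001}, {100009})
```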
Assume in the path computation, a path is computed from the source node 1 to the destination node 9 via intermediate node 5. There are three protection scenarios.
    • (1) A path diverse of the path from nodes 1-5-9.
    • (2) TI-LFA path for the protection of links between node 1 and node 5.
    • (3) TI-LFA path for node protection for node 5.
      (1) A Path Diverse of the Path from Nodes 1-5-9.
Before or while running Dijkstra's algorithm for the SPF calculation, the local ignore list must be calculated. Of note, Dijkstra's algorithm is a common SPF calculation, and those skilled in the art will appreciate the present disclosure contemplates any path computation algorithm and is not limited to Dijkstra's algorithm.
Based on the process described above, the local ignore list for source 1 and destination 9 is L, and the remote ignore list for source 1 and destination 9 is R. L and R are sets of SRLGs that can be ignored. For the existing path, link 1-5-2 is used from node 1 to node 5. The SRLG list for this link is S1={100001, 200102, 100002, 200203, 100003, 200306, 100006, 100005}. From node 5 to node 9, link 5-9-2 is used. The SRLG list for this link is S2={100005, 200507, 100007, 200709, 100009}.
The SRLG list to be considered for the path calculation will be S = S1 ∪ S2.
Before or while running Dijkstra's algorithm, links/nodes that should not be considered require pruning. For a link not associated with either the source or the destination, SRLG set S will be used for pruning.
For pruning links associated with the source, set SS will be used, where SS=S−L.
For pruning links associated with the destination, set SD will be used, where SD=S−R.
Once the tree is pruned, Dijkstra's algorithm can be used on it. Also, this pruning can be done at runtime.
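The pruning rule above can be sketched as follows, using the sets from the example (the function and the way links carry SRLG sets are illustrative, not from the disclosure):

```python
# Sketch of pruning before Dijkstra: a link is excluded when its SRLGs
# overlap the existing path's SRLG set S, using SS = S - L for links at
# the source and SD = S - R for links at the destination.
def prunable(link_srlgs, S, L, R, at_source=False, at_dest=False):
    if at_source:
        compare = S - L   # SS: unavoidable source SRLGs are ignored
    elif at_dest:
        compare = S - R   # SD: unavoidable destination SRLGs are ignored
    else:
        compare = S
    return bool(link_srlgs & compare)

# Sets from the 1-5-9 example above:
S1 = {100001, 200102, 100002, 200203, 100003, 200306, 100006, 100005}
S2 = {100005, 200507, 100007, 200709, 100009}
S, L, R = S1 | S2, {100001}, {100009}

# A source link carrying only the unavoidable SRLG 100001 is not pruned:
print(prunable({100001}, S, L, R, at_source=True))                  # False
# A source link also carrying 100005 (shared with the path) is pruned:
print(prunable({100001, 200105, 100005}, S, L, R, at_source=True))  # True
```

Without subtracting L, the first link would be pruned as well and no diverse path out of node 1 could ever be found, which is exactly why the unavoidable SRLGs must be ignored.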
TI-LFA Path for the Protection of Links Between Node 1 and Node 5
Preparation for computing the TI-LFA path is the same as described above for the diverse path. Once the pruned tree is ready, the TI-LFA calculation can be run on it.
TI-LFA Path for Node Protection for Node 5
For node protection, node 5 is pruned out of the tree. Using the prefix being protected as the destination, the alternate path calculation process described above can be used to calculate the TI-LFA post-convergence path.
Process for Dynamic Path Computation in Networks Based on Automatically Detected Unavoidable Risks
FIG. 8 is a flowchart of a process 400 for dynamic path computation in networks based on automatically detected unavoidable risks. The process 400 contemplates implementation as a method, execution via a processing device such as the controller 200, the network element 12, a management system, a PCE, an SDN controller, etc., and as instructions stored in a non-transitory computer-readable medium that, when executed, cause one or more processors to perform the process 400.
The process 400 includes receiving a plurality of shared risks associated with any of one or more network layers, network links, and network equipment (step 402); automatically creating a local ignore list for a source node and a remote ignore list for a destination node, based on the plurality of shared risks (step 404); and utilizing the plurality of shared risks in a path computation for a path between the source node and the destination node and ignoring any of the plurality of shared risks in the local ignore list and the remote ignore list (step 406).
The local ignore list can include local shared risks of the plurality of shared risks that the path cannot egress the source node without traversing the local shared risks, and the remote ignore list can include remote shared risks of the plurality of shared risks that the path cannot ingress the destination node without traversing the remote shared risks.
The automatically creating the local ignore list can include steps of determining all egress interfaces at the source node through which the destination node is reachable; performing an intersection of all shared risks of the plurality of shared risks on the egress interfaces; and providing the intersection as the local ignore list.
The automatically creating the remote ignore list can include steps of computing all possible paths to the destination to determine all ingress interfaces for the destination; performing an intersection of all shared risks of the plurality of shared risks on the ingress interfaces; and providing the intersection as the remote ignore list. The automatically creating the remote ignore list can also include steps of determining all egress interfaces at the destination node through which the source node is reachable; performing an intersection of all shared risks of the plurality of shared risks on the egress interfaces; and providing the intersection as the remote ignore list. Note, this approach assumes symmetric connectivity between the source and destination.
The local ignore list is a first set of the plurality of shared risks denoted as L, the remote ignore list is a second set of the plurality of shared risks denoted as R, a third set of the plurality of shared risks associated with the path is denoted as S, and the process 400 can further include pruning a source set of the plurality of shared risks, SS, as S−L; pruning a destination set of the plurality of shared risks, SD, as S−R; and utilizing the source set and the destination set in the path computation.
The automatically creating can include a k-shortest path computation and taking an intersection of the plurality of shared risks at the source and the destination on all k shortest paths. The path computation can be one of a diverse path, Topology-Independent Loop-Free Alternate (TI-LFA) protection of links, and TI-LFA protection of a node. The network can include an optical topology and a packet topology sharing a common control plane. The automatically creating can be performed at runtime of the path computation.
CONCLUSION
It will be appreciated that some embodiments described herein may include one or more generic or specialized processors (“one or more processors”) such as microprocessors; central processing units (CPUs); digital signal processors (DSPs); customized processors such as network processors (NPs) or network processing units (NPUs), graphics processing units (GPUs), or the like; field programmable gate arrays (FPGAs); and the like along with unique stored program instructions (including both software and firmware) for control thereof to implement, in conjunction with certain non-processor circuits, some, most, or all of the functions of the methods and/or systems described herein. Alternatively, some or all functions may be implemented by a state machine that has no stored program instructions, or in one or more application-specific integrated circuits (ASICs), in which each function or some combinations of certain of the functions are implemented as custom logic or circuitry. Of course, a combination of the aforementioned approaches may be used. For some of the embodiments described herein, a corresponding device in hardware and optionally with software, firmware, and a combination thereof can be referred to as “circuitry configured or adapted to,” “logic configured or adapted to,” etc. perform a set of operations, steps, methods, processes, algorithms, functions, techniques, etc. on digital and/or analog signals as described herein for the various embodiments.
Moreover, some embodiments may include a non-transitory computer-readable storage medium having computer-readable code stored thereon for programming a computer, server, appliance, device, processor, circuit, etc. each of which may include a processor to perform functions as described and claimed herein. Examples of such computer-readable storage mediums include, but are not limited to, a hard disk, an optical storage device, a magnetic storage device, a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), Flash memory, and the like. When stored in the non-transitory computer-readable medium, software can include instructions executable by a processor or device (e.g., any type of programmable circuitry or logic) that, in response to such execution, cause a processor or the device to perform a set of operations, steps, methods, processes, algorithms, functions, techniques, etc. as described herein for the various embodiments.
Although the present disclosure has been illustrated and described herein with reference to preferred embodiments and specific examples thereof, it will be readily apparent to those of ordinary skill in the art that other embodiments and examples may perform similar functions and/or achieve like results. All such equivalent embodiments and examples are within the spirit and scope of the present disclosure, are contemplated thereby, and are intended to be covered by the following claims. The foregoing sections include headers for various embodiments and those skilled in the art will appreciate these various embodiments may be used in combination with one another as well as individually.

Claims (19)

What is claimed is:
1. A non-transitory computer-readable medium comprising instructions that, when executed, cause one or more processors to perform steps of:
receiving a plurality of shared risks associated with any of one or more network layers, network links, and network equipment;
automatically creating a local ignore list for a source node and a remote ignore list for a destination node, based on the plurality of shared risks, wherein the automatically creating includes a k-shortest path computation and taking an intersection of the plurality of shared risks at the source and the destination on all k shortest paths; and
utilizing the plurality of shared risks in a path computation for a path between the source node and the destination node and ignoring any of the plurality of shared risks in the local ignore list and the remote ignore list.
2. The non-transitory computer-readable medium of claim 1, wherein
the local ignore list includes local shared risks of the plurality of shared risks that the path cannot egress the source node without traversing the local shared risks, and
the remote ignore list includes remote shared risks of the plurality of shared risks that the path cannot ingress the destination node without traversing the remote shared risks.
3. The non-transitory computer-readable medium of claim 1, wherein the automatically creating the local ignore list includes steps of
determining all egress interfaces at the source node through which the destination node is reachable;
performing an intersection of all shared risks of the plurality of shared risks on the egress interfaces; and
providing the intersection as the local ignore list.
4. The non-transitory computer-readable medium of claim 1, wherein the automatically creating the remote ignore list includes steps of
computing all possible paths to the destination to determine all ingress interfaces for the destination;
performing an intersection of all shared risks of the plurality of shared risks on the ingress interfaces; and
providing the intersection as the remote ignore list.
5. The non-transitory computer-readable medium of claim 1, wherein the automatically creating the remote ignore list includes steps of
determining all egress interfaces at the destination node through which the source node is reachable;
performing an intersection of all shared risks of the plurality of shared risks on the egress interfaces; and
providing the intersection as the remote ignore list.
6. The non-transitory computer-readable medium of claim 1, wherein the local ignore list is a first set of the plurality of shared risks denoted as L, wherein the remote ignore list is a second set of the plurality of shared risks denoted as R, wherein a third set of the plurality of shared risks associated with the path is denoted as S, and wherein the steps further include
pruning a source set of the plurality of shared risks, SS, as S−L;
pruning a destination set of the plurality of shared risks, SD, as S−R; and
utilizing the source set and the destination set in the path computation.
7. The non-transitory computer-readable medium of claim 1, wherein the path computation is one of a diverse path, Topology-Independent Loop-Free Alternate (TI-LFA) protection of links, and TI-LFA protection of a node.
8. The non-transitory computer-readable medium of claim 1, wherein the network includes an optical topology and a packet topology sharing a common control plane.
9. The non-transitory computer-readable medium of claim 1, wherein the automatically creating is performed at runtime of the path computation.
10. A non-transitory computer-readable medium comprising instructions that, when executed, cause one or more processors to perform steps of:
receiving a plurality of shared risks associated with any of one or more network layers, network links, and network equipment;
automatically creating a local ignore list for a source node and a remote ignore list for a destination node, based on the plurality of shared risks; and
utilizing the plurality of shared risks in a path computation for a path between the source node and the destination node and ignoring any of the plurality of shared risks in the local ignore list and the remote ignore list,
wherein the local ignore list is a first set of the plurality of shared risks denoted as L, wherein the remote ignore list is a second set of the plurality of shared risks denoted as R, wherein a third set of the plurality of shared risks associated with the path is denoted as S, and wherein the steps further include
pruning a source set of the plurality of shared risks, SS, as S−L;
pruning a destination set of the plurality of shared risks, SD, as S−R; and
utilizing the source set and the destination set in the path computation.
11. The non-transitory computer-readable medium of claim 10, wherein
the local ignore list includes local shared risks of the plurality of shared risks that the path cannot egress the source node without traversing the local shared risks, and
the remote ignore list includes remote shared risks of the plurality of shared risks that the path cannot ingress the destination node without traversing the remote shared risks.
12. The non-transitory computer-readable medium of claim 10, wherein the automatically creating the remote ignore list includes steps of
determining all egress interfaces at the destination node through which the source node is reachable;
performing an intersection of all shared risks of the plurality of shared risks on the egress interfaces; and
providing the intersection as the remote ignore list.
13. The non-transitory computer-readable medium of claim 10, wherein the path computation is one of a diverse path, Topology-Independent Loop-Free Alternate (TI-LFA) protection of links, and TI-LFA protection of a node.
14. The non-transitory computer-readable medium of claim 10, wherein the automatically creating is performed at runtime of the path computation.
15. A non-transitory computer-readable medium comprising instructions that, when executed, cause one or more processors to perform steps of:
receiving a plurality of shared risks associated with any of one or more network layers, network links, and network equipment;
automatically creating a local ignore list for a source node and a remote ignore list for a destination node, based on the plurality of shared risks; and
utilizing the plurality of shared risks in a path computation for a path between the source node and the destination node and ignoring any of the plurality of shared risks in the local ignore list and the remote ignore list,
wherein the automatically creating the remote ignore list includes steps of
computing all possible paths to the destination to determine all ingress interfaces for the destination;
performing an intersection of all shared risks of the plurality of shared risks on the ingress interfaces; and
providing the intersection as the remote ignore list.
16. The non-transitory computer-readable medium of claim 15, wherein
the local ignore list includes local shared risks of the plurality of shared risks that the path cannot egress the source node without traversing the local shared risks, and
the remote ignore list includes remote shared risks of the plurality of shared risks that the path cannot ingress the destination node without traversing the remote shared risks.
17. The non-transitory computer-readable medium of claim 15, wherein the path computation is one of a diverse path, Topology-Independent Loop-Free Alternate (TI-LFA) protection of links, and TI-LFA protection of a node.
18. The non-transitory computer-readable medium of claim 15, wherein the network includes an optical topology and a packet topology sharing a common control plane.
19. The non-transitory computer-readable medium of claim 15, wherein the automatically creating is performed at runtime of the path computation.
US17/897,675 2022-08-29 2022-08-29 Dynamic path computation in networks based on automatically detected unavoidable risks Active 2043-09-22 US12267231B2 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
US17/897,675 US12267231B2 (en) 2022-08-29 2022-08-29 Dynamic path computation in networks based on automatically detected unavoidable risks
CN202380063094.4A CN119856474A (en) 2022-08-29 2023-08-22 Dynamic path computation in networks based on automatically detected unavoidable risks
PCT/US2023/030834 WO2024049678A1 (en) 2022-08-29 2023-08-22 Dynamic path computation in networks based on automatically detected unavoidable risks
EP23772637.7A EP4581810A1 (en) 2022-08-29 2023-08-22 Dynamic path computation in networks based on automatically detected unavoidable risks

Publications (2)

Publication Number Publication Date
US20240073125A1 US20240073125A1 (en) 2024-02-29
US12267231B2 true US12267231B2 (en) 2025-04-01

Family

ID=88093788

Family Applications (1)

Application Number Title Priority Date Filing Date
US17/897,675 Active 2043-09-22 US12267231B2 (en) 2022-08-29 2022-08-29 Dynamic path computation in networks based on automatically detected unavoidable risks

Country Status (4)

Country Link
US (1) US12267231B2 (en)
EP (1) EP4581810A1 (en)
CN (1) CN119856474A (en)
WO (1) WO2024049678A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US12489700B2 (en) 2024-05-10 2025-12-02 Ciena Corporation Automated ORF propagation in BGP networks

Citations (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020191573A1 (en) 2001-06-14 2002-12-19 Whitehill Eric A. Embedded routing algorithms under the internet protocol routing layer of a software architecture protocol stack in a mobile Ad-Hoc network
US20030131130A1 (en) 2002-01-08 2003-07-10 Menachem Malkosh Method and device for selecting a communication path
US20050025058A1 (en) * 2003-07-30 2005-02-03 Siddheswar Chaudhuri Method for stochastic selection of improved cost metric backup paths in shared-mesh protection networks
US20060004916A1 (en) 2002-10-14 2006-01-05 Diego Caviglia Communications system
US20060140190A1 (en) * 2004-12-23 2006-06-29 Alcatel Method and apparatus for configuring a communication path
US20080130491A1 (en) 2006-11-02 2008-06-05 Hung-Hsiang Jonathan Chao Determining rerouting information for double-link failure recovery in an internet protocol network
US20080170857A1 (en) 2006-10-16 2008-07-17 Fujitsu Network Communications, Inc. System and Method for Establishing Protected Connections
US20100172236A1 (en) 2009-01-08 2010-07-08 Vagish Madrahalli Methods and systems for mesh restoration based on associated hop designated transit lists
US20120075988A1 (en) 2010-09-29 2012-03-29 Wenhu Lu Fast flooding based fast convergence to recover from network failures
US8296407B2 (en) 2003-03-31 2012-10-23 Alcatel Lucent Calculation, representation, and maintenance of sharing information in mesh networks
US20120307644A1 (en) 2011-06-02 2012-12-06 Cisco Technology, Inc. System and method for link protection using shared srlg association
US20130003605A1 (en) 2010-10-25 2013-01-03 Level 3 Communications, Llc Network optimization
US20130010589A1 (en) 2011-07-06 2013-01-10 Sriganesh Kini Mpls fast re-route using ldp (ldp-frr)
US8456984B2 (en) 2010-07-19 2013-06-04 Ciena Corporation Virtualized shared protection capacity
US8515280B1 (en) 2010-12-13 2013-08-20 At&T Intellectual Property I, L.P. Physically-diverse routing in heterogeneous optical networks
US20130223225A1 (en) 2012-02-23 2013-08-29 Cisco Technology, Inc. Computing risk-sharing metrics in shared-media communication networks
US20140126355A1 (en) 2012-10-05 2014-05-08 Cisco Technology, Inc. Identifying, translating and filtering shared risk groups in communications networks
US20140147107A1 (en) 2012-11-27 2014-05-29 Gerard Leo SWINKELS Drop port based shared risk link group systems and methods
US20140226967A1 (en) 2013-02-12 2014-08-14 Infinera Corp. Demand Advertisement Method for Shared Mesh Protection Path Computation
US8824334B2 (en) 2005-04-07 2014-09-02 Cisco Technology, Inc. Dynamic shared risk node group (SRNG) membership discovery
US20140258486A1 (en) 2013-03-10 2014-09-11 Clarence Filsfils Server-Layer Shared Link Risk Group Analysis to Identify Potential Client-Layer Network Connectivity Loss
US8854955B2 (en) 2012-11-02 2014-10-07 Ciena Corporation Mesh restoration and bandwidth allocation systems and methods for shared risk connection groups
US8867333B2 (en) 2003-03-31 2014-10-21 Alcatel Lucent Restoration path calculation considering shared-risk link groups in mesh networks
US20140355419A1 (en) * 2013-05-31 2014-12-04 Telefonaktiebolaget L M Ericsson (Publ) Pseudo wire end-to-end redundancy setup over disjoint mpls transport paths
US20150295673A1 (en) 2014-04-10 2015-10-15 Fujitsu Limited Efficient utilization of transceivers for shared restoration in flexible grid optical networks
US9167318B1 (en) 2012-08-07 2015-10-20 Ciena Corporation Bandwidth advertisement systems and methods for optical transport network
US20160112327A1 (en) 2014-10-17 2016-04-21 Ciena Corporation Optical and packet path computation and selection systems and methods
US20160164739A1 (en) 2014-12-09 2016-06-09 Ciena Corporation Reduced link bandwidth update systems and methods for improved scalability, efficiency, and performance
US9497521B2 (en) 2014-04-30 2016-11-15 Ciena Corporation Opportunity based path computation systems and methods in constraint-based routing
US20170063658A1 (en) 2015-08-26 2017-03-02 Huawei Technologies Co., Ltd. Shared Risk Group Vicinities and Methods
US20180097725A1 (en) 2016-09-30 2018-04-05 Juniper Networks, Inc. Multiple paths computation for label switched paths
US20180191635A1 (en) 2015-06-30 2018-07-05 British Telecommunications Public Limited Company Communications network
US20180262421A1 (en) 2017-03-08 2018-09-13 Ciena Corporation Efficient shared risk group representation as a bit vector
US20210119903A1 (en) 2019-10-22 2021-04-22 Ciena Corporation Permitted network risks in diverse route determinations
US20220086078A1 (en) * 2020-09-11 2022-03-17 Ciena Corporation Segment Routing Traffic Engineering (SR-TE) with awareness of local protection

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Dec. 11, 2020, European Search Report and the Annex to the European Search Report on European Patent Application No. EP 20 20 3025.
Nov. 3, 2023, International Search Report and Written Opinion for International Application No. PCT/US2023/030834.
Wang et al., "Novel disjoint path selection scheme based on link availability in ASON", Journal of China Universities of Posts and Telecommunications, Beijing Youdian Daxue, CN, vol. 14, No. 3, Sep. 1, 2007, pp. 70-73, XP022938027, ISSN: 1005-8885.

Also Published As

Publication number Publication date
CN119856474A (en) 2025-04-18
US20240073125A1 (en) 2024-02-29
WO2024049678A1 (en) 2024-03-07
EP4581810A1 (en) 2025-07-09

Similar Documents

Publication Publication Date Title
US11356356B2 (en) Permitted network risks in diverse route determinations
US11240145B2 (en) Shared risk representation in networks for troubleshooting, assignment, and propagation across layers
US10560212B2 (en) Systems and methods for mesh restoration in networks due to intra-node faults
US9831977B2 (en) Photonic routing systems and methods computing loop-free topologies
US10097306B1 (en) Path computation systems and methods in control plane based networks for photonic layer topologies
US10003867B2 (en) Disjoint path computation systems and methods in optical networks
US10187144B2 (en) Multi-layer network resiliency systems and methods
US9848049B2 (en) Service preemption selection systems and methods in networks
US9832548B2 (en) Flexible behavior modification during restoration in optical networks
US10355935B2 (en) Reduced link bandwidth update systems and methods for improved scalability, efficiency, and performance
US9553661B2 (en) Adaptive preconfiguration in optical transport network
US20180102834A1 (en) Partial survivability for multi-carrier and multi-module optical interfaces
US9985724B2 (en) Horizontal synchronization extensions for service resizing in optical networks
US10116552B2 (en) Efficient shared risk group representation as a bit vector
EP3869747B1 (en) Resolving label depth and protection in segment routing
US10382276B2 (en) Control plane routing systems and methods for pervasive maintenance
US12267231B2 (en) Dynamic path computation in networks based on automatically detected unavoidable risks
US10491318B1 (en) Systems and methods for coordinating layer 1 and layer 2 protection switching techniques for efficient layer 2 traffic recovery
US10079753B2 (en) Multi-drop unidirectional services in a network
US12267102B2 (en) Prioritizing optical routes for restoration based on failure impact on an IP layer
US20250080882A1 (en) Advertising an IP address of loopback interfaces to participating OSPF areas
Tsirakakis, "Fault Recovery in Carrier Ethernet, Optical and GMPLS Networks"

Legal Events

Date Code Title Description
AS Assignment

Owner name: CIENA CORPORATION, MARYLAND

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:YADAV, BHUPENDRA;VAITHILINGAM, PRABHU;SMALLEGANGE, GERALD;SIGNING DATES FROM 20220824 TO 20220829;REEL/FRAME:060928/0407

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE