US20150043326A1 - Redundant network connections - Google Patents

Redundant network connections

Info

Publication number
US20150043326A1
Authority
US
United States
Prior art keywords
fault
edge device
provider edge
active gateway
customer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/521,174
Inventor
Don Fedyk
Shafiq Pirbhai
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nokia Canada Inc
Alcatel Lucent SAS
Original Assignee
Alcatel Lucent SAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alcatel Lucent SAS
Priority to US14/521,174
Assigned to ALCATEL-LUCENT USA: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: FEDYK, DON
Assigned to ALCATEL-LUCENT CANADA, INC.: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: PIRBHAI, SHAFIQ
Assigned to ALCATEL-LUCENT: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ALCATEL-LUCENT USA, INC.
Assigned to ALCATEL-LUCENT: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ALCATEL-LUCENT CANADA, INC.
Publication of US20150043326A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 24/00: Supervisory, monitoring or testing arrangements
    • H04W 24/04: Arrangements for maintaining operational condition
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/06: Management of faults, events, alarms or notifications
    • H04L 41/0654: Management of faults, events, alarms or notifications using network fault recovery
    • H04L 41/0668: Management of faults, events, alarms or notifications using network fault recovery by dynamic selection of recovery network elements, e.g. replacement by the most appropriate element after failure
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 43/00: Arrangements for monitoring or testing data switching networks
    • H04L 43/08: Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L 43/0805: Monitoring or testing based on specific metrics, by checking availability
    • H04L 43/0811: Monitoring or testing based on specific metrics, by checking availability by checking connectivity
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00: Routing or path finding of packets in data switching networks
    • H04L 45/22: Alternate routing
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00: Routing or path finding of packets in data switching networks
    • H04L 45/28: Routing or path finding of packets in data switching networks using route fault recovery
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 88/00: Devices specially adapted for wireless communication networks, e.g. terminals, base stations or access point devices
    • H04W 88/16: Gateway arrangements

Definitions

  • Various exemplary embodiments disclosed herein relate generally to telecommunications networks.
  • Many computer networks are implemented as geographically-distributed, multi-tiered, and multi-technology associations of computing devices. To enable communication between two devices, traffic may pass through numerous intermediate devices according to many different protocols, such as the multi-protocol label switching (MPLS) protocol.
  • Various exemplary embodiments relate to a method performed by a provider edge device for enabling connection redundancy, the method including one or more of the following: performing an active gateway election to determine whether the provider edge device will be an active gateway for a connection; if the provider edge device will be the active gateway for the connection, indicating to a customer edge device that no fault is currently associated with a link between the customer edge device and the provider edge device; and if the provider edge device will not be the active gateway for the connection, indicating to the customer edge device that a fault is currently associated with the link between the customer edge device and the provider edge device.
  • Various exemplary embodiments relate to a provider edge device for enabling connection redundancy, the provider edge device including one or more of the following: a customer edge interface configured to communicate with a customer edge device; an active gateway election module configured to determine whether the provider edge device will be an active gateway for a connection; and a fault reporting module configured to: if the active gateway election module determines that the provider edge device will be the active gateway for the connection, indicate to the customer edge device that no fault is currently associated with a link between the customer edge device and the provider edge device, and if the active gateway election module determines that the provider edge device will not be the active gateway for the connection, indicate to the customer edge device that a fault is currently associated with the link between the customer edge device and the provider edge device.
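The claimed behavior can be pictured with a short sketch. This is a hypothetical illustration, not the patent's implementation: the election result is reduced to a boolean, and the BGP-MH tie-breaking is stubbed out.

```python
# Hypothetical sketch of the claimed provider-edge behavior: run an active
# gateway (AG) election, then report link status to the customer edge (CE)
# accordingly. All names and signatures are illustrative only.

def run_ag_election(local_faulty: bool, peer_faulty: bool) -> bool:
    """Return True if this PE should be the active gateway.

    Simplified stand-in for the BGP multi-homing designated-forwarder
    election: a fault-free PE wins over a faulty peer; a tie falls back
    to a fixed preference here instead of real BGP-MH attributes.
    """
    if not local_faulty and peer_faulty:
        return True
    if local_faulty and not peer_faulty:
        return False
    return True  # placeholder tie-breaker

def report_link_status(is_active_gateway: bool) -> str:
    """Tell the CE the link is clean only when this PE is the AG."""
    if is_active_gateway:
        return "NoFault"   # CE keeps (or starts) sending traffic here
    return "Fault"         # CE treats this link as down and switches away

if __name__ == "__main__":
    print(report_link_status(run_ag_election(local_faulty=False, peer_faulty=True)))
```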
  • Various alternative embodiments additionally include determining that a paired provider edge device is currently experiencing a fault, wherein the step of performing an active gateway election is performed in response to determining that a paired provider edge device is currently experiencing a fault.
  • Various alternative embodiments additionally include detecting a fault associated with the link between the provider edge device and the customer edge device; and sending an indication to a paired provider edge device that the provider edge is currently experiencing a fault.
  • Various alternative embodiments additionally include detecting a fault on at least two links between the provider edge device and other devices in the network; and sending an indication to a paired provider edge device that the provider edge is currently experiencing a fault.
  • the step of indicating to a customer edge device that no fault is currently associated with a link between the customer edge device and the provider edge device includes: constructing a connectivity fault message that indicates that no fault has been detected; and transmitting the connectivity fault message to a maintenance endpoint of the customer edge device.
  • the active gateway election includes: determining whether the provider edge is currently experiencing a connectivity fault management (CFM) fault; determining whether a paired provider edge is currently experiencing a CFM fault; if the provider edge is not currently experiencing a CFM fault and the paired provider edge is currently experiencing a CFM fault, determining that the provider edge device will be the active gateway; and if the provider edge is currently experiencing a CFM fault and the paired provider edge is not currently experiencing a CFM fault, determining that the provider edge device will not be the active gateway.
  • the active gateway election includes: determining whether the provider edge is currently experiencing a pseudowire (PW) fault; determining whether a paired provider edge is currently experiencing a PW fault; if the provider edge is not currently experiencing a PW fault and the paired provider edge is currently experiencing a PW fault, determining that the provider edge device will be the active gateway; and if the provider edge is currently experiencing a PW fault and the paired provider edge is not currently experiencing a PW fault, determining that the provider edge device will not be the active gateway.
  • Various embodiments are described wherein the connection is a control connection, the method further including: identifying a fate-shared connection associated with the control connection; if the provider edge device will be the active gateway for the control connection, indicating to a customer edge device that no fault is currently associated with a link between the customer edge device and the provider edge device for the fate-shared connection; and if the provider edge device will not be the active gateway for the control connection, indicating to the customer edge device that a fault is currently associated with the link between the customer edge device and the provider edge device for the fate-shared connection.
  • Various exemplary embodiments relate to a system for providing redundancy in a virtual leased line (VLL) service, the system including one or more of the following: a first provider edge device configured to: support a VLL service between a first customer edge device and a second customer edge device; maintain a first maintenance endpoint (MEP) associated with a first link between the first provider edge device and the first customer edge device; execute a border gateway protocol (BGP) multihoming process to elect, among the first provider edge device and a second provider edge device, a designated forwarder for the VLL service; and report a status associated with the first link to the first customer edge device via the first MEP based on the outcome of the BGP multihoming process.
  • Various alternative embodiments additionally include the second provider edge device, wherein the second provider edge device is configured to: support the VLL service between the first customer edge device and the second customer edge device; maintain a second maintenance endpoint (MEP) associated with a second link between the second provider edge device and the first customer edge device; execute the border gateway protocol (BGP) multihoming process to elect, among the first provider edge device and the second provider edge device, the designated forwarder for the VLL service; and report a status associated with the second link to the first customer edge device via the second MEP based on the outcome of the BGP multihoming process.
  • Various alternative embodiments additionally include the first customer edge device, wherein the first customer edge device is configured to: maintain a third MEP associated with the first MEP that receives the report of the status of the first link from the first MEP; and maintain a fourth MEP associated with the second MEP that receives the report of the status of the second link from the second MEP; and switch VLL service traffic between the first provider edge and the second provider edge based on the status associated with the first link and the status associated with the second link, according to a G.8031 standard.
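The MEP pairings recited in this system claim may be easier to see as data. A rough, hypothetical model of the topology (names and IDs are illustrative, not from the patent):

```python
from dataclasses import dataclass

@dataclass
class Mep:
    mep_id: int
    device: str   # device hosting this MEP
    link: str     # link the MEP monitors

@dataclass
class MepPair:
    pe_side: Mep  # e.g. the "first MEP" on the first PE
    ce_side: Mep  # e.g. the "third MEP" on the CE, watching the same link

# Topology from the claim: two PEs, one CE, one MEP pair per CE-PE link.
pairs = [
    MepPair(Mep(1, "PE1", "link1"), Mep(3, "CE1", "link1")),
    MepPair(Mep(2, "PE2", "link2"), Mep(4, "CE1", "link2")),
]

for p in pairs:
    print(f"{p.pe_side.device} reports {p.pe_side.link} status to "
          f"{p.ce_side.device} via MEP {p.ce_side.mep_id}")
```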
  • FIG. 1 illustrates an exemplary network for providing a redundant network connection
  • FIG. 2 illustrates an exemplary network for enabling connection redundancy
  • FIG. 3 illustrates an exemplary provider edge device for enabling connection redundancy
  • FIG. 4 illustrates an exemplary method for controlling an initial selection of a provider edge device
  • FIG. 5 illustrates an exemplary method for controlling a selection of a provider edge device based on the occurrence of various faults
  • FIG. 6 illustrates an exemplary method for electing an active gateway.
  • FIG. 1 illustrates an exemplary network 100 for providing a redundant network connection.
  • Exemplary network 100 may provide communication between two customer edge (CE) devices 110 , 120 .
  • CE devices 110 , 120 may each be a router located at a customer premises, such as a user's household or a lower-tier ISP location.
  • CE devices 110 , 120 may connect to one or more end-user devices (not shown), either directly or through one or more intermediate nodes (not shown). Examples of end user devices may include personal computers, laptops, tablets, mobile phones, servers, and other devices.
  • Such end user devices may communicate with each other via network 100 and, as such, CE A 110 and CE F 120 may exchange data with each other to provide such communication.
  • Each CE 110 , 120 may be connected to one or more provider edge (PE) devices 112 , 114 , 122 , 124 , either directly or through one or more intermediate devices (not shown).
  • CE A 110 may be connected to PE B 112 and PE C 114 via links 116 , 118 , respectively, while CE F 120 may be connected to PE D 122 and PE E 124 via links 126 , 128 , respectively.
  • Each PE device 112 , 114 , 122 , 124 may be a router located at a provider premises.
  • PE B 112 may be located at a premises of a first provider
  • PE C 114 may be located at a premises of a second provider
  • both PE D 122 and PE E 124 may be located at the premises of a third provider.
  • Links 116, 118, 126, 128 may be Ethernet, ATM, Frame Relay, or other connections.
  • paired PE devices may further be directly connected via, for example, interchassis-backup (ICB) pseudowires (PW) (not shown).
  • PE B 112 and PE C 114 may be connected by one or more ICB PWs while PE D 122 and PE E 124 may also be connected by one or more ICB PWs.
  • ICB PWs may be used to redirect traffic between paired PE devices immediately after a CE device or other device switches traffic from one PE to another.
  • PE devices 112 , 114 , 122 , 124 may enable communication between CE devices 110 , 120 over packet network 130 .
  • Packet network 130 may be a backbone network and may enable communication according to the multi-protocol label switching (MPLS) protocol. Accordingly, packet network 130 may include a number of intermediate devices (not shown) for enabling communication between PE devices 112 , 114 , 122 , 124 .
  • PE devices 112 , 114 , 122 , 124 may communicate with each other via links 132 , 134 , 136 , 138 .
  • Links 132 , 134 , 136 , 138 may each constitute paths across packet network 130 and may represent pseudowires established for a service across exemplary network 100 .
  • PE B 112 may be in communication with both PE D 122 and PE E 124 via links 132 , 136 , respectively.
  • PE C may also be in communication with both PE D 122 and PE E 124 via links 134 , 138 , respectively.
  • CE A 110 may transmit packets to either PE B 112 or PE C 114 , each of which may forward the packets to either PE D 122 or PE E 124 , each of which may, in turn, forward packets to CE F 120 .
  • CE A 110 may decide to forward traffic to only one of PE devices 112 , 114 .
  • CE A 110 may implement Ethernet linear protection switching, as defined in ITU-T G.8031. It will be apparent to those of ordinary skill in the art that other redundancy or path selection methods may be employed other than G.8031.
  • CE A 110 may regard link 116 as active and link 118 as inactive for a particular connection 140 .
  • CE F 120 may regard link 126 as inactive and link 128 as active for the connection 140 .
  • connection 140 which may be, for example, a virtual leased line (VLL) service, may traverse links 116 , 136 , 128 to provide service between CE A 110 and CE F 120 .
  • Later, if some fault or other change to network 100 severs this path or renders it inefficient, the path taken by connection 140 may be altered to maintain communication. For example, if a fault occurs in link 116, PE B 112, or both links 132, 136, CE A 110 may determine that link 118 should be regarded as active and link 116 as inactive. In various embodiments herein, as will be described below, this determination by CE A 110 may be driven by separate processes running on PE B 112 and/or PE C 114. In various embodiments, these PE processes may operate prior to CE link switching and thus fully drive the switch, while in other embodiments, the PE processes and CE link switching may operate in parallel. Thereafter, connection 140 may instead traverse links 118, 138, 128.
  • active and inactive links may be chosen on a per connection or per connection group basis.
  • a second connection (not shown) may traverse links 118 , 138 , 128 while connection 140 traverses the links as illustrated.
  • redundant devices and links may also be leveraged for load balancing.
  • FIG. 2 illustrates an exemplary network 200 for enabling connection redundancy.
  • Exemplary network 200 may illustrate a more detailed view of CE A 110, PE B 112, and PE C 114 of exemplary network 100.
  • CE A 210, PE B 230, and PE C 250 may correspond to CE A 110, PE B 112, and PE C 114, respectively.
  • CE A 210 may be configured with a VLL Epipe endpoint 212 for providing a VLL Epipe service to another CE such as, for example, CE F 120 of exemplary network 100 .
  • The term “Epipe” refers to a VLL service for transporting Ethernet frames over an IP/MPLS network, which may encompass an E-Line service. It should be apparent that the various mechanisms described herein may be applicable to other VLL services such as, for example, Ipipes, Apipes, Fpipes, and/or Cpipes.
  • CE A 210 may also be configured with a service access point (SAP) 214 facing the customer and providing a user device access to the Epipe 212 .
  • the Epipe 212 may be configured to provide an Ethernet linear protection switching service between PE B 230 and PE C 250 according to ITU-T G.8031 220 .
  • CE A 210 may maintain maintenance endpoints (MEPs) 224 , 226 for monitoring the status of the connection to PE B 230 and PE C 250 , respectively.
  • MEPs 224, 226 may be implemented according to various Ethernet operations, administration, and maintenance (OAM) protocols known to those of skill in the art.
  • the G.8031 service may use status information obtained from MEPs 224 , 226 to make decisions regarding protection switching. For example, if MEP 226 detects a fault or receives an indication of a fault from an associated MEP, G.8031 may direct traffic to PE B 230 instead.
  • PE B 230 may be configured to support the Epipe service 232 and may be configured with a SAP 240 and a MEP 242 . MEP 242 may be paired with MEP 224 on the CE A 210 to monitor the link between the two devices.
  • PE B 230 may also be configured with pseudowire (PW) services 236, 238 for communicating with provider edge devices (not shown) at other locations such as, for example, PE D 122 and PE E 124 of exemplary network 100, respectively.
  • the Epipe service 232 on PE B 230 may select a PW 236 , 238 for carrying Epipe traffic and forward all such traffic over the selected PW 236 , 238 . This selection may be based on coordination with other PEs or CEs. For example, if PE B 230 is aware that the PE to which PW 238 connects is active for the Epipe, PE B 230 may forward all Epipe traffic over PW 238 .
  • PE C 250 may be implemented in a similar manner to PE B 230 .
  • PE C may be configured to support Epipe 252 , and PWs 256 , 258 .
  • PE C 250 may also maintain a SAP 260 and a MEP 262 that is paired with MEP 226 of CE A 210 . Because PE B 230 and PE C 250 provide redundant service to CE A 210 , the PE devices may be referred to as “paired.”
  • PE B 230 and PE C 250 may be connected via one or more ICB PWs (not shown) for redirecting in-flight traffic after CE A 210 redirects traffic from one PE to another.
  • PE B 230 and PE C 250 may exert some control over the operation of the G.8031 service on CE A 210 .
  • PE B 230 and PE C 250 may each be configured to operate a border gateway protocol (BGP) multi-homing (MH) service 234 , 254 configured to control at least two connection points independently of other connections such as, in this case, SAP 240 , 260 , respectively, and an endpoint on CE A 210 .
  • BGP-MH service 234 , 254 may operate between the two PE devices 230 , 250 to elect one of PE B 230 and PE C 250 as designated forwarder according to the specifics of that protocol.
  • BGP-MH services 234 , 254 may communicate with each other via an additional or existing link (not shown) between PE device 230 , 250 .
  • the elected designated forwarder may then operate as an active gateway (AG).
  • the BGP-MH service 234 running on PE B 230 may determine that PE B 230 is designated forwarder for the Epipe.
  • BGP-MH service 234 may cause MEP 242 to indicate to MEP 224 on CE A 210 that no fault has been detected in association with the link between CE A 210 and PE B 230 .
  • This indication may include affirmatively sending a connectivity fault management (CFM) message 244 indicating “NoFault” in an interface status (ifStatus) type-length-value (TLV) field.
  • this indication may include refraining from sending such a message when a previous CFM message sent by MEP 242 has indicated “NoFault,” thereby allowing CE A 210 to continue under the assumption that there is no fault in the connection between CE A 210 and PE B 230 .
  • PE B 230 may indicate that it is available to receive traffic.
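Conceptually, the signaling is a continuity check message whose interface status reflects the election outcome rather than the physical link. The sketch below models the message abstractly; it does not reproduce the IEEE 802.1ag wire encoding, and the MEP IDs merely echo the figure numerals for readability:

```python
# Abstract model of a CFM continuity check message (CCM) carrying an
# interface status value. "isUp"/"isDown" mirror ifOperStatus semantics;
# the real 802.1ag TLV encoding is intentionally omitted.

def build_ccm(mep_id: int, is_active_gateway: bool) -> dict:
    return {
        "type": "CCM",
        "mep_id": mep_id,
        "interface_status_tlv": "isUp" if is_active_gateway else "isDown",
    }

# The AG advertises a clean interface; the standby PE advertises a fault,
# steering the CE's G.8031 selector toward the AG.
print(build_ccm(mep_id=242, is_active_gateway=True))
print(build_ccm(mep_id=262, is_active_gateway=False))
```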
  • BGP-MH service 254 running on PE C 250 may come to the conclusion that PE C 250 should not operate as designated forwarder for the Epipe.
  • BGP-MH service 254 may cause MEP 262 to indicate a fault to MEP 226 .
  • This indication may include affirmatively sending a CCM message 264 that notifies MEP 226 of a fault or refraining from sending a message when a previously sent CCM message indicated a fault.
  • the G.8031 service on CE A 210 will set PE C 250 as inactive for the purposes of the Epipe 212 because CE A 210 believes PE C 250 to be unreachable or otherwise unusable.
  • the system described enables BGP-MH implementations 234 , 254 to control the operation of a G.8031 service without any modification to the operation of the G.8031 service.
  • the BGP-MH implementations 234 , 254 may select one PE 230 , 250 to operate as designated forwarder and thereafter may use CFM methods to indicate that only the active gateway has a working connection to CE A 210 .
  • the CE A 210 may have no choice but to forward traffic to the active gateway, which is PE B 230 in the illustrated example.
  • PE B 230 may detect a true fault associated with the link between CE A 210 and PE B 230 .
  • the fault associated with the link between CE A 210 and PE B 230 may include, for example, PE B 230 becoming inoperable, the link between CE A 210 and PE B 230 itself going down, or faults occurring on other links down- or upstream that are likely to impact traffic over the link between CE A 210 and PE B 230 .
  • Such a fault may be detected, for example, by the PE device 230 itself discovering a fault or by the PE device 230 receiving a message from another device indicating the detection of a fault elsewhere in the network.
  • PE B 230 may determine that both PWs 236 , 238 are currently faulty and cannot be used to communicate with the PEs on the opposite side of the network. Either of these conditions may render PE B 230 an unsatisfactory choice for carrying the traffic related to the Epipe service.
  • PE B 230 may send an indication to its paired PE, PE C 250 , indicating that PE B 230 is currently experiencing a fault. This may trigger both BGP-MH 234 and BGP-MH 254 to perform the active gateway election procedure again. This time, based on the knowledge of connectivity faults associated with PE B 230 , BGP-MH 254 may determine that PE C 250 should now be designated forwarder.
  • BGP-MH 254 may then proceed to indicate, via MEP 262 , that there is no fault in the connection between MEP 262 and MEP 226 , as discussed above with respect to PE B 230 . Thereafter, the G.8031 service on CE A 210 may transmit traffic associated with Epipe 212 to PE C.
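The failover just described can be traced end to end with a toy model: PE B's fault report reaches PE C, both re-run the election, and the CE-facing indications swap. All names below are illustrative, and the tie-breaker is a stand-in for real BGP-MH attribute comparison:

```python
# Toy end-to-end failover between two paired PEs serving one CE.

def elect(local: str, peer: str, faults: dict) -> bool:
    """True if `local` should be active gateway: a fault-free PE beats a
    faulty peer; ties fall back to a fixed name ordering (a stand-in for
    BGP-MH tie-breaking)."""
    if faults[local] != faults[peer]:
        return not faults[local]
    return local < peer

faults = {"PE B": False, "PE C": False}

def show(label: str) -> None:
    for pe, peer in (("PE B", "PE C"), ("PE C", "PE B")):
        status = "NoFault" if elect(pe, peer, faults) else "Fault"
        print(f"{label}: {pe} -> CE A: {status}")

show("before")          # PE B wins the tie and acts as active gateway
faults["PE B"] = True   # PE B detects a CFM or PW fault and tells PE C
show("after")           # election re-runs; PE C now reports NoFault
```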
  • PE B 230 and PE C 250 may select an existing Epipe to serve as a control.
  • PE B 230 and PE C 250 may establish a new Epipe to serve exclusively as a control.
  • the operation of BGP-MH 234 , 254 may then occur as described above with respect to this control Epipe.
  • PE B 230 and PE C 250 may also support a number of additional Epipes (not shown) that are configured to share a fate with the control Epipe.
  • a SAP configured on the PE 230 , 250 for each such fate-shared Epipe may monitor the status of the control Epipe and mirror the monitored status.
  • the SAP for each fate-shared Epipe may also indicate a fault, thereby ensuring that the CE 210 chooses the same PE 230 , 250 to handle all traffic from any of the fate-shared Epipes.
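Fate sharing then reduces to mirroring: the election runs once, on the control Epipe, and every fate-shared Epipe reports the same status toward the CE. A minimal sketch, assuming status is just a string and the Epipe names are hypothetical:

```python
# Fate-sharing sketch: each fate-shared Epipe's SAP mirrors the control
# Epipe's status, so the CE picks the same PE for all of them.

def statuses(control_is_active: bool, fate_shared: list[str]) -> dict:
    control = "NoFault" if control_is_active else "Fault"
    return {"control-epipe": control, **{name: control for name in fate_shared}}

print(statuses(True, ["epipe-101", "epipe-102"]))
print(statuses(False, ["epipe-101", "epipe-102"]))
```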
  • FIG. 3 illustrates an exemplary provider edge (PE) device 300 for enabling connection redundancy.
  • PE device 300 may correspond to one or more of PE devices 112 , 114 , 122 , 124 , 230 , 250 .
  • PE device 300 may include a customer edge interface 310 , virtual leased line module 320 , pseudowire module 330 , backbone interface 340 , connectivity fault management module 350 , border gateway protocol module 360 , and/or provider edge interface 370 .
  • Various components of PE device 300 may be abstracted to a degree, and PE device 300 may include a number of hardware components implementing or supporting the components described herein.
  • PE device 300 may include one or more processors for implementing the functionality described herein.
  • the term “processor” will be understood to include processors and other similar hardware components such as field programmable gate arrays and/or application-specific integrated circuits.
  • Customer edge interface 310 may be an interface comprising hardware and/or executable instructions encoded on a machine-readable storage medium configured to communicate with at least one other device, such as a CE device.
  • customer edge interface 310 may include one or more interfaces that communicate according to a protocol such as Ethernet, Frame Relay, ATM, and/or PPP. During operation, customer edge interface 310 may communicate with one or more customer edge devices.
  • Virtual leased line (VLL) module 320 may include hardware and/or executable instructions on a machine-readable storage medium configured to provide a VLL service.
  • VLL module 320 may be configured with one or more SAPs for VLL services and, upon receiving traffic from a CE device, associate the traffic with an appropriate SAP. After determining that received traffic is associated with a particular SAP for a VLL service, VLL module 320 may select an appropriate pseudowire over which to forward the traffic. VLL module 320 may then pass the traffic and selection on to pseudowire module 330 for further processing.
  • VLL module 320 may also be configured to process traffic in the reverse direction.
  • VLL module 320 may receive traffic from pseudowire module 330 , associate it with a particular VLL service, and forward the traffic to one or more customer edge devices via customer edge interface 310 . It will be apparent that the foregoing description of implementing a VLL service may be a simplification in some respects. Various additional or alternative details for implementing VLL services will be apparent to those of skill in the art.
  • Pseudowire (PW) module 330 may include hardware and/or executable instructions on a machine-readable storage medium configured to provide and maintain pseudowires across a network to other PE devices.
  • PW module 330 may receive traffic from VLL module 320 and an indication of a PW over which to transmit the traffic.
  • PW module 330 may then encapsulate the traffic in an appropriate tunneling protocol such as, for example, MPLS, and forward the encapsulated traffic to another PE device via backbone interface 340 .
  • PW module 330 may also handle traffic flowing in the opposite direction.
  • PW module 330 may receive traffic via backbone interface 340 , decapsulate the traffic, and pass the traffic to VLL module 320 for further processing. It will be apparent that the foregoing description of implementing a PW service may be a simplification in some respects. Various additional or alternative details for implementing PW services will be apparent to those of skill in the art.
  • PW module 330 may also provide various maintenance functions with respect to established pseudowires. For example, PW module 330 may detect faults in established PWs or receive indications of faults from other devices supporting a multi-segment PW. Upon determining that one or more PWs associated with a VLL are experiencing faults, PW module 330 may send an indication of such to border gateway protocol module 360 . In some embodiments, PW module 330 may only send such an indication when all PWs associated with a VLL are experiencing faults.
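For the embodiments where only a total outage is reported, the PW module's escalation test is simply whether every pseudowire serving the VLL is down, as in this hedged sketch (names hypothetical):

```python
# Sketch of the stricter reporting rule: escalate to AG election only
# when every pseudowire serving the VLL is experiencing a fault.

def vll_has_pw_fault(pw_states: dict) -> bool:
    """pw_states maps pseudowire name -> True if that PW is up."""
    return not any(pw_states.values())

print(vll_has_pw_fault({"pw-236": True,  "pw-238": False}))  # False: one PW survives
print(vll_has_pw_fault({"pw-236": False, "pw-238": False}))  # True: total outage
```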
  • Backbone interface 340 may be an interface comprising hardware and/or executable instructions encoded on a machine-readable storage medium configured to communicate with at least one other device that forms part of a network backbone.
  • backbone interface 340 may include one or more interfaces that communicate according to a protocol such as MPLS.
  • Connectivity fault management (CFM) module 350 may include hardware and/or executable instructions on a machine-readable storage medium configured to provide connectivity fault management with respect to various links established via customer edge interface 310 .
  • CFM module 350 may implement Ethernet OAM according to IEEE 802.1ag.
  • CFM module 350 may establish and maintain various MEPs associated with customer edge interface 310 .
  • CFM module 350 may discover faults on various links associated with customer edge interface 310 .
  • CFM module 350 may report the fault to border gateway protocol module 360 .
  • It will be appreciated that various alternative fault management protocols may be used instead of Ethernet OAM.
  • Accordingly, CFM module 350 may be referred to as a “fault reporting module,” meaning a module that implements fault management functions, regardless of whether it is implemented according to Ethernet OAM or another protocol.
  • CFM module 350 may perform various functions at the request of BGP module 360 .
  • BGP module 360 may instruct CFM module 350 to construct and send a CFM message to a particular MEP.
  • CFM module 350 may construct and transmit a CFM message indicating a fault regardless of the actual existence of such a fault.
  • the CFM module 350 may, on request by BGP module 360 , construct and send a CFM message indicating that no fault exists.
  • Border gateway protocol (BGP) module 360 may include hardware and/or executable instructions on a machine-readable storage medium configured to implement various aspects of the border gateway protocol. For example, BGP module 360 may implement the designated forwarder election process defined for BGP multi-homing applications. This designated forwarder may then be used as the active gateway (AG). It will be appreciated that various alternative AG election methods may be employed instead of BGP. Accordingly, BGP module 360 may be referred to as an “AG election module” to refer to a module configured to elect an AG, regardless of whether it is implemented according to BGP or some other protocol.
  • BGP module 360 may elect an AG under various circumstances. For example, on the establishment of a new VLL service, BGP module 360 may make an initial election of an AG. BGP module 360 may perform the election process again in response to changing network conditions. For example, if either CFM module 350 or PW module 330 reports a fault to BGP module 360 , BGP module 360 may proceed to perform AG election based on the new information.
  • BGP module 360 may further be configured to communicate with one or more paired PE devices via provider edge interface 370 .
  • BGP module 360 may send an indication that PE 300 is experiencing a fault to one or more paired PE devices via provider edge interface 370 .
  • BGP module 360 may also receive similar indications from paired PE devices via provider edge interface 370 .
  • BGP module 360 may perform AG election again in response to receiving such an indication.
  • After performing AG election, BGP module 360 may have decided whether PE 300 will be designated forwarder for the VLL service. If PE 300 will be designated forwarder for the VLL service, BGP module 360 may indicate to an appropriate CE device that there is no fault on the link between CE interface 310 and that CE device. This may include instructing CFM module 350 to construct and transmit a CFM message. On the other hand, if PE 300 will not be designated forwarder for the VLL service, BGP module 360 may indicate to an appropriate CE device that a fault exists on the link between CE interface 310 and that CE device. Again, this may include instructing CFM module 350 to construct and transmit a CFM message.
  • Provider edge interface 370 may be an interface comprising hardware and/or executable instructions encoded on a machine-readable storage medium configured to communicate with at least one other device, such as a paired PE device.
  • provider edge interface 370 may include one or more interfaces that communicate according to a protocol such as Ethernet, Frame Relay, ATM, and/or PPP.
  • During operation, provider edge interface 370 may communicate with one or more paired provider edge devices.
  • provider edge interface 370 may share at least some hardware in common with customer edge interface 310 .
  • FIG. 4 illustrates an exemplary method 400 for controlling an initial selection of a provider edge device.
  • Method 400 may be performed by the components of a PE device such as PE device 300 .
  • method 400 may be performed by CFM module 350 and/or BGP module 360 .
  • Method 400 may begin in step 405 and proceed to step 410 where the PE device may send initial CFM signals to a CE device. For example, the PE device may send a CFM message indicating a fault to an appropriate MEP configured on the CE device.
  • In step 415, the PE device may perform AG election to determine whether the PE device will be designated forwarder. An example of an AG election process will be described in greater detail below with respect to FIG. 6.
  • In step 420, the PE device may evaluate whether the AG election process has elected the PE device as designated forwarder. If not, method 400 may proceed to step 425, where the PE device may indicate a fault to the CE device. In various embodiments, this step may include simply refraining from sending additional CFM messages. In particular, because a fault CFM message was sent previously in step 410, it may be unnecessary to send an additional fault CFM message. Method 400 may then proceed to end in step 435.
  • If, on the other hand, the PE device has been elected designated forwarder, method 400 may instead proceed from step 420 to step 430.
  • In step 430, the PE device may indicate a “no fault” condition to the CE device. In various embodiments, this step may include simply refraining from sending additional CFM messages. Alternatively, because the previous message sent in step 410 indicated a fault, the PE device may construct and transmit a new “no fault” CFM message to the appropriate MEP configured on the CE device. Method 400 may then proceed to end in step 435.
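Method 400's sequencing (signal a fault first, then upgrade to "no fault" only on winning the election) can be condensed as follows; message names are illustrative:

```python
# Sketch of method 400: signal a fault first (step 410), run the AG
# election (steps 415-420), then either stay silent because the fault
# already stands (step 425) or send a fresh "no fault" message (step 430).

def initial_selection(wins_election: bool) -> list:
    msgs = [("CE", "Fault")]             # step 410: initial CFM signal
    if wins_election:                    # steps 415-420
        msgs.append(("CE", "NoFault"))   # step 430: override the earlier fault
    # step 425 is implicit: sending nothing leaves the fault standing
    return msgs

print(initial_selection(wins_election=True))
print(initial_selection(wins_election=False))
```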
  • FIG. 5 illustrates an exemplary method 500 for controlling a selection of a provider edge device based on the occurrence of various faults.
  • Method 500 may be performed by the components of a PE device such as PE device 300 .
  • method 500 may be performed by CFM module 350 and/or BGP module 360 .
  • Method 500 may begin in step 505 and proceed to step 510 where the PE device may monitor for various events that may impact the network. After receiving an indication of such an event, method 500 may proceed to step 515 where the PE device may determine whether the event included the detection of a new CFM fault at the PE device. If so, method 500 may proceed to step 525 . Otherwise, method 500 may proceed to step 520 . In step 520 , the PE device may determine whether the event included the detection of a new pseudowire fault. Again, if so, method 500 may proceed to step 525 . Otherwise, method 500 may proceed to step 530 where the PE device may determine whether the event included receiving an indication that a paired PE is currently experiencing a fault.
  • For example, the PE device may receive a message indicating that a paired PE device has detected a CFM or PW fault. If a paired PE device is experiencing a fault, method 500 may proceed to step 535. Otherwise, method 500 may proceed to end in step 555.
  • In step 525, the PE device may send an indication to any paired PE devices that the PE device is experiencing a fault.
  • This indication may include specific details describing the fault such as, for example, whether the fault is a CFM or PW fault.
  • Method 500 may then proceed to step 535 .
  • Steps 535 - 550 may correspond to steps 415 - 430 of method 400 .
  • Thereafter, method 500 may proceed to end in step 555.
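The event dispatch of method 500 might be sketched as below, with the peer notification and re-election abstracted as callbacks (hypothetical names):

```python
# Sketch of method 500's event handling: a local CFM or PW fault is
# forwarded to the paired PE before re-election; a peer's fault report
# also triggers re-election; anything else is ignored.

def handle_event(event: str, notify_peer, reelect) -> None:
    if event in ("local_cfm_fault", "local_pw_fault"):   # steps 515/520
        notify_peer(event)                               # step 525
        reelect()                                        # steps 535-550
    elif event == "peer_fault_report":                   # step 530
        reelect()
    # otherwise: no action (end, step 555)

handle_event("local_pw_fault",
             notify_peer=lambda e: print("tell paired PE:", e),
             reelect=lambda: print("re-run AG election"))
```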
  • FIG. 6 illustrates an exemplary method 600 for electing an active gateway.
  • Method 600 may be performed by the components of a PE device such as PE device 300 .
  • method 600 may be performed by BGP module 360 . It should be noted that method 600 is one example of an AG election process and that alternative methods may be useful or appropriate in various alternative embodiments.
  • Method 600 may begin in step 605 and proceed to step 610 where the PE device may determine whether the PE device is the only device not currently experiencing a CFM fault. If the PE device is not experiencing a CFM fault but any paired PE devices are experiencing a CFM fault, method 600 may proceed to elect the PE device AG in step 630. Otherwise, method 600 may proceed to step 615.
  • In step 615, the PE device may determine whether it is currently experiencing a CFM fault while at least one other PE device is not experiencing such a fault. If so, method 600 may proceed to determine that the PE device should not be elected AG in step 635. Otherwise, method 600 may proceed to step 620.
  • In step 620, the PE device may determine whether the PE device is the only device not currently experiencing a PW fault.
  • As noted above, a PW fault may exist only when all appropriate PWs for a VLL are experiencing faults. If the PE device is not experiencing a PW fault but any paired PE devices are experiencing a PW fault, method 600 may proceed to elect the PE device AG in step 630. Otherwise, method 600 may proceed to step 625.
  • In step 625, the PE device may determine whether it is currently experiencing a PW fault while at least one other PE device is not experiencing such a fault. If so, method 600 may proceed to determine that the PE device should not be elected AG in step 635. Otherwise, method 600 may proceed to step 640.
  • In step 640, the PE device may proceed to perform further election procedures based on the BGP-MH protocol. For example, the PE device may attempt to make an election based on a local preference, an AS-PATH attribute, and/or a NEXT-HOP attribute. Various modifications will be apparent to those of skill in the art.
  • Thereafter, method 600 may proceed to end in step 645.
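The precedence of method 600 (CFM faults outrank PW faults, which outrank ordinary BGP-MH attributes) fits in a small decision function; the final tie-break is abstracted as a callback, since the real attribute comparison lives in the BGP-MH protocol:

```python
# Sketch of method 600: compare CFM fault state first (steps 610/615),
# then PW fault state (steps 620/625); only a clean tie falls through to
# the regular BGP-MH attributes (step 640), abstracted as bgp_tiebreak.

def elect_ag(local_cfm: bool, peer_cfm: bool,
             local_pw: bool, peer_pw: bool,
             bgp_tiebreak) -> bool:
    for local, peer in ((local_cfm, peer_cfm), (local_pw, peer_pw)):
        if not local and peer:
            return True        # only this PE is fault-free: elect it
        if local and not peer:
            return False       # only the peer is fault-free: stand down
    return bgp_tiebreak()      # e.g. local preference, AS-PATH, NEXT-HOP

print(elect_ag(False, True, False, False, bgp_tiebreak=lambda: True))   # True
print(elect_ag(False, False, True, False, bgp_tiebreak=lambda: False))  # False
```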
  • In this manner, various embodiments enable the provision of a redundant, multi-technology, point-to-point service that does not require the learning of MAC addresses. For example, by leveraging BGP-MH designated forwarder election processes to control linear protection switching, traffic can be reliably transported across a backbone or other network without incurring the overhead of an address learning system.
  • various exemplary embodiments of the invention may be implemented in hardware and/or firmware. Furthermore, various exemplary embodiments may be implemented as instructions stored on a machine-readable storage medium, which may be read and executed by at least one processor to perform the operations described in detail herein.
  • a machine-readable storage medium may include any mechanism for storing information in a form readable by a machine, such as a personal or laptop computer, a server, or other computing device.
  • a tangible and non-transitory machine-readable storage medium may include read-only memory (ROM), random-access memory (RAM), magnetic disk storage media, optical storage media, flash-memory devices, and similar storage media.
  • any block diagrams herein represent conceptual views of illustrative circuitry embodying the principles of the invention.
  • any flow charts, flow diagrams, state transition diagrams, pseudo code, and the like represent various processes which may be substantially represented in machine readable media and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.

Abstract

Various exemplary embodiments relate to a method and related network node including one or more of the following: performing an active gateway election to determine whether the provider edge device will be an active gateway for a connection; if the provider edge device will be the active gateway for the connection, indicating to a customer edge device that no fault is currently associated with a link between the customer edge device and the provider edge device; and if the provider edge device will not be the active gateway for the connection, indicating to the customer edge device that a fault is currently associated with the link between the customer edge device and the provider edge device.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation of U.S. patent application Ser. No. 13/359,993, filed on Jan. 27, 2012, the entire disclosure of which is hereby incorporated herein by reference for all purposes.
  • TECHNICAL FIELD
  • Various exemplary embodiments disclosed herein relate generally to telecommunications networks.
  • BACKGROUND
  • Many computer networks, the most noteworthy of which is the Internet, are implemented as geographically-distributed, multi-tiered, and multi-technology associations of computing devices. To enable communication between two devices, traffic may pass through numerous intermediate devices according to many different protocols. For example, in the case of the Internet, local traffic may be exchanged according to the Ethernet protocol while traffic crossing the backbone of the network may be passed according to the multi-protocol label switching (MPLS) protocol. As such, various mechanisms have been developed to manage such multi-technology handovers and thereby ensure end-to-end connectivity.
  • While handover mechanisms may be sufficient to enable communication in ideal network conditions, conditions in practice are rarely ideal. Intermediate routing devices and the links connecting these devices may become overloaded or inoperable for various reasons and may render a particular communication path broken. Many networks, however, provide a robust mesh of connections, affording multiple communication paths between any two devices. Thus, if one communication path is severed, communication may be switched to a different path, thereby preserving the connection between the two devices. To provide such functionality, various redundancy mechanisms have also been developed.
  • SUMMARY
  • A brief summary of various exemplary embodiments is presented below. Some simplifications and omissions may be made in the following summary, which is intended to highlight and introduce some aspects of the various exemplary embodiments, but not to limit the scope of the invention. Detailed descriptions of a preferred exemplary embodiment adequate to allow those of ordinary skill in the art to make and use the inventive concepts will follow in later sections.
  • Various exemplary embodiments relate to a method performed by a provider edge device for enabling connection redundancy, the method including one or more of the following: performing an active gateway election to determine whether the provider edge device will be an active gateway for a connection; if the provider edge device will be the active gateway for the connection, indicating to a customer edge device that no fault is currently associated with a link between the customer edge device and the provider edge device; and if the provider edge device will not be the active gateway for the connection, indicating to the customer edge device that a fault is currently associated with the link between the customer edge device and the provider edge device.
  • Various exemplary embodiments relate to a provider edge device for enabling connection redundancy, the provider edge device including one or more of the following: a customer edge interface configured to communicate with a customer edge device; an active gateway election module configured to determine whether the provider edge device will be an active gateway for a connection; and a fault reporting module configured to: if the active gateway election module determines that the provider edge device will be the active gateway for the connection, indicate to the customer edge device that no fault is currently associated with a link between the customer edge device and the provider edge device, and if the active gateway election module determines that the provider edge device will not be the active gateway for the connection, indicate to the customer edge device that a fault is currently associated with the link between the customer edge device and the provider edge device.
  • Various alternative embodiments additionally include determining that a paired provider edge device is currently experiencing a fault, wherein the step of performing an active gateway election is performed in response to determining that a paired provider edge device is currently experiencing a fault.
  • Various alternative embodiments additionally include detecting a fault associated with the link between the provider edge device and the customer edge device; and sending an indication to a paired provider edge device that the provider edge is currently experiencing a fault.
  • Various alternative embodiments additionally include detecting a fault on at least two links between the provider edge device and other devices in the network; and sending an indication to a paired provider edge device that the provider edge is currently experiencing a fault.
  • Various embodiments are described wherein the step of indicating to a customer edge device that no fault is currently associated with a link between the customer edge device and the provider edge device includes: constructing a connectivity fault message that indicates that no fault has been detected; and transmitting the connectivity fault message to a maintenance endpoint of the customer edge device.
  • Various embodiments are described wherein the active gateway election is performed according to the border gateway protocol.
  • Various embodiments are described wherein the active gateway election includes: determining whether the provider edge is currently experiencing a connectivity fault management (CFM) fault; determining whether a paired provider edge is currently experiencing a CFM fault; if the provider edge is not currently experiencing a CFM fault and the paired provider edge is currently experiencing a CFM fault, determining that the provider edge device will be the active gateway; and if the provider edge is currently experiencing a CFM fault and the paired provider edge is not currently experiencing a CFM fault, determining that the provider edge device will not be the active gateway.
  • Various embodiments are described wherein the active gateway election includes: determining whether the provider edge is currently experiencing a pseudowire (PW) fault; determining whether a paired provider edge is currently experiencing a PW fault; if the provider edge is not currently experiencing a PW fault and the paired provider edge is currently experiencing a PW fault, determining that the provider edge device will be the active gateway; and if the provider edge is currently experiencing a PW fault and the paired provider edge is not currently experiencing a PW fault, determining that the provider edge device will not be the active gateway.
  • Various embodiments are described wherein the connection is a control connection, the method further including: identifying a fate-shared connection associated with the control connection; if the provider edge device will be the active gateway for the control connection, indicating to a customer edge device that no fault is currently associated with a link between the customer edge device and the provider edge device for the fate-shared connection; and if the provider edge device will not be the active gateway for the control connection, indicating to the customer edge device that a fault is currently associated with the link between the customer edge device and the provider edge device for the fate-shared connection.
  • Various exemplary embodiments relate to a system for providing redundancy in a virtual leased line (VLL) service, the system including one or more of the following: a first provider edge device configured to: support a VLL service between a first customer edge device and a second customer edge device; maintain a first maintenance endpoint (MEP) associated with a first link between the first provider edge device and the first customer edge device; execute a border gateway protocol (BGP) multihoming process to elect, among the first provider edge device and a second provider edge device, a designated forwarder for the VLL service; and report a status associated with the first link to the first customer edge device via the first MEP based on the outcome of the BGP multihoming process.
  • Various alternative embodiments additionally include the second provider edge device, wherein the second provider edge device is configured to: support the VLL service between the first customer edge device and the second customer edge device; maintain a second maintenance endpoint (MEP) associated with a second link between the second provider edge device and the first customer edge device; execute the border gateway protocol (BGP) multihoming process to elect, among the first provider edge device and the second provider edge device, the designated forwarder for the VLL service; and report a status associated with the second link to the first customer edge device via the second MEP based on the outcome of the BGP multihoming process.
  • Various alternative embodiments additionally include the first customer edge device, wherein the first customer edge device is configured to: maintain a third MEP associated with the first MEP that receives the report of the status of the first link from the first MEP; and maintain a fourth MEP associated with the second MEP that receives the report of the status of the second link from the second MEP; and switch VLL service traffic between the first provider edge and the second provider edge based on the status associated with the first link and the status associated with the second link, according to a G.8031 standard.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • In order to better understand various exemplary embodiments, reference is made to the accompanying drawings, wherein:
  • FIG. 1 illustrates an exemplary network for providing a redundant network connection;
  • FIG. 2 illustrates an exemplary network for enabling connection redundancy;
  • FIG. 3 illustrates an exemplary provider edge device for enabling connection redundancy;
  • FIG. 4 illustrates an exemplary method for controlling an initial selection of a provider edge device;
  • FIG. 5 illustrates an exemplary method for controlling a selection of a provider edge device based on the occurrence of various faults; and
  • FIG. 6 illustrates an exemplary method for electing an active gateway.
  • To facilitate understanding, identical reference numerals have been used to designate elements having substantially the same or similar structure and/or substantially the same or similar function.
  • DETAILED DESCRIPTION
  • While various handover and redundancy mechanisms have been developed and implemented in communication networks, there remains a need for mechanisms tailored to as-yet unmet considerations. For example, many redundancy mechanisms rely on MAC address learning to provide their functionality. However, in many cases it is undesirable to implement such address learning because, for example, the known algorithms may not scale well due to the requirement of learning and storing MAC addresses. Thus, there exists a need for a method and device for implementing a redundant point-to-point service that does not rely on MAC address learning.
  • Referring now to the drawings, in which like numerals refer to like components or steps, there are disclosed broad aspects of various exemplary embodiments.
  • FIG. 1 illustrates an exemplary network 100 for providing a redundant network connection. Exemplary network 100 may provide communication between two customer edge (CE) devices 110, 120. CE devices 110, 120 may each be a router located at a customer premises, such as a user's household or a lower-tier ISP location. CE devices 110, 120 may connect to one or more end-user devices (not shown), either directly or through one or more intermediate nodes (not shown). Examples of end user devices may include personal computers, laptops, tablets, mobile phones, servers, and other devices. Such end user devices may communicate with each other via network 100 and, as such, CE A 110 and CE F 120 may exchange data with each other to provide such communication.
  • Each CE 110, 120 may be connected to one or more provider edge (PE) devices 112, 114, 122, 124, either directly or through one or more intermediate devices (not shown). For example, CE A 110 may be connected to PE B 112 and PE C 114 via links 116, 118, respectively, while CE F 120 may be connected to PE D 122 and PE E 124 via links 126, 128, respectively. Each PE device 112, 114, 122, 124 may be a router located at a provider premises. For example, PE B 112 may be located at a premises of a first provider, PE C 114 may be located at a premises of a second provider, and both PE D 122 and PE E 124 may be located at the premises of a third provider. Various alternative arrangements for the ownership and location of PE devices 112, 114, 122, 124 will be apparent to those of skill in the art. Links 116, 118, 126, 128 may be Ethernet, ATM, Frame Relay, or other connections. In various embodiments, paired PE devices may further be directly connected via, for example, interchassis-backup (ICB) pseudowires (PW) (not shown). For example, PE B 112 and PE C 114 may be connected by one or more ICB PWs while PE D 122 and PE E 124 may also be connected by one or more ICB PWs. Such ICB PWs may be used to redirect traffic between paired PE devices immediately after a CE device or other device switches traffic from one PE to another.
  • PE devices 112, 114, 122, 124 may enable communication between CE devices 110, 120 over packet network 130. Packet network 130 may be a backbone network and may enable communication according to the multi-protocol label switching (MPLS) protocol. Accordingly, packet network 130 may include a number of intermediate devices (not shown) for enabling communication between PE devices 112, 114, 122, 124. PE devices 112, 114, 122, 124 may communicate with each other via links 132, 134, 136, 138. Links 132, 134, 136, 138 may each constitute paths across packet network 130 and may represent pseudowires established for a service across exemplary network 100. As shown, PE B 112 may be in communication with both PE D 122 and PE E 124 via links 132, 136, respectively. PE C may also be in communication with both PE D 122 and PE E 124 via links 134, 138, respectively.
  • As illustrated, packets may be exchanged between CE A 110 and CE F 120 over multiple different paths. In particular, CE A 110 may transmit packets to either PE B 112 or PE C 114, each of which may forward the packets to either PE D 122 or PE E 124, each of which may, in turn, forward packets to CE F 120. In various embodiments, it may be desirable for related traffic to traverse only one such path. Accordingly, CE A 110 may decide to forward traffic to only one of PE devices 112, 114. To provide such functionality, CE A 110 may implement Ethernet linear protection switching, as defined in ITU-T G.8031. It will be apparent to those of ordinary skill in the art that redundancy or path selection methods other than G.8031 may be employed. As shown, CE A 110 may regard link 116 as active and link 118 as inactive for a particular connection 140. Likewise, CE F 120 may regard link 126 as inactive and link 128 as active for the connection 140. Thus connection 140, which may be, for example, a virtual leased line (VLL) service, may traverse links 116, 136, 128 to provide service between CE A 110 and CE F 120.
  • Later, if some fault or other change to network 100 severs this path or renders it inefficient, the path taken by connection 140 may be altered to maintain communication. For example, if a fault occurs in link 116, PE B 112, or both links 132, 136, CE A 110 may determine that link 118 should be regarded as active and link 116 as inactive. In various embodiments herein, as will be described below, this determination by CE A 110 may be driven by separate processes running on PE B 112 and/or PE C 114. In various embodiments, these PE processes may operate prior to CE link switching and thus fully drive the switch, while in other embodiments, the PE processes and CE link switching may operate in parallel. Thereafter, connection 140 may instead traverse links 118, 138, 128.
  • It should be noted that, in various embodiments, active and inactive links may be chosen on a per connection or per connection group basis. For example, a second connection (not shown) may traverse links 118, 138, 128 while connection 140 traverses the links as illustrated. In this way, redundant devices and links may also be leveraged for load balancing.
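To make the per-connection bookkeeping above concrete, the following minimal sketch (in Python, with hypothetical names such as CEDevice, assign, and failover that do not come from the patent) shows one plausible way a CE might pin each connection to a single uplink and move connections off a failed uplink:

```python
class CEDevice:
    """Tracks which uplink is active for each connection (e.g., a VLL)."""

    def __init__(self, links):
        self.links = set(links)   # e.g. {"link116", "link118"}
        self.active = {}          # connection id -> currently active link

    def assign(self, conn_id, link):
        """Pin a connection to one uplink; the other uplinks stay standby."""
        assert link in self.links
        self.active[conn_id] = link

    def failover(self, failed_link):
        """Move every connection on a failed uplink to a surviving one."""
        survivors = self.links - {failed_link}
        for conn_id, link in self.active.items():
            if link == failed_link and survivors:
                self.active[conn_id] = next(iter(survivors))


ce_a = CEDevice(["link116", "link118"])
ce_a.assign("connection140", "link116")  # active path of FIG. 1
ce_a.assign("connection2", "link118")    # second connection, load-balanced
ce_a.failover("link116")                 # fault: connection140 moves to link118
print(ce_a.active)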
  • FIG. 2 illustrates an exemplary network 200 for enabling connection redundancy. Exemplary network 200 may illustrate a more detailed view of CE A 110, PE B 112, and PE C 114 of exemplary network 100. CE A 210, PE B 230, and PE C 250 may correspond to CE A 110, PE B 112, and PE C 114, respectively. As shown, CE A 210 may be configured with a VLL Epipe endpoint 212 for providing a VLL Epipe service to another CE such as, for example, CE F 120 of exemplary network 100. The person of ordinary skill in the art will understand the term “Epipe” to refer to a VLL service for transporting Ethernet frames over an IP/MPLS network and may encompass an E-Line service. It should be apparent that the various mechanisms described herein may be applicable to other VLL services such as, for example, Ipipes, Apipes, Fpipes, and/or Cpipes.
  • CE A 210 may also be configured with a service access point (SAP) 214 facing the customer and providing a user device access to the Epipe 212. The Epipe 212 may be configured to provide an Ethernet linear protection switching service between PE B 230 and PE C 250 according to ITU-T G.8031 220. As part of the G.8031 service, CE A 210 may maintain maintenance endpoints (MEPs) 224, 226 for monitoring the status of the connection to PE B 230 and PE C 250, respectively. MEPs 224, 226 may be implemented according to various Ethernet operations, administration, and maintenance (OAM) protocols known to those of skill in the art. The G.8031 service may use status information obtained from MEPs 224, 226 to make decisions regarding protection switching. For example, if MEP 226 detects a fault or receives an indication of a fault from an associated MEP, G.8031 may direct traffic to PE B 230 instead.
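At its core, the protection-switching decision described above reduces to a small selection rule. The sketch below is an assumed simplification: fault states are passed in as booleans, whereas a real G.8031 implementation would derive them from CCM exchange between the paired MEPs.

```python
def select_link(working_fault: bool, protection_fault: bool) -> str:
    """Return which link carries traffic, given per-MEP fault states."""
    if not working_fault:
        return "working"      # e.g. link to PE B, monitored by MEPs 224/242
    if not protection_fault:
        return "protection"   # e.g. link to PE C, monitored by MEPs 226/262
    return "none"             # both paths faulty

assert select_link(False, True) == "working"
assert select_link(True, False) == "protection"
```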
  • PE B 230 may be configured to support the Epipe service 232 and may be configured with a SAP 240 and a MEP 242. MEP 242 may be paired with MEP 224 on the CE A 210 to monitor the link between the two devices. PE B 230 may also be configured with pseudowire (PW) services 236, 238 for communicating with provider edge devices (not shown) at other locations such as, for example, PE D 122 and PE E 124 of exemplary network 100, respectively. The Epipe service 232 on PE B 230 may select a PW 236, 238 for carrying Epipe traffic and forward all such traffic over the selected PW 236, 238. This selection may be based on coordination with other PEs or CEs. For example, if PE B 230 is aware that the PE to which PW 238 connects is active for the Epipe, PE B 230 may forward all Epipe traffic over PW 238.
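As a rough sketch of this PW selection, the example below collapses the coordination with other PEs into a single assumed active_far_end field; the class and field names are illustrative only, not interfaces defined by the patent.

```python
class PeEpipe:
    """Forwards all Epipe traffic over the PW toward the active far-end PE."""

    def __init__(self, pws):
        self.pws = pws              # far-end PE name -> send callable
        self.active_far_end = None  # learned via coordination (assumed)

    def forward(self, frame):
        self.pws[self.active_far_end](frame)

pe_b = PeEpipe({"PE D": lambda f: print("PW 236 ->", f),
                "PE E": lambda f: print("PW 238 ->", f)})
pe_b.active_far_end = "PE E"       # PE E known to be active for the Epipe
pe_b.forward("ethernet frame")     # all Epipe traffic goes over PW 238
```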
  • PE C 250 may be implemented in a similar manner to PE B 230. For example, PE C may be configured to support Epipe 252, and PWs 256, 258. PE C 250 may also maintain a SAP 260 and a MEP 262 that is paired with MEP 226 of CE A 210. Because PE B 230 and PE C 250 provide redundant service to CE A 210, the PE devices may be referred to as “paired.” As previously explained, PE B 230 and PE C 250 may be connected via one or more ICB PWs (not shown) for redirecting in-flight traffic after CE A 210 redirects traffic from one PE to another.
  • PE B 230 and PE C 250 may exert some control over the operation of the G.8031 service on CE A 210. For example, PE B 230 and PE C 250 may each be configured to operate a border gateway protocol (BGP) multi-homing (MH) service 234, 254 configured to control at least two connection points independently of other connections such as, in this case, SAP 240, 260, respectively, and an endpoint on CE A 210. BGP-MH service 234, 254 may operate between the two PE devices 230, 250 to elect one of PE B 230 and PE C 250 as designated forwarder according to the specifics of that protocol. In various embodiments, BGP-MH services 234, 254 may communicate with each other via an additional or existing link (not shown) between PE devices 230, 250. The elected designated forwarder may then operate as an active gateway (AG). It should be apparent that various alternative protocols may be used instead of BGP-MH to elect an active gateway or to otherwise select one of PE B 230 and PE C 250 to carry traffic.
  • As shown, the BGP-MH service 234 running on PE B 230 may determine that PE B 230 is designated forwarder for the Epipe. In response, BGP-MH service 234 may cause MEP 242 to indicate to MEP 224 on CE A 210 that no fault has been detected in association with the link between CE A 210 and PE B 230. This indication may include affirmatively sending a connectivity fault management (CFM) message 244 indicating “NoFault” in an interface status (ifStatus) type-length-value (TLV) field. Alternatively, this indication may include refraining from sending such a message when a previous CFM message sent by MEP 242 has indicated “NoFault,” thereby allowing CE A 210 to continue under the assumption that there is no fault in the connection between CE A 210 and PE B 230. By making this indication, PE B 230 may indicate that it is available to receive traffic.
  • BGP-MH service 254 running on PE C 250, on the other hand, may come to the conclusion that PE C 250 should not operate as designated forwarder for the Epipe. In response to this determination, BGP-MH service 254 may cause MEP 262 to indicate a fault to MEP 226. This indication may include affirmatively sending a CCM message 264 that notifies MEP 226 of a fault or refraining from sending a message when a previously sent CCM message indicated a fault. Thereafter, the G.8031 service on CE A 210 will set PE C 250 as inactive for the purposes of the Epipe 212 because CE A 210 believes PE C 250 to be unreachable or otherwise unusable.
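The behavior described in the preceding two paragraphs, where the elected designated forwarder advertises “NoFault” and the non-elected PE advertises a fault, with unchanged states simply not re-sent, might be sketched as follows (hypothetical names; a plain dictionary stands in for a CCM carrying an ifStatus TLV):

```python
class PeMep:
    """A PE-side MEP whose advertised status is driven by DF election."""

    def __init__(self, name):
        self.name = name
        self.last_advertised = None   # nothing sent yet

    def advertise(self, is_designated_forwarder: bool):
        status = "NoFault" if is_designated_forwarder else "Fault"
        if status == self.last_advertised:
            return None               # unchanged state: refrain from resending
        self.last_advertised = status
        return {"mep": self.name, "ifStatus": status}  # stand-in for a CCM

pe_b, pe_c = PeMep("MEP 242"), PeMep("MEP 262")
print(pe_b.advertise(True))    # PE B elected: advertises "NoFault"
print(pe_c.advertise(False))   # PE C not elected: advertises "Fault"
print(pe_c.advertise(False))   # state unchanged: None (no message sent)
```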
  • It should be apparent from the foregoing description that the system described enables BGP-MH implementations 234, 254 to control the operation of a G.8031 service without any modification to the operation of the G.8031 service. In particular, the BGP-MH implementations 234, 254 may select one PE 230, 250 to operate as designated forwarder and thereafter may use CFM methods to indicate that only the active gateway has a working connection to CE A 210. On this assumption, the CE A 210 may have no choice but to forward traffic to the active gateway, which is PE B 230 in the illustrated example.
  • It should also be apparent that while the examples provided herein make reference to particular protocols such as VLL, BGP-MH, G.8031, and CFM, various alternative combinations of protocols may be used to provide the described functionality. For example, an alternative embodiment may utilize virtual private LAN service (VPLS) instead of VLL. Various modifications to enable the use of such protocols will be apparent to those of skill in the art.
  • Various embodiments may provide for updating the active gateway upon the occurrence of particular events within network 200. For example, PE B 230 may detect a true fault associated with the link between CE A 210 and PE B 230. In various embodiments, the fault associated with the link between CE A 210 and PE B 230 may include, for example, PE B 230 becoming inoperable, the link between CE A 210 and PE B 230 itself going down, or faults occurring on other links down- or upstream that are likely to impact traffic over the link between CE A 210 and PE B 230. Such a fault may be detected, for example, by the PE device 230 itself discovering a fault or by the PE device 230 receiving a message from another device indicating the detection of a fault elsewhere in the network.
  • As another example, PE B 230 may determine that both PWs 236, 238 are currently faulty and cannot be used to communicate with the PEs on the opposite side of the network. Either of these conditions may render PE B 230 an unsatisfactory choice for carrying the traffic related to the Epipe service. In response to detection of either condition, PE B 230 may send an indication to its paired PE, PE C 250, indicating that PE B 230 is currently experiencing a fault. This may trigger both BGP-MH 234 and BGP-MH 254 to perform the active gateway election procedure again. This time, based on the knowledge of connectivity faults associated with PE B 230, BGP-MH 254 may determine that PE C 250 should now be designated forwarder. BGP-MH 254 may then proceed to indicate, via MEP 262, that there is no fault in the connection between MEP 262 and MEP 226, as discussed above with respect to PE B 230. Thereafter, the G.8031 service on CE A 210 may transmit traffic associated with Epipe 212 to PE C 250.
  • Various embodiments may further implement “fate sharing” to reduce signaling and state overhead. In such embodiments, PE B 230 and PE C 250 may select an existing Epipe to serve as a control. Alternatively, PE B 230 and PE C 250 may establish a new Epipe to serve exclusively as a control. The operation of BGP-MH 234, 254 may then occur as described above with respect to this control Epipe. PE B 230 and PE C 250 may also support a number of additional Epipes (not shown) that are configured to share a fate with the control Epipe. A SAP configured on the PE 230, 250 for each such fate-shared Epipe may monitor the status of the control Epipe and mirror the monitored status. Thus, if the control Epipe indicates a fault, the SAP for each fate-shared Epipe may also indicate a fault, thereby ensuring that the CE 210 chooses the same PE 230, 250 to handle all traffic from any of the fate-shared Epipes.
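A minimal sketch of this fate-sharing arrangement appears below; the ControlEpipe and FateSharedSap names are assumptions for illustration, and the point is only that the fate-shared SAPs mirror the control Epipe's status rather than running their own election:

```python
class ControlEpipe:
    """The single Epipe on which the election is actually run."""

    def __init__(self):
        self.status = "NoFault"

class FateSharedSap:
    """A SAP for a fate-shared Epipe: mirrors the control Epipe's status."""

    def __init__(self, control: ControlEpipe):
        self.control = control

    @property
    def status(self):
        return self.control.status   # mirror only; no independent election

control = ControlEpipe()
saps = [FateSharedSap(control) for _ in range(3)]
control.status = "Fault"             # control Epipe loses the election
print([sap.status for sap in saps])  # ['Fault', 'Fault', 'Fault']
```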
  • FIG. 3 illustrates an exemplary provider edge (PE) device 300 for enabling connection redundancy. PE device 300 may correspond to one or more of PE devices 112, 114, 122, 124, 230, 250. PE device 300 may include a customer edge interface 310, virtual leased line module 320, pseudowire module 330, backbone interface 340, connectivity fault management module 350, border gateway protocol module 360, and/or provider edge interface 370. It will be understood that various components of PE device 300 may be abstracted to a degree and that PE device 300 may include a number of hardware components implementing or supporting the components described herein. For example, PE device 300 may include one or more processors for implementing the functionality described herein. As used herein, the term “processor” will be understood to include processors and other similar hardware components such as field programmable gate arrays and/or application-specific integrated circuits.
  • Customer edge interface 310 may be an interface comprising hardware and/or executable instructions encoded on a machine-readable storage medium configured to communicate with at least one other device, such as a CE device. In various embodiments, customer edge interface 310 may include one or more interfaces that communicate according to a protocol such as Ethernet, Frame Relay, ATM, and/or PPP. During operation, customer edge interface 310 may communicate with one or more customer edge devices.
  • Virtual leased line (VLL) module 320 may include hardware and/or executable instructions on a machine-readable storage medium configured to provide a VLL service. VLL module 320 may be configured with one or more SAPs for VLL services and, upon receiving traffic from a CE device, associate the traffic with an appropriate SAP. After determining that received traffic is associated with a particular SAP for a VLL service, VLL module 320 may select an appropriate pseudowire over which to forward the traffic. VLL module 320 may then pass the traffic and selection on to pseudowire module 330 for further processing. VLL module 320 may also be configured to process traffic in the reverse direction. In particular, VLL module 320 may receive traffic from pseudowire module 330, associate it with a particular VLL service, and forward the traffic to one or more customer edge devices via customer edge interface 310. It will be apparent that the foregoing description of implementing a VLL service may be a simplification in some respects. Various additional or alternative details for implementing VLL services will be apparent to those of skill in the art.
  • Pseudowire (PW) module 330 may include hardware and/or executable instructions on a machine-readable storage medium configured to provide and maintain pseudowires across a network to other PE devices. For example, PW module 330 may receive traffic from VLL module 320 and an indication of a PW over which to transmit the traffic. PW module 330 may then encapsulate the traffic in an appropriate tunneling protocol such as, for example, MPLS, and forward the encapsulated traffic to another PE device via backbone interface 340. PW module 330 may also handle traffic flowing in the opposite direction. For example, PW module 330 may receive traffic via backbone interface 340, decapsulate the traffic, and pass the traffic to VLL module 320 for further processing. It will be apparent that the foregoing description of implementing a PW service may be a simplification in some respects. Various additional or alternative details for implementing PW services will be apparent to those of skill in the art.
  • PW module 330 may also provide various maintenance functions with respect to established pseudowires. For example, PW module 330 may detect faults in established PWs or receive indications of faults from other devices supporting a multi-segment PW. Upon determining that one or more PWs associated with a VLL are experiencing faults, PW module 330 may send an indication of such to border gateway protocol module 360. In some embodiments, PW module 330 may only send such an indication when all PWs associated with a VLL are experiencing faults.
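Under the stricter embodiment just mentioned, the aggregation rule might look like the following sketch (function and variable names assumed): a fault is raised toward the election logic only when every PW of the VLL is down.

```python
def vll_pw_fault(pw_states: dict) -> bool:
    """pw_states maps a PW id -> True if that PW is currently faulty."""
    return all(pw_states.values()) if pw_states else False

assert vll_pw_fault({"pw236": True, "pw238": False}) is False  # one path left
assert vll_pw_fault({"pw236": True, "pw238": True}) is True    # report fault
```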
  • Backbone interface 340 may be an interface comprising hardware and/or executable instructions encoded on a machine-readable storage medium configured to communicate with at least one other device that forms part of a network backbone. In various embodiments, backbone interface 340 may include one or more interfaces that communicate according to a protocol such as MPLS.
  • Connectivity fault management (CFM) module 350 may include hardware and/or executable instructions on a machine-readable storage medium configured to provide connectivity fault management with respect to various links established via customer edge interface 310. For example, CFM module 350 may implement Ethernet OAM according to IEEE 802.1ag. As such, CFM module 350 may establish and maintain various MEPs associated with customer edge interface 310. During the course of operation, CFM module 350 may discover faults on various links associated with customer edge interface 310. Upon discovering such a fault, CFM module 350 may report the fault to border gateway protocol module 360. It will be appreciated that various alternative fault management protocols may be used instead of Ethernet OAM. Accordingly, CFM module 350 may be referred to as a “fault reporting module,” to refer to a module that implements any fault management functions, regardless of whether it is implemented according to Ethernet OAM or another protocol.
  • In addition to the normal CFM operation, CFM module 350 may perform various functions at the request of BGP module 360. For example, under various circumstances, BGP module 360 may instruct CFM module 350 to construct and send a CFM message to a particular MEP. Thus, upon request, CFM module 350 may construct and transmit a CFM message indicating a fault regardless of the actual existence of such a fault. Likewise, the CFM module 350 may, on request by BGP module 360, construct and send a CFM message indicating that no fault exists.
  • Border gateway protocol (BGP) module 360 may include hardware and/or executable instructions on a machine-readable storage medium configured to implement various aspects of the border gateway protocol. For example, BGP module 360 may implement the designated forwarder election process defined for BGP multi-homing applications. This designated forwarder may then be used as the active gateway (AG). It will be appreciated that various alternative AG election methods may be employed instead of BGP. Accordingly, BGP module 360 may be referred to as an “AG election module” to refer to a module configured to elect an AG, regardless of whether it is implemented according to BGP or some other protocol.
  • BGP module 360 may elect an AG under various circumstances. For example, on the establishment of a new VLL service, BGP module 360 may make an initial election of an AG. BGP module 360 may perform the election process again in response to changing network conditions. For example, if either CFM module 350 or PW module 330 reports a fault to BGP module 360, BGP module 360 may proceed to perform AG election based on the new information.
  • BGP module 360 may further be configured to communicate with one or more paired PE devices via provider edge interface 370. In cases where CFM module 350 or PW module 330 reports a fault to BGP module 360, BGP module 360 may send an indication that PE 300 is experiencing a fault to one or more paired PE devices via provider edge interface 370. BGP module 360 may also receive similar indications from paired PE devices via provider edge interface 370. BGP module 360 may perform AG election again in response to receiving such an indication.
  • After performing the AG election process, as will be described in greater detail below with respect to FIG. 6, BGP module 360 may have decided whether PE 300 will be designated forwarder for the VLL service. If the PE 300 will be designated forwarder for the VLL service, BGP module 360 may indicate to an appropriate CE device that there is no fault on the link between CE interface 310 and that CE device. This may include instructing CFM module 350 to construct and transmit a CFM message. On the other hand, if the PE 300 will not be designated forwarder for the VLL service, BGP module 360 may indicate to an appropriate CE device that a fault exists on the link between CE interface 310 and that CE device. Again, this may include instructing CFM module 350 to construct and transmit a CFM message.
  • Provider edge interface 370 may be an interface comprising hardware and/or executable instructions encoded on a machine-readable storage medium configured to communicate with at least one other device, such as a paired PE device. In various embodiments, provider edge interface 370 may include one or more interfaces that communicate according to a protocol such as Ethernet, Frame Relay, ATM, and/or PPP. During operation, provider edge interface 370 may communicate with one or more paired provider edge devices. In various embodiments, provider edge interface 370 may share at least some hardware in common with customer edge interface 310.
  • FIG. 4 illustrates an exemplary method 400 for controlling an initial selection of a provider edge device. Method 400 may be performed by the components of a PE device such as PE device 300. For example, method 400 may be performed by CFM module 350 and/or BGP module 360.
  • Method 400 may begin in step 405 and proceed to step 410 where the PE device may send initial CFM signals to a CE device. For example, the PE device may send a CFM message indicating a fault to an appropriate MEP configured on the CE device. Next, in step 415, the PE device may perform AG election to determine whether the PE device will be designated forwarder. An example of an AG election process will be described in greater detail below with respect to FIG. 6.
  • In step 420, the PE device may evaluate whether the AG election process has elected the PE device as designated forwarder. If not, method 400 may proceed to step 425 where the PE device may indicate a fault to the CE device. In various embodiments, this step may include simply refraining from sending additional CFM messages. In particular, because a fault CFM message was sent previously in step 410, it may be unnecessary to send an additional fault CFM message. Method 400 may then proceed to end in step 435.
  • If, on the other hand, the AG election process of step 415 elects the PE as designated forwarder, method 400 may instead proceed from step 420 to step 430. In step 430, the PE device may indicate a “no fault” condition to the CE device. In various embodiments, this step may include simply refraining from sending additional CFM messages. Alternatively, because the previous message sent in step 410 indicated a fault, the PE device may construct and transmit a new “no fault” CFM message to the appropriate MEP configured on the CE device. Method 400 may then proceed to end in step 435.
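Read together with FIG. 4, steps 405-435 might be sketched as below; perform_ag_election and send_cfm_status are assumed helpers standing in for the election process of FIG. 6 and the CFM module, respectively:

```python
def initial_selection(perform_ag_election, send_cfm_status):
    send_cfm_status("Fault")          # step 410: initial CFM signal to the CE
    if perform_ag_election():         # steps 415/420: elected as DF?
        send_cfm_status("NoFault")    # step 430: attract the CE's traffic
    # step 425: not elected -- the fault sent in step 410 already stands,
    # so the device may simply refrain from sending anything further.

initial_selection(lambda: True, lambda s: print("CCM ifStatus:", s))
```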
  • FIG. 5 illustrates an exemplary method 500 for controlling a selection of a provider edge device based on the occurrence of various faults. Method 500 may be performed by the components of a PE device such as PE device 300. For example, method 500 may be performed by CFM module 350 and/or BGP module 360.
  • Method 500 may begin in step 505 and proceed to step 510 where the PE device may monitor for various events that may impact the network. After receiving an indication of such an event, method 500 may proceed to step 515 where the PE device may determine whether the event included the detection of a new CFM fault at the PE device. If so, method 500 may proceed to step 525. Otherwise, method 500 may proceed to step 520. In step 520, the PE device may determine whether the event included the detection of a new pseudowire fault. Again, if so, method 500 may proceed to step 525. Otherwise, method 500 may proceed to step 530 where the PE device may determine whether the event included receiving an indication that a paired PE is currently experiencing a fault. For example, the PE device may receive a message indicating that a paired PE device has detected a CFM or PW fault. If a paired PE device is experiencing a fault, method 500 may proceed to step 535. Otherwise, method 500 may proceed to end in step 555.
  • In step 525, the PE device may send an indication to any paired PE devices that the PE device is experiencing a fault. This indication may include specific details describing the fault such as, for example, whether the fault is a CFM or PW fault. Various methods of communicating such fault information between paired PE devices will be apparent to those of skill in the art. Method 500 may then proceed to step 535. Steps 535-550 may correspond to steps 415-430 of method 400. After indicating a “fault” or “no fault” status to the CE device, method 500 may proceed to end in step 555.
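The event handling of method 500 might be summarized as in the sketch below, where the event names and callbacks are assumptions rather than terms from the patent:

```python
def handle_event(event, notify_pair, reelect_and_signal):
    if event in ("local_cfm_fault", "local_pw_fault"):  # steps 515/520
        notify_pair(event)                              # step 525
        reelect_and_signal()                            # steps 535-550
    elif event == "pair_fault_indication":              # step 530
        reelect_and_signal()
    # any other event: nothing to do (end, step 555)

handle_event("local_pw_fault",
             lambda e: print("tell paired PE:", e),
             lambda: print("re-run AG election, re-signal CE"))
```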
  • FIG. 6 illustrates an exemplary method 600 for electing an active gateway. Method 600 may be performed by the components of a PE device such as PE device 300. For example, method 600 may be performed by BGP module 360. It should be noted that method 600 is one example of an AG election process and that alternative methods may be useful or appropriate in various alternative embodiments.
  • Method 600 may begin in step 605 and proceed to step 610 where the PE device may determine whether the PE device is the only device not currently experiencing a CFM fault. If the PE device is not experiencing a CFM fault but any paired PE devices are experiencing a CFM fault, method 600 may proceed to elect the PE device as AG in step 630. Otherwise, method 600 may proceed to step 615.
  • In step 615, the PE device may determine whether it is currently experiencing a CFM fault while at least one other PE device is not experiencing such a fault. If so, method 600 may proceed to determine that the PE device should not be elected AG in step 635. Otherwise, method 600 may proceed to step 620.
  • In step 620, the PE device may determine whether the PE device is the only device not currently experiencing a PW fault. In various embodiments, a PW fault may exist only when all appropriate PWs for a VLL are experiencing faults. If the PE device is not experiencing a PW fault but any paired PE devices are experiencing a PW fault, method 600 may proceed to elect the PE device as AG in step 630. Otherwise, method 600 may proceed to step 625.
  • In step 625, the PE device may determine whether it is currently experiencing a PW fault while at least one other PE device is not experiencing such a fault. If so, method 600 may proceed to determine that the PE device should not be elected AG in step 635. Otherwise, method 600 may proceed to step 640.
  • In step 640, the PE device may proceed to perform further election procedures based on the BGP-MH protocol. For example, the PE device may attempt to make an election based on a local preference, an AS-PATH attribute, and/or a NEXT-HOP attribute. Various modifications will be apparent to those of skill in the art. Once an AG has been elected, method 600 may proceed to end in step 645.
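The election order of method 600, CFM faults first, then PW faults, then ordinary BGP-MH tie-breakers, might be sketched as follows (the bgp_tiebreak callback is an assumed stand-in for step 640):

```python
def elect_ag(my_cfm_fault, pair_cfm_fault, my_pw_fault, pair_pw_fault,
             bgp_tiebreak):
    if not my_cfm_fault and pair_cfm_fault:
        return True                  # step 610 -> 630: elect this PE as AG
    if my_cfm_fault and not pair_cfm_fault:
        return False                 # step 615 -> 635: do not elect this PE
    if not my_pw_fault and pair_pw_fault:
        return True                  # step 620 -> 630
    if my_pw_fault and not pair_pw_fault:
        return False                 # step 625 -> 635
    return bgp_tiebreak()            # step 640: BGP-MH attribute comparison

assert elect_ag(False, True, False, False, lambda: False) is True
assert elect_ag(False, False, True, False, lambda: True) is False
```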
  • According to the foregoing, various embodiments enable the provision of a redundant, multi-technology, point-to-point service that does not require the learning of MAC addresses. For example, by leveraging BGP-MH designated forwarder election processes to control linear protection switching, traffic can be reliably transported across a backbone or other network without incurring the overhead of an address learning system. Various additional advantages will be apparent to those of skill in the art.
  • It should be apparent from the foregoing description that various exemplary embodiments of the invention may be implemented in hardware and/or firmware. Furthermore, various exemplary embodiments may be implemented as instructions stored on a machine-readable storage medium, which may be read and executed by at least one processor to perform the operations described in detail herein. A machine-readable storage medium may include any mechanism for storing information in a form readable by a machine, such as a personal or laptop computer, a server, or other computing device. Thus, a tangible and non-transitory machine-readable storage medium may include read-only memory (ROM), random-access memory (RAM), magnetic disk storage media, optical storage media, flash-memory devices, and similar storage media.
  • It should be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative circuitry embodying the principles of the invention. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudo code, and the like represent various processes which may be substantially represented in machine readable media and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.
  • Although the various exemplary embodiments have been described in detail with particular reference to certain exemplary aspects thereof, it should be understood that the invention is capable of other embodiments and its details are capable of modifications in various obvious respects. As is readily apparent to those skilled in the art, variations and modifications can be effected while remaining within the spirit and scope of the invention. Accordingly, the foregoing disclosure, description, and figures are for illustrative purposes only and do not in any way limit the invention, which is defined only by the claims.

Claims (17)

What is claimed is:
1. A method performed by a provider edge device for enabling connection redundancy, the method comprising:
performing an active gateway election to determine whether the provider edge device will be an active gateway for a connection;
based on a determination that the provider edge device will be the active gateway for the connection and regardless of whether a fault has been detected, sending a no fault message to a customer edge device indicating that no fault is currently associated with a link between the customer edge device and the provider edge device; and
based on a determination that the provider edge device will not be the active gateway for the connection and regardless of whether a fault has been detected, sending a fault message to the customer edge device indicating that a fault is currently associated with the link between the customer edge device and the provider edge device.
2. The method of claim 1, further comprising:
determining that a paired provider edge device is currently experiencing a fault,
wherein the step of performing an active gateway election is performed in response to determining that a paired provider edge device is currently experiencing a fault.
3. The method of claim 1, further comprising:
detecting a fault associated with the link between the provider edge device and the customer edge device; and
sending an indication to a paired provider edge device that the provider edge is currently experiencing a fault.
4. The method of claim 1, further comprising:
detecting a fault on at least two links between the provider edge device and other devices in the network; and
sending an indication to a paired provider edge device that the provider edge is currently experiencing a fault.
5. The method of claim 1, wherein the step of sending a no fault message to a customer edge device indicating that no fault is currently associated with a link between the customer edge device and the provider edge device comprises:
constructing a connectivity fault message that indicates that no fault has been detected; and
transmitting the connectivity fault message to a maintenance endpoint of the customer edge device.
6. The method of claim 1, wherein the active gateway election is performed according to the border gateway protocol.
7. The method of claim 1, wherein the active gateway election comprises:
determining whether the provider edge is currently experiencing a connectivity fault management (CFM) fault;
determining whether a paired provider edge is currently experiencing a CFM fault;
based on a determination that the provider edge is not currently experiencing a CFM fault and the paired provider edge is currently experiencing a CFM fault, determining that the provider edge device will be the active gateway; and
based on a determination that the provider edge is currently experiencing a CFM fault and the paired provider edge is not currently experiencing a CFM fault, determining that the provider edge device will not be the active gateway.
8. The method of claim 1, wherein the active gateway election comprises:
determining whether the provider edge is currently experiencing a pseudowire (PW) fault;
determining whether a paired provider edge is currently experiencing a PW fault;
based on a determination that the provider edge is not currently experiencing a PW fault and the paired provider edge is currently experiencing a PW fault, determining that the provider edge device will be the active gateway; and
based on a determination that the provider edge is currently experiencing a PW fault and the paired provider edge is not currently experiencing a PW fault, determining that the provider edge device will not be the active gateway.
9. The method of claim 1, wherein the connection is a control connection, the method further comprising:
identifying a fate-shared connection associated with the control connection;
based on a determination that the provider edge device will be the active gateway for the control connection, sending a no fault message to a customer edge device indicating that no fault is currently associated with a link between the customer edge device and the provider edge device for the fate-shared connection; and
based on a determination that the provider edge device will not be the active gateway for the control connection, sending a fault message to the customer edge device indicating that a fault is currently associated with the link between the customer edge device and the provider edge device for the fate-shared connection.
10. A provider edge device for enabling connection redundancy, the provider edge device comprising:
a customer edge interface configured to communicate with a customer edge device;
an active gateway election module configured to determine whether the provider edge device will be an active gateway for a connection; and
a fault reporting module configured to:
based on a determination that the provider edge device will be the active gateway for the connection and regardless of whether a fault has been detected, send a no fault message to a customer edge device indicating that no fault is currently associated with a link between the customer edge device and the provider edge device, and
based on a determination that the provider edge device will not be the active gateway for the connection and regardless of whether a fault has been detected, send a fault message to the customer edge device indicating that a fault is currently associated with the link between the customer edge device and the provider edge device.
11. The provider edge device of claim 10, further comprising:
a provider edge interface configured to communicate with a paired provider edge device;
wherein the active gateway election module is further configured to:
determine, based on information received via the provider edge interface, that the paired provider edge device is currently experiencing a fault,
wherein the active gateway election module is configured to determine whether the provider edge device will be an active gateway for a connection in response to determining that the paired provider edge device is currently experiencing a fault.
12. The provider edge device of claim 10, further comprising:
a provider edge interface configured to communicate with a paired provider edge device,
wherein the fault reporting module is further configured to detect a fault associated with the link between the provider edge device and the customer edge device, and
wherein the active gateway election module is further configured to, in response to the fault reporting module detecting the fault, send an indication to the paired provider edge via the provider edge interface that the provider edge is currently experiencing a fault.
13. The provider edge device of claim 10, further comprising:
a provider edge interface configured to communicate with a paired provider edge device;
a backbone interface configured to communicate with at least one other device; and
a pseudowire module configured to detect a fault on a link between the provider edge device and the at least one other device,
wherein the active gateway election module is further configured to, in response to the pseudowire module detecting the fault, send an indication to the paired provider edge via the provider edge interface that the provider edge is currently experiencing a fault.
14. The provider edge device of claim 10, wherein the fault reporting module is a connectivity fault management module and, in sending a no fault message to a customer edge device indicating that no fault is currently associated with a link between the customer edge device and the provider edge device, the connectivity fault management module is configured to:
construct a connectivity fault message that indicates that no fault has been detected; and
transmit the connectivity fault message, via the customer edge interface, to a maintenance endpoint of the customer edge device.
15. The provider edge device of claim 10, wherein the active gateway election module is a border gateway protocol module and, in determining whether the provider edge device will be an active gateway for a connection, the border gateway protocol module is configured to elect an active gateway based on the border gateway protocol.
16. The provider edge device of claim 10, wherein the active gateway election module is configured to elect an active gateway based on at least one of:
current connectivity fault management faults;
current pseudowire faults; and
border gateway protocol attributes.
17. The provider edge device of claim 10, wherein the connection is a control connection, and
wherein the fault reporting module is further configured to, for each fate shared connection of a set of fate-shared connections:
based on a determination that the provider edge device will be the active gateway for the control connection, send a no fault message to a customer edge device indicating that no fault is currently associated with a link between the customer edge device and the provider edge device for the fate-shared connection; and
based on a determination that the provider edge device will not be the active gateway for the control connection, send a fault message to the customer edge device indicating that a fault is currently associated with the link between the customer edge device and the provider edge device for the fate-shared connection.
US14/521,174 2012-01-27 2014-10-22 Redundant network connections Abandoned US20150043326A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/521,174 US20150043326A1 (en) 2012-01-27 2014-10-22 Redundant network connections

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US13/359,993 US8908537B2 (en) 2012-01-27 2012-01-27 Redundant network connections
US14/521,174 US20150043326A1 (en) 2012-01-27 2014-10-22 Redundant network connections

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US13/359,993 Continuation US8908537B2 (en) 2012-01-27 2012-01-27 Redundant network connections

Publications (1)

Publication Number Publication Date
US20150043326A1 true US20150043326A1 (en) 2015-02-12

Family

ID=47679039

Family Applications (2)

Application Number Title Priority Date Filing Date
US13/359,993 Expired - Fee Related US8908537B2 (en) 2012-01-27 2012-01-27 Redundant network connections
US14/521,174 Abandoned US20150043326A1 (en) 2012-01-27 2014-10-22 Redundant network connections

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US13/359,993 Expired - Fee Related US8908537B2 (en) 2012-01-27 2012-01-27 Redundant network connections

Country Status (6)

Country Link
US (2) US8908537B2 (en)
EP (1) EP2807797A1 (en)
JP (1) JP5913635B2 (en)
KR (1) KR101706439B1 (en)
CN (1) CN104255002A (en)
WO (1) WO2013112472A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016133532A1 (en) * 2015-02-20 2016-08-25 Hewlett Packard Enterprise Development Lp Providing a redundant connection in response to a modified connection

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8908537B2 (en) * 2012-01-27 2014-12-09 Alcatel Lucent Redundant network connections
JP6036118B2 (en) * 2012-09-28 2016-11-30 ブラザー工業株式会社 Communication device
JP5907033B2 (en) 2012-09-28 2016-04-20 ブラザー工業株式会社 Communication device
TW201434292A (en) * 2012-10-15 2014-09-01 Interdigital Patent Holdings Failover recovery methods with an edge component
US10032003B2 (en) 2013-05-03 2018-07-24 Sierra Nevada Corporation Patient medical data access system
GB2529680B (en) * 2014-08-28 2021-03-03 Metaswitch Networks Ltd Network connectivity
US9923756B2 (en) * 2014-09-11 2018-03-20 Adva Optical Networking Se Maintenance entity group end point of a subnetwork within a multi-domain network
KR101761648B1 (en) 2016-04-27 2017-07-26 주식회사 삼진엘앤디 Method for building dynamic bridge node in wireless mesh-network
JP6743494B2 (en) * 2016-06-03 2020-08-19 富士通株式会社 Transmission system, communication device, and path switching method
US20180091445A1 (en) * 2016-09-29 2018-03-29 Juniper Networks, Inc. Evpn designated forwarder state propagation to customer edge devices using connectivity fault management
US10326689B2 (en) * 2016-12-08 2019-06-18 At&T Intellectual Property I, L.P. Method and system for providing alternative communication paths
US10263661B2 (en) 2016-12-23 2019-04-16 Sierra Nevada Corporation Extended range communications for ultra-wideband network nodes
US10523498B2 (en) 2016-12-23 2019-12-31 Sierra Nevada Corporation Multi-broker messaging and telemedicine database replication
JP6839115B2 (en) * 2018-02-08 2021-03-03 日本電信電話株式会社 Carrier network equipment, network systems, and programs
US10742728B2 (en) * 2018-06-07 2020-08-11 At&T Intellectual Property I, L.P. Edge sharing orchestration system
US10587488B2 (en) * 2018-06-29 2020-03-10 Juniper Networks, Inc. Performance monitoring support for CFM over EVPN
CN110661701B (en) * 2018-06-30 2022-04-22 华为技术有限公司 Communication method, equipment and system for avoiding loop
US10996971B2 (en) * 2018-07-12 2021-05-04 Verizon Patent And Licensing Inc. Service OAM for virtual systems and services
US20230078931A1 (en) * 2020-02-28 2023-03-16 Nippon Telegraph And Telephone Corporation Network system and network switching method
CN111740898B (en) * 2020-05-26 2023-03-31 新华三信息安全技术有限公司 Link switching method and device and service provider edge equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060182122A1 (en) * 2005-02-11 2006-08-17 Davie Bruce S Inter-autonomous-system virtual private network with autodiscovery and connection signaling
US20060274746A1 (en) * 2005-06-01 2006-12-07 Phoenix Contact Gmbh & Co. Kg Apparatus and method for combined transmission of input/output data in automation bus systems
US20070047436A1 (en) * 2005-08-24 2007-03-01 Masaya Arai Network relay device and control method
US20090010153A1 (en) * 2007-07-03 2009-01-08 Cisco Technology, Inc. Fast remote failure notification
US7515525B2 (en) * 2004-09-22 2009-04-07 Cisco Technology, Inc. Cooperative TCP / BGP window management for stateful switchover
US20120127855A1 (en) * 2009-07-10 2012-05-24 Nokia Siemens Networks Oy Method and device for conveying traffic
US20130194911A1 (en) * 2012-01-27 2013-08-01 Alcatel-Lucent Canada, Inc. Redundant network connections

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
IL176330A0 (en) 2006-06-15 2007-07-04 Eci Telecom Ltd Technique of traffic protection loop-free interconnection for ethernet and/or vpls networks
EP1956766A1 (en) 2007-02-12 2008-08-13 Nokia Siemens Networks Gmbh & Co. Kg Method and device in particular for forwarding layer3- information and communication system comprising such device
US20120113835A1 (en) 2008-11-07 2012-05-10 Nokia Siemens Networks Oy Inter-network carrier ethernet service protection
US8593973B2 (en) 2010-03-09 2013-11-26 Juniper Networks, Inc. Communicating network path and status information in multi-homed networks
US8817594B2 (en) 2010-07-13 2014-08-26 Telefonaktiebolaget L M Ericsson (Publ) Technique establishing a forwarding path in a network system


Also Published As

Publication number Publication date
US20130194911A1 (en) 2013-08-01
EP2807797A1 (en) 2014-12-03
KR20140116161A (en) 2014-10-01
CN104255002A (en) 2014-12-31
KR101706439B1 (en) 2017-02-13
JP5913635B2 (en) 2016-04-27
US8908537B2 (en) 2014-12-09
WO2013112472A1 (en) 2013-08-01
JP2015508631A (en) 2015-03-19

Similar Documents

Publication Publication Date Title
US8908537B2 (en) Redundant network connections
US9900245B2 (en) Communicating network path and status information in multi-homed networks
US8982710B2 (en) Ethernet operation and maintenance (OAM) with flexible forwarding
US9344325B2 (en) System, method and apparatus providing MVPN fast failover
US7969866B2 (en) Hierarchical virtual private LAN service hub connectivity failure recovery
US20160041888A1 (en) Link state relay for physical layer emulation
US20100287405A1 (en) Method and apparatus for internetworking networks
EP2110987A1 (en) Connectivity fault management traffic indication extension
US9185025B2 (en) Internetworking and failure recovery in unified MPLS and IP networks
US8817601B2 (en) HVPLS hub connectivity failure recovery with dynamic spoke pseudowires
CN111385138B (en) Core isolation for logical tunnels splicing multi-homed EVPN and L2 circuits
US8670299B1 (en) Enhanced service status detection and fault isolation within layer two networks
CN106856446B (en) Method and system for improving virtual network reliability
US20220294728A1 (en) Packet Transmission Path Switching Method, Device, and System
US9565054B2 (en) Fate sharing segment protection

Legal Events

Date Code Title Description
AS Assignment

Owner name: ALCATEL-LUCENT, USA, NEW JERSEY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:FEDYK, DON;REEL/FRAME:034010/0569

Effective date: 20120125

Owner name: ALCATEL-LUCENT, FRANCE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ALCATEL-LUCENT CANADA, INC.;REEL/FRAME:034010/0683

Effective date: 20130220

Owner name: ALCATEL-LUCENT CANADA, INC., CANADA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:PRIBHAI, SHAFIQ;REEL/FRAME:034010/0574

Effective date: 20140126

Owner name: ALCATEL-LUCENT, FRANCE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ALCATEL-LUCENT USA, INC.;REEL/FRAME:034010/0642

Effective date: 20130220

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION