WO2020136661A1 - First node, second node, third node and methods performed thereby for handling data traffic in an ethernet segment - Google Patents


Publication number: WO2020136661A1
Authority: WIPO (PCT)
Prior art keywords: node, indication, data traffic, nodes, destination
Application number: PCT/IN2018/050887
Other languages: French (fr)
Inventor: Atul VADERA
Original Assignee: Telefonaktiebolaget Lm Ericsson (Publ)
Application filed by Telefonaktiebolaget Lm Ericsson (Publ)
Priority to PCT/IN2018/050887
Publication of WO2020136661A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 Data switching networks
    • H04L 12/28 Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L 12/46 Interconnection of networks
    • H04L 12/4604 LAN interconnection over a backbone network, e.g. Internet, Frame Relay
    • H04L 12/462 LAN interconnection over a bridge based backbone
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 Data switching networks
    • H04L 12/28 Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L 12/40 Bus networks
    • H04L 12/40169 Flexible bus arrangements
    • H04L 12/40176 Flexible bus arrangements involving redundancy
    • H04L 12/40182 Flexible bus arrangements involving redundancy by using a plurality of communication lines
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 43/00 Arrangements for monitoring or testing data switching networks
    • H04L 43/08 Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L 43/0823 Errors, e.g. transmission errors
    • H04L 43/0829 Packet loss
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 43/00 Arrangements for monitoring or testing data switching networks
    • H04L 43/08 Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L 43/0876 Network utilisation, e.g. volume of load or congestion level

Definitions

  • the present disclosure relates generally to a first node and methods performed thereby for handling data traffic in an Ethernet Segment (ES) within a communications network.
  • the present disclosure also relates generally to a second node, and methods performed thereby for handling data traffic in the ES within the communications network.
  • the present disclosure also relates generally to a third node, and methods performed thereby for handling data traffic in the ES within the communications network.
  • the present disclosure further also relates generally to computer programs and computer-readable storage mediums, having stored thereon the computer programs to carry out these methods.
  • a communications network may comprise one or more computer systems, which may also be referred to simply as nodes.
  • a node may comprise one or more processors which, together with computer program code, may perform different functions and actions, as well as a memory, a receiving port and a sending port.
  • a node may be, for example, a router.
  • Virtual Private Local Area Network Service (VPLS) enables Ethernet-based multipoint-to-multipoint communication over Internet Protocol (IP) or Multiprotocol Label Switching (MPLS).
  • VPLS may be understood to enable geographically dispersed sites, e.g., servers and clients, to share an Ethernet broadcast domain by connecting sites through pseudowires or Border Gateway Protocol (BGP) neighbors belonging to the same Ethernet VPN instance (EVI).
  • the provider network may then emulate a switch or bridge to connect all of the customer LANs to create a single bridged LAN.
  • a Broadcast domain may be understood as a logical division of a computer network, in which all nodes comprised in the domain may reach each other by broadcast at the data link layer.
  • a Provider Edge (PE) may be understood as a router that may be located between the area of a provider of a network service and areas that may be under the administration of other network providers.
  • a PE may be connected to multiple remote PEs which may belong to the same Broadcast Domain for an EVPN instance (EVI).
  • EVPN Ethernet Virtual Private Network
  • a remote PE may be understood as a PE towards which the traffic may be forwarded or flooded. That is, PEs which may be upstream of a destination host.
  • Broadcast, Unknown Unicast and Multicast (BUM) Traffic may be sent to all the remote PEs, or, in other words, BUM traffic may be flooded.
  • Flooding may be understood as forwarding by, in this case a PE, of traffic to all the remote PEs attached to the PE.
  • In multi-homing, a host or a computer network may be understood to be connected to more than one network.
  • VPLS multi-homing may be understood to enable connecting a customer site to a plurality of PEs in order to provide redundant connectivity.
  • a redundant PE router may therefore be understood to be enabled to provide network service to a customer site upon detection of a failure.
  • An Ethernet segment may be understood as a set of Ethernet links where a customer site, e.g., a device or network, may be connected to one or more PEs via this set of links.
  • the set of PEs belonging to the same Ethernet Segment (ES) may be referred to as a redundancy group.
  • the BUM traffic may be forwarded to all the PEs.
  • this redundancy group may have any number of PEs.
  • One of the PEs may be elected as the Designated-Forwarder (DF) for the ES.
  • the DF PE may be understood to be the one to forward the BUM traffic on the access interface towards the Customer Edge (CE), whereas all non-DF PEs may be understood to drop the BUM traffic.
  • Flooding may be understood to play an important role for achieving VPLS functionality using EVPN, where all BUM traffic may be understood to be flooded.
  • the current flooding approach in the case of multi-homing where several PEs may be part of an ES results in waste of bandwidth and energy resources, as well as in extended convergence outage in the network.
  • the object is achieved by a method performed by a second node.
  • the method is for handling data traffic in an ES within a communications network.
  • the data traffic has an unknown destination.
  • the ES comprises a plurality of nodes providing multi-homing service.
  • the plurality of nodes comprises the second node.
  • the second node sends a first indication to a first node having a first connection to the ES.
  • the first indication indicates that the second node is, within the ES, a Designated Forwarder (DF).
  • the second node also receives, based on the sent first indication, data traffic with unknown destination from the first node.
  • the object is achieved by a method performed by the first node.
  • the method is for handling data traffic in the ES within the communications network.
  • the data traffic has an unknown destination.
  • the ES comprises the plurality of nodes providing multi-homing service.
  • the first node receives the first indication from the second node comprised in the ES.
  • the first indication indicates that the second node is, within the ES, the DF.
  • Upon receipt of data traffic with unknown destination to propagate via the ES, the first node then forwards the data traffic with unknown destination to the second node, based on the received first indication.
  • the first node also refrains from forwarding the data traffic to the other nodes comprised in the ES different from the second node.
  • the object is achieved by a method performed by a third node.
  • the method is for handling data traffic in the ES within the communications network.
  • the data traffic has an unknown destination.
  • the ES comprises the plurality of nodes providing multi-homing service.
  • the plurality of nodes comprises the third node.
  • the third node sends the second indication to the second node within the ES.
  • the second indication indicates a third indication that is to be used when forwarding data traffic with unknown destination to the third node, when the third node acts as backup path within the ES to forward data traffic.
  • the third node also receives, along with the third indication, data traffic with unknown destination from the second node.
  • the object is achieved by a second node, for handling data traffic in the ES within the communications network.
  • the data traffic is configured to have an unknown destination.
  • the ES is configured to comprise the plurality of nodes being configured to provide multi-homing service.
  • the plurality of nodes is configured to comprise the second node.
  • the second node is also configured to send the first indication to the first node configured to have the first connection to the ES.
  • the first indication is configured to indicate that the second node is, within the ES, the DF.
  • the second node is further configured to receive, based on the first indication configured to be sent, data traffic with unknown destination from the first node.
  • the object is achieved by a first node for handling data traffic in the ES within the communications network.
  • the data traffic is configured to have an unknown destination.
  • the ES is configured to comprise the plurality of nodes configured to provide multi-homing service.
  • the first node is further configured to receive the first indication from the second node configured to be comprised in the ES.
  • the first indication is configured to indicate that the second node is, within the ES, the DF.
  • the first node is also configured to, upon receipt of data traffic with unknown destination to propagate via the ES, forward the data traffic with unknown destination to the second node, based on the first indication configured to be received.
  • the first node is also configured to refrain from forwarding the data traffic to the other nodes comprised in the ES different from the second node.
  • the object is achieved by a third node for handling data traffic in the ES within the communications network.
  • the data traffic is configured to have an unknown destination.
  • the ES is configured to comprise the plurality of nodes configured to provide multi-homing service.
  • the plurality of nodes is configured to comprise the third node.
  • the third node is also configured to send the second indication to the second node within the ES.
  • the second indication is configured to indicate a third indication that is to be used when forwarding data traffic with unknown destination to the third node, when the third node acts as backup path within the ES to forward data traffic.
  • the third node is further configured to receive, along with the third indication, data traffic with unknown destination from the second node.
  • the object is achieved by a computer program, comprising instructions which, when executed on at least one processor, cause the at least one processor to carry out the method performed by the second node.
  • the object is achieved by a computer-readable storage medium, having stored thereon the computer program, comprising instructions which, when executed on at least one processor, cause the at least one processor to carry out the method performed by the second node.
  • the object is achieved by a computer program, comprising instructions which, when executed on at least one processor, cause the at least one processor to carry out the method performed by the first node.
  • the object is achieved by a computer-readable storage medium, having stored thereon the computer program, comprising instructions which, when executed on at least one processor, cause the at least one processor to carry out the method performed by the first node.
  • the object is achieved by a computer program, comprising instructions which, when executed on at least one processor, cause the at least one processor to carry out the method performed by the third node.
  • the object is achieved by a computer-readable storage medium, having stored thereon the computer program, comprising instructions which, when executed on at least one processor, cause the at least one processor to carry out the method performed by the third node.
  • By sending the first indication to the first node and thereby communicating its role to the first node, the second node enables the first node to know that it may only need to send the data traffic, e.g., BUM traffic, to the DF, the second node in this case.
  • Correspondingly, the first node receives the first indication and forwards the data traffic with unknown destination to the second node, refraining from forwarding the data traffic to the other nodes comprised in the ES.
  • resources are spared by the first node, since it may no longer need to forward the data traffic to the other nodes in the ES that are not DF, which may be understood to otherwise drop the traffic.
  • the bandwidth consumed is optimal.
  • dropped traffic is reduced to a good extent, leading to better convergence, that is, a shorter time that may be required for the traffic to restore after the failure event.
  • Figure 1 is a schematic diagram illustrating an example of a topology of a communications network, according to existing methods.
  • Figure 2 is a schematic diagram illustrating a non-limiting example of a communications network, according to embodiments herein.
  • Figure 3 is a flowchart depicting embodiments of a method in a second node, according to embodiments herein.
  • Figure 4 is a flowchart depicting embodiments of a method in a first node, according to embodiments herein.
  • Figure 5 is a schematic diagram illustrating an example of a method in a third node, according to embodiments herein.
  • Figure 6 is a schematic diagram illustrating an example of a third indication, according to embodiments herein.
  • Figure 7 is a schematic diagram illustrating an example of handling data traffic, e.g., BUM traffic, in an Ethernet Segment, according to embodiments herein.
  • Figure 8 is a schematic block diagram illustrating two non-limiting examples, a) and b), of a second node, according to embodiments herein.
  • Figure 9 is a schematic block diagram illustrating two non-limiting examples, a) and b), of a first node, according to embodiments herein.
  • Figure 10 is a schematic block diagram illustrating two non-limiting examples, a) and b), of a third node, according to embodiments herein.
  • the current approach of flooding BUM traffic in EVPN is not optimal. Although only one PE, that is, the DF PE, forwards the traffic on the access interface towards the CE, the same traffic is sent to multiple PEs belonging to the same ES across the MPLS core, thus consuming much more network bandwidth than the actual bandwidth required.
  • Figure 1 is a schematic representation of an example topology in an EVPN to illustrate the problem of existing methods.
  • Two different CEs are represented in Figure 1, a first CE, CE1, and a second CE, CE2.
  • Six different PEs are also represented, PEI, PE2, PE3, PE4, PE5 and PE6.
  • An ‘I’ type of interface is the interface between two PEs across an MPLS core.
  • Each of the interfaces between PE1 and PE2, PE3, PE4, PE5 and PE6 is represented as I1, I2, I3, I4, and I5, respectively.
  • An ‘IC’ type of interface is the access interface between a PE and a CE.
  • CE2 is multi-homed to PEs PE2 to PE6, which belong to the same ES.
  • Each of the interfaces between PE2, PE3, PE4, PE5 and PE6 and CE2 is respectively represented as IC1, IC2, IC3, IC4, and IC5.
  • PE2 is the elected DF for this ES. From the point of view of PE1, it is connected to multiple PEs, PE2 to PE6, across the core.
  • Unknown Unicast traffic, that is, traffic for which the destination is not known, originated from CE1 will be flooded by PE1 to all the remote PEs, that is, PE2 to PE6.
  • PEs program the forwarding path such that, if BUM traffic is received from a PE across the MPLS core with the ES label of an ES for which the receiving PE is acting as the DF, the PE forwards the traffic to the access interface belonging to that ES. If the traffic is received with the ES label of an ES for which the PE is not acting as the DF, or with a wrong ES label, the PE is programmed to drop the traffic. Dropping BUM traffic by all non-DF PEs may be understood to ensure that duplicate traffic is not forwarded to CE2.
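The ingress-filtering rule described above can be sketched in a few lines. This is a hedged illustration only, not an implementation of any specification: the function name, label values, and return strings are invented for the example.

```python
# Hypothetical sketch of the ingress-filtering rule: only the DF for the
# matching ES forwards BUM traffic to the access interface; everyone else
# drops it so the CE never receives duplicates.

def handle_bum_frame(es_label, frame, local_es_label, is_df):
    """Decide what a PE does with BUM traffic received across the MPLS core."""
    if es_label != local_es_label:
        return "drop"            # wrong ES label
    if not is_df:
        return "drop"            # non-DF PEs drop to avoid duplicates at the CE
    return "forward-to-access"   # the DF forwards on the access interface

# Example: only the elected DF forwards.
assert handle_bum_frame("ES-1", b"...", "ES-1", is_df=True) == "forward-to-access"
assert handle_bum_frame("ES-1", b"...", "ES-1", is_df=False) == "drop"
```

The point of the sketch is that, under existing methods, this check happens only at the egress PE, after bandwidth has already been spent carrying the frame across the core.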
  • Ingress filtering may be understood as applying a rule at ingress in order to decide whether to forward the traffic or not.
  • PE1 does not base its decision of forwarding the Flood traffic on the role played by the remote PE, but simply forwards the BUM traffic to all the PEs that are part of the same EVI.
  • PE2 will forward this traffic to CE2, since it is the elected DF for this ES, but all other PEs, namely, PE3 to PE6, will drop this traffic.
  • An access interface link may be understood as a link between a PE and a CE towards the access side.
  • a DF Access Link may be understood as a link between the DF PE and a CE towards the access side.
  • PEs belonging to the same ES may be enabled to either operate in single-active mode or all-active mode.
  • Single-active mode may be understood to be a mode wherein only one PE out of all the PEs in an ES may be understood to be responsible for forwarding traffic destined to the CE or the traffic originated from the CE.
  • All-active mode may be understood to be a mode wherein all the PEs belonging to the ES may forward traffic destined to the CE or the traffic originated from the CE.
  • the rules for these modes are the same as defined in RFC 7432, BGP MPLS-Based Ethernet VPN. Initially, when no Media Access Controls (MACs) are learned at PE1, all the traffic originated from CE1 will be flooded by PE1.
  • MACs will be learned from remote PEs, namely PE2 to PE6 in the example of Figure 1, and the traffic will be forwarded as Unicast traffic by PE1.
  • the reverse traffic originating from CE2 will be forwarded by the DF PE, here PE2, in case of single-active and will be load-balanced across all the PEs, PE2 to PE6, in the case of all-active.
  • PE1 will learn MACs from PE2 in the case of single-active, or a different set of MACs from the PEs PE2 to PE6 in the case of all-active.
  • When the link between PE2 and CE2, that is, IC1, goes down, the following sequence of events takes place as per the existing methods: 1) the link IC1 goes down, 2) the DF, here PE2, withdraws the ES route, that is, a control message that may be sent by BGP, which may contain information specific to the ES, 3) the DF, here PE2, withdraws the Ethernet Auto Discovery (A-D) per ES route, 4) the other PEs in the ES receive the withdrawn routes, 5) the DF election is re-run on the PEs belonging to the ES, in this example PE3 to PE6, and a new DF is elected, and 6) the new DF, e.g., PE4, re-programs the forwarding path to forward BUM traffic for that ES on to the access interface towards the CE.
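The failover sequence above can be sketched as a small simulation. The node names, IP addresses, and the lowest-IP election rule are assumptions for illustration only; RFC 7432 defines the actual DF election procedure.

```python
# Hedged simulation of the failover steps: the failed DF withdraws its routes,
# the remaining ES members re-run DF election, and the winner re-programs its
# access-side forwarding path.
import ipaddress

def on_access_link_down(failed_df, es_members):
    # The failed DF withdraws its ES and Ethernet A-D per-ES routes, so it
    # drops out of the redundancy group for election purposes.
    remaining = {pe: ip for pe, ip in es_members.items() if pe != failed_df}
    # The remaining PEs re-run DF election (here, assumed: lowest IP wins).
    new_df = min(remaining, key=lambda pe: ipaddress.ip_address(remaining[pe]))
    # The new DF re-programs its forwarding path towards the CE.
    return new_df

members = {"PE2": "10.0.0.2", "PE3": "10.0.0.3", "PE4": "10.0.0.4"}
assert on_access_link_down("PE2", members) == "PE3"
```

Until this sequence completes, the ingress PE keeps flooding to every ES member, which is the convergence outage the embodiments aim to shorten.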
  • PE1 will flood the traffic to all the PEs until the new DF is elected and PE1 learns the MAC-IP routes from the new DF.
  • In the meantime, any flood traffic received at PEs PE3 to PE6 from PE1 will be dropped. This not only consumes unnecessary bandwidth across the MPLS core but also causes extended convergence outage.
  • Embodiments herein may be understood to relate to flood traffic optimization in BGP Ethernet VPN. According to embodiments herein, bandwidth consumed is optimal, and BUM traffic dropped is reduced to a good extent leading to better convergence, as will be described below.
  • FIG 2 is a schematic diagram depicting a non-limiting example of a communications network 100, in which embodiments herein may be implemented.
  • the communications network 100 may be understood as a computer network, as depicted in the non-limiting example of Figure 2.
  • the communications network 100 may be an MPLS or IP network providing transport for end-to-end VPLS service using BGP EVPN, or a network with similar functionality.
  • the communications network 100 comprises nodes, whereof a first node 101, a second node 102, a third node 103, a fourth node 104, a fifth node 105, also referred to herein as a source node, and a sixth node, also referred to herein as a destination node 106, are depicted in the non-limiting example of Figure 2. It may be understood that more nodes may be comprised in the communications network 100, and that the number of nodes depicted in Figure 2 is for illustration purposes only.
  • Each of the first node 101, the second node 102, the third node 103, the fourth node 104, the fifth node 105 and the destination node 106 may be understood, respectively, as a first computer system, a second computer system, a third computer system, a fourth computer system, a fifth computer system and a sixth computer system.
  • each of the first node 101, the second node 102, the third node 103, the fourth node 104, the fifth node 105 and the destination node 106 may be a router, that is, a networking device that may be enabled to forward data packets between nodes.
  • each of the first node 101, the second node 102, the third node 103, and the fourth node 104 may be, respectively, a first Provider Edge (PE), a second PE, a third PE, and a fourth PE.
  • each of the fifth node 105 and the destination node 106 may be, respectively, a first Customer Edge (CE) and a second CE.
  • the communications network 100 comprises an Ethernet Segment (ES) 107.
  • the ES 107 comprises a plurality of nodes 108 providing multi-homing service.
  • the plurality of nodes 108 in the ES 107 comprises the second node 102, and may comprise the third node 103 and the fourth node 104, as will be described later in the various embodiments described herein.
  • the plurality of nodes 108 may be understood to be a network of routers a packet may need to go through from a source entity, such as the fifth node 105 to a destination entity such as the destination node 106, e.g., a pipeline.
  • the ES 107 may be under the ownership or control of a service provider, or may be operated by the service provider or on behalf of the service provider.
  • Each of the fifth node 105 and the destination node 106 may be implemented as a standalone server, e.g., in a host computer in the cloud.
  • Each of the fifth node 105 and the destination node 106 may in some examples be a distributed node or distributed server, with some of its functions being implemented locally, e.g., by a client manager, and some of its functions implemented in the cloud, by e.g., a server manager.
  • each of the fifth node 105 and the destination node 106 may also be routers in the communications network 100.
  • the first node 101 is configured to communicate within the communications network 100 with the ES 107 over a first link or first connection 121.
  • the second node 102 is configured to communicate within the communications network 100 with the destination node 106 over a second link or second connection 122.
  • the third node 103 is configured to communicate within the communications network 100 with the destination node 106 over a third link or third connection 123.
  • the third node 103 is configured to communicate within the communications network 100 with the first node 101 over a fourth link or fourth connection 124.
  • the fourth node 104 is configured to communicate within the communications network 100 with the first node 101 over a fifth link or fifth connection 125.
  • the fourth node 104 is configured to communicate within the communications network 100 with the destination node 106 over a sixth link or sixth connection 126.
  • the second node 102 is configured to communicate within the communications network 100 with the third node 103 over a seventh link or seventh connection 127.
  • the third node 103 is configured to communicate within the communications network 100 with the fourth node 104 over an eighth link or eighth connection 128.
  • the fifth node 105 is configured to communicate within the communications network 100 with the first node 101 over a ninth link or ninth connection 129.
  • Each of the first connection 121, the second connection 122, the third connection 123, the fourth connection 124, the fifth connection 125, the sixth connection 126, the seventh connection 127, the eighth connection 128 and the ninth connection 129 may typically be a wired link, although they may also be, e.g., a radio link, an infrared link, etc.
  • any of the first connection 121, the second connection 122, the third connection 123, the fourth connection 124, the fifth connection 125, the sixth connection 126, the seventh connection 127, the eighth connection 128 and the ninth connection 129 may be understood to be able to be comprised of a plurality of individual links. Any of the first connection 121, the second connection 122, the third connection 123, the fourth connection 124, the fifth connection 125, the sixth connection 126, the seventh connection 127, the eighth connection 128 and the ninth connection 129 may be a direct link or it may go via one or more computer systems or one or more core networks in the communications network 100, which are not depicted in Figure 2, or it may go via an optional intermediate network.
  • the intermediate network may be one of, or a combination of more than one of, a public, private or hosted network; the intermediate network, if any, may be a backbone network or the Internet; in particular, the intermediate network may comprise two or more sub-networks, which is not shown in Figure 2.
  • The terms “seventh”, “eighth” and/or “ninth” herein may be understood to be an arbitrary way to denote different elements or entities, and may be understood to not confer a cumulative or chronological character to the nouns they modify.
  • the method may be understood to be for handling data traffic in the Ethernet Segment (ES) 107 within the communications network 100.
  • the data traffic has an unknown destination. That is, the data traffic may be destined to a node for which the path or route to reach the node is not known.
  • the data traffic may be BUM traffic.
  • the ES 107 comprises the plurality of nodes 108 providing multi-homing service.
  • the plurality of nodes 108 comprises the second node 102.
  • all nodes in the plurality of nodes 108 in the ES 107 may be understood to be connected to the destination node 106, as well as to the first node 101. That is, all the nodes such as the destination node 106, which may have hosts belonging to the same broadcast domain and which may be understood to be CEs behind the PEs providing multi-homing, may be connected to all the PEs in the ES 107.
  • Otherwise, embodiments herein may not be applicable. Also, embodiments herein may be understood to not relate to a single-homing scenario, in which case the flooding may be understood to be already optimal.
  • the method may comprise the actions described below. Several embodiments are comprised herein. In some embodiments all the actions may be performed. In some embodiments some of the actions may be performed. One or more embodiments may be combined, where applicable. All possible combinations are not described to simplify the description. It should be noted that the examples herein are not mutually exclusive.
  • the first node 101 may receive data traffic from the fifth node 105, and it may forward it to the nodes belonging to the ES 107.
  • the data traffic may then be understood to be forwarded by an elected Designated Forwarder (DF) in the ES 107 on to the access interface belonging to that ES 107.
  • the access interface may be understood as an Ethernet link between a node in the ES 107, e.g., a PE, and the destination node 106, e.g., a CE.
  • the data traffic may be BUM traffic which may be received across an MPLS core.
  • the second node 102 may be understood to have been elected DF. Therefore, in this Action 301, the second node 102 sends a first indication to the first node 101 having the first connection 121 to the ES 107. The first indication indicates that the second node 102 is, within the ES 107, a DF.
  • the sending in this Action 301 may be implemented, e.g., via the first connection 121.
  • the second node 102 may advertise the first indication, and therefore may send the first indication to all nodes in the ES 107 as well.
  • the first indication may, for example, be one bit in a ‘Flags’ field of a ‘Backup ESI Label Extended Community’. If the bit is set, the advertising node, in this case the second node 102, may be understood to be acting as the DF for this ES 107. The advertising node may increment a sequence number in the Backup ESI Label Extended Community by one from the one received. The first DF may always set it as zero.
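As a rough sketch of how such an indication might be packed, the following assumes a hypothetical layout of one flags byte followed by a 32-bit sequence number; the document does not specify the actual encoding of the ‘Backup ESI Label Extended Community’, so every field position here is an assumption.

```python
# Hedged sketch: the one-byte flags + four-byte sequence layout is an
# assumption, not an actual Backup ESI Label Extended Community encoding.
import struct

DF_FLAG = 0x01  # the single bit in the 'Flags' field meaning "I am the DF"

def build_first_indication(is_df, last_seq_seen=None):
    flags = DF_FLAG if is_df else 0
    # The first DF always sets the sequence number to zero; a later DF
    # increments the last value it received by one.
    seq = 0 if last_seq_seen is None else last_seq_seen + 1
    return struct.pack("!BI", flags, seq)

def parse_first_indication(data):
    flags, seq = struct.unpack("!BI", data)
    return bool(flags & DF_FLAG), seq

assert parse_first_indication(build_first_indication(True)) == (True, 0)
assert parse_first_indication(build_first_indication(True, 0)) == (True, 1)
```

The incrementing sequence number is what lets a receiver distinguish a fresh DF announcement from a stale one.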
  • By communicating its role to the first node 101, that is, the node originating the data traffic in this case, the second node 102 enables the first node 101 to know that it may only need to send the data traffic, e.g., BUM traffic, to the DF, the second node 102 in this case.
  • the first node 101 may also be enabled to then continue to send the data traffic to this DF until it may receive another indication indicating change of DF with a different sequence number, at which time it may then be enabled to switch the data traffic to the new DF.
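The first node's side of this behaviour might look as follows; the class, method, and field names are illustrative, not part of any standard.

```python
# Hedged sketch of the first node's bookkeeping: it switches its flood target
# only when an indication with a higher sequence number announces a DF.

class DfTracker:
    def __init__(self):
        self.current_df = None
        self.current_seq = -1

    def on_first_indication(self, sender, is_df, seq):
        if is_df and seq > self.current_seq:
            self.current_df, self.current_seq = sender, seq

    def flood_targets(self, es_peers):
        # With a known DF, BUM traffic goes only to it; otherwise fall back
        # to flooding every peer in the ES as in the existing methods.
        return [self.current_df] if self.current_df in es_peers else list(es_peers)

t = DfTracker()
t.on_first_indication("PE2", is_df=True, seq=0)
assert t.flood_targets(["PE2", "PE3", "PE4"]) == ["PE2"]
t.on_first_indication("PE4", is_df=True, seq=1)  # DF changed after a failure
assert t.flood_targets(["PE2", "PE3", "PE4"]) == ["PE4"]
```

The sequence-number guard ensures a delayed or replayed indication from the old DF cannot move traffic back to it.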
  • resources are spared by the first node 101, since it may no longer need to forward the data traffic to the other nodes in the ES 107 that are not DF, which may be understood to otherwise drop the traffic anyway.
  • All nodes belonging to the ES 107 may be understood to allocate a ‘Backup ES Label’ and advertise the Backup Label via a BGP Control message in a ‘Backup ESI Label Extended Community’ as a part of Ethernet A-D per ES route advertisement.
  • Each node within the ES 107 may be understood to program a backup path for the access interface, such that if the access link goes down, the BUM traffic may then be forwarded to a node having a connection via the backup path with the node in question.
  • Each of the nodes within the ES 107 may then advertise an indication to be used when signalling that that particular node may have been chosen as backup path.
  • the second node 102 may receive a second indication from the third node 103.
  • the second indication may indicate a third indication to be used when forwarding data traffic with unknown destination to the third node 103, when the third node 103 acts as backup path within the ES 107 to forward data traffic.
  • the received data traffic may then be forwarded to the third node 103 along with the third indication when the second connection 122 between the second node 102 and the destination node 106 may fail.
  • the third indication may be understood to enable the third node 103 to know how to then process the data traffic bearing the third indication.
  • At least one of the first indication and the second indication may be comprised in an Extended Community part of an Ethernet Auto-Discovery per ES advertisement message.
  • the third indication may be a Backup ES Label defined in an Extended Community.
  • the second node 102 may be enabled to know how to indicate to the third node 103 that data traffic the second node 102 may be forwarding to the third node 103 is to be treated as data traffic on the backup path, and therefore forwarded to the destination node 106. It may be understood that the second node 102 may receive respective second indications from the other nodes comprised in the ES 107, advertising their respective third indications, with a similar purpose to that of the third indication from the third node 103.
  • Each of the nodes in the ES 107 may then select another node as a backup path in the event of failure of its respective connection or link with the destination node 106, so the data traffic may continue to be forwarded.
  • the second node 102 may determine that the third node 103 is the backup path to be used within the ES 107 to forward data traffic to the destination node 106, upon failure of the link between the second node 102 and the destination node 106, that is, upon failure of the second connection 122.
  • the selection of the backup node may be performed by all the nodes in the ES 107 as soon as they receive the third indication from other nodes in the ES 107.
  • the DF node may select a backup node from the nodes in the ES 107 except for itself. All other nodes in the ES 107 will select a backup node excluding themselves and the elected DF.
  • a variety of methods may be understood as being suitable to be used for selecting the backup node.
  • each node may create a list of the nodes in the ES 107 in the ascending order of their IP addresses. Each node may then select the node which is next to itself in the list after applying any filters such as that the DF node may not be selected, etc. It will be understood to one of skill in the art that other alternative methods may also be applied to select the backup node.
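The selection method above can be sketched as follows; this is a minimal illustration of one such method (next node in an IP-sorted circular list, filtering out oneself and, for non-DF nodes, the elected DF), and the function and variable names are illustrative rather than taken from the specification.

```python
import ipaddress

def select_backup(own_ip, es_node_ips, df_ip):
    """Return the backup node for `own_ip` within the Ethernet Segment.

    Nodes are ordered by ascending IP address; each node picks the next
    node in that (circular) order. A non-DF node additionally filters
    out the elected DF; the DF itself only filters out itself.
    """
    ordered = sorted(es_node_ips, key=lambda ip: ipaddress.ip_address(ip))
    start = ordered.index(own_ip)
    for offset in range(1, len(ordered)):
        candidate = ordered[(start + offset) % len(ordered)]
        if candidate == own_ip:
            continue
        # Non-DF nodes must not select the DF as their backup.
        if own_ip != df_ip and candidate == df_ip:
            continue
        return candidate
    return None  # no eligible backup in the ES
```

With three nodes 10.0.0.2 (the DF), 10.0.0.3 and 10.0.0.4, the DF picks 10.0.0.3, 10.0.0.3 picks 10.0.0.4, and 10.0.0.4 wraps around past the DF to pick 10.0.0.3.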
  • the second node 102 may select the nearest node, in this case the third node 103. The second node 102 may then program the forwarding such that if the second connection 122 goes down, the data traffic may be forwarded to the selected third node 103, encapsulated with the third indication, e.g., a‘Backup ES Label’, advertised by the third node 103.
  • All other nodes in the ES 107 may be understood to also program a backup path for the access interface, but they may pick a non-DF node as the backup node.
  • An advantage provided by this Action 303 is that forwarding of the data traffic across the ES 107 is guaranteed in the event of a failure of the second connection 122, since according to embodiments here, the nodes in the ES 107 that are not DF, may no longer receive the data traffic from the first node 101. Determining a backup path, enables the second node 102 to secure a backup path should the second connection 122 fail thereby ensuring the data traffic may still be enabled to reach the destination node 106.
  • the second node 102 receives, based on the sent first indication, data traffic with unknown destination from the first node 101.
  • the data traffic with unknown destination may be, e.g., BUM traffic.
  • the receiving in this Action 304 may be implemented, e.g., via the first connection 121.
  • the second node 102 may then normally forward the received data traffic to the destination node 106, since it may be understood to be acting as a DF for the ES 107 and the received traffic may be BUM traffic as indicated by a label with which the data traffic may be received.
  • the second connection 122 between the second node 102 and the destination node 106 may fail.
  • the second node 102 may then determine to forward the received data traffic to the backup path to ensure its delivery to the destination node 106. Based on the determination performed in Action 303, the backup path is the third node 103.
  • the second node 102 may, based on the determination performed in Action 303 and on the second indication received in Action 302, encapsulate the received data traffic with the third indication prior to forwarding, in the next Action 306, the received data traffic to the third node 103.
  • the second node 102 may enable the third node 103 upon receiving the encapsulated data traffic, to know how to process the data traffic, and forward it to the destination node 106.
  • the second node 102 may, in this Action 306, forward the received data traffic to the third node 103 comprised in the ES 107, based on a determination that: i) the second connection 122 between the second node 102 and the destination node 106 has failed, and ii) the second node 102 is the Designated Forwarder, DF, within the ES 107.
  • the forwarding in this Action 306 may be implemented, e.g., via the seventh connection 127.
  • the second node 102 may, in this Action 307, refrain from forwarding the received data traffic to the other nodes comprised in the ES 107 different from the third node 103, namely the node the second node 102 may have determined to be the backup path.
  • the second node 102 may continue to save resources while guaranteeing that the data traffic may reach the destination node 106.
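The DF-side handling described in Actions 304 to 307 can be sketched as a single forwarding decision; this is a minimal sketch under the assumption of illustrative names (none of the identifiers below come from the specification), and the non-DF drop branch reflects that, in this scheme, only the DF receives BUM traffic from the ingress node.

```python
def forward_bum(frame, is_df, access_link_up, backup_node, backup_label):
    """Decide where a node sends received BUM traffic.

    Returns a (destination, payload) tuple:
      - the DF with a working access link forwards towards the CE;
      - the DF with a failed access link encapsulates the frame with the
        backup node's Backup ES Label (the 'third indication') and sends
        it to the backup node only, refraining from the other ES peers;
      - a non-DF node drops unexpected BUM traffic.
    """
    if not is_df:
        return ("drop", None)
    if access_link_up:
        # Normal case: out on the access interface, towards the CE.
        return ("access", frame)
    # Access link failed: divert to the selected backup path.
    return (backup_node, (backup_label, frame))
```

Note that only one copy of the frame ever leaves the node, which is what spares the resources mentioned above.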
  • the second node 102 may, in this Action 308, withdraw, based on the determination that the second connection 122 between the second node 102 and the destination node 106 has failed, an ES route associated with the ES 107.
  • the ES route may be understood to indicate that a particular node, in this case, the second node 102, e.g., a PE, is a member of the ES 107.
  • To withdraw the ES route may be understood as indicating that a particular node, in this case, the second node 102, e.g., a PE, is no longer a member of the ES 107.
  • the second node 102 may enable other nodes in the ES 107 to elect a new DF.
  • the second node 102 may, in this Action 309, delay withdrawal of an Ethernet Auto-Discovery per ES route associated with the ES 107, until receiving a fourth indication from one of the nodes in the ES 107 indicating that a new DF of data traffic has been elected within the ES 107.
  • the Ethernet Auto-Discovery per ES route may be understood to advertise the role of a node, DF or not, to provide a Backup ES label to use in case of failure, and to allow other nodes, e.g., PEs, to forward traffic to the ES 107.
  • the second node 102 is no longer a DF, it may no longer be used as a Backup PE for this ES 107, and the data traffic belonging to the ES 107 should no longer be forwarded to this node.
  • the second node 102 may be understood to allow enough time for the traffic on the first node 101 to switch to the new DF, without any intermittent loss of data traffic.
  • Action 310
  • the nodes comprised in the ES 107 may re-run a DF Election, and another node in the ES 107 may then be elected as the new DF.
  • the newly elected DF may then also advertise that it is the new DF. This may be performed as soon as the nodes comprised in the ES 107 may realize that the current DF is no longer part of the ES 107 as a result of receiving the withdrawal of the ES route from the existing DF.
  • the fourth node 104 may be elected DF. Accordingly, in this Action 310, the second node 102 may receive the fourth indication from the fourth node 104 comprised in the ES 107. The fourth indication may indicate that the fourth node 104 is the new DF within the ES 107.
  • Action 311
  • the second node 102 may, in this Action 311, withdraw the Ethernet Auto-Discovery per ES route based on the receipt of the fourth indication.
  • the second node 102 may allow the data traffic on the first node 101 to switchover to the new DF without causing traffic loss.
  • Embodiments of a method performed by a first node 101 will now be described with reference to the flowchart depicted in Figure 4.
  • the method is for handling data traffic in the ES 107 within the communications network 100.
  • the data traffic has an unknown destination.
  • the ES 107 comprises the plurality of nodes 108 providing multi-homing service.
  • the method may comprise some of the following actions. In some embodiments all the actions may be performed. Several embodiments are comprised herein. One or more embodiments may be combined, where applicable. All possible combinations are not described to simplify the description. It should be noted that the examples herein are not mutually exclusive. Components from one example may be tacitly assumed to be present in another example and it will be obvious to a person skilled in the art how those components may be used in the other examples. In Figure 4, optional actions are indicated with dashed boxes.
  • the data traffic may be, e.g., BUM traffic.
  • the first node 101 receives the first indication from the second node 102 comprised in the ES 107.
  • the first indication indicates that the second node 102 is, within the ES 107, the DF.
  • the receiving in this Action 401 may be implemented, e.g., via the first connection 121.
  • the first indication may be comprised in the Extended Community part of an Ethernet Auto-Discovery per ES advertisement message.
  • the first node 101 upon receipt of data traffic with unknown destination to propagate via the ES 107, forwards the data traffic with unknown destination to the second node 102, based on the received first indication.
  • the forwarding in this Action 402 may be understood as flooding.
  • the forwarding in this Action 402 may be implemented, e.g., via the first connection 121.
  • the first node 101 may refrain from forwarding the data traffic to the other nodes comprised in the ES 107 different from the second node 102, that is the nodes in the ES 107 that are not DF.
  • the first node 101 saves resources that would otherwise be wasted if the data traffic were to be forwarded to all nodes in the ES 107, and all nodes other than the DF were to drop the traffic anyway.
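Actions 402 and 403 amount to pruning the ingress node's flood list for the ES down to the advertised DF; the sketch below illustrates this under the assumption of illustrative names not taken from the specification.

```python
def flood_list(es_peers, current_df):
    """Compute the ingress node's flood list for one Ethernet Segment.

    With the optimization of Actions 402-403, only the advertised DF
    receives a copy of the BUM traffic; the other ES peers are pruned,
    saving (len(es_peers) - 1) copies in the core per frame.
    """
    return [peer for peer in es_peers if peer == current_df]
```

For the three-PE example discussed later (PE2 to PE4 in the ES, PE2 as DF), this yields a single copy instead of three, matching the bandwidth comparison given there.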
  • the first node 101 may, in this Action 404, receive the fourth indication from the fourth node 104 comprised in the ES 107.
  • the fourth indication indicates that the fourth node 104 is, within the ES 107, the new DF.
  • the receiving in this Action 404 may be implemented, e.g., via the fifth connection
  • the first node 101 in this Action 406, may refrain from forwarding the data traffic to the other nodes comprised in the ES 107 different from the fourth node 104, that is, different from the current DF.
  • the method is for handling data traffic in the ES 107 within the communications network 100.
  • the data traffic has an unknown destination.
  • the ES 107 comprises the plurality of nodes 108 providing multi-homing service.
  • the plurality of nodes 108 comprises the third node 103.
  • the method comprises the following actions. Several embodiments are comprised herein. One or more embodiments may be combined, where applicable. All possible combinations are not described to simplify the description. It should be noted that the examples herein are not mutually exclusive. Components from one example may be tacitly assumed to be present in another example and it will be obvious to a person skilled in the art how those components may be used in the other examples.
  • the data traffic may be, e.g., BUM traffic.
  • the third node 103 in this Action 501, sends the second indication to the second node 102 within the ES 107.
  • the second indication indicates the third indication that is to be used when forwarding data traffic with unknown destination to the third node 103 as backup path within the ES 107 to forward data traffic.
  • the sending in this Action 501 may be implemented, e.g., via the seventh connection 127.
  • the second indication may be comprised in the Extended Community part of an Ethernet Auto-Discovery per ES advertisement message.
  • the third indication may be a Backup ES Label defined in an Extended Community.
  • the third node 103 may be understood to equally send the second indication to all the other nodes in the ES 107.
  • the third node 103 receives, along with the third indication, data traffic with unknown destination from the second node 102. That is, the third node 103 may receive the data traffic with unknown destination encapsulated with the third indication.
  • the receiving in this Action 502 may happen, e.g., upon failure of the second connection 122.
  • the receiving in this Action 502 may be implemented, e.g., via the seventh connection 127.
  • the third node 103 forwards, based on the received third indication, the received data traffic to the destination node 106 having the third connection 123 to the ES 107.
  • the forwarding in this Action 503 may be implemented, e.g., via the third connection 123.
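The egress handling of Actions 502 and 503 can be sketched as a label check followed by forwarding on the access interface; the identifiers below are illustrative, and the branch that drops the frame when the local access link is also down is an assumption for completeness rather than behaviour stated in the description.

```python
def process_backup_frame(received_label, frame, local_backup_label,
                         access_link_up):
    """Egress handling of a frame carrying a Backup ES Label.

    The Backup ES Label is only significant among the PEs of the same
    ES: if it matches the label this node allocated for the ES, the
    inner frame is forwarded out on the local access interface towards
    the destination CE.
    """
    if received_label != local_backup_label:
        return ("drop", None)  # not the Backup ES Label we allocated
    if not access_link_up:
        return ("drop", None)  # assumption: our access link is also down
    return ("access", frame)
```

This mirrors the comparison described for PE3 in the example with Figure 7, where a matching label triggers forwarding via the access interface.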
  • Figure 6 is a schematic diagram depicting a non-limiting example of the third indication, according to embodiments herein.
  • the corresponding bits in each byte are indicated by numbers 0-9.
  • embodiments herein introduce a new label which may be used only during the time when the access interface between the DF node and the destination node 106 may go down, and before the traffic may finally converge on the first node 101.
  • This new label may be called ‘Backup ES Label’ and may be defined as a part of a new extended community as ‘Backup Ethernet Segment Identifier (ESI) Label Extended Community’. This community may be included in the Ethernet A-D per ES route advertised to BGP Neighbors via MP-BGP.
  • the format of this new community is depicted in Figure 6.
  • the advertisement of this community may be understood to signify that the advertising node may support the flood optimization described in embodiments herein, whereby resources are saved in comparison with existing methods. Unless and until support for this optimization is advertised by all the participating nodes, the optimization procedures may be understood not to be available to be applied.
  • the third indication, the Backup ESI Label 600 is comprised in the second indication, in this example, a Backup ESI Label Extended Community 601, which also comprises a Community Type 602, a Sub-type 603, a DF bit 604, a Sequence Number 605 and a Reserved field 606.
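One plausible 8-octet encoding of the fields listed above can be sketched as follows, mirroring the layout of the ESI Label Extended Community of RFC 7432; the exact field positions, widths, and the type/sub-type code points used here are assumptions for illustration only, not the format defined in Figure 6.

```python
import struct

DF_BIT = 0x01  # assumed position of the DF bit within the Flags octet

def pack_backup_esi_community(df, sequence, label,
                              ctype=0x06, subtype=0x08):
    """Encode: Type(1) | Sub-type(1) | Flags(1) | Sequence(2) | Label(3).

    The 20-bit MPLS label sits in the high-order bits of the last three
    octets, as in other label-carrying communities; the low 4 bits are
    treated as reserved.
    """
    flags = DF_BIT if df else 0x00
    label_field = (label & 0xFFFFF) << 4
    return struct.pack("!BBBH", ctype, subtype, flags, sequence) \
        + label_field.to_bytes(3, "big")

def unpack_backup_esi_community(data):
    """Decode the community back into its DF bit, sequence and label."""
    _ctype, _subtype, flags, sequence = struct.unpack("!BBBH", data[:5])
    label = int.from_bytes(data[5:8], "big") >> 4
    return {"df": bool(flags & DF_BIT), "sequence": sequence, "label": label}
```

Under this assumed layout, the first DF advertises with the DF bit set and sequence zero, and each subsequent DF increments the sequence, as described earlier.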
  • FIG. 7 is a schematic diagram depicting a non-limiting example of handling of data traffic in the ES 107 within the communications network 100, according to embodiments herein.
  • the elements of the Figure with the same reference numerals as those in Figure 2 correspond to the elements in Figure 2 having the same reference numeral.
  • the first node 101 is referred to as PE1
  • the second node 102 is referred to as PE2
  • the third node 103 is referred to as PE3
  • the fourth node 104 is referred to as PE4.
  • the destination node 106 is referred to as the CE
  • the fifth node 105 is referred to herein as the CE1.
  • PE2 to PE4 belong to the ES 107.
  • PE2 is acting as the DF for the ES 107.
  • the EVPN Label, ES Label and the Backup ES Label advertised by the PEs are as per RFC 7432:
  • PE2 has communicated that it is acting as the DF for this ES with sequence number as zero. All PEs have programmed the ES Label in the forwarding path with PE2 having enabled the forwarding of the BUM traffic on to the access interface. All PEs have programmed the Backup ES Label in the forwarding path enabling the forwarding of BUM traffic on to the respective access interface.
  • PE2 has programmed the backup path for access interface such that if the access link goes down the BUM traffic is forwarded to PE3.
  • the traffic formats below show the Transport label also, though the traffic reaching the destination PE might not have this label if it is removed by the penultimate router.
  • the traffic is forwarded on to the access link, the second connection 122 when this link is UP.
  • the link between PE2 and CE, the second connection 122, is down, as indicated by the bold cross at 703.
  • the second connection 122 may be referred to herein as IC1. Since the access interface between PE2 and the CE is down the backup path in the forwarding will activate.
  • the BUM traffic reaching PE2 will be forwarded to PE3, at 704.
  • the traffic as originated by PE1 does not include the Backup label but when it is forwarded by PE2 to PE3, PE2 imposes the Backup label advertised by PE3 by encapsulating the data traffic with the third indication.
  • This data traffic at 704 as observed on this interface, which reaches PE3, may have the format:
  • the Backup ES label is compared and if the label is the same as the one allocated for the ES 107, the data traffic is forwarded at 705 via the access interface of PE3 to the destination node 106 that is, via the third connection 123.
  • the third connection 123 may be referred to herein as IC2. It may be noted that the significance of the Backup label is only across the PEs belonging to the ES 107.
  • the following sequence of events may take place: 1) The IC1 link goes down; 2) BUM Traffic is diverted to PE3 by PE2; 3) The DF, or PE2, withdraws the ES route; 4) PE2 holds the withdrawal of Ethernet A-D per ES route until a new DF is elected; 5) Other PEs in the ES receive the withdrawn ES route; 6) DF Election is re-run on PEs belonging to the ES and PE4 is elected as the new DF; 7) The new DF, e.g., PE4, re-programs the forwarding path to forward BUM traffic for that ES 107 on to the access interface along with the backup path for the access interface, avoiding the old DF PE for the Backup path; 8) PE4 advertises that it has assumed the role of DF for the ES with sequence number as 1; 9) PE1, on receiving a route advertising a new DF, with a different sequence number than the one received earlier, will switch the BUM traffic to PE4.
  • PE1 is the acting DF and forwards BUM traffic to the access side.
  • BUM traffic therefore consumes thrice the bandwidth required in the MPLS core because of 3 PEs in the ES;
  • Once a new PE, e.g., PE2, becomes DF and enables forwarding of BUM traffic towards the CE, it starts forwarding on the access interface.
  • between steps 2 and 4, the flood traffic consumes twice the bandwidth required in the MPLS core, since it has to reach PE2 and PE3.
  • PE1 is the acting DF and forwards BUM traffic to the access side.
  • BUM traffic consumes only the required bandwidth in the MPLS core;
  • BUM traffic is still flowing, with no flooding, and will reach PE3 from PE1, which will forward it and not drop it. Once a new PE, e.g., PE2, becomes DF, it will start receiving BUM traffic and will forward it to the CE.
  • One advantage of embodiments herein is that, by the first node 101 refraining from forwarding the data traffic to the other nodes comprised in the ES 107 different from the DF, the bandwidth consumed is optimal.
  • Another advantage of embodiments herein is that by enabling dropped BUM traffic to be reduced to a good extent, better convergence may be achieved. Convergence may be understood as the time it takes to recover traffic after a failure. In embodiments herein, since the traffic may be understood to continue to flow between the failure and the election of new DF, the dropped traffic is reduced. Thus, the recovery time is shorter.
  • embodiments herein allow handling of multiple events, such as multiple access links belonging to the ES going down simultaneously on different PEs, as each node in the ES may have a backup path that may be used in the event of its connection to the destination node 106 suffering a failure.
  • Figure 8 depicts two different examples in panels a) and b), respectively, of the arrangement that the second node 102 may comprise to perform the method actions described above in relation to Figure 3.
  • the second node 102 may comprise the following arrangement depicted in Figure 8a.
  • the second node 102 is for handling data traffic in the ES 107 within the communications network 100.
  • the data traffic is configured to have an unknown destination.
  • the ES 107 is configured to comprise the plurality of nodes 108 being configured to provide multi-homing service.
  • the plurality of nodes 108 is configured to comprise the second node 102.
  • the data traffic may be, e.g., BUM traffic.
  • optional modules are indicated with dashed boxes.
  • the second node 102 is configured to, e.g. by means of a sending unit 801 within the second node 102 configured to, send the first indication to the first node 101 configured to have the first connection 121 to the ES 107.
  • the first indication is configured to indicate that the second node 102 is, within the ES 107, the Designated Forwarder.
  • the second node 102 is also configured to, e.g. by means of a receiving unit 802 within the second node 102 configured to, receive, based on the first indication configured to be sent, the data traffic with unknown destination from the first node 101.
  • the second node 102 may be further configured to, e.g. by means of a forwarding unit 803 within the second node 102 configured to, forward the data traffic configured to be received to the third node 103 configured to be comprised in the ES 107, based on the determination that: i) the second connection 122 between the second node 102 and the destination node 106 has failed, and ii) the second node 102 is the DF within the ES 107.
  • the second node 102 may be further configured to, e.g. by means of a refraining unit 804 within the second node 102 configured to, refrain from forwarding the data traffic configured to be received to the other nodes configured to be comprised in the ES 107 different from the third node 103.
  • the second node 102 may be further configured to, e.g. by means of an encapsulating unit 805 within the second node 102 configured to, receive the second indication from the third node 103.
  • the second indication is configured to indicate the third indication to be used when forwarding data traffic with unknown destination to the third node 103 as backup path within the ES 107 to forward data traffic.
  • the data traffic configured to be received is configured to be forwarded to the third node 103 along with the third indication.
  • At least one of the first indication and the second indication may be configured to be comprised in the Extended Community part of an Ethernet Auto- Discovery per ES advertisement message.
  • the third indication may be configured to be a Backup ES Label defined in an Extended Community.
  • the second node 102 may be further configured to, e.g. by means of the encapsulating unit 805 within the second node 102 configured to, encapsulate the data traffic configured to be received with the third indication prior to forwarding the data traffic configured to be received to the third node 103.
  • the second node 102 may be further configured to, e.g. by means of a determining unit 806 within the second node 102 configured to, determine that the third node 103 is a backup path to be used within the ES 107 to forward data traffic to the destination node 106, upon failure of the link between the second node 102 and the destination node 106.
  • the second node 102 may be further configured to, e.g. by means of a withdrawing unit 807 within the second node 102 configured to, withdraw, based on the determination that the second connection 122 between the second node 102 and the destination node 106 has failed, an ES route associated with the ES 107.
  • the second node 102 may be further configured to, e.g. by means of a delaying unit 808 within the second node 102 configured to, delay withdrawal of the Ethernet Auto-Discovery per ES route associated with the ES 107 until receiving the fourth indication from one of the nodes in the ES 107 configured to indicate that the new DF of data traffic has been elected within the ES 107.
  • the second node 102 may be further configured to, e.g. by means of the receiving unit 802 within the second node 102 configured to, receive the fourth indication from the fourth node 104 configured to be comprised in the ES 107.
  • the fourth indication is configured to indicate that the fourth node 104 is the new DF within the ES 107.
  • the second node 102 may be further configured to, e.g. by means of the withdrawing unit 807 within the second node 102 configured to, withdraw the Ethernet Auto-Discovery per ES route based on the receipt of the fourth indication.
  • the embodiments herein may be implemented through one or more processors, such as a processor 809 in the second node 102 depicted in Figure 8, together with computer program code for performing the functions and actions of the embodiments herein.
  • the program code mentioned above may also be provided as a computer program product, for instance in the form of a data carrier carrying computer program code for performing the embodiments herein when being loaded into the second node 102.
  • One such carrier may be in the form of a CD ROM disc. It is however feasible with other data carriers such as a memory stick.
  • the computer program code may furthermore be provided as pure program code on a server and downloaded to the second node 102.
  • the second node 102 may further comprise a memory 810 comprising one or more memory units.
  • the memory 810 is arranged to be used to store obtained information, store data, configurations, schedulings, and applications etc. to perform the methods herein when being executed in the second node 102.
  • the second node 102 may receive information from, e.g., the first node 101, the third node 103, any of the other nodes in the ES 107, and/or the destination node 106, through a receiving port 811.
  • the receiving port 811 may be, for example, connected to one or more antennas in the second node 102.
  • the second node 102 may receive information from another structure in the communications network 100 through the receiving port 811. Since the receiving port 811 may be in communication with the processor 809, the receiving port 811 may then send the received information to the processor 809.
  • the receiving port 811 may also be configured to receive other information.
  • the processor 809 in the second node 102 may be further configured to transmit or send information to e.g., the first node 101, the third node 103, any of the other nodes in the ES 107, and/or the destination node 106, through a sending port 812, which may be in communication with the processor 809, and the memory 810.
  • the sending unit 801, the receiving unit 802, the forwarding unit 803, the refraining unit 804, the encapsulating unit 805, the determining unit 806, the withdrawing unit 807 and the delaying unit 808 described above may refer to a combination of analog and digital circuits, and/or one or more processors configured with software and/or firmware, e.g., stored in memory, that, when executed by the one or more processors such as the processor 809, perform as described above.
  • processors may be included in a single Application-Specific Integrated Circuit (ASIC), or several processors and various digital hardware may be distributed among several separate components, whether individually packaged or assembled into a System-on-a-Chip (SoC).
  • ASIC Application-Specific Integrated Circuit
  • SoC System-on-a-Chip
  • any of the sending unit 801, the receiving unit 802, the forwarding unit 803, the refraining unit 804, the encapsulating unit 805, the determining unit 806, the withdrawing unit 807 and the delaying unit 808 described above may be the processor 809 of the second node 102, or an application running on such processor 809.
  • the methods according to the embodiments described herein for the second node 102 may be respectively implemented by means of a computer program 813 product, comprising instructions, i.e., software code portions, which, when executed on at least one processor 809, cause the at least one processor 809 to carry out the actions described herein, as performed by the second node 102.
  • the computer program 813 product may be stored on a computer-readable storage medium 814.
  • the computer-readable storage medium 814 having stored thereon the computer program 813, may comprise instructions which, when executed on at least one processor 809, cause the at least one processor 809 to carry out the actions described herein, as performed by the second node 102.
  • the computer-readable storage medium 814 may be a non-transitory computer-readable storage medium, such as a CD ROM disc, a memory stick, or stored in the cloud space.
  • the computer program 813 product may be stored on a carrier containing the computer program, wherein the carrier is one of an electronic signal, optical signal, radio signal, or the computer-readable storage medium 814, as described above.
  • the second node 102 may comprise an interface unit to facilitate communications between the second node 102 and other nodes or devices, e.g., the first node 101.
  • the interface may, for example, include a transceiver configured to transmit and receive radio signals over an air interface in accordance with a suitable standard.
  • the second node 102 may comprise the following arrangement depicted in Figure 8b.
  • the second node 102 may comprise a processing circuitry 809, e.g., one or more processors such as the processor 809, in the second node 102 and the memory 810.
  • the second node 102 may also comprise a radio circuitry 815, which may comprise e.g., the receiving port 811 and the sending port 812.
  • the processing circuitry 809 may be configured to, or operable to, perform the method actions according to Figure 3, in a similar manner as that described in relation to Figure 8a.
  • the radio circuitry 815 may be configured to set up and maintain at least a wireless connection with the first node 101, the third node 103, any of the other nodes in the ES 107, and/or the destination node 106. Circuitry may be understood herein as a hardware component.
  • embodiments herein also relate to the second node 102 operative to handle data traffic in the ES 107 within the communications network 100.
  • the data traffic may be configured to have an unknown destination.
  • the ES 107 may be configured to comprise the plurality of nodes 108 configured to provide multi-homing service.
  • the second node 102 may comprise the processing circuitry 809 and the memory 810, said memory 810 containing instructions executable by said processing circuitry 809, whereby the second node 102 is further operative to perform the actions described herein in relation to the second node 102, e.g., in Figure 3.
  • Figure 9 depicts two different examples in panels a) and b), respectively, of the arrangement that the first node 101 may comprise to perform the method actions described above in relation to Figure 4.
  • the first node 101 may comprise the following arrangement depicted in Figure 9a.
  • the first node 101 is for handling data traffic in the ES 107 within the communications network 100.
  • the data traffic is configured to have an unknown destination.
  • the ES 107 is configured to comprise the plurality of nodes 108 being configured to provide multi-homing service.
  • the data traffic may be, e.g., BUM traffic.
  • the first node 101 is configured to, e.g. by means of a receiving unit 901 within the first node 101 configured to, receive the first indication from the second node 102 configured to be comprised in the ES 107.
  • the first indication is configured to indicate that the second node 102 is, within the ES 107, the DF.
  • the first node 101 is further configured to, e.g. by means of a forwarding unit 902 within the first node 101 further configured to, upon receipt of the data traffic with unknown destination to propagate via the ES 107, forward the data traffic with unknown destination to the second node 102, based on the first indication configured to be received.
  • the first node 101 is also configured to, e.g. by means of a refraining unit 903 within the first node 101 configured to, refrain from forwarding the data traffic to the other nodes comprised in the ES 107 different from the second node 102.
  • the first indication may be configured to be comprised in the Extended Community part of an Ethernet Auto-Discovery per ES advertisement message.
  • the first node 101 may be further configured to, e.g. by means of the receiving unit 901 within the first node 101 configured to, receive a fourth indication from a fourth node 104 comprised in the ES 107.
  • the fourth indication is configured to indicate that the fourth node 104 is, within the ES 107, the new DF.
  • the first node 101 may be further configured to, e.g. by means of the forwarding unit 902 within the first node 101 configured to, upon receipt of the additional data traffic with unknown destination to propagate via the ES 107, forward the additional data traffic to the fourth node 104, based on the fourth indication configured to be received.
  • the first node 101 may be further configured to, e.g. by means of the refraining unit 903 within the first node 101 configured to, refrain from forwarding the data traffic to the other nodes comprised in the ES 107 different from the fourth node 104.
  • the embodiments herein may be implemented through one or more processors, such as a processor 904 in the first node 101 depicted in Figure 9, together with computer program code for performing the functions and actions of the embodiments herein.
  • the program code mentioned above may also be provided as a computer program product, for instance in the form of a data carrier carrying computer program code for performing the embodiments herein when being loaded into the first node 101.
  • One such carrier may be in the form of a CD ROM disc. It is, however, feasible to use other data carriers, such as a memory stick.
  • the computer program code may furthermore be provided as pure program code on a server and downloaded to the first node 101.
  • the first node 101 may further comprise a memory 905 comprising one or more memory units.
  • the memory 905 is arranged to be used to store obtained information, store data, configurations, schedulings, and applications etc. to perform the methods herein when being executed in the first node 101.
  • the first node 101 may receive information from, e.g., the fifth node 105, the second node 102, the third node 103, and/or any of the other nodes in the ES 107, through a receiving port 906.
  • the receiving port 906 may be, for example, connected to one or more antennas in the first node 101.
  • the first node 101 may receive information from another structure in the communications network 100 through the receiving port 906. Since the receiving port 906 may be in communication with the processor 904, the receiving port 906 may then send the received information to the processor 904.
  • the receiving port 906 may also be configured to receive other information.
  • the processor 904 in the first node 101 may be further configured to transmit or send information to, e.g., the fifth node 105, the second node 102, the third node 103, and/or any of the other nodes in the ES 107, through a sending port 907, which may be in communication with the processor 904, and the memory 905.
  • the receiving unit 901, the forwarding unit 902, and the refraining unit 903 described above may refer to a combination of analog and digital circuits, and/or one or more processors configured with software and/or firmware, e.g., stored in memory, that, when executed by the one or more processors such as the processor 904, perform as described above.
  • processors, as well as the other digital hardware, may be included in a single Application-Specific Integrated Circuit (ASIC), or several processors and various digital hardware may be distributed among several separate components, whether individually packaged or assembled into a System-on-a-Chip (SoC).
  • any of the receiving unit 901, the forwarding unit 902, and the refraining unit 903 described above may be the processor 904 of the first node 101, or an application running on such processor 904.
  • the methods according to the embodiments described herein for the first node 101 may be respectively implemented by means of a computer program 908 product, comprising instructions, i.e., software code portions, which, when executed on at least one processor 904, cause the at least one processor 904 to carry out the actions described herein, as performed by the first node 101.
  • the computer program 908 product may be stored on a computer-readable storage medium 909.
  • the computer-readable storage medium 909, having stored thereon the computer program 908, may comprise instructions which, when executed on at least one processor 904, cause the at least one processor 904 to carry out the actions described herein, as performed by the first node 101.
  • the computer-readable storage medium 909 may be a non-transitory computer-readable storage medium, such as a CD ROM disc, a memory stick, or stored in the cloud space.
  • the computer program 908 product may be stored on a carrier containing the computer program, wherein the carrier is one of an electronic signal, optical signal, radio signal, or the computer-readable storage medium 909, as described above.
  • the first node 101 may comprise an interface unit to facilitate communications between the first node 101 and other nodes or devices, e.g., the second node 102.
  • the interface may, for example, include a transceiver configured to transmit and receive radio signals over an air interface in accordance with a suitable standard.
  • the first node 101 may comprise the following arrangement depicted in Figure 9b.
  • the first node 101 may comprise a processing circuitry 904, e.g., one or more processors such as the processor 904, in the first node 101 and the memory 905.
  • the first node 101 may also comprise a radio circuitry 910, which may comprise e.g., the receiving port 906 and the sending port 907.
  • the processing circuitry 904 may be configured to, or operable to, perform the method actions according to Figure 4, in a similar manner as that described in relation to Figure 9a.
  • the radio circuitry 910 may be configured to set up and maintain at least a wireless connection with the fifth node 105, the second node 102, the third node 103, and/or any of the other nodes in the ES 107. Circuitry may be understood herein as a hardware component.
  • embodiments herein also relate to the first node 101 operative to handle the data traffic in the ES 107 within the communications network 100.
  • the data traffic is configured to have an unknown destination.
  • the ES 107 is configured to comprise the plurality of nodes 108 configured to provide multi-homing service.
  • the first node 101 may comprise the processing circuitry 904 and the memory 905, said memory 905 containing instructions executable by said processing circuitry 904, whereby the first node 101 is further operative to perform the actions described herein in relation to the first node 101, e.g., in Figure 4.
  • Figure 10 depicts two different examples in panels a) and b), respectively, of the arrangement that the third node 103 may comprise to perform the method actions described above in relation to Figure 5.
  • the third node 103 may comprise the following arrangement depicted in Figure 10a.
  • the third node 103 is for handling data traffic in the ES 107 within the communications network 100.
  • the data traffic is configured to have an unknown destination.
  • the ES 107 is configured to comprise the plurality of nodes 108 being configured to provide multi-homing service.
  • the plurality of nodes 108 is configured to comprise the third node 103.
  • the data traffic may be, e.g., BUM traffic.
  • the third node 103 is configured to, e.g. by means of a sending unit 1001 within the third node 103 configured to, send the second indication to the second node 102 within the ES 107.
  • the second indication is configured to indicate the third indication that is to be used when forwarding data traffic with unknown destination to the third node 103, as a backup path within the ES 107 to forward data traffic.
  • the second indication may be configured to be comprised in the Extended Community part of an Ethernet Auto-Discovery per ES advertisement message.
  • the third indication may be configured to be a Backup ES Label defined in an Extended Community.
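The on-the-wire layout of such a label-carrying Extended Community may be sketched as follows. The layout mirrors the ESI Label Extended Community of RFC 7432 (type, sub-type, flags, reserved, 20-bit label in three octets); the type and sub-type code points used here are illustrative assumptions, not values allocated for a Backup ES Label.

```python
import struct

def encode_backup_es_label_ec(label, type_=0x06, sub_type=0x01, flags=0x00):
    # 8-octet BGP Extended Community: Type | Sub-Type | Flags | Reserved(2) | Label(3).
    # The 20-bit MPLS label occupies the high-order bits of the last 3 octets.
    # Type/sub-type values are placeholders, not allocated code points.
    label_field = (label & 0xFFFFF) << 4
    return struct.pack("!BBBH3s", type_, sub_type, flags, 0,
                       label_field.to_bytes(3, "big"))

ec = encode_backup_es_label_ec(299776)  # an arbitrary example label value
```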
  • the third node 103 is further configured to, e.g. by means of a receiving unit 1002 within the third node 103 configured to, receive, along with the third indication, the data traffic with unknown destination from the second node 102.
  • the third node 103 is further configured to, e.g. by means of a forwarding unit 1003 within the third node 103 configured to, forward, based on the third indication configured to be received, the data traffic configured to be received to the destination node 106 configured to have a third connection 123 to the ES 107.
  • the embodiments herein may be implemented through one or more processors, such as a processor 1004 in the third node 103 depicted in Figure 10, together with computer program code for performing the functions and actions of the embodiments herein.
  • the program code mentioned above may also be provided as a computer program product, for instance in the form of a data carrier carrying computer program code for performing the embodiments herein when being loaded into the third node 103.
  • One such carrier may be in the form of a CD ROM disc. It is, however, feasible to use other data carriers, such as a memory stick.
  • the computer program code may furthermore be provided as pure program code on a server and downloaded to the third node 103.
  • the third node 103 may further comprise a memory 1005 comprising one or more memory units.
  • the memory 1005 is arranged to be used to store obtained information, store data, configurations, schedulings, and applications etc. to perform the methods herein when being executed in the third node 103.
  • the third node 103 may receive information from, e.g., the first node 101, the second node 102, any of the other nodes in the ES 107, and/or the destination node 106, through a receiving port 1006.
  • the receiving port 1006 may be, for example, connected to one or more antennas in the third node 103.
  • the third node 103 may receive information from another structure in the communications network 100 through the receiving port 1006. Since the receiving port 1006 may be in communication with the processor 1004, the receiving port 1006 may then send the received information to the processor 1004.
  • the receiving port 1006 may also be configured to receive other information.
  • the processor 1004 in the third node 103 may be further configured to transmit or send information to, e.g., the first node 101, the second node 102, any of the other nodes in the ES 107, and/or the destination node 106, through a sending port 1007, which may be in communication with the processor 1004, and the memory 1005.
  • the sending unit 1001, the receiving unit 1002, and the forwarding unit 1003 described above may refer to a combination of analog and digital circuits, and/or one or more processors configured with software and/or firmware, e.g., stored in memory, that, when executed by the one or more processors such as the processor 1004, perform as described above.
  • processors as well as the other digital hardware, may be included in a single Application-Specific Integrated Circuit (ASIC), or several processors and various digital hardware may be distributed among several separate components, whether individually packaged or assembled into a System-on-a-Chip (SoC).
  • any of the sending unit 1001, the receiving unit 1002, and the forwarding unit 1003 described above may be the processor 1004 of the third node 103, or an application running on such processor 1004.
  • the methods according to the embodiments described herein for the third node 103 may be respectively implemented by means of a computer program 1008 product, comprising instructions, i.e., software code portions, which, when executed on at least one processor 1004, cause the at least one processor 1004 to carry out the actions described herein, as performed by the third node 103.
  • the computer program 1008 product may be stored on a computer-readable storage medium 1009.
  • the computer-readable storage medium 1009, having stored thereon the computer program 1008, may comprise instructions which, when executed on at least one processor 1004, cause the at least one processor 1004 to carry out the actions described herein, as performed by the third node 103.
  • the computer-readable storage medium 1009 may be a non-transitory computer-readable storage medium, such as a CD ROM disc, a memory stick, or stored in the cloud space.
  • the computer program 1008 product may be stored on a carrier containing the computer program, wherein the carrier is one of an electronic signal, optical signal, radio signal, or the computer-readable storage medium 1009, as described above.
  • the third node 103 may comprise an interface unit to facilitate communications between the third node 103 and other nodes or devices, e.g., the first node 101.
  • the interface may, for example, include a transceiver configured to transmit and receive radio signals over an air interface in accordance with a suitable standard.
  • the third node 103 may comprise the following arrangement depicted in Figure 10b.
  • the third node 103 may comprise a processing circuitry 1004, e.g., one or more processors such as the processor 1004, in the third node 103 and the memory 1005.
  • the third node 103 may also comprise a radio circuitry 1010, which may comprise e.g., the receiving port 1006 and the sending port 1007.
  • the processing circuitry 1004 may be configured to, or operable to, perform the method actions according to Figure 5, in a similar manner as that described in relation to Figure 10a.
  • the radio circuitry 1010 may be configured to set up and maintain at least a wireless connection with the first node 101, the second node 102, any of the other nodes in the ES 107, and/or the destination node 106. Circuitry may be understood herein as a hardware component.
  • embodiments herein also relate to the third node 103 operative to handle the data traffic in the ES 107 within the communications network 100.
  • the data traffic is configured to have an unknown destination.
  • the ES 107 is configured to comprise the plurality of nodes 108 configured to provide multi-homing service.
  • the plurality of nodes 108 is configured to comprise the third node 103.
  • the third node 103 may comprise the processing circuitry 1004 and the memory 1005, said memory 1005 containing instructions executable by said processing circuitry 1004, whereby the third node 103 is further operative to perform the actions described herein in relation to the third node 103, e.g., in Figure 5.
  • the expression “at least one of:” followed by a list of alternatives separated by commas, and wherein the last alternative is preceded by the “and” term, may be understood to mean that only one of the list of alternatives may apply, more than one of the list of alternatives may apply, or all of the list of alternatives may apply.
  • This expression may be understood to be equivalent to the expression “at least one of:” followed by a list of alternatives separated by commas, and wherein the last alternative is preceded by the “or” term.
  • a processor as used herein, may be understood to be a hardware component.

Abstract

A method performed by a second node (102) for handling data traffic in an Ethernet Segment, ES, (107) within a communications network (100) is described herein. The data traffic has an unknown destination. The ES (107) comprises a plurality of nodes (108) providing multi-homing service. The plurality of nodes (108) comprises the second node (102). The second node (102) sends (301) a first indication to a first node (101) having a first connection (121) to the ES (107). The first indication indicates that the second node (102) is, within the ES (107), a Designated Forwarder. The second node (102) receives (304), based on the sent first indication, data traffic with unknown destination from the first node (101). Also described is a method performed by the first node (101), which receives (401) the first indication and refrains (403) from forwarding the data traffic to the other nodes comprised in the ES (107).

Description

FIRST NODE, SECOND NODE, THIRD NODE AND METHODS PERFORMED THEREBY FOR HANDLING DATA TRAFFIC IN AN ETHERNET SEGMENT
TECHNICAL FIELD
The present disclosure relates generally to a first node and methods performed thereby for handling data traffic in an Ethernet Segment (ES) within a communications network. The present disclosure also relates generally to a second node, and methods performed thereby for handling data traffic in the ES within the communications network. The present disclosure also relates generally to a third node, and methods performed thereby for handling data traffic in the ES within the communications network. The present disclosure further also relates generally to computer programs and computer-readable storage mediums, having stored thereon the computer programs to carry out these methods.
BACKGROUND
Computer systems in a communications network may comprise one or more nodes, which may also be referred to simply as nodes. A node may comprise one or more processors which, together with computer program code may perform different functions and actions, a memory, a receiving and a sending port. A node may be, for example, a router.
In computer systems or networks, Virtual Private Local Area Network Service (VPLS) enables the provision of Ethernet-based multipoint-to-multipoint communication over Internet Protocol (IP) or Multiprotocol Label Switching (MPLS). VPLS may be understood to enable geographically dispersed sites, e.g., servers and clients, to share an Ethernet broadcast domain by connecting sites through pseudowires or Border Gateway Protocol (BGP) neighbors belonging to the same Ethernet VPN instance (EVI). In VPLS, a local area network (LAN) at each site may be extended to the edge of a provider network. The provider network may then emulate a switch or bridge to connect all of the customer LANs to create a single bridged LAN.
A Broadcast domain may be understood as a logical division of a computer network, in which all nodes comprised in the domain may reach each other by broadcast at the data link layer. A Provider Edge (PE) may be understood as a router that may be located between an area of a provider of a network service or areas that may be under the administration of other network providers. In an Ethernet Virtual Private Network (EVPN) for VPLS, a PE may be connected to multiple remote PEs which may belong to the same Broadcast Domain for an EVPN instance (EVI). A remote PE may be understood as a PE towards which the traffic may be forwarded or flooded. That is, PEs which may be upstream of a destination host. In this case, Broadcast, Unknown Unicast and Multicast (BUM) Traffic may be sent to all the remote PEs, or, in other words, BUM traffic may be flooded. Flooding may be understood as forwarding by, in this case a PE, of traffic to all the remote PEs attached to the PE. In multi-homing, a host or a computer network may be understood to be connected to more than one network. VPLS multi-homing may be understood to enable connecting a customer site to a plurality of PEs in order to provide redundant connectivity. A redundant PE router may therefore be understood to be enabled to provide network service to a customer site upon detection of a failure. An Ethernet segment may be understood as a set of Ethernet links where a customer site, e.g., a device or network, may be connected to one or more PEs via this set of links. The set of PEs belonging to the same Ethernet Segment (ES) may be referred to as a redundancy group.
In a multi-homing scenario, when more than one PE may share the same ES, that is, when there may be more than one PE in a redundancy group, the BUM traffic may be forwarded to all the PEs. In EVPN, this redundancy group may have any number of PEs. One of the PEs may be elected as the Designated-Forwarder (DF) for the ES. The DF PE may be understood to be the one to forward the BUM traffic on the access interface towards the Customer Edge (CE), whereas all non-DF PEs may be understood to drop the BUM traffic.
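As one possible realization of the DF election mentioned above, the PEs of a redundancy group may apply the service-carving procedure of RFC 7432, in which the PEs sharing the ES are ordered by IP address and the DF for a given Ethernet Tag is selected by modulo. A minimal Python sketch, with hypothetical addresses:

```python
import ipaddress

def elect_df(pe_addresses, ethernet_tag):
    """Service-carving DF election sketch per RFC 7432, section 8.5:
    order the PEs in the redundancy group by IP address and pick the
    one at index (ethernet_tag mod N)."""
    ordered = sorted(pe_addresses, key=lambda a: int(ipaddress.ip_address(a)))
    return ordered[ethernet_tag % len(ordered)]

# Hypothetical redundancy group of five PEs sharing one ES:
group = ["192.0.2.6", "192.0.2.2", "192.0.2.4", "192.0.2.3", "192.0.2.5"]
print(elect_df(group, ethernet_tag=100))  # 192.0.2.2 (100 mod 5 == 0)
```

Every PE in the group runs the same deterministic computation on the same inputs, so all PEs agree on the DF without further signalling.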
Flooding may be understood to play an important role for achieving VPLS functionality using EVPN, where all BUM traffic may be understood to be flooded. The current flooding approach in the case of multi-homing where several PEs may be part of an ES results in waste of bandwidth and energy resources, as well as in extended convergence outage in the network.
SUMMARY
It is an object of embodiments herein to improve the handling of data traffic in an Ethernet Segment (ES) within a communications network. It is a particular object of embodiments herein to improve the handling of data traffic having an unknown destination in an ES within a communications network, the ES comprising a plurality of nodes providing multi-homing service.
According to a first aspect of embodiments herein, the object is achieved by a method performed by a second node. The method is for handling data traffic in an ES within a communications network. The data traffic has an unknown destination. The ES comprises a plurality of nodes providing multi-homing service. The plurality of nodes comprises the second node. The second node sends a first indication to a first node having a first connection to the ES. The first indication indicates that the second node is, within the ES, a Designated Forwarder (DF). The second node also receives, based on the sent first indication, data traffic with unknown destination from the first node.
According to a second aspect of embodiments herein, the object is achieved by a method performed by the first node. The method is for handling data traffic in the ES within the communications network. The data traffic has an unknown destination. The ES comprises the plurality of nodes providing multi-homing service. The first node receives the first indication from the second node comprised in the ES. The first indication indicates that the second node is, within the ES, the DF. Upon receipt of data traffic with unknown destination to propagate via the ES, the first node then forwards the data traffic with unknown destination to the second node, based on the received first indication. The first node also refrains from forwarding the data traffic to the other nodes comprised in the ES different from the second node.
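The forwarding behaviour of the first node in this aspect amounts to ingress filtering on the DF role advertised via the first indication. A minimal sketch, with illustrative peer names that are not taken from the disclosure:

```python
def forward_unknown_destination(es_peers, df_peer, frame):
    """Ingress-filtering sketch: forward BUM traffic only to the peer
    that advertised itself as DF (the first indication), refraining
    from the remaining ES peers, which would drop it anyway."""
    sent, refrained = [], []
    for peer in es_peers:
        if peer == df_peer:
            sent.append((peer, frame))  # forward to the Designated Forwarder
        else:
            refrained.append(peer)      # refrain: non-DF peers drop BUM traffic
    return sent, refrained

sent, refrained = forward_unknown_destination(["PE2", "PE3", "PE4"], "PE2", b"bum-frame")
```

With five ES peers, this reduces the copies crossing the core from five to one.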
According to a third aspect of embodiments herein, the object is achieved by a method performed by a third node. The method is for handling data traffic in the ES within the communications network. The data traffic has an unknown destination. The ES comprises the plurality of nodes providing multi-homing service. The plurality of nodes comprises the third node. The third node sends the second indication to the second node within the ES. The second indication indicates the third indication that is to be used when forwarding data traffic with unknown destination to the third node as backup path within the ES to forward data traffic. The third node also receives, along with the third indication, data traffic with unknown destination from the second node. The third node then forwards, based on the received third indication, the received data traffic to the destination node having the third connection to the ES.

According to a fourth aspect of embodiments herein, the object is achieved by a second node for handling data traffic in the ES within the communications network. The data traffic is configured to have an unknown destination. The ES is configured to comprise the plurality of nodes being configured to provide multi-homing service. The plurality of nodes is configured to comprise the second node. The second node is configured to send the first indication to the first node configured to have the first connection to the ES. The first indication is configured to indicate that the second node is, within the ES, the DF. The second node is further configured to receive, based on the first indication configured to be sent, data traffic with unknown destination from the first node.
According to a fifth aspect of embodiments herein, the object is achieved by a first node for handling data traffic in ES within the communications network. The data traffic is configured to have an unknown destination. The ES is configured to comprise the plurality of nodes configured to provide multi-homing service. The first node is further configured to receive the first indication from the second node configured to be comprised in the ES. The first indication is configured to indicate that the second node is, within the ES, the DF. The first node is also configured to, upon receipt of data traffic with unknown destination to propagate via the ES, forward the data traffic with unknown destination to the second node, based on the first indication configured to be received. The first node is also configured to refrain from forwarding the data traffic to the other nodes comprised in the ES different from the second node.
According to a sixth aspect of embodiments herein, the object is achieved by a third node for handling data traffic in the ES within the communications network. The data traffic is configured to have an unknown destination. The ES is configured to comprise the plurality of nodes configured to provide multi-homing service. The plurality of nodes is configured to comprise the third node. The third node is also configured to send the second indication to the second node within the ES. The second indication is configured to indicate the third indication that is to be used when forwarding data traffic with unknown destination to the third node as backup path within the ES to forward data traffic. The third node is further configured to receive, along with the third indication, data traffic with unknown destination from the second node. The third node is also configured to forward, based on the third indication configured to be received, the data traffic configured to be received to a destination node configured to have a third connection to the ES.

According to a seventh aspect of embodiments herein, the object is achieved by a computer program, comprising instructions which, when executed on at least one processor, cause the at least one processor to carry out the method performed by the second node.
According to an eighth aspect of embodiments herein, the object is achieved by a computer-readable storage medium, having stored thereon the computer program, comprising instructions which, when executed on at least one processor, cause the at least one processor to carry out the method performed by the second node.
According to a ninth aspect of embodiments herein, the object is achieved by a computer program, comprising instructions which, when executed on at least one processor, cause the at least one processor to carry out the method performed by the first node.
According to a tenth aspect of embodiments herein, the object is achieved by a computer-readable storage medium, having stored thereon the computer program, comprising instructions which, when executed on at least one processor, cause the at least one processor to carry out the method performed by the first node.
According to an eleventh aspect of embodiments herein, the object is achieved by a computer program, comprising instructions which, when executed on at least one processor, cause the at least one processor to carry out the method performed by the third node.
According to a twelfth aspect of embodiments herein, the object is achieved by a computer-readable storage medium, having stored thereon the computer program, comprising instructions which, when executed on at least one processor, cause the at least one processor to carry out the method performed by the third node.
By the second node sending the first indication to the first node, and thereby communicating its role to the first node, the second node enables the first node to know that it may only need to send the data traffic, e.g., BUM traffic, to the DF, the second node in this case. The first node receives the first indication and forwards the data traffic with unknown destination to the second node, refraining from forwarding the data traffic to the other nodes comprised in the ES. By only sending the data traffic to the DF, resources are spared by the first node, since it no longer needs to forward the data traffic to the other nodes in the ES that are not the DF, which would otherwise drop the traffic. Hence, the bandwidth consumed is optimal. Moreover, dropped traffic is reduced to a good extent, leading to better convergence, that is, a shorter time required for the traffic to be restored after a failure event. These and other advantages of embodiments herein are explained in detail later.
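The bandwidth saving can be quantified with a simple count of the copies of each BUM frame crossing the core toward one ES. With five multi-homed peers, as in the example topology discussed later, filtering at ingress removes four of the five copies:

```python
def core_copies(num_es_peers, ingress_filtering):
    """Copies of each BUM frame sent across the core toward one ES:
    one per peer when flooding, a single copy to the DF when the
    ingress node filters on the advertised DF role."""
    return 1 if ingress_filtering else num_es_peers

n = 5  # e.g., five PEs sharing one ES
saved = (core_copies(n, False) - core_copies(n, True)) / core_copies(n, False)
print(f"{saved:.0%} of core bandwidth saved")  # 80% of core bandwidth saved
```

The saving grows with the size of the redundancy group: for N peers, the fraction of core bandwidth saved is (N - 1)/N.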
BRIEF DESCRIPTION OF THE DRAWINGS
Examples of embodiments herein are described in more detail with reference to the accompanying drawings, according to the following description.
Figure 1 is a schematic diagram illustrating an example of a topology of a communications network, according to existing methods.
Figure 2 is a schematic diagram illustrating a non-limiting example of a communications network, according to embodiments herein.
Figure 3 is a flowchart depicting embodiments of a method in a second node, according to embodiments herein.
Figure 4 is a flowchart depicting embodiments of a method in a first node, according to embodiments herein.
Figure 5 is a schematic diagram illustrating an example of a method in a third node, according to embodiments herein.
Figure 6 is a schematic diagram illustrating an example of a third indication, according to embodiments herein.
Figure 7 is a schematic diagram illustrating an example of handling data traffic, e.g., BUM traffic, in an Ethernet Segment, according to embodiments herein.
Figure 8 is a schematic block diagram illustrating two non-limiting examples, a) and b), of a second node, according to embodiments herein.
Figure 9 is a schematic block diagram illustrating two non-limiting examples, a) and b), of a first node, according to embodiments herein.
Figure 10 is a schematic block diagram illustrating two non-limiting examples, a) and b), of a third node, according to embodiments herein.
DETAILED DESCRIPTION
As part of the development of embodiments herein, a number of problems with existing methods will first be identified and discussed.
The current approach of flooding BUM traffic in EVPN is not optimal. Although only one PE, that is, the DF PE, forwards the traffic on the access interface towards the CE, the same traffic is sent to multiple PEs belonging to the same ES across the MPLS core, thus consuming much more network bandwidth than the actual bandwidth required.
Figure 1 is a schematic representation of an example topology in an EVPN to illustrate the problem of existing methods. Two different CEs are represented in Figure 1, a first CE, CE1, and a second CE, CE2. Six different PEs are also represented, PE1, PE2, PE3, PE4, PE5 and PE6. In Figure 1, an ‘I’ type of interface is the interface between two PEs across an MPLS core. Each of the interfaces between PE1 and each of PE2, PE3, PE4, PE5 and PE6 is represented as I1, I2, I3, I4, and I5, respectively. An ‘IC’ type of interface is the access interface between a PE and a CE. In this example, CE2 is multi-homed to PEs PE2 to PE6, which belong to the same ES. Each of the interfaces between PE2, PE3, PE4, PE5 and PE6 and CE2 is respectively represented as IC1, IC2, IC3, IC4, and IC5. PE2 is the elected DF for this ES. From the point of view of PE1, it is connected to multiple PEs, PE2 to PE6, across the core. Unknown Unicast traffic, that is, traffic for which the destination is not known, originated from CE1 will be flooded by PE1 to all the remote PEs, that is, PE2 to PE6. Currently, as per RFC 7432, PEs program the forwarding path such that if the BUM traffic is received from a PE across the MPLS core and the PE is acting as the DF for this ES, then the PE is programmed to forward the traffic to the access interface belonging to the ES. If the traffic is received with the ES label for the ES for which the PE is acting as a DF, or with a wrong ES label, or the PE is not the DF for this ES, the PE is programmed to drop the traffic. Dropping BUM traffic by all non-DF PEs may be understood to ensure that duplicate traffic is not forwarded to CE2.
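The forwarding rule just described may be sketched as a single decision function. The function and parameter names below are illustrative, not part of any standardized router API; a frame label of `None` stands for BUM traffic that carries no ES label for this segment, such as traffic from a single-homed PE like PE1.

```python
from typing import Optional

def handle_bum_frame(frame_es_label: Optional[int],
                     local_es_label: int,
                     is_df: bool) -> str:
    """Disposition of BUM traffic received across the MPLS core,
    per the RFC 7432 rule described above (illustrative sketch)."""
    if frame_es_label == local_es_label:
        return "drop"              # split horizon: frame originated from this very ES
    if frame_es_label is not None:
        return "drop"              # wrong ES label
    if not is_df:
        return "drop"              # non-DF PEs drop, so the CE sees no duplicates
    return "forward_to_access"     # the elected DF forwards on the access interface
```

In the Figure 1 example, only PE2 (the DF) would return "forward_to_access" for traffic flooded by PE1; PE3 to PE6 would drop it after it has already crossed the core.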
The behavior described in Figure 1 may be understood to be the same for Broadcast and Multicast traffic. PEI applies no Ingress Filtering. Ingress filtering may be understood as applying a rule at ingress in order to decide whether to forward the traffic or not.
Therefore, in relation to Figure 1, PE1 does not base its decision of forwarding the flood traffic on the role played by the remote PE, but simply forwards the BUM traffic to all the PEs that are part of the same EVI. PE2 will forward this traffic to CE2, since it is the elected DF for this ES, but all other PEs, namely, PE3 to PE6, will drop this traffic.
Although the PEs PE3 to PE6 drop the BUM traffic received across the MPLS core, this traffic has already consumed bandwidth on the links I2 to I5 across the MPLS core. For simplicity, only traffic in one direction, that is, traffic from PE1 towards the core, is considered, unless stated otherwise. However, the same principle may be understood to apply for BUM traffic in both directions. An access interface link may be understood as a link between a PE and a CE towards the access side. A DF Access Link may be understood as a link between the DF PE and a CE towards the access side.
PEs belonging to the same ES may be enabled to either operate in single-active mode or all-active mode. Single-active mode may be understood to be a mode wherein only one PE out of all the PEs in an ES may be understood to be responsible for forwarding traffic destined to the CE or the traffic originated from the CE. All-active mode may be understood to be a mode wherein all the PEs belonging to the ES may forward traffic destined to the CE or the traffic originated from the CE. The rules for these modes are the same as defined in RFC 7432, BGP MPLS-Based Ethernet VPN. Initially, when no Media Access Control (MAC) addresses are learned at PE1, all the traffic originated from CE1 will be flooded by PE1. Eventually, MACs will be learned from remote PEs, namely PE2 to PE6 in the example of Figure 1, and the traffic will be forwarded as Unicast traffic by PE1. Since PEs PE2 to PE6 belong to the same ES, the reverse traffic originating from CE2 will be forwarded by the DF PE, here PE2, in the case of single-active, and will be load-balanced across all the PEs, PE2 to PE6, in the case of all-active. As a result, PE1 will learn MACs from PE2 in the case of single-active, or a different set of MACs from the PEs PE2 to PE6 in the case of all-active.
In such a scenario, if the link between PE2 and CE2, that is, IC1, goes down, the following sequence of events takes place as per the existing methods: 1) the link IC1 goes down, 2) the DF, here PE2, withdraws the ES route, that is, a control message that may be sent by BGP, which may contain information specific to the ES, 3) the DF, here PE2, withdraws the Ethernet Auto-Discovery (A-D) per ES route, 4) other PEs in the ES receive the withdrawn routes, 5) the DF election is re-run on the PEs belonging to the ES, in this example PE3 to PE6, and a new DF is elected, and 6) the new DF, e.g., PE4, re-programs the forwarding path to forward BUM traffic for that ES on to the access interface towards the CE.
If the PEs PE2 to PE6 are operating in single-active mode, for all MAC-IPs learned at PE1 from PE2, PE1 will flood the traffic to all the PEs until the new DF is elected and PE1 learns the MAC-IP routes from the new DF.
Until the new DF is elected and an alternative forwarding path towards the CE is enabled by the new DF, any flood traffic received at PEs PE3 to PE6 from PE1 will be dropped. This not only consumes unnecessary bandwidth across the MPLS core but also causes extended convergence outage.
The draft “draft-mohanty-bess-evpn-bum-opt-00”, BGP EVPN Flood Traffic Optimization, has attempted to solve the flooding problem, but it is not optimal, in the sense that the bandwidth consumed is not optimal, no convergence benefit is provided, and no support is provided for handling multiple events, such as the access links of both a DF and a Backup Designated Forwarder (BDF), e.g., PE4 in the example of Figure 1, going down simultaneously.
Several embodiments are comprised herein, which address these problems of the existing methods. Embodiments herein may be understood to relate to flood traffic optimization in BGP Ethernet VPN. According to embodiments herein, bandwidth consumed is optimal, and BUM traffic dropped is reduced to a good extent leading to better convergence, as will be described below.
The embodiments will now be described more fully hereinafter with reference to the accompanying drawings, in which examples are shown. In this section, embodiments herein are illustrated by exemplary embodiments. It should be noted that these
embodiments are not mutually exclusive. Components from one embodiment or example may be tacitly assumed to be present in another embodiment or example and it will be obvious to a person skilled in the art how those components may be used in the other exemplary embodiments.
Although terminology from EVPN for VPLS has been used in this disclosure to exemplify the embodiments herein, this should not be seen as limiting the scope of the embodiments herein to only the aforementioned system. Other systems supporting similar or equivalent functionality may also benefit from exploiting the ideas covered within this disclosure. The terms used herein may need to be reinterpreted in view of possible terminology changes in future network technologies.
Figure 2 is a schematic diagram depicting a non-limiting example of a communications network 100, in which embodiments herein may be implemented. The communications network 100 may be understood as a computer network, as depicted in the non-limiting example of Figure 2. The communications network 100 may be an MPLS or IP network providing transport for an end-to-end VPLS service using BGP EVPN, or a network with similar functionality.
The communications network 100 comprises nodes, whereof a first node 101, a second node 102, a third node 103, a fourth node 104, a fifth node 105, also referred to herein as a source node, and a sixth node, also referred to herein as a destination node 106, are depicted in the non-limiting example of Figure 2. It may be understood that more nodes may be comprised in the communications network 100, and that the number of nodes depicted in Figure 2 is for illustration purposes only. Each of the first node 101, the second node 102, the third node 103, the fourth node 104, the fifth node 105 and the destination node 106 may be understood, respectively, as a first computer system, a second computer system, a third computer system, a fourth computer system, a fifth computer system and a sixth computer system. In particular, each of the first node 101, the second node 102, the third node 103, the fourth node 104, the fifth node 105 and the destination node 106 may be a router, that is, a networking device that may be enabled to forward data packets between nodes. Further particularly, each of the first node 101, the second node 102, the third node 103, and the fourth node 104 may be, respectively, a first Provider Edge (PE), a second PE, a third PE, and a fourth PE, whereas each of the fifth node 105 and the destination node 106 may be, respectively, a first Customer Edge (CE) and a second CE.
The communications network 100 comprises an Ethernet Segment (ES) 107. The ES 107 comprises a plurality of nodes 108 providing multi-homing service. In particular, the plurality of nodes 108 in the ES 107 comprises the second node 102, and may comprise the third node 103 and the fourth node 104, as will be described later in the various embodiments described herein. The plurality of nodes 108 may be understood to be a network of routers a packet may need to go through from a source entity, such as the fifth node 105, to a destination entity, such as the destination node 106, e.g., a pipeline. The ES 107 may be under the ownership or control of a service provider, or may be operated by the service provider or on behalf of the service provider.
Each of the fifth node 105 and the destination node 106 may be implemented, as a standalone server in e.g., a host computer in the cloud. Each of the fifth node 105 and the destination node 106 may in some examples be a distributed node or distributed server, with some of its functions being implemented locally, e.g., by a client manager, and some of its functions implemented in the cloud, by e.g., a server manager. Yet in other examples, each of the fifth node 105 and the destination node 106 may also be
implemented as processing resources in a server farm. Yet in other examples, each of the fifth node 105 and the destination node 106 may also be routers in the communications network 100.
The first node 101 is configured to communicate within the communications network
100 with the second node 102 in the ES 107 over a first link or first connection 121. The second node 102 is configured to communicate within the communications network 100 with the destination node 106 over a second link or second connection 122. The third node 103 is configured to communicate within the communications network 100 with the destination node 106 over a third link or third connection 123. The third node 103 is configured to communicate within the communications network 100 with the first node
101 over a fourth link or fourth connection 124. The fourth node 104 is configured to communicate within the communications network 100 with the first node 101 over a fifth link or fifth connection 125. The fourth node 104 is configured to communicate within the communications network 100 with the destination node 106 over a sixth link or sixth connection 126. The second node 102 is configured to communicate within the communications network 100 with the third node 103 over a seventh link or seventh connection 127. The third node 103 is configured to communicate within the
communications network 100 with the fourth node 104 over an eighth link or eighth connection 128. The fifth node 105 is configured to communicate within the
communications network 100 with the first node 101 over a ninth link or ninth connection 129. Each of the first connection 121, the second connection 122, the third connection 123, the fourth connection 124, the fifth connection 125, the sixth connection 126, the seventh connection 127, the eighth connection 128 and the ninth connection 129 may typically be a wired link, although any of them may also be, e.g., a radio link, an infrared link, etc.
Any of the first connection 121, the second connection 122, the third connection 123, the fourth connection 124, the fifth connection 125, the sixth connection 126, the seventh connection 127, the eighth connection 128 and the ninth connection 129 may be understood to be able to be comprised of a plurality of individual links. Any of the first connection 121, the second connection 122, the third connection 123, the fourth connection 124, the fifth connection 125, the sixth connection 126, the seventh connection 127, the eighth connection 128 and the ninth connection 129 may be a direct link or it may go via one or more computer systems or one or more core networks in the communications network 100, which are not depicted in Figure 2, or it may go via an optional intermediate network. The intermediate network may be one of, or a combination of more than one of, a public, private or hosted network; the intermediate network, if any, may be a backbone network or the Internet; in particular, the intermediate network may comprise two or more sub-networks, which is not shown in Figure 2.
In general, the usage of “first”, “second”, “third”, “fourth”, “fifth”, “sixth”, “seventh”, “eighth” and/or “ninth” herein may be understood to be an arbitrary way to denote different elements or entities, and may be understood to not confer a cumulative or chronological character to the nouns they modify.
Embodiments of a method performed by the second node 102, will now be described with reference to the flowchart depicted in Figure 3. The method may be understood to be for handling data traffic in the Ethernet Segment (ES) 107 within the communications network 100. The data traffic has an unknown destination. That is, the data traffic may be destined to a node for which the path or route to reach the node is not known. The data traffic may be BUM traffic.
The ES 107 comprises the plurality of nodes 108 providing multi-homing service. The plurality of nodes 108 comprises the second node 102. In embodiments herein, all nodes in the plurality of nodes 108 in the ES 107 may be understood to be connected to the destination node 106, as well as to the first node 101. That is, all the nodes such as the destination node 106, which may have hosts belonging to the same broadcast domain and which may be understood to be CEs behind the PEs providing multi-homing, may be connected to all the PEs in the ES 107. If one CE such as the fifth node 105 connects to all the PEs, and another CE such as the destination node 106 does not connect to all the PEs, embodiments herein may not be applicable. Also, embodiments herein may be understood to not relate to a single-homing scenario, in which case the flooding may be understood to be already optimal.
The method may comprise the actions described below. Several embodiments are comprised herein. In some embodiments all the actions may be performed. In some embodiments some of the actions may be performed. One or more embodiments may be combined, where applicable. All possible combinations are not described to simplify the description. It should be noted that the examples herein are not mutually exclusive.
Components from one example may be tacitly assumed to be present in another example and it will be obvious to a person skilled in the art how those components may be used in the other examples. In Figure 3, optional actions are indicated with dashed boxes.
Action 301
In the course of operations of the communications network 100, the first node 101 may receive data traffic from the fifth node 105, and it may forward it to the nodes belonging to the ES 107. The data traffic may then be understood to be forwarded by an elected Designated Forwarder (DF) in the ES 107 on to the access interface belonging to that ES 107. The access interface may be understood as an Ethernet link between a node in the ES 107, e.g., a PE, and the destination node 106, e.g., a CE. The data traffic may be BUM traffic which may be received across an MPLS core.
According to embodiments herein, if a node, e.g., a PE, is elected Designated Forwarder (DF), it may communicate this to other nodes, e.g., PEs.
In embodiments herein, the second node 102 may be understood to have been elected DF. Therefore, in this Action 301, the second node 102 sends a first indication to the first node 101 having the first connection 121 to the ES 107. The first indication indicates that the second node 102 is, within the ES 107, a DF.
The sending in this Action 301 may be implemented, e.g., via the first connection 121.
It may be understood that the second node 102 may advertise the first indication, and that therefore, it may send the first indication to all nodes in the ES 107 as well.
The first indication may, for example, be one bit in a ‘Flags’ field of a ‘Backup ESI Label Extended Community’. If the bit is set, the advertising node, in this case the second node 102, may be understood to be acting as the DF for this ES 107. The advertising node may increment a sequence number in the Backup ESI Label Extended Community by one from the one received. The first DF may always set it to zero.
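The bit-and-sequence scheme above may be sketched as follows. The exact field layout of the Backup ESI Label Extended Community is not fixed by the text, so the bit position and helper names here are illustrative assumptions.

```python
from typing import Optional, Tuple

# Hypothetical bit position of the DF flag in the 'Flags' field.
DF_FLAG = 0x01

def make_df_advertisement(prev_seq: Optional[int]) -> Tuple[int, int]:
    """Build (flags, sequence) for a DF advertisement: the flag bit signals
    the DF role, the first DF uses sequence 0, and each subsequent DF
    increments the last received sequence by one."""
    seq = 0 if prev_seq is None else prev_seq + 1
    return (DF_FLAG, seq)

def is_newer_df(received_seq: int, current_seq: int) -> bool:
    """A receiving node switches its flooding only when a DF indication
    arrives carrying a newer sequence number than the one it holds."""
    return received_seq > current_seq
```

The sequence number lets the first node 101 distinguish a fresh DF change from a stale re-advertisement before redirecting its flood traffic.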
By communicating its role to the first node 101, that is, the node originating the data traffic in this case, the second node 102 enables the first node 101 to know that it may only need to send the data traffic, e.g., BUM traffic, to the DF, the second node 102 in this case. The first node 101 may also be enabled to then continue to send the data traffic to this DF until it may receive another indication indicating a change of DF with a different sequence number, at which time it may then be enabled to switch the data traffic to the new DF. By only sending the data traffic to the DF, resources are spared by the first node 101, since it may no longer need to forward the data traffic to the other nodes in the ES 107 that are not the DF, which may be understood to otherwise drop the traffic anyway.
Action 302
All nodes belonging to the ES 107 may be understood to allocate a ‘Backup ES Label’ and advertise the Backup ES Label via a BGP Control message in the ‘Backup ESI Label Extended Community’ as a part of the Ethernet A-D per ES route advertisement.
Each node within the ES 107 may be understood to program a backup path for the access interface, such that if the access link goes down, the BUM traffic may then be forwarded to a node having a connection via the backup path with the node in question. Each of the nodes within the ES 107 may then advertise an indication to be used when signalling that that particular node may have been chosen as the backup path.
In accordance with this, in this Action 302, the second node 102 may receive a second indication from the third node 103. The second indication may indicate a third indication to be used when forwarding data traffic with unknown destination to the third node 103 as backup path within the ES 107 to forward data traffic. The received data traffic may then be forwarded to the third node 103 along with the third indication when the second connection 122 between the second node 102 and the destination node 106 may fail.
The third indication may be understood to enable the third node 103 to know how to then process the data traffic bearing the third indication.
In some embodiments, at least one of the first indication and the second indication may be comprised in an Extended Community part of an Ethernet Auto-Discovery per ES advertisement message.
In some embodiments, the third indication may be a Backup ES Label defined in an Extended Community.
An advantage provided by this Action 302 is that the second node 102 may be enabled to know how to indicate to the third node 103 that data traffic the second node 102 may be forwarding to the third node 103 is to be treated as data traffic on the backup path, and therefore forwarded to the destination node 106. It may be understood that the second node 102 may receive respective second indications from the other nodes comprised in the ES 107, advertising their respective third indications, with a similar purpose to that of the third indication from the third node 103.
Action 303
Each of the nodes in the ES 107 may then select another node as a backup path in the event of failure of its respective connection or link with the destination node 106, so the data traffic may continue to be forwarded.
In this Action 303, the second node 102 may determine that the third node 103 is the backup path to be used within the ES 107 to forward data traffic to the destination node 106, upon failure of the link between the second node 102 and the destination node 106, that is, upon failure of the second connection 122. The selection of the backup node may be performed by all the nodes in the ES 107 as soon as they receive the third indication from other nodes in the ES 107. The DF node may select a backup node from the nodes in the ES 107, excluding itself. All other nodes in the ES 107 may select a backup node excluding themselves and the elected DF. A variety of methods may be understood as being suitable to be used for selecting the backup node. As a non-limiting example, one such method may be that each node may create a list of the nodes in the ES 107 in ascending order of their IP addresses. Each node may then select the node which is next to itself in the list after applying any filters, such as that the DF node may not be selected, etc. It will be understood by one of skill in the art that other alternative methods may also be applied to select the backup node.
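The non-limiting, list-based selection example above may be sketched as follows. Node identities are plain IP-address strings and the function name is hypothetical; any deterministic rule agreed by all ES members would serve equally well.

```python
import ipaddress

def select_backup(self_ip: str, es_member_ips: list, df_ip: str) -> str:
    """Pick the node 'next' after self in the ES member list sorted by
    ascending IP address, excluding self and, for non-DF nodes, also
    the elected DF (illustrative sketch of the example in the text)."""
    ordered = sorted(es_member_ips, key=ipaddress.ip_address)
    if self_ip == df_ip:
        # The DF excludes only itself.
        candidates = [ip for ip in ordered if ip != self_ip]
    else:
        # Other nodes exclude both themselves and the elected DF.
        candidates = [ip for ip in ordered if ip not in (self_ip, df_ip)]
    # Take the first candidate with a higher IP than self; wrap around
    # to the lowest candidate if self has the highest address.
    for ip in candidates:
        if ipaddress.ip_address(ip) > ipaddress.ip_address(self_ip):
            return ip
    return candidates[0]
```

Because every member applies the same rule to the same membership list, each node's choice of backup is consistent without any extra signalling.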
Since all the nodes in the ES 107 may be understood to know the complete set of nodes belonging to the ES 107, the second node 102 may select the nearest node, in this case the third node 103. The second node 102 may then program the forwarding such that if the second connection 122 goes down, the data traffic may be forwarded to the selected third node 103, encapsulated with the third indication, e.g., a ‘Backup ES Label’, advertised by the third node 103.
All other nodes in the ES 107 may be understood to also program a backup path for the access interface, but they may pick a non-DF node as the backup node.
An advantage provided by this Action 303 is that forwarding of the data traffic across the ES 107 is guaranteed in the event of a failure of the second connection 122, since according to embodiments here, the nodes in the ES 107 that are not DF, may no longer receive the data traffic from the first node 101. Determining a backup path, enables the second node 102 to secure a backup path should the second connection 122 fail thereby ensuring the data traffic may still be enabled to reach the destination node 106.
Action 304
As the second node 102 has advertised that it is the DF in the ES 107, whenever data traffic arrives to the first node 101, in this Action 304, the second node 102 receives, based on the sent first indication, data traffic with unknown destination from the first node 101. The data traffic with unknown destination may be, e.g., BUM traffic.
The receiving in this Action 304 may be implemented, e.g., via the first connection 121.
The second node 102 may then normally forward the received data traffic to the destination node 106, since it may be understood to be acting as a DF for the ES 107 and the received traffic may be BUM traffic as indicated by a label with which the data traffic may be received.
Action 305
At some point during the course of operations in the communications network 100, the second connection 122 between the second node 102 and the destination node 106 may fail. The second node 102 may then determine to forward the received data traffic to the backup path to ensure its delivery to the destination node 106. Based on the
determination made in Action 303, the backup path is the third node 103. Accordingly, in this Action 305, the second node 102 may, based on the determination performed in Action 303 and on the second indication received in Action 302, encapsulate the received data traffic with the third indication prior to forwarding, in the next Action 306, the received data traffic to the third node 103.
By performing this Action 305, the second node 102 may enable the third node 103 upon receiving the encapsulated data traffic, to know how to process the data traffic, and forward it to the destination node 106.
Action 306
The second node 102 may, in this Action 306, forward the received data traffic to the third node 103 comprised in the ES 107, based on a determination that: i) the second connection 122 between the second node 102 and the destination node 106 has failed, and ii) the second node 102 is the Designated Forwarder, DF, within the ES 107. The forwarding in this Action 306 may be implemented, e.g., via the seventh connection 127.
Action 307
The second node 102 may, in this Action 307, refrain from forwarding the received data traffic to the other nodes comprised in the ES 107 different from the third node 103, namely the node the second node 102 may have determined to be the backup path.
By performing this Action 307, the second node 102 may continue to save resources while guaranteeing that the data traffic may reach the destination node 106.
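Actions 305 to 307 together may be sketched as a single forwarding decision at the DF. The node names, label value and return shape below are purely illustrative; the real encapsulation would be an MPLS label push.

```python
def df_flood_actions(access_up: bool, backup_node: str, backup_label: int,
                     other_es_nodes: list) -> dict:
    """Per-neighbor actions a DF takes for received BUM traffic: with the
    access link up it forwards towards the CE; if the link has failed, it
    sends the traffic encapsulated with the backup node's Backup ES Label
    to that node only, and refrains from flooding the other ES members."""
    if access_up:
        return {"access": "forward"}
    actions = {backup_node: ("push_label", backup_label)}
    actions.update({node: "refrain" for node in other_es_nodes})
    return actions
```

With this behaviour, a failure of the second connection 122 redirects the traffic to exactly one node, the determined backup path, rather than re-flooding it across the ES.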
Action 308
The second node 102 may, in this Action 308, withdraw, based on the determination that the second connection 122 between the second node 102 and the destination node 106 has failed, an ES route associated with the ES 107. The ES route may be understood to indicate that a particular node, in this case, the second node 102, e.g., a PE, is a member of the ES 107. To withdraw the ES route may be understood as indicating that a particular node, in this case, the second node 102, e.g., a PE, is no longer a member of the ES 107.
By performing this Action 308, the second node 102 may enable other nodes in the ES 107 to elect a new DF.
Action 309
The second node 102 may, in this Action 309, delay withdrawal of an Ethernet Auto-Discovery per ES route associated with the ES 107, until receiving a fourth indication from one of the nodes in the ES 107 indicating that a new DF of data traffic has been elected within the ES 107. The Ethernet Auto-Discovery per ES route may be understood to advertise the role of a node, DF or not, to provide a Backup ES label to use in case of failure, and to allow other nodes, e.g., PEs, to forward traffic to the ES 107. To withdraw the Ethernet Auto-Discovery per ES route may be understood as indicating that the particular node, in this case, the second node 102, e.g., a PE, is no longer a DF, that it may no longer be used as a Backup PE for this ES 107, and that the data traffic belonging to the ES 107 may need to not be forwarded to this node.
By performing this Action 309, the second node 102 may be understood to allow enough time for the traffic on the first node 101 to switch to the new DF, without any intermittent loss of data traffic.
Action 310
The nodes comprised in the ES 107 may re-run a DF Election, and another node in the ES 107 may then be elected as the new DF. The newly elected DF may then also advertise that it is the new DF. This may be performed as soon as the nodes comprised in the ES 107 realize that the current DF is no longer part of the ES 107, as a result of receiving the withdrawal of the ES route by the existing DF. For illustrative purposes only, the fourth node 104 may be elected DF. Accordingly, in this Action 310, the second node 102 may receive the fourth indication from the fourth node 104 comprised in the ES 107. The fourth indication may indicate that the fourth node 104 is the new DF within the ES 107.
Action 311
The second node 102 may, in this Action 311, withdraw the Ethernet Auto-Discovery per ES route based on the receipt of the fourth indication.
By performing this Action 311, and therefore delaying the withdrawal of the Ethernet Auto-Discovery per ES route until the receipt of the fourth indication, the second node 102 may allow the data traffic on the first node 101 to switchover to the new DF without causing traffic loss.
Embodiments of a method performed by a first node 101, will now be described with reference to the flowchart depicted in Figure 4. The method is for handling data traffic in the ES 107 within the communications network 100. The data traffic has an unknown destination. The ES 107 comprises the plurality of nodes 108 providing multi-homing service.
The method may comprise some of the following actions. In some embodiments all the actions may be performed. Several embodiments are comprised herein. One or more embodiments may be combined, where applicable. All possible combinations are not described to simplify the description. It should be noted that the examples herein are not mutually exclusive. Components from one example may be tacitly assumed to be present in another example and it will be obvious to a person skilled in the art how those components may be used in the other examples. In Figure 4, optional actions are indicated with dashed boxes.
The detailed description of some of the following corresponds to the same references provided above, in relation to the actions described for the second node 102, and will thus not be repeated here to simplify the description. For example, the data traffic may be, e.g., BUM traffic.
Action 401
The first node 101, in this Action 401, receives the first indication from the second node 102 comprised in the ES 107. As described earlier, the first indication indicates that the second node 102 is, within the ES 107, the DF.
The receiving in this Action 401 may be implemented, e.g., via the first connection 121.
In some embodiments, the first indication may be comprised in the Extended Community part of an Ethernet Auto-Discovery per ES advertisement message.
Action 402
In this Action 402, the first node 101, upon receipt of data traffic with unknown destination to propagate via the ES 107, forwards the data traffic with unknown destination to the second node 102, based on the received first indication.
The forwarding in this Action 402 may be understood as flooding.
The forwarding in this Action 402 may be implemented, e.g., via the first connection 121.
Action 403
In this Action 403, the first node 101 may refrain from forwarding the data traffic to the other nodes comprised in the ES 107 different from the second node 102, that is the nodes in the ES 107 that are not DF.
By refraining from forwarding the data traffic to the other nodes comprised in the ES 107 different from the second node 102, that is to the nodes other than DF, the first node 101 saves resources that would otherwise be wasted if the data traffic were to be forwarded to all nodes in the ES 107, and all nodes other than the DF were to drop the traffic anyway.
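The ingress filtering of Actions 402 and 403 may be contrasted with the legacy flooding behaviour in a small sketch; all identifiers are illustrative.

```python
def flood_targets(es_nodes: list, df_node: str, df_aware: bool = True) -> list:
    """Replication list for BUM traffic at the originating node: with the
    first indication received (df_aware), flood only to the advertised DF;
    without it, replicate to every ES member as in legacy RFC 7432 flooding."""
    return [df_node] if df_aware else list(es_nodes)
```

In the Figure 1 topology this reduces the replication from five copies (PE2 to PE6) to a single copy sent to the DF, which is the bandwidth saving the embodiments aim for.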
Action 404
In some embodiments, e.g., after the second connection 122 between the second node 102 and the destination node 106 may fail, and following the withdrawal of membership to the ES 107 by the second node 102, the first node 101 may, in this Action 404, receive the fourth indication from the fourth node 104 comprised in the ES 107. As explained earlier, the fourth indication indicates that the fourth node 104 is, within the ES 107, the new DF. The receiving in this Action 404 may be implemented, e.g., via the fifth connection 125.
Action 405
In this Action 405, upon receipt of additional data traffic with unknown destination to propagate via the ES 107, the first node 101 forwards the additional data traffic to the fourth node 104, based on the received fourth indication. That is, based on the knowledge that the fourth node 104 may then be the current DF.
Action 406
The first node 101, in this Action 406, may refrain from forwarding the data traffic to the other nodes comprised in the ES 107 different from the fourth node 104, that is, different from the current DF.
Embodiments of a method performed by the third node 103 will now be described with reference to the flowchart depicted in Figure 5. The method is for handling data traffic in the ES 107 within the communications network 100. The data traffic has an unknown destination. The ES 107 comprises the plurality of nodes 108 providing multi-homing service. The plurality of nodes 108 comprises the third node 103.
The method comprises the following actions. Several embodiments are comprised herein. One or more embodiments may be combined, where applicable. All possible combinations are not described to simplify the description. It should be noted that the examples herein are not mutually exclusive. Components from one example may be tacitly assumed to be present in another example and it will be obvious to a person skilled in the art how those components may be used in the other examples.
The detailed description of some of the following corresponds to the same references provided above, in relation to the actions described for the first node 101, and will thus not be repeated here to simplify the description. For example, the data traffic may be, e.g., BUM traffic.
Action 501
The third node 103, in this Action 501, sends the second indication to the second node 102 within the ES 107. The second indication indicates the third indication that is to be used when forwarding data traffic with unknown destination to the third node 103 as backup path within the ES 107 to forward data traffic. The sending in this Action 501 may be implemented, e.g., via the seventh connection 127.
In some embodiments, the second indication may be comprised in the Extended Community part of an Ethernet Auto-Discovery per ES advertisement message.
In some embodiments, the third indication may be a Backup ES Label defined in an Extended Community.
The third node 103 may be understood to equally send the second indication to all the other nodes in the ES 107.
Action 502
In this Action 502, the third node 103 receives, along with the third indication, data traffic with unknown destination from the second node 102. That is, the third node 103 may receive the data traffic with unknown destination encapsulated with the third indication. The receiving in this Action 502 may happen, e.g., upon failure of the second connection 122.
The receiving in this Action 502 may be implemented, e.g., via the seventh connection 127.
Action 503
In this Action 503, the third node 103 forwards, based on the received third indication, the received data traffic to the destination node 106 having the third connection 123 to the ES 107.
The forwarding in this Action 503 may be implemented, e.g., via the third connection 123.
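Actions 502 and 503 describe the receive path at the backup node: the backup label, not DF status, authorizes forwarding on the access link. A simplified sketch follows; the packet representation and function names are assumptions, not any real forwarding-plane API.

```python
# Sketch of the backup-node receive path (Actions 502-503): if the packet
# carries the Backup ES Label this node allocated for the segment, it is
# forwarded on the access interface toward the destination node even though
# this node is not the DF; otherwise a non-DF node drops the BUM traffic.

def handle_bum_packet(labels, payload, my_backup_label, send_to_access):
    """labels: MPLS labels remaining after transport-label disposition."""
    if my_backup_label in labels:
        # Third indication matches the label allocated for this ES:
        # forward toward the destination node over the access link.
        send_to_access(payload)
        return "forwarded"
    return "dropped"

sent = []
result = handle_bum_packet([16003, 24001], b"frame", 24001, sent.append)
assert result == "forwarded" and sent == [b"frame"]
```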
Figure 6 is a schematic diagram depicting a non-limiting example of the third indication, according to embodiments herein. In the top of Figure 6, the corresponding bits in each byte are indicated by numbers 0-9. As explained above, embodiments herein introduce a new label which may be used only during the time when the access interface between the DF node and the destination node 106 may go down, and before the traffic may finally converge on the first node 101. This new label may be called 'Backup ES Label' and may be defined as a part of a new extended community, the 'Backup Ethernet Segment Identifier (ESI) Label Extended Community'. This community may be included in the Ethernet A-D per ES route advertised to BGP neighbors via MP-BGP. The format of this new community is depicted in Figure 6. The advertisement of this community may be understood to signify that the advertising node supports the flood optimization described in embodiments herein, whereby resources are saved in comparison with existing methods. Unless the support for this optimization is advertised by all the participating nodes, the optimization procedures may be understood to not be available to be applied. As depicted in Figure 6, the third indication, the Backup ESI Label 600, is comprised in the second indication, in this example, a Backup ESI Label Extended Community 601, which also comprises a Community Type 602, a Sub-type 603, a DF bit 604, a Sequence Number 605 and a Reserved field 606.
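A speculative byte-level encoding of the community of Figure 6 is sketched below. The field widths (a one-byte flags field carrying the DF bit, a one-byte Sequence Number, one Reserved byte, a 3-byte field carrying the 20-bit MPLS label) and the Community Type and Sub-type values are assumptions for illustration; the patent does not fix concrete values.

```python
# Speculative packing of the Backup ESI Label Extended Community 601:
# <Type 602 | Sub-type 603 | flags (DF bit 604) | Sequence Number 605 |
#  Reserved 606 | Backup ESI Label 600 (20 bits, left-shifted by 4)>.
# Type 0x06 / sub-type 0x7F are placeholder values, not assigned codepoints.
import struct

def pack_backup_esi_label_ec(df, seq, label, ctype=0x06, subtype=0x7F):
    flags = 0x01 if df else 0x00                       # DF bit in the low bit
    label_bytes = struct.pack(">I", label << 4)[1:]    # 3 bytes, label in top 20 bits
    return struct.pack(">BBBBB", ctype, subtype, flags, seq, 0) + label_bytes

ec = pack_backup_esi_label_ec(df=False, seq=1, label=24001)
assert len(ec) == 8        # BGP extended communities are always 8 octets
```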
Figure 7 is a schematic diagram depicting a non-limiting example of handling of data traffic in the ES 107 within the communications network 100, according to embodiments herein. The elements of the Figure with the same reference numerals as those in Figure 2 correspond to the elements in Figure 2 having the same reference numeral. In this particular example, the first node 101 is referred to as PE1, the second node 102 is referred to as PE2, the third node 103 is referred to as PE3 and the fourth node 104 is referred to as PE4. The destination node 106 is referred to as the CE, and the fifth node 105 is referred to herein as CE1. In Figure 7, PE2 to PE4 belong to the ES 107. PE2 is acting as the DF for the ES 107.
The IMET Label, ES Label and the Backup ES Label advertised by the PEs are as per RFC 7432:
PE2 - LIM2, LES2, LBES2
PE3 - LIM3, LES3, LBES3
PE4 - LIM4, LES4, LBES4
PE5 - LIM5, LES5, LBES5
PE6 - LIM6, LES6, LBES6
PE2 has communicated that it is acting as the DF for this ES with sequence number as zero. All PEs have programmed the ES Label in the forwarding path with PE2 having enabled the forwarding of the BUM traffic on to the access interface. All PEs have programmed the Backup ES Label in the forwarding path enabling the forwarding of BUM traffic on to the respective access interface.
PE2 has programmed the backup path for the access interface such that, if the access link goes down, the BUM traffic is forwarded to PE3. The traffic formats below also show the Transport label, though the traffic reaching the destination PE might not carry this label if it is removed by the penultimate router.
The BUM traffic as originated at 701 by CE1, the CE behind PE1, is forwarded at 702 by PE1 to PE2, since PE2 is the DF for the ES 107. This data traffic as observed on this interface, which reaches PE2, may have the format:
<Eth. Header, Transport Label, IMET Label for PE2, Payload>
Once it is ascertained after bridge lookup that the traffic needs to be flooded and the PE is acting as DF, the traffic is forwarded on to the access link, the second connection 122, when this link is UP.
The link between PE2 and the CE, the second connection 122, is down, as indicated by the bold cross at 703. The second connection 122 may be referred to herein as IC1. Since the access interface between PE2 and the CE is down, the backup path in the forwarding will activate. The BUM traffic reaching PE2 will be forwarded to PE3 at 704. The traffic as originated by PE1 does not include the Backup label, but when it is forwarded by PE2 to PE3, PE2 imposes the Backup label advertised by PE3 by encapsulating the data traffic with the third indication. This data traffic at 704, as observed on this interface, which reaches PE3, may have the format:
<Eth. Header, Transport Label, IMET Label for PE3, Backup ES Label for PE3, Payload>
Once it is ascertained after bridge lookup that the data traffic needs to be flooded, the Backup ES label is compared and if the label is the same as the one allocated for the ES 107, the data traffic is forwarded at 705 via the access interface of PE3 to the destination node 106 that is, via the third connection 123. The third connection 123 may be referred to herein as IC2. It may be noted that the significance of the Backup label is only across the PEs belonging to the ES 107.
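The re-encapsulation PE2 performs at 704 can be sketched as a label-stack rewrite. The representation below is purely illustrative: labels are shown as strings, and the transport label is shown unchanged for simplicity, whereas in practice it would be the transport label toward PE3.

```python
# Sketch of the rewrite at step 704: PE2 replaces its own IMET label with
# PE3's IMET label and pushes PE3's Backup ES Label beneath it, so that PE3
# can recognize the traffic as backup-path traffic for this ES.

def reroute_to_backup(stack, imet_backup, backup_es_label):
    """stack: [transport, imet_for_self]; return the stack sent to the backup PE."""
    transport = stack[0]            # simplified: real transport label differs
    return [transport, imet_backup, backup_es_label]

original = ["T", "IMET-PE2"]        # traffic as sent by PE1 (format at 702)
rerouted = reroute_to_backup(original, "IMET-PE3", "LBES3")
assert rerouted == ["T", "IMET-PE3", "LBES3"]   # format observed at 704
```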
As a summary of the foregoing, the following sequence of events may take place:
1. The IC1 link goes down.
2. BUM traffic is diverted to PE3 by PE2.
3. The DF, PE2, withdraws the ES route.
4. PE2 holds the withdrawal of the Ethernet A-D per ES route until a new DF is elected.
5. Other PEs in the ES receive the withdrawn ES route.
6. DF election is re-run on the PEs belonging to the ES and PE4 is elected as the new DF.
7. The new DF, e.g., PE4, re-programs the forwarding path to forward BUM traffic for the ES 107 on to the access interface, along with the backup path for the access interface, avoiding the old DF PE for the backup path.
8. PE4 advertises that it has assumed the role of DF for the ES with sequence number 1.
9. PE1, on receiving a route advertising a new DF, with a different sequence number than the one received earlier, will switch the BUM traffic to PE4.
10. The old DF, herein PE2, on receiving the route advertising a new DF, will wait for some time before withdrawing the Ethernet A-D per ES route. The wait period may be understood to allow the traffic on PE1 to switch to PE4.
11. Other PEs receive the withdrawn Ethernet A-D per ES route.
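The old DF's part of this sequence (withdraw the ES route at once, but hold the Ethernet A-D per ES withdrawal until a new DF is advertised and a grace period has passed) can be modeled event by event. All names, and the callback standing in for a real timer, are assumptions for illustration.

```python
# Illustrative event-driven model of the old DF's behaviour in the failover
# sequence: immediate ES route withdrawal, a held Ethernet A-D per ES
# withdrawal, and a delayed withdrawal once the new DF is advertised.

class OldDf:
    def __init__(self):
        self.events = []

    def on_access_link_down(self):
        self.events.append("withdraw ES route")     # sequence step 3
        self.events.append("hold A-D per ES")       # sequence step 4

    def on_new_df_advertised(self, start_timer):
        # Step 10: arm a grace timer so ingress traffic can switch first;
        # the timer callback performs the deferred withdrawal.
        start_timer(self.events.append)

pe2 = OldDf()
pe2.on_access_link_down()
# a real implementation would arm a timer; here the callback fires directly
pe2.on_new_df_advertised(lambda cb: cb("withdraw A-D per ES"))
assert pe2.events == ["withdraw ES route", "hold A-D per ES",
                      "withdraw A-D per ES"]
```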
Although the above optimization focuses on the BUM traffic, for the duration between the link going down and the election of the new DF, even the unicast traffic received at PE2 may be forwarded to the Backup PE.
To further assist in describing the embodiments herein, a comparison is next made between the existing method and the proposed solution. For this illustrative embodiment, three PEs, PE1 to PE3, which are part of an ES, may be considered.
According to the existing technology:
1. PE1 is the acting DF and forwards BUM traffic to the access side. BUM traffic therefore consumes thrice the bandwidth required in the MPLS core, because of the 3 PEs in the ES.
2. The access link between PE1 and the CE fails.
3. In that case, although the BUM traffic is flooded, PE2 and PE3 will not forward this traffic to the CE, but will drop the traffic until a new DF is elected.
4. Once a new PE, e.g., PE2, becomes DF and enables forwarding of BUM traffic towards the CE, it starts forwarding on the access interface.
5. Between steps 2 and 4, the flood traffic consumes twice the bandwidth required in the MPLS core, since it has to reach PE2 and PE3.
According to embodiments herein:
1. PE1 is the acting DF and forwards BUM traffic to the access side. BUM traffic consumes only the required bandwidth in the MPLS core.
2. The access link between PE1 and the CE fails.
3. BUM traffic is still flowing, with no flooding, and will reach PE3 from PE1, which will forward it and not drop it.
4. Once a new PE, e.g., PE2, becomes DF, it will start receiving BUM traffic and will forward it to the CE.
5. BUM traffic always consumes optimal bandwidth.

According to the foregoing, embodiments herein provide multiple advantages.
One advantage of embodiments herein is that, by the first node 101 refraining from forwarding the data traffic to the other nodes comprised in the ES 107 different from the DF, the bandwidth consumed is optimal.
Another advantage of embodiments herein is that by enabling dropped BUM traffic to be reduced to a good extent, better convergence may be achieved. Convergence may be understood as the time it takes to recover traffic after a failure. In embodiments herein, since the traffic may be understood to continue to flow between the failure and the election of new DF, the dropped traffic is reduced. Thus, the recovery time is shorter.
Yet another advantage of embodiments herein is that they allow handling multiple events, such as multiple access links belonging to the ES going down simultaneously on different PEs, as each node in the ES may have a backup path that may be used in the event of its connection to the destination node 106 suffering a failure.
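The bandwidth comparison above can be checked with simple arithmetic. The model below counts one core copy per member PE for the existing flooding behaviour, matching the patent's three-PE illustration, against a single copy (to the DF, or to its backup during convergence) for the optimized behaviour.

```python
# Core bandwidth model for the comparison above, in units of one BUM stream.

def core_copies_existing(n_pes_in_es):
    return n_pes_in_es      # existing method: a copy reaches every PE in the ES

def core_copies_optimized(n_pes_in_es):
    return 1                # embodiments herein: DF only (or its backup)

assert core_copies_existing(3) == 3     # "thrice the bandwidth" with 3 PEs
assert core_copies_optimized(3) == 1    # only the required bandwidth
```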
Figure 8 depicts two different examples in panels a) and b), respectively, of the arrangement that the second node 102 may comprise to perform the method actions described above in relation to Figure 3. In some embodiments, the second node 102 may comprise the following arrangement depicted in Figure 8a. The second node 102 is for handling data traffic in the ES 107 within the communications network 100. The data traffic is configured to have an unknown destination. The ES 107 is configured to comprise the plurality of nodes 108 being configured to provide multi-homing service.
The plurality of nodes 108 is configured to comprise the second node 102.
Several embodiments are comprised herein. Components from one embodiment may be tacitly assumed to be present in another embodiment and it will be obvious to a person skilled in the art how those components may be used in the other exemplary embodiments. The detailed description of some of the following corresponds to the same references provided above, in relation to the actions described for the second node 102, and will thus not be repeated here. For example, the data traffic may be, e.g., BUM traffic. In Figure 8, optional modules are indicated with dashed boxes.
The second node 102 is configured to, e.g. by means of a sending unit 801 within the second node 102 configured to, send the first indication to the first node 101 configured to have the first connection 121 to the ES 107. The first indication is configured to indicate that the second node 102 is, within the ES 107, the Designated Forwarder.
The second node 102 is also configured to, e.g. by means of a receiving unit 802 within the second node 102 configured to, receive, based on the first indication configured to be sent, the data traffic with unknown destination from the first node 101.

The second node 102 may be further configured to, e.g. by means of a forwarding unit 803 within the second node 102 configured to, forward the data traffic configured to be received to the third node 103 configured to be comprised in the ES 107, based on the determination that: i) the second connection 122 between the second node 102 and the destination node 106 has failed, and ii) the second node 102 is the DF within the ES 107.
The second node 102 may be further configured to, e.g. by means of a refraining unit 804 within the second node 102 configured to, refrain from forwarding the data traffic configured to be received to the other nodes configured to be comprised in the ES 107 different from the third node 103.
The second node 102 may be further configured to, e.g. by means of the receiving unit 802 within the second node 102 configured to, receive the second indication from the third node 103. The second indication is configured to indicate the third indication to be used when forwarding data traffic with unknown destination to the third node 103 as backup path within the ES 107 to forward data traffic. The data traffic configured to be received is configured to be forwarded to the third node 103 along with the third indication.
In some embodiments, at least one of the first indication and the second indication may be configured to be comprised in the Extended Community part of an Ethernet Auto-Discovery per ES advertisement message.
In some embodiments, the third indication may be configured to be a Backup ES Label defined in an Extended Community.
In some embodiments, the second node 102 may be further configured to, e.g. by means of the encapsulating unit 805 within the second node 102 configured to, encapsulate the data traffic configured to be received with the third indication prior to forwarding the data traffic configured to be received to the third node 103.
The second node 102 may be further configured to, e.g. by means of a determining unit 806 within the second node 102 configured to, determine that the third node 103 is a backup path to be used within the ES 107 to forward data traffic to the destination node 106, upon failure of the link between the second node 102 and the destination node 106.
The second node 102 may be further configured to, e.g. by means of a withdrawing unit 807 within the second node 102 configured to, withdraw, based on the determination that the second connection 122 between the second node 102 and the destination node 106 has failed, an ES route associated with the ES 107.
The second node 102 may be further configured to, e.g. by means of a delaying unit 808 within the second node 102 configured to, delay withdrawal of the Ethernet Auto-Discovery per ES route associated with the ES 107 until receiving the fourth indication from one of the nodes in the ES 107 configured to indicate that the new DF of data traffic has been elected within the ES 107.
In some embodiments, the second node 102 may be further configured to, e.g. by means of the receiving unit 802 within the second node 102 configured to, receive the fourth indication from the fourth node 104 configured to be comprised in the ES 107. The fourth indication is configured to indicate that the fourth node 104 is the new DF within the ES 107.
In some embodiments, the second node 102 may be further configured to, e.g. by means of the withdrawing unit 807 within the second node 102 configured to, withdraw the Ethernet Auto-Discovery per ES route based on the receipt of the fourth indication.
The embodiments herein may be implemented through one or more processors, such as a processor 809 in the second node 102 depicted in Figure 8, together with computer program code for performing the functions and actions of the embodiments herein. The program code mentioned above may also be provided as a computer program product, for instance in the form of a data carrier carrying computer program code for performing the embodiments herein when being loaded into the second node 102. One such carrier may be in the form of a CD ROM disc. It is however feasible with other data carriers such as a memory stick. The computer program code may furthermore be provided as pure program code on a server and downloaded to the second node 102. The second node 102 may further comprise a memory 810 comprising one or more memory units. The memory 810 is arranged to be used to store obtained information, store data, configurations, schedulings, and applications etc. to perform the methods herein when being executed in the second node 102.
In some embodiments, the second node 102 may receive information from, e.g., the first node 101, the third node 103, any of the other nodes in the ES 107, and/or the destination node 106, through a receiving port 811. In some examples, the receiving port 811 may be, for example, connected to one or more antennas in second node 102. In other embodiments, the second node 102 may receive information from another structure in the communications network 100 through the receiving port 811. Since the receiving port 811 may be in communication with the processor 809, the receiving port 811 may then send the received information to the processor 809. The receiving port 811 may also be configured to receive other information.
The processor 809 in the second node 102 may be further configured to transmit or send information to e.g., the first node 101, the third node 103, any of the other nodes in the ES 107, and/or the destination node 106, through a sending port 812, which may be in communication with the processor 809, and the memory 810.
Those skilled in the art will also appreciate that the sending unit 801, the receiving unit 802, the forwarding unit 803, the refraining unit 804, the encapsulating unit 805, the determining unit 806, the withdrawing unit 807 and the delaying unit 808 described above may refer to a combination of analog and digital circuits, and/or one or more processors configured with software and/or firmware, e.g., stored in memory, that, when executed by the one or more processors such as the processor 809, perform as described above. One or more of these processors, as well as the other digital hardware, may be included in a single Application-Specific Integrated Circuit (ASIC), or several processors and various digital hardware may be distributed among several separate components, whether individually packaged or assembled into a System-on-a-Chip (SoC).
Those skilled in the art will also appreciate that any of the sending unit 801, the receiving unit 802, the forwarding unit 803, the refraining unit 804, the encapsulating unit 805, the determining unit 806, the withdrawing unit 807 and the delaying unit 808 described above may be the processor 809 of the second node 102, or an application running on such processor 809. Thus, the methods according to the embodiments described herein for the second node 102 may be respectively implemented by means of a computer program 813 product, comprising instructions, i.e., software code portions, which, when executed on at least one processor 809, cause the at least one processor 809 to carry out the actions described herein, as performed by the second node 102. The computer program 813 product may be stored on a computer-readable storage medium 814. The computer-readable storage medium 814, having stored thereon the computer program 813, may comprise instructions which, when executed on at least one processor 809, cause the at least one processor 809 to carry out the actions described herein, as performed by the second node 102. In some embodiments, the computer-readable storage medium 814 may be a non-transitory computer-readable storage medium, such as a CD ROM disc, a memory stick, or stored in the cloud space. In other embodiments, the computer program 813 product may be stored on a carrier containing the computer program, wherein the carrier is one of an electronic signal, optical signal, radio signal, or the computer-readable storage medium 814, as described above.
The second node 102 may comprise an interface unit to facilitate communications between the second node 102 and other nodes or devices, e.g., the first node 101. In some particular examples, the interface may, for example, include a transceiver configured to transmit and receive radio signals over an air interface in accordance with a suitable standard.
In other embodiments, the second node 102 may comprise the following arrangement depicted in Figure 8b. The second node 102 may comprise a processing circuitry 809, e.g., one or more processors such as the processor 809, in the second node 102 and the memory 810. The second node 102 may also comprise a radio circuitry 815, which may comprise e.g., the receiving port 811 and the sending port 812. The processing circuitry 809 may be configured to, or operable to, perform the method actions according to Figure 3, in a similar manner as that described in relation to Figure 8a. The radio circuitry 815 may be configured to set up and maintain at least a wireless connection with the first node 101, the third node 103, any of the other nodes in the ES 107, and/or the destination node 106. Circuitry may be understood herein as a hardware component.
Hence, embodiments herein also relate to the second node 102 operative to handle data traffic in the ES 107 within the communications network 100. The data traffic may be configured to have an unknown destination. The ES 107 may be configured to comprise the plurality of nodes 108 configured to provide multi-homing service. The second node 102 may comprise the processing circuitry 809 and the memory 810, said memory 810 containing instructions executable by said processing circuitry 809, whereby the second node 102 is further operative to perform the actions described herein in relation to the second node 102, e.g., in Figure 3.
Figure 9 depicts two different examples in panels a) and b), respectively, of the arrangement that the first node 101 may comprise to perform the method actions described above in relation to Figure 4. In some embodiments, the first node 101 may comprise the following arrangement depicted in Figure 9a. The first node 101 is for handling data traffic in the ES 107 within the communications network 100. The data traffic is configured to have an unknown destination. The ES 107 is configured to comprise the plurality of nodes 108 being configured to provide multi-homing service.
Several embodiments are comprised herein. Components from one embodiment may be tacitly assumed to be present in another embodiment and it will be obvious to a person skilled in the art how those components may be used in the other exemplary embodiments. The detailed description of some of the following corresponds to the same references provided above, in relation to the actions described for the first node 101, and will thus not be repeated here. For example, the data traffic may be, e.g., BUM traffic.
In Figure 9, optional modules are indicated with dashed boxes.
The first node 101 is configured to, e.g. by means of a receiving unit 901 within the first node 101 configured to, receive the first indication from the second node 102 configured to be comprised in the ES 107. The first indication is configured to indicate that the second node 102 is, within the ES 107, the DF.
The first node 101 is further configured to, e.g. by means of a forwarding unit 902 within the first node 101 further configured to, upon receipt of the data traffic with unknown destination to propagate via the ES 107, forward the data traffic with unknown destination to the second node 102, based on the first indication configured to be received.
The first node 101 is also configured to, e.g. by means of a refraining unit 903 within the first node 101 configured to, refrain from forwarding the data traffic to the other nodes comprised in the ES 107 different from the second node 102. In some embodiments, the first indication may be configured to be comprised in the Extended Community part of an Ethernet Auto-Discovery per ES advertisement message.
The first node 101 may be further configured to, e.g. by means of the receiving unit 901 within the first node 101 configured to, following the withdrawal of membership to the ES 107 by the second node 102, receive the fourth indication from the fourth node 104 configured to be comprised in the ES 107. The fourth indication is configured to indicate that the fourth node 104 is, within the ES 107, the new DF.
The first node 101 may be further configured to, e.g. by means of the forwarding unit 902 within the first node 101 configured to, upon receipt of the additional data traffic with unknown destination to propagate via the ES 107, forward the additional data traffic to the fourth node 104, based on the fourth indication configured to be received.
In some embodiments the first node 101 may be further configured to, e.g. by means of the refraining unit 903 within the first node 101 configured to, refrain from forwarding the data traffic to the other nodes comprised in the ES 107 different from the fourth node 104.
The embodiments herein may be implemented through one or more processors, such as a processor 904 in the first node 101 depicted in Figure 9, together with computer program code for performing the functions and actions of the embodiments herein. The program code mentioned above may also be provided as a computer program product, for instance in the form of a data carrier carrying computer program code for performing the embodiments herein when being loaded into the first node 101. One such carrier may be in the form of a CD ROM disc. It is however feasible with other data carriers such as a memory stick. The computer program code may furthermore be provided as pure program code on a server and downloaded to the first node 101.
The first node 101 may further comprise a memory 905 comprising one or more memory units. The memory 905 is arranged to be used to store obtained information, store data, configurations, schedulings, and applications etc. to perform the methods herein when being executed in the first node 101.
In some embodiments, the first node 101 may receive information from, e.g., the fifth node 105, the second node 102, the third node 103, and/or any of the other nodes in the ES 107, through a receiving port 906. In some examples, the receiving port 906 may be, for example, connected to one or more antennas in the first node 101. In other embodiments, the first node 101 may receive information from another structure in the communications network 100 through the receiving port 906. Since the receiving port 906 may be in communication with the processor 904, the receiving port 906 may then send the received information to the processor 904. The receiving port 906 may also be configured to receive other information.
The processor 904 in the first node 101 may be further configured to transmit or send information to e.g., the fifth node 105, the second node 102, the third node 103, and/or any of the other nodes in the ES 107, through a sending port 907, which may be in communication with the processor 904, and the memory 905.
Those skilled in the art will also appreciate that the receiving unit 901, the forwarding unit 902, and the refraining unit 903 described above may refer to a combination of analog and digital circuits, and/or one or more processors configured with software and/or firmware, e.g., stored in memory, that, when executed by the one or more processors such as the processor 904, perform as described above. One or more of these processors, as well as the other digital hardware, may be included in a single Application- Specific Integrated Circuit (ASIC), or several processors and various digital hardware may be distributed among several separate components, whether individually packaged or assembled into a System-on-a-Chip (SoC).
Those skilled in the art will also appreciate that any of the receiving unit 901, the forwarding unit 902, and the refraining unit 903 described above may be the processor 904 of the first node 101, or an application running on such processor 904.
Thus, the methods according to the embodiments described herein for the first node 101 may be respectively implemented by means of a computer program 908 product, comprising instructions, i.e., software code portions, which, when executed on at least one processor 904, cause the at least one processor 904 to carry out the actions described herein, as performed by the first node 101. The computer program 908 product may be stored on a computer-readable storage medium 909. The computer-readable storage medium 909, having stored thereon the computer program 908, may comprise instructions which, when executed on at least one processor 904, cause the at least one processor 904 to carry out the actions described herein, as performed by the first node 101. In some embodiments, the computer-readable storage medium 909 may be a non-transitory computer-readable storage medium, such as a CD ROM disc, a memory stick, or stored in the cloud space. In other embodiments, the computer program 908 product may be stored on a carrier containing the computer program, wherein the carrier is one of an electronic signal, optical signal, radio signal, or the computer-readable storage medium 909, as described above.
The first node 101 may comprise an interface unit to facilitate communications between the first node 101 and other nodes or devices, e.g., the second node 102. In some particular examples, the interface may, for example, include a transceiver configured to transmit and receive radio signals over an air interface in accordance with a suitable standard.
In other embodiments, the first node 101 may comprise the following arrangement depicted in Figure 9b. The first node 101 may comprise a processing circuitry 904, e.g., one or more processors such as the processor 904, in the first node 101 and the memory 905. The first node 101 may also comprise a radio circuitry 910, which may comprise e.g., the receiving port 906 and the sending port 907. The processing circuitry 904 may be configured to, or operable to, perform the method actions according to Figure 4, in a similar manner as that described in relation to Figure 9a. The radio circuitry 910 may be configured to set up and maintain at least a wireless connection with the fifth node 105, the second node 102, the third node 103, and/or any of the other nodes in the ES 107. Circuitry may be understood herein as a hardware component.
Hence, embodiments herein also relate to the first node 101 operative to handle the data traffic in the ES 107 within the communications network 100. The data traffic is configured to have an unknown destination. The ES 107 is configured to comprise the plurality of nodes 108 configured to provide multi-homing service. The first node 101 may comprise the processing circuitry 904 and the memory 905, said memory 905 containing instructions executable by said processing circuitry 904, whereby the first node 101 is further operative to perform the actions described herein in relation to the first node 101, e.g., in Figure 4.
Figure 10 depicts two different examples in panels a) and b), respectively, of the arrangement that the third node 103 may comprise to perform the method actions described above in relation to Figure 5. In some embodiments, the third node 103 may comprise the following arrangement depicted in Figure 10a. The third node 103 is for handling data traffic in the ES 107 within the communications network 100. The data traffic is configured to have an unknown destination. The ES 107 is configured to comprise the plurality of nodes 108 being configured to provide multi-homing service.
The plurality of nodes 108 is configured to comprise the third node 103.
Several embodiments are comprised herein. Components from one embodiment may be tacitly assumed to be present in another embodiment and it will be obvious to a person skilled in the art how those components may be used in the other exemplary embodiments. The detailed description of some of the following corresponds to the same references provided above, in relation to the actions described for the third node 103, and will thus not be repeated here. For example, the data traffic may be, e.g., BUM traffic.
In Figure 10, optional modules are indicated with dashed boxes.
The third node 103 is configured to, e.g. by means of a sending unit 1001 within the third node 103 configured to, send the second indication to the second node 102 within the ES 107. The second indication is configured to indicate the third indication that is to be used when forwarding data traffic with unknown destination to the third node 103 as backup path within the ES 107 to forward data traffic.
In some embodiments, the second indication may be configured to be comprised in the Extended Community part of an Ethernet Auto-Discovery per ES advertisement message.
The third indication may be configured to be a Backup ES Label defined in an Extended Community.
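As an illustrative sketch only (not part of the application), such a Backup ES Label could be carried in an 8-byte BGP extended community laid out like the ESI Label extended community of RFC 7432 (type, sub-type, flags, reserved bytes, 3-byte label field); the type value 0x06 follows RFC 7432, while the sub-type 0x02 for a "Backup ES Label" is purely a hypothetical value chosen for this sketch:

```python
import struct

def encode_backup_es_label(label: int, etype: int = 0x06, esubtype: int = 0x02) -> bytes:
    """Pack a 20-bit MPLS label into an 8-byte BGP extended community.

    Layout mirrors the ESI Label extended community of RFC 7432;
    the sub-type 0x02 for a 'Backup ES Label' is a hypothetical value.
    """
    if not 0 <= label < 2 ** 20:
        raise ValueError("MPLS label must fit in 20 bits")
    flags = 0x00
    reserved = 0x0000
    # The 20-bit label occupies the high-order bits of the 3-byte field.
    label_field = label << 4
    return struct.pack("!BBBH3s", etype, esubtype, flags, reserved,
                       label_field.to_bytes(3, "big"))

def decode_backup_es_label(community: bytes) -> int:
    """Recover the 20-bit label from the last 3 bytes of the community."""
    return int.from_bytes(community[5:8], "big") >> 4
```

A receiving node would decode the label from the last three octets of the community and install it as the expected inner label for backup-path traffic.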
The third node 103 is further configured to, e.g. by means of a receiving unit 1002 within the third node 103 configured to, receive, along with the third indication, the data traffic with unknown destination from the second node 102.
In some embodiments, the third node 103 is further configured to, e.g. by means of a forwarding unit 1003 within the third node 103 configured to, forward, based on the third indication configured to be received, the data traffic configured to be received to the destination node 106 configured to have a third connection 123 to the ES 107.
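The forwarding decision the third node 103 applies on receipt of the traffic can be sketched as below; the function and parameter names are illustrative assumptions, not taken from the application. The point is that a non-DF node, which would normally drop BUM traffic toward the multi-homed destination, forwards it when it arrives carrying the previously advertised backup ES label:

```python
def handle_bum(payload: bytes, label: int, *, is_df: bool, backup_label: int):
    """Decide whether a node forwards BUM traffic toward the CE.

    A non-DF node normally drops BUM traffic toward the multi-homed
    destination, but traffic carrying the advertised backup ES label
    (the 'third indication') is forwarded despite the node not being DF.
    """
    if is_df or label == backup_label:
        return ("forward", payload)
    return ("drop", None)
```

Used this way, the backup label acts as an explicit authorization overriding the normal split-horizon/DF filtering at the third node.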
The embodiments herein may be implemented through one or more processors, such as a processor 1004 in the third node 103 depicted in Figure 10, together with computer program code for performing the functions and actions of the embodiments herein. The program code mentioned above may also be provided as a computer program product, for instance in the form of a data carrier carrying computer program code for performing the embodiments herein when being loaded into the third node 103. One such carrier may be in the form of a CD ROM disc. It is however feasible with other data carriers such as a memory stick. The computer program code may furthermore be provided as pure program code on a server and downloaded to the third node 103.
The third node 103 may further comprise a memory 1005 comprising one or more memory units. The memory 1005 is arranged to be used to store obtained information, store data, configurations, schedulings, and applications etc. to perform the methods herein when being executed in the third node 103.
In some embodiments, the third node 103 may receive information from, e.g., the first node 101, the second node 102, any of the other nodes in the ES 107, and/or the destination node 106, through a receiving port 1006. In some examples, the receiving port 1006 may be, for example, connected to one or more antennas in third node 103. In other embodiments, the third node 103 may receive information from another structure in the communications network 100 through the receiving port 1006. Since the receiving port 1006 may be in communication with the processor 1004, the receiving port 1006 may then send the received information to the processor 1004. The receiving port 1006 may also be configured to receive other information.
The processor 1004 in the third node 103 may be further configured to transmit or send information to e.g., the first node 101, the second node 102, any of the other nodes in the ES 107, and/or the destination node 106, through a sending port 1007, which may be in communication with the processor 1004, and the memory 1005.
Those skilled in the art will also appreciate that the sending unit 1001, the receiving unit 1002, and the forwarding unit 1003 described above may refer to a combination of analog and digital circuits, and/or one or more processors configured with software and/or firmware, e.g., stored in memory, that, when executed by the one or more processors such as the processor 1004, perform as described above. One or more of these processors, as well as the other digital hardware, may be included in a single Application-Specific Integrated Circuit (ASIC), or several processors and various digital hardware may be distributed among several separate components, whether individually packaged or assembled into a System-on-a-Chip (SoC). Those skilled in the art will also appreciate that any of the sending unit 1001, the receiving unit 1002, and the forwarding unit 1003 described above may be the processor 1004 of the third node 103, or an application running on such processor 1004.
Thus, the methods according to the embodiments described herein for the third node 103 may be respectively implemented by means of a computer program 1008 product, comprising instructions, i.e., software code portions, which, when executed on at least one processor 1004, cause the at least one processor 1004 to carry out the actions described herein, as performed by the third node 103. The computer program 1008 product may be stored on a computer-readable storage medium 1009. The computer-readable storage medium 1009, having stored thereon the computer program 1008, may comprise instructions which, when executed on at least one processor 1004, cause the at least one processor 1004 to carry out the actions described herein, as performed by the third node 103. In some embodiments, the computer-readable storage medium 1009 may be a non-transitory computer-readable storage medium, such as a CD ROM disc or a memory stick, or it may be stored in the cloud space. In other embodiments, the computer program 1008 product may be stored on a carrier containing the computer program, wherein the carrier is one of an electronic signal, optical signal, radio signal, or the computer-readable storage medium 1009, as described above.
The third node 103 may comprise an interface unit to facilitate communications between the third node 103 and other nodes or devices, e.g., the first node 101. In some particular examples, the interface may, for example, include a transceiver configured to transmit and receive radio signals over an air interface in accordance with a suitable standard.
In other embodiments, the third node 103 may comprise the following arrangement depicted in Figure 10b. The third node 103 may comprise a processing circuitry 1004, e.g., one or more processors such as the processor 1004, in the third node 103 and the memory 1005. The third node 103 may also comprise a radio circuitry 1010, which may comprise e.g., the receiving port 1006 and the sending port 1007. The processing circuitry 1004 may be configured to, or operable to, perform the method actions according to Figure 5, in a similar manner as that described in relation to Figure 10a. The radio circuitry 1010 may be configured to set up and maintain at least a wireless connection with the first node 101, the second node 102, any of the other nodes in the ES 107, and/or the destination node 106. Circuitry may be understood herein as a hardware component.
Hence, embodiments herein also relate to the third node 103 operative to handle the data traffic in the ES 107 within the communications network 100. The data traffic is configured to have an unknown destination. The ES 107 is configured to comprise the plurality of nodes 108 configured to provide multi-homing service. The plurality of nodes 108 is configured to comprise the third node 103. The third node 103 may comprise the processing circuitry 1004 and the memory 1005, said memory 1005 containing instructions executable by said processing circuitry 1004, whereby the third node 103 is further operative to perform the actions described herein in relation to the third node 103, e.g., in Figure 5.
Generally, all terms used herein are to be interpreted according to their ordinary meaning in the relevant technical field, unless a different meaning is clearly given and/or is implied from the context in which it is used. All references to a/an/the element, apparatus, component, means, step, etc. are to be interpreted openly as referring to at least one instance of the element, apparatus, component, means, step, etc., unless explicitly stated otherwise. The steps of any methods disclosed herein do not have to be performed in the exact order disclosed, unless a step is explicitly described as following or preceding another step and/or where it is implicit that a step must follow or precede another step.
Any feature of any of the embodiments disclosed herein may be applied to any other embodiment, wherever appropriate. Likewise, any advantage of any of the embodiments may apply to any other embodiments, and vice versa. Other objectives, features and advantages of the enclosed embodiments will be apparent from the following description.
As used herein, the expression “at least one of:” followed by a list of alternatives separated by commas, and wherein the last alternative is preceded by the “and” term, may be understood to mean that only one of the list of alternatives may apply, more than one of the list of alternatives may apply or all of the list of alternatives may apply. This expression may be understood to be equivalent to the expression “at least one of:” followed by a list of alternatives separated by commas, and wherein the last alternative is preceded by the “or” term.
When using the word “comprise” or “comprising”, it shall be interpreted as non-limiting, i.e. meaning “consist at least of”.

The embodiments herein are not limited to the above described preferred embodiments. Various alternatives, modifications and equivalents may be used. Therefore, the above embodiments should not be taken as limiting the scope of the invention.
As used herein, the expression “in some embodiments” has been used to indicate that the features of the embodiment described may be combined with any other embodiment or example disclosed herein.
As used herein, the expression “in some examples” has been used to indicate that the features of the example described may be combined with any other embodiment or example disclosed herein.
A processor, as used herein, may be understood to be a hardware component.
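Bringing the description together, the behaviour of the second node 102 (DF forwarding, backup-label encapsulation on failure of the second connection 122, and delayed route withdrawal until a new DF is elected) can be sketched as a small state machine. All class, method, and return-value names here are illustrative assumptions for exposition, not part of the claimed methods:

```python
class SecondNode:
    """Sketch of the DF failover behaviour described for the second node.

    Illustrative only: the second node forwards BUM traffic to the CE
    while its CE link is up, redirects it to the third node with the
    backup ES label after a link failure, and delays withdrawal of the
    Ethernet A-D per ES route until a new DF has been elected.
    """

    def __init__(self, backup_label=None):
        self.is_df = True            # elected Designated Forwarder
        self.ce_link_up = True       # the second connection to the CE
        self.backup_label = backup_label  # the 'third indication'
        self.ad_per_es_withdrawn = False

    def on_bum_traffic(self, payload):
        if self.is_df and self.ce_link_up:
            return ("to_ce", payload)
        if self.is_df and self.backup_label is not None:
            # Encapsulate with the backup ES label and send only to the
            # third node; all other ES peers are skipped.
            return ("to_third_node", (self.backup_label, payload))
        return ("drop", None)

    def on_ce_link_failure(self):
        self.ce_link_up = False
        # The ES route is withdrawn immediately; the A-D per ES route
        # withdrawal is deliberately delayed.
        return "withdraw_es_route"

    def on_new_df_elected(self):
        self.is_df = False
        self.ad_per_es_withdrawn = True
        return "withdraw_ad_per_es_route"
```

The delayed withdrawal is the interesting design point: keeping the Ethernet A-D per ES route alive while the backup path is in use prevents remote nodes from purging the segment before a new DF takes over.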

Claims
1. A method performed by a second node (102) for handling data traffic in an Ethernet Segment, ES, (107) within a communications network (100), the data traffic having an unknown destination, the ES (107) comprising a plurality of nodes (108) providing multi-homing service, the plurality of nodes (108) comprising the second node (102), the method comprising:
- sending (301) a first indication to a first node (101) having a first connection (121) to the ES (107), the first indication indicating that the second node (102) is, within the ES (107), a Designated Forwarder, and
- receiving (304), based on the sent first indication, data traffic with unknown destination from the first node (101).
2. The method according to claim 1, further comprising:
- forwarding (306) the received data traffic to a third node (103) comprised in the ES (107), based on a determination that:
i. a second connection (122) between the second node (102) and a destination node (106) has failed, and
ii. the second node (102) is the Designated Forwarder, DF, within the ES (107), and
- refraining (307) from forwarding the received data traffic to the other nodes comprised in the ES (107) different from the third node (103).
3. The method according to claim 2, further comprising:
- receiving (302) a second indication from the third node (103), the second indication indicating a third indication to be used when forwarding data traffic with unknown destination to the third node (103) as backup path within the ES
(107) to forward data traffic, and wherein the received data traffic is forwarded to the third node (103) along with the third indication.
4. The method according to claim 3, wherein at least one of the first indication and the second indication is comprised in an Extended Community part of an Ethernet
Auto-Discovery per ES advertisement message.
5. The method according to claim 3, wherein the third indication is a Backup ES Label defined in an Extended Community.
6. The method according to any of claims 3-5, further comprising:
- encapsulating (305) the received data traffic with the third indication prior to forwarding (306) the received data traffic to the third node (103).
7. The method according to any of claims 2-6, further comprising:
determining (303) that the third node (103) is a backup path to be used within the ES (107) to forward data traffic to the destination node (106), upon failure of a link between the second node (102) and the destination node (106).
8. The method according to any of claims 2-7, further comprising:
- withdrawing (308), based on the determination that the second connection (122) between the second node (102) and the destination node (106) has failed, an ES route associated with the ES (107),
- delaying (309) withdrawal of an Ethernet Auto-Discovery per ES route associated with the ES (107) until receiving a fourth indication from one of the nodes in the ES (107) indicating that a new DF of data traffic has been elected within the ES (107),
- receiving (310) a fourth indication from a fourth node (104) comprised in the ES (107), the fourth indication indicating that the fourth node (104) is the new DF within the ES (107), and
- withdrawing (311) the Ethernet Auto-Discovery per ES route based on the receipt of the fourth indication.
9. A computer program (813), comprising instructions which, when executed on at least one processor (809), cause the at least one processor (809) to carry out the method according to any one of claims 1 to 8.
10. A computer-readable storage medium (814), having stored thereon a computer program (813), comprising instructions which, when executed on at least one processor (809), cause the at least one processor (809) to carry out the method according to any one of claims 1 to 8.
11. A method performed by a first node (101) for handling data traffic in an Ethernet Segment, ES, (107) within a communications network (100), the data traffic having an unknown destination, the ES (107) comprising a plurality of nodes (108) providing multi-homing service, the method comprising:
- receiving (401) a first indication from a second node (102) comprised in the ES (107), the first indication indicating that the second node (102) is, within the ES (107), a Designated Forwarder, DF, and, upon receipt of data traffic with unknown destination to propagate via the ES (107),
- forwarding (402) the data traffic with unknown destination to the second node (102), based on the received first indication, and
- refraining (403) from forwarding the data traffic to the other nodes comprised in the ES (107) different from the second node (102).
12. The method according to claim 11, wherein the first indication is comprised in an Extended Community part of an Ethernet Auto-Discovery per ES advertisement message.
13. The method according to any of claims 11-12, further comprising, following a withdrawal of membership to the ES (107) by the second node (102):
- receiving (404) a fourth indication from a fourth node (104) comprised in the ES (107), the fourth indication indicating that the fourth node (104) is, within the ES (107), the new DF, and upon receipt of additional data traffic with unknown destination to propagate via the ES (107),
- forwarding (405) the additional data traffic to the fourth node (104), based on the received fourth indication, and
- refraining (406) from forwarding the data traffic to the other nodes comprised in the ES (107) different from the fourth node (104).
14. A computer program (908), comprising instructions which, when executed on at least one processor (904), cause the at least one processor (904) to carry out the method according to any one of claims 11 to 13.
15. A computer-readable storage medium (909), having stored thereon a computer program (908), comprising instructions which, when executed on at least one processor (904), cause the at least one processor (904) to carry out the method according to any one of claims 11 to 13.
16. A method performed by a third node (103) for handling data traffic in an Ethernet Segment, ES, (107) within a communications network (100), the data traffic having an unknown destination, the ES (107) comprising a plurality of nodes (108) providing multi-homing service, the plurality of nodes (108) comprising the third node (103), the method comprising:
- sending (501) a second indication to a second node (102) within the ES (107), the second indication indicating a third indication that is to be used when forwarding data traffic with unknown destination to the third node (103) as backup path within the ES (107) to forward data traffic,
- receiving (502), along with the third indication, data traffic with unknown destination from the second node (102), and
- forwarding (503), based on the received third indication, the received data traffic to a destination node (106) having a third connection (123) to the ES (107).
17. The method according to claim 16, wherein the second indication is comprised in an Extended Community part of an Ethernet Auto-Discovery per ES advertisement message.
18. The method according to claim 17, wherein the third indication is a Backup ES Label defined in an Extended Community.
19. A computer program (1008), comprising instructions which, when executed on at least one processor (1004), cause the at least one processor (1004) to carry out the method according to any one of claims 16 to 18.
20. A computer-readable storage medium (1009), having stored thereon a computer program (1008), comprising instructions which, when executed on at least one processor (1004), cause the at least one processor (1004) to carry out the method according to any one of claims 16 to 18.
21. A second node (102) for handling data traffic in an Ethernet Segment, ES, (107) within a communications network (100), the data traffic being configured to have an unknown destination, the ES (107) being configured to comprise a plurality of nodes (108) being configured to provide multi-homing service, the plurality of nodes (108) being configured to comprise the second node (102), the second node (102) being further configured to:
- send a first indication to a first node (101) configured to have a first connection (121) to the ES (107), the first indication being configured to indicate that the second node (102) is, within the ES (107), a Designated Forwarder, and
- receive, based on the first indication configured to be sent, data traffic with unknown destination from the first node (101).
22. The second node (102) according to claim 21, being further configured to:
- forward the data traffic configured to be received to a third node (103) configured to be comprised in the ES (107), based on a determination that:
i. a second connection (122) between the second node (102) and a destination node (106) has failed, and
ii. the second node (102) is the Designated Forwarder, DF, within the ES (107), and
- refrain from forwarding the data traffic configured to be received to the other nodes configured to be comprised in the ES (107) different from the third node (103).
23. The second node (102) according to claim 22, being further configured to:
- receive a second indication from the third node (103), the second indication being configured to indicate a third indication to be used when forwarding data traffic with unknown destination to the third node (103) as backup path within the ES (107) to forward data traffic, and wherein the data traffic configured to be received is configured to be forwarded to the third node (103) along with the third indication.
24. The second node (102) according to claim 23, wherein at least one of the first
indication and the second indication is configured to be comprised in an Extended Community part of an Ethernet Auto-Discovery per ES advertisement message.
25. The second node (102) according to claim 23, wherein the third indication is
configured to be a Backup ES Label defined in an Extended Community.
26. The second node (102) according to any of claims 23-25, being further configured to:
- encapsulate the data traffic configured to be received with the third indication prior to forwarding the data traffic configured to be received to the third node (103).
27. The second node (102) according to any of claims 22-26, being further configured to:
- determine that the third node (103) is a backup path to be used within the ES (107) to forward data traffic to the destination node (106), upon failure of a link between the second node (102) and the destination node (106).
28. The second node (102) according to any of claims 22-27, being further configured to:
- withdraw, based on the determination that the second connection (122) between the second node (102) and the destination node (106) has failed, an ES route associated with the ES (107),
- delay withdrawal of an Ethernet Auto-Discovery per ES route associated with the ES (107) until receiving a fourth indication from one of the nodes in the ES (107) configured to indicate that a new DF of data traffic has been elected within the ES (107),
- receive a fourth indication from a fourth node (104) configured to be
comprised in the ES (107), the fourth indication being configured to indicate that the fourth node (104) is the new DF within the ES (107), and
- withdraw the Ethernet Auto-Discovery per ES route based on the receipt of the fourth indication.
29. A first node (101) for handling data traffic in an Ethernet Segment, ES, (107) within a communications network (100), the data traffic being configured to have an unknown destination, the ES (107) being configured to comprise a plurality of nodes (108) configured to provide multi-homing service, the first node (101) being further configured to:
- receive a first indication from a second node (102) configured to be comprised in the ES (107), the first indication being configured to indicate that the second node (102) is, within the ES (107), a Designated Forwarder, DF, and, upon receipt of data traffic with unknown destination to propagate via the ES (107),
- forward the data traffic with unknown destination to the second node (102), based on the first indication configured to be received, and
- refrain from forwarding the data traffic to the other nodes comprised in the ES (107) different from the second node (102).
30. The first node (101) according to claim 29, wherein the first indication is configured to be comprised in an Extended Community part of an Ethernet Auto-Discovery per ES advertisement message.
31. The first node (101) according to any of claims 29-30, being further configured to, following a withdrawal of membership to the ES (107) by the second node (102):
- receive a fourth indication from a fourth node (104) configured to be
comprised in the ES (107), the fourth indication being configured to indicate that the fourth node (104) is, within the ES (107), the new DF, and upon receipt of additional data traffic with unknown destination to propagate via the ES (107),
- forward the additional data traffic to the fourth node (104), based on the fourth indication configured to be received, and
- refrain from forwarding the data traffic to the other nodes comprised in the ES (107) different from the fourth node (104).
32. A third node (103) for handling data traffic in an Ethernet Segment, ES, (107) within a communications network (100), the data traffic being configured to have an unknown destination, the ES (107) being configured to comprise a plurality of nodes (108) configured to provide multi-homing service, the plurality of nodes (108) being configured to comprise the third node (103), the third node (103) being further configured to:
- send a second indication to a second node (102) within the ES (107), the second indication being configured to indicate a third indication that is to be used when forwarding data traffic with unknown destination to the third node (103) as backup path within the ES (107) to forward data traffic,
- receive, along with the third indication, data traffic with unknown destination from the second node (102), and
- forward, based on the third indication configured to be received, the data traffic configured to be received to a destination node (106) configured to have a third connection (123) to the ES (107).
33. The third node (103) according to claim 32, wherein the second indication is configured to be comprised in an Extended Community part of an Ethernet Auto-Discovery per ES advertisement message.

34. The third node (103) according to claim 33, wherein the third indication is configured to be a Backup ES Label defined in an Extended Community.
PCT/IN2018/050887 2018-12-27 2018-12-27 First node, second node, third node and methods performed thereby for handling data traffic in an ethernet segment WO2020136661A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/IN2018/050887 WO2020136661A1 (en) 2018-12-27 2018-12-27 First node, second node, third node and methods performed thereby for handling data traffic in an ethernet segment


Publications (1)

Publication Number Publication Date
WO2020136661A1 true WO2020136661A1 (en) 2020-07-02

Family

ID=71128525



Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170141963A1 (en) * 2015-11-18 2017-05-18 Telefonaktiebolaget L M Ericsson (Publ) Designated forwarder (df) election and re-election on provider edge (pe) failure in all-active redundancy topology
EP2996290B1 (en) * 2013-06-30 2018-05-30 Huawei Technologies Co., Ltd. Packet forwarding method, apparatus, and system


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115955439A (en) * 2023-03-13 2023-04-11 苏州浪潮智能科技有限公司 Transmission control method, system, device and storage medium of data message
CN115955439B (en) * 2023-03-13 2023-05-23 苏州浪潮智能科技有限公司 Data message transmission control method, system, device and storage medium


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 18944165
Country of ref document: EP
Kind code of ref document: A1
NENP Non-entry into the national phase
Ref country code: DE
122 Ep: pct application non-entry in european phase
Ref document number: 18944165
Country of ref document: EP
Kind code of ref document: A1