WO2017141076A1 - Stateless multicast protocol for low-power and lossy networks - Google Patents


Info

Publication number
WO2017141076A1
WO2017141076A1 (PCT/IB2016/050918)
Authority
WO
WIPO (PCT)
Prior art keywords
network device
bier
network
dao
message
Prior art date
Application number
PCT/IB2016/050918
Other languages
English (en)
Inventor
Ganesh Prasad PALANKAR
Nobin Mathew
Original Assignee
Telefonaktiebolaget Lm Ericsson (Publ)
Priority date
Filing date
Publication date
Application filed by Telefonaktiebolaget Lm Ericsson (Publ) filed Critical Telefonaktiebolaget Lm Ericsson (Publ)
Priority to PCT/IB2016/050918
Publication of WO2017141076A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 Data switching networks
    • H04L 12/02 Details
    • H04L 12/16 Arrangements for providing special services to substations
    • H04L 12/18 Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
    • H04L 12/189 Arrangements for providing special services to substations for broadcast or conference, e.g. multicast in combination with wireless systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 Routing or path finding of packets in data switching networks
    • H04L 45/02 Topology update or discovery
    • H04L 45/025 Updating only a limited number of routers, e.g. fish-eye update
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 Routing or path finding of packets in data switching networks
    • H04L 45/16 Multipoint routing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 40/00 Communication routing or communication path finding
    • H04W 40/02 Communication route or path selection, e.g. power-based or shortest path routing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 1/00 Arrangements for detecting or preventing errors in the information received
    • H04L 2001/0092 Error control systems characterised by the topology of the transmission link
    • H04L 2001/0093 Point-to-multipoint
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 1/00 Arrangements for detecting or preventing errors in the information received
    • H04L 2001/0092 Error control systems characterised by the topology of the transmission link
    • H04L 2001/0097 Relays
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00 Reducing energy consumption in communication networks
    • Y02D 30/70 Reducing energy consumption in communication networks in wireless communication networks

Definitions

  • Embodiments of the invention relate to the field of packet networks; and more specifically, to a stateless multicast protocol for Low-Power and Lossy Networks.
  • Low-Power and Lossy Networks (LLNs) are a class of network optimized to save energy while still supporting application traffic.
  • Both the network devices forming an LLN and their interconnects are constrained: the devices have low battery power, little memory, and limited processing power, and the links coupling them are characterized by high loss rates, low data rates, and instability.
  • the Internet Engineering Task Force (IETF) Routing Over Low power and Lossy networks (ROLL) workgroup standardized Routing Protocol for Low-Power and Lossy Networks (RPL) as a routing protocol for such networks.
  • RPL is an IPv6 routing protocol that builds a Destination Oriented Directed Acyclic Graph (DODAG) topology rooted at a Low Power and Lossy Network Border Router (LBR), as defined in Request for Comments (RFC) 7102, "Terms Used in Routing for Low-Power and Lossy Networks."
  • DODAG is a directed graph rooted at the LBR.
  • RPL is used to build an IPv6 based routing topology over a mesh network using an Objective Function (OF) together with a set of constraints on the network devices or on the environment in which the network devices are operating.
  • Each RPL instance contains one or more DODAG roots, which can be coupled to another network that does not have the same constraints as the LLN.
  • an LLN device may not be able to maintain a multicast forwarding topology when operating with limited memory.
  • topology maintenance may involve selecting a connected dominating set used to forward multicast messages to all network devices in an administrative domain.
  • Multicast is a significant application supported on many kinds of networks. Given the constraints of LLNs, running traditional multicast protocols on such networks may exhaust nodal resources, because those protocols expect the nodes to store per-flow state.
  • a method, in a first network device of a low-power and lossy network (LLN) to be communicatively coupled to a plurality of other network devices of the LLN, of enabling multicast routing, is described.
  • the method includes receiving a destination advertisement object (DAO) message from a second network device from the plurality of other network devices, where the DAO message includes a bit index explicit replication (BIER) header field including an identifier of a multicast domain; and updating a forwarding table entry to include an identifier of the second network device from which the DAO message is received, where the forwarding table entry causes the first network device to forward, to the second network device, BIER encapsulated packets of the multicast domain which are to be forwarded at the second network device towards one or more receivers of the multicast domain.
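The claimed receive-and-update step can be sketched as follows. This is a minimal illustration, not the patent's concrete implementation: the message layout, field names, and dictionary-based forwarding table are assumptions made for the sketch.

```python
# Sketch of the claimed method: on receiving a DAO that carries a BIER
# header field, record the sender as a downstream next hop for that
# multicast (BIER) domain. Structures and names are illustrative.

def handle_dao(forwarding_table, dao_message, sender_id):
    """Update the per-domain forwarding entry with the DAO sender."""
    bier = dao_message.get("bier")
    if bier is None:
        return forwarding_table  # plain RPL DAO, no BIER processing
    domain = bier["domain"]
    entry = forwarding_table.setdefault(domain, set())
    # BIER encapsulated packets of this domain will now be forwarded to
    # sender_id, which relays them towards the multicast receivers.
    entry.add(sender_id)
    return forwarding_table

table = {}
dao = {"bier": {"domain": 7, "bfr_id": 4, "bfr_prefix": "2001:db8::4"}}
handle_dao(table, dao, sender_id="ND104")
print(table)  # {7: {'ND104'}}
```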
  • a first network device of a low-power and lossy network (LLN) to be communicatively coupled to a plurality of other network devices of the LLN is described.
  • the first network device includes a non-transitory computer readable medium to store instructions; and a processor coupled with the non-transitory computer readable medium to process the stored instructions.
  • the processor is to receive a destination advertisement object (DAO) message from a second network device from the plurality of other network devices, where the DAO message includes a bit index explicit replication (BIER) header field including an identifier of a multicast domain, and to update a forwarding table entry to include an identifier of the second network device from which the DAO message is received, where the forwarding table entry causes the first network device to forward, to the second network device, BIER encapsulated packets of the multicast domain which are to be forwarded at the second network device towards one or more receivers of the multicast domain.
  • A non-transitory computer readable storage medium that provides instructions is also described.
  • the instructions when executed by a processor of a first network device of a low-power and lossy network (LLN) to be communicatively coupled to a plurality of other network devices of the LLN, cause said processor to perform operations including receiving a destination advertisement object (DAO) message from a second network device from the plurality of other network devices, where the DAO message includes a bit index explicit replication (BIER) header field including an identifier of a multicast domain; and updating a forwarding table entry to include an identifier of the second network device from which the DAO message is received, where the forwarding table entry causes the first network device to forward, to the second network device, BIER encapsulated packets of the multicast domain which are to be forwarded at the second network device towards one or more receivers of the multicast domain.
  • FIG. 1 illustrates an exemplary Low-Power and Lossy Network (LLN) including BIER-enabled network devices in accordance with some embodiments of the invention.
  • Figure 2A illustrates an exemplary DIO message including a BIER header field for enabling multicast routing in an LLN in accordance with some embodiments of the invention.
  • Figure 2B illustrates an exemplary DAO message including a BIER header field for enabling multicast routing in an LLN in accordance with some embodiments of the invention.
  • Figures 3A-E illustrate block diagrams of an exemplary LLN 300 in which multicast routing is enabled in accordance with some embodiments of the invention.
  • Figure 4 illustrates exemplary operations performed at a first network device for enabling stateless multicast routing in an LLN in accordance with some embodiments of the invention.
  • Figure 5A illustrates connectivity between network devices (NDs) within an exemplary network, as well as three exemplary implementations of the NDs, according to some embodiments of the invention.
  • Figure 5B illustrates an exemplary way to implement a special-purpose network device according to some embodiments of the invention.
  • FIG. 5C illustrates various exemplary ways in which virtual network elements (VNEs) may be coupled according to some embodiments of the invention.
  • Figure 5D illustrates a network with a single network element (NE) on each of the NDs, and within this straightforward approach contrasts a traditional distributed approach (commonly used by traditional routers) with a centralized approach for maintaining reachability and forwarding information (also called network control), according to some embodiments of the invention.
  • Figure 5E illustrates the simple case of where each of the NDs implements a single NE, but a centralized control plane has abstracted multiple of the NEs in different NDs into (to represent) a single NE in one of the virtual network(s), according to some embodiments of the invention.
  • Figure 5F illustrates a case where multiple VNEs are implemented on different NDs and are coupled to each other, and where a centralized control plane has abstracted these multiple VNEs such that they appear as a single VNE within one of the virtual networks, according to some embodiments of the invention.
  • Figure 6 illustrates a general purpose control plane device with centralized control plane (CCP) software 650, according to some embodiments of the invention.
  • In the following description, numerous specific details such as resource partitioning/sharing/duplication implementations, types and interrelationships of system components, and logic partitioning/integration choices are set forth in order to provide a more thorough understanding of the present invention. It will be appreciated, however, by one skilled in the art that the invention may be practiced without such specific details. In other instances, control structures, gate level circuits and full software instruction sequences have not been shown in detail in order not to obscure the invention. Those of ordinary skill in the art, with the included descriptions, will be able to implement appropriate functionality without undue experimentation.
  • references in the specification to "one embodiment,” “an embodiment,” “an example embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to affect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
  • Bracketed text and blocks with dashed borders may be used herein to illustrate optional operations that add additional features to embodiments of the invention. However, such notation should not be taken to mean that these are the only options or optional operations, and/or that blocks with solid borders are not optional in certain embodiments of the invention.
  • "Coupled" is used to indicate that two or more elements, which may or may not be in direct physical or electrical contact with each other, co-operate or interact with each other.
  • "Connected" is used to indicate the establishment of communication between two or more elements that are coupled with each other.
  • The Routing Protocol for LLNs (RPL) is modified to include support for Bit Indexed Explicit Replication (BIER) and provide multicast routing in LLNs.
  • a first network device of an LLN receives a destination advertisement object (DAO) message from a second network device from the plurality of other network devices of the LLN.
  • the received DAO message includes a Bit Index Explicit Replication (BIER) header field including an identifier of a multicast domain.
  • the first network device updates a forwarding table entry to include an identifier of the second network device from which the DAO message is received, where the forwarding table entry causes the first network device to forward, to the second network device, BIER encapsulated packets of the multicast domain which are to be forwarded at the second network device towards one or more receivers of the multicast domain.
  • methods and apparatuses are provided for enabling multicast routing in LLNs through RPL extended with the BIER protocol in a network including BIER-capable and non-BIER-capable network devices.
  • the DAO message received at the first network device further includes an identifier of a third network device from the LLN, wherein the third network device is the closest BIER enabled network device of the LLN on a path coupling the first network device with one or more receivers of the multicast domain.
  • When the second network device is not BIER enabled, the first network device establishes an Internet Protocol (IP) tunnel between itself and the third network device, such that the first network device forwards BIER packets destined to the multicast domain through that IP tunnel.
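The tunnelling decision above can be sketched as follows, under the assumption that the downward path is known as an ordered list of devices with BIER-capability flags (a representation invented for illustration):

```python
# Sketch: walk the downward path and pick the nearest BIER-capable
# device as the IP-tunnel endpoint, so the non-capable hops in between
# only see ordinary unicast IP packets carrying the BIER payload.

def tunnel_endpoint(path):
    """path: ordered list of (nd_id, bier_capable) from the child onward."""
    for nd_id, capable in path:
        if capable:
            return nd_id  # closest BIER-enabled ND: tunnel BIER traffic here
    return None  # no BIER device downstream; BIER forwarding not possible

# ND_D and ND_E are not BIER capable; ND_G is the closest capable device.
path = [("ND_D", False), ("ND_E", False), ("ND_G", True)]
print(tunnel_endpoint(path))  # ND_G
```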
  • the embodiments described in the foregoing present clear advantages for multicast routing in LLNs.
  • the embodiments avoid building an explicit multicast tree over the LLN network. Further in these embodiments, there is no need for maintaining per-flow multicast states at each network device thus conserving the network device's resources.
  • the embodiments present a stateless multicast protocol that overcomes the large memory requirements of standard multicast protocols, providing an efficient multicast mechanism for Low-Power and Lossy Networks that does not exhaust the limited physical resources of an LLN's network devices (i.e., minimal memory, reduced CPU performance, and a limited power source).
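The statelessness claim can be made concrete with a sketch of BIER forwarding: the packet itself carries a bit string naming its egress routers, so a transit device keeps only one forwarding bit mask (F-BM) per neighbor, regardless of how many multicast flows exist. The values and structures below are illustrative, not taken from the patent.

```python
# Bit k of the packet's bit string is set when the egress router with
# BFR-ID k+1 must receive a copy. A transit node replicates the packet
# once per neighbor whose F-BM overlaps the remaining bits.

def bier_forward(bitstring, bift):
    """Return (neighbor, bits-served) pairs for one replication step.

    bift maps neighbor -> F-BM, the OR of the bits of all egress
    routers reachable through that neighbor.
    """
    copies = []
    remaining = bitstring
    for neighbor, f_bm in bift.items():
        served = remaining & f_bm
        if served:
            copies.append((neighbor, served))
            remaining &= ~f_bm  # clear bits already covered by this copy
    return copies

# Egress BFR-IDs 1, 3, and 4 want the packet (bit string 0b1101).
bift = {"ND_A": 0b0011, "ND_B": 0b1100}
print(bier_forward(0b1101, bift))  # [('ND_A', 1), ('ND_B', 12)]
```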
  • FIG 1 illustrates an exemplary LLN 100 including BIER-enabled network devices in accordance with some embodiments.
  • the LLN includes a first network device 102, a second network device 104 and a third network device 106.
  • each one of the network devices is implemented as described in further detail with reference to Figures 5A-F below.
  • the first network device (ND) 102 is coupled with the second network device (ND) 104, and with the third network device 106.
  • the first network device can be a border network device coupling the LLN network to another network (not illustrated), which can be more stable (e.g., Internet).
  • the second network device 104 may be coupled with one or more electronic devices (receivers 103) which are clients of a multicast group and are operative to request traffic destined for the multicast group.
  • the second network device is not directly coupled with receivers of the multicast group; instead it is coupled with another network device (e.g., ND 106) and is on a path coupling the receivers of the multicast group with the first network device 102.
  • ND 104 is a BIER-capable network device (i.e., ND 104 is operative to receive and forward BIER encapsulated packets).
  • ND 104 receives a request for traffic of a multicast group from at least one of the multicast receivers (e.g., electronic device 103a); the receivers 103 may use a multicast group management protocol (e.g., Internet Group Management Protocol (IGMP) or Multicast Listener Discovery (MLD)) to transmit the request for the multicast traffic.
  • Upon receipt of the request for the multicast traffic, ND 104 initiates a process for creating a path for the multicast traffic from a multicast source to the receivers of the multicast group.
  • the embodiments use the BIER protocol to provide multicast routing and provide an RPL extension for BIER to enable network devices of a BIER domain to exchange some BIER specific information among themselves.
  • RPL is extended to support BIER and to perform the distribution of this information. In the following embodiments, extensions to RPL to distribute BIER specific information are described.
  • ND 104 transmits an RPL DODAG Information Solicitation (DIS) message.
  • the DIS message may be used to solicit a DODAG Information Object from an RPL network device.
  • ND 104 may use DIS to probe its neighborhood for nearby DODAGs.
  • ND 104 may transmit the DIS message to probe network device 102 (and optionally network device 106) for a nearby multicast DODAG.
  • In some embodiments, operation 1 is skipped and ND 104 does not transmit a DIS message; instead, ND 104 receives, at operation 3a, a DODAG Information Object (DIO) message from ND 102 without having transmitted a DIS message.
  • ND 102 constructs a DIO message including a BIER header field.
  • the BIER header field included within the DIO message identifies a multicast domain (i.e., a BIER domain).
  • ND 102 is an LLN border network device (e.g., a border router), which is configured to act as a root of an RPL DODAG.
  • ND 102 may act as a gateway for communications between multiple multicast domains.
  • ND 102 can be operative to convert and translate between multiple multicast formats (e.g., between Protocol Independent Multicast (PIM) and BIER).
  • In other embodiments, ND 102 is a BIER-enabled network device which is part of the RPL DODAG other than the root of the DODAG.
  • ND 102 starts advertising DIO messages to neighboring network devices to mark its presence.
  • the DIO messages include the BIER header field used to inform the neighbor devices (i.e., the children of ND 102 in the hierarchy of the DODAG) that ND 102 is part of the multicast domain identified in the BIER header field.
  • the BIER header field includes an identifier which uniquely identifies the BIER domain.
  • FIG. 2A illustrates an exemplary DIO message 220 including a BIER header field 224 for enabling multicast routing in an LLN in accordance with some embodiments.
  • the DODAG Information Object (DIO) message carries information that allows a network device to discover an RPL Instance, learn its configuration parameters, select a DODAG parent set, and maintain the DODAG.
  • the DIO message 220 includes a first portion 222 and a second portion 224.
  • the first portion 222 of the DIO message 220 includes standard fields of a DIO message as defined in RFC 6550, which is hereby incorporated by reference.
  • the fields included in portion 222 will be described below with respect to parameter values that enable multicast routing with RPL.
  • "RPLInstancelD” is a field set by the DODAG root that indicates to which RPL Instance the DODAG belongs.
  • "Version Number” is a field set by the DODAG root to indicate the version of the DODAG being formed.
  • “Rank” is a field indicating the DODAG Rank of the network device sending the DIO message. The Grounded 'G' flag indicates whether the DODAG advertised can satisfy the application-defined goal.
  • "MOP" (Mode of Operation) is a field set by the DODAG root that identifies the mode of operation of the RPL Instance.
  • DODAGPreference (illustrated as "Prf" in Figure 2A) is a field that defines how preferable the root of this DODAG is compared to other DODAG roots within a same RPL instance.
  • Destination Advertisement Trigger Sequence Number (illustrated as DTSN in Figure 2A) is set by the network device issuing the DIO message and is used to maintain downward routes. "Flags” is a number of bits unused and reserved for flags. "Reserved” is a field of unused bits. "DODAGID” is an IPv6 address set by a DODAG root that uniquely identifies a DODAG.
  • DIO message 220 further includes a second portion 224, which includes a BIER header field.
  • a DIO message is extended to include BIER header information to identify the multicast domain to which the transmitting ND belongs (e.g., ND 102).
  • the BIER header field is included in an optional portion of the standard DIO message defined in RFC 6550.
  • the BIER header field 224 includes a "Type” field indicating the type of the packet, a "Capability” field that is a flag indicating BIER capabilities of the transmitting ND (here ND 102), a “Length” field indicating the length of the packet, a "Domain” field identifying the BIER domain to which the network device belongs (the domain field may identify a BIER domain or a BIER sub-domain), a "BFR-Prefix” field indicating the BIER Forwarding Router Prefix of the transmitting ND (ND 102), a BSL field indicating a Bit String Length supported by the transmitting network device, and a "BFR-ID” field indicating a BIER Forwarding Router ID assigned to the network device in this BIER domain.
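One possible wire layout for this BIER header option can be sketched as below. The patent names the fields (Type, Capability, Length, Domain, BFR-Prefix, BSL, BFR-ID), but the byte order, field widths, and Type value chosen here are assumptions made purely for illustration.

```python
# Illustrative packing of the BIER header option carried in DIO/DAO
# messages. Field widths are assumptions: 1-byte Type/Capability/Length/
# Domain/BSL, a 16-byte IPv6 BFR-Prefix, and a 2-byte BFR-ID.
import struct

def pack_bier_option(opt_type, capable, domain, bfr_prefix16, bsl, bfr_id):
    """Pack the option; bfr_prefix16 is a 16-byte IPv6 prefix."""
    assert len(bfr_prefix16) == 16
    body = struct.pack("!BB", domain, bsl) + bfr_prefix16 + struct.pack("!H", bfr_id)
    header = struct.pack("!BBB", opt_type, 1 if capable else 0, len(body))
    return header + body

opt = pack_bier_option(0x0A, True, domain=7,
                       bfr_prefix16=bytes(16), bsl=64, bfr_id=4)
print(len(opt))  # 3-byte header + 20-byte body = 23
```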
  • These fields represent BIER information which identifies the transmitting ND within the BIER domain and provides information about this ND (e.g., whether the ND is BIER capable and, if so, how it can be identified within the BIER domain).
  • This information enables the receiving ND (e.g., 104) to update forwarding tables (e.g., Bit Index Routing Table (BIRT) and/or Bit Index Forwarding Table (BIFT)) for receiving and forwarding multicast data packets through the BIER domain.
  • ND 104 determines, at operation 4a, whether to join the DODAG based on the objective function included in the received DIO message.
  • ND 104 parses the BIER header field and extracts BIER information related to the multicast domain to update a forwarding table (e.g., a BIRT and/or a BIFT) associated with the BIER domain identified in the BIER header field of the DIO message.
  • ND 104 may use the values of the "BFR-Prefix" field and the "BFR-ID" field of the DIO message to populate its BIRT entry for ND 102.
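This BIRT-population step can be sketched as follows; the table layout and field names are illustrative assumptions, not the patent's encoding.

```python
# Sketch of populating a Bit Index Routing Table (BIRT) entry from the
# BFR-Prefix and BFR-ID carried in a received DIO.

def update_birt(birt, dio_bier, learned_from):
    """Map the advertised BFR-ID to its prefix and to the neighbor that
    advertised it (the candidate next hop towards that BFR)."""
    birt[dio_bier["bfr_id"]] = {
        "bfr_prefix": dio_bier["bfr_prefix"],
        "next_hop": learned_from,
    }
    return birt

birt = {}
update_birt(birt, {"bfr_id": 1, "bfr_prefix": "2001:db8::1"}, "ND102")
print(birt[1]["next_hop"])  # ND102
```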
  • ND 104 identifies a set of parent network devices according to information carried in the DIO message.
  • ND 104 may receive additional DIO messages including BIER header fields from other network devices (not shown in Figure 1).
  • ND 104 then identifies from the plurality of network devices a set of parent network devices coupling ND 104 to the RPL DODAG and consequently to the BIER domain.
  • The adjacencies of BIER-enabled network devices (e.g., ND 102 and ND 104) belonging to a same BIER domain are determined by the adjacencies identified through the RPL protocol.
  • When ND 104 accepts to join the RPL DODAG, it constructs, at operation 4d, an RPL Destination Advertisement Object (DAO) message which includes a BIER header field to establish an upward route towards the root of the DODAG.
  • the BIER header field includes BIER information related to ND 104 with respect to the BIER domain. This BIER information is to be used by ND 102 for forwarding multicast data packets towards ND 104 through the BIER protocol.
  • the DAO message is then transmitted to ND 102 (which was identified as a parent of ND 104 in the RPL DODAG).
  • ND 102 and ND 104 are configured to operate in a "Storing" mode of operation, which is a fully stateful mode (e.g., the MOP is set to 3 in the RPL messages, such that the NDs support multicast in the storing mode of operation).
  • each network device stores routing tables for its DODAG.
  • each hop on an upward route examines its routing table to decide on the next hop.
  • the DAO message is constructed as described in more detail with reference to Figure 2B below.
  • FIG. 2B illustrates an exemplary DAO message including a BIER header field for enabling multicast routing in an LLN in accordance with some embodiments.
  • the DAO message 230 includes a first portion 232 and a second portion 234.
  • the DAO message 230 is used to propagate destination information upward along the multicast DODAG.
  • When the LLN operates in a storing mode, the DAO message is unicast by a child network device (e.g., ND 104) to the selected parent(s) (e.g., ND 102).
  • When the LLN operates in a non-storing mode, the DAO message is unicast to the DODAG root.
  • the DAO message may optionally, upon explicit request or error, be acknowledged by its destination with a Destination Advertisement Object Acknowledgement (DAO-ACK) message sent back to the sender of the DAO (e.g., the DAO-ACK described with reference to Figure 3E).
  • the first portion 232 of the DAO message 230 includes standard fields of a DAO message as defined in RFC 6550, which is hereby incorporated by reference. The fields included in portion 232 will be described below with respect to parameter values that enable multicast routing with RPL.
  • "RPLInstanceID" is a header field indicating the topology instance associated with the DODAG, as learned from the DIO message (e.g., DIO message 220).
  • the "K” field is a flag, which indicates that the recipient is expected to send a DAO-ACK back.
  • the "D” field is a flag, which indicates that the DODAGID field is present in the DAO message.
  • the "Flags” field represents remaining unused bits in the Flags field and, and the unused bits are reserved for flags.
  • the "Reserved” field is an unused field.
  • the DAOSequence field is a counter present in the DAO message to correlate the message with a DAO-ACK message.
  • the DAOSequence number is locally significant to the ND that issues a DAO message for its own consumption to detect the loss of a DAO message and enable retries.
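The DAOSequence mechanism described above can be sketched as a small tracker; timer handling is elided and the structures are illustrative assumptions, not the patent's implementation.

```python
# Sketch of how the locally significant DAOSequence counter lets the
# issuing ND correlate a DAO-ACK with its DAO, detect loss, and retry.

class DaoTracker:
    def __init__(self):
        self.seq = 0
        self.pending = {}  # seq -> DAO payload awaiting a DAO-ACK

    def send_dao(self, payload):
        self.seq = (self.seq + 1) % 256  # 8-bit counter wraps around
        self.pending[self.seq] = payload
        return self.seq

    def on_dao_ack(self, seq):
        # An unmatched seq would indicate a stale or spurious ACK.
        return self.pending.pop(seq, None)

    def unacked(self):
        return list(self.pending)  # candidates for retransmission

t = DaoTracker()
s = t.send_dao({"bier": {"domain": 7}})
t.on_dao_ack(s)
print(t.unacked())  # []
```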
  • DODAGID is an IPv6 address set by a DODAG root that uniquely identifies a DODAG. This field is optional in DAO messages and may not be included in some embodiments.
  • DAO message 230 further includes a second portion 234, which includes a BIER header field.
  • a DAO message is extended to include BIER header information to identify the BIER domain to which the transmitting ND belongs (e.g., ND 104).
  • the BIER header field is included in an optional portion of the standard DAO message.
  • the BIER header field 234 includes a "Type" field indicating the type of the packet, a "Capability" field that is a flag indicating BIER capabilities of the transmitting ND (here ND 104), a "Length" field indicating the length of the packet, a "Domain" field identifying the BIER domain to which the network device belongs (the domain field may identify a BIER domain or a BIER sub-domain), a "BFR-Prefix" field indicating the BIER Forwarding Router Prefix of the transmitting ND (ND 104), a BSL field indicating a Bit String Length supported by the transmitting network device, and a "BFR-ID" field indicating a BIER Forwarding Router ID assigned to the network device in this BIER domain.
  • These fields represent BIER information which identifies the transmitting ND within the BIER domain and provide information with respect to this ND (e.g., whether the ND is BIER capable or not, and if it is BIER capable how it can be identified within the BIER domain). This information enables the receiving ND (e.g., 102) to update forwarding tables (e.g., BIRT and/or BIFT) for receiving and forwarding multicast data packets through the BIER domain.
  • ND 102 parses the BIER header field and extracts BIER information related to the multicast domain to update one or more forwarding tables (e.g., a BIRT and/or a BIFT) associated with the BIER domain identified in the BIER header field of the DAO message.
  • ND 102 may use the values of the "BFR-Prefix" field and the "BFR-ID" field of the DAO message to populate its BIRT entry for ND 104.
  • ND 102 may update the Forwarding Bit Mask (F-BM) and BFR Neighbor (BFR-NH) fields in the BIFT entry associated with ND 104.
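The BIFT update named above can be made concrete: the F-BM for a neighbor is the OR of the bit positions of every egress router whose BIRT next hop is that neighbor, and the BFR-NH is that neighbor itself. The structures below are illustrative.

```python
# Derive a Bit Index Forwarding Table (BIFT) from a Bit Index Routing
# Table (BIRT): group BFR-IDs by next hop and OR their bit positions.

def birt_to_bift(birt):
    """birt: {bfr_id: next_hop}. Returns {next_hop: f_bm}."""
    bift = {}
    for bfr_id, next_hop in birt.items():
        # BFR-ID k occupies bit position k-1 in the bit string.
        bift[next_hop] = bift.get(next_hop, 0) | (1 << (bfr_id - 1))
    return bift

# BFR-IDs 1 and 2 are reachable via ND_A, BFR-ID 4 via ND104.
print(birt_to_bift({1: "ND_A", 2: "ND_A", 4: "ND104"}))
# {'ND_A': 3, 'ND104': 8}
```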
  • ND 102 may transmit, at operation 7, a DAO-Acknowledgment (DAO-ACK) to ND 104.
  • ND 102 when ND 102 is the root of the RPL DODAG coupled with ND 104, it may further act as a gateway network device for communications between multiple multicast domains.
  • ND 102 may be a border router of an LLN.
  • ND 102 is now operative to forward BIER encapsulated multicast packets as described with reference to IETF Draft "Multicast using Bit Index Explicit Replication: draft-ietf-bier-architecture-03," which is hereby incorporated by reference.
  • ND 102 can be operative to convert and translate between multiple multicast formats (e.g., between PIM and BIER).
  • ND 102 may receive multicast data packets and encapsulate the packets with a BIER Header destined to ND 104 to be forwarded to the receivers 103.
  • ND 102 is a network device on a multicast path between a root of a DODAG and ND 104 which is part of the DODAG.
  • the root of the DODAG may receive multicast data packets and encapsulate the packets with a BIER Header destined to ND 104 to be forwarded to the receivers 103.
  • the encapsulated packets may be routed through the intermediary ND 102 prior to being forwarded to ND 104.
  • all network devices of an LLN form a separate RPL DODAG in which the network devices are all BIER capable.
  • the process described with reference to Figure 1 is performed by each ND of the LLN that needs to join the multicast domain. Once the NDs have joined the multicast domain (by exchanging BIER information through RPL DIO and RPL DAO messages), BIER encapsulated packets can be forwarded to the multicast clients served by each ND.
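The BIER information exchanged through the RPL DAO messages can be pictured as a compact option appended to the DAO. The encoding below is purely speculative: the patent does not fix field widths here, so the 1-byte domain identifier, 2-byte BFR-ID, and 16-byte IPv6 BFR-Prefix are assumptions for illustration only.

```python
# Speculative sketch of packing/parsing a BIER header field carried in an
# RPL DAO option. Field widths are assumptions, not a specified wire format.
import struct

_FMT = "!BH16s"  # 1-byte domain ID, 2-byte BFR-ID, 16-byte BFR-Prefix (IPv6)

def build_bier_dao_option(domain_id, bfr_id, bfr_prefix16):
    """Pack the hypothetical BIER DAO option fields into bytes."""
    return struct.pack(_FMT, domain_id, bfr_id, bfr_prefix16)

def parse_bier_dao_option(data):
    """Recover the BIER fields a receiving ND would extract from the DAO."""
    domain_id, bfr_id, prefix = struct.unpack(_FMT, data)
    return {"domain": domain_id, "bfr_id": bfr_id, "bfr_prefix": prefix}
```

A receiving ND would parse these fields to learn the sender's BIER identity and update its BIRT/BIFT accordingly.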
  • FIG. 3A-E illustrate block diagrams of an exemplary LLN 300 in which multicast routing is enabled in accordance with some embodiments. Multicast routing is enabled by using an RPL extension for BIER.
  • the LLN 300 includes multiple network devices 302R and 302A- 302G.
  • the LLN 300 includes BIER-enabled network devices (which can also be referred to as Bit-Forwarding Routers (BFR)) (e.g., 302R, 302A-C, 302F, and 302G) as well as non-BIER-enabled network devices (e.g., ND 302D and ND 302E).
  • network devices 302R, 302A, 302B, 302C, 302F, and 302G are part of a same BIER domain.
  • the BIER domain may include more or fewer network devices than the ones illustrated herein in Figures 3A-E.
  • ND 302R is configured to operate as a root of an RPL DODAG that is operative to support BIER.
  • ND 302R may be configured to act as a Bit-Forwarding Ingress Router (BFIR) of a BIER domain, which receives multicast data packets that enter the BIER domain.
  • ND 302R is a gateway network device for communication between multiple multicast domains.
  • ND 302R may convert between multicast formats (e.g., from PIM to BIER). It is further operative to share the BIER capabilities of the other network devices in its DODAG with all the devices.
  • ND 302G is a BIER-enabled network device coupled with the multicast receivers 303.
  • ND 302G can be referred to as a "Bit-Forwarding Egress Router" (BFER) and forwards the multicast data packets that leave the BIER domain.
  • Intermediary BIER-enabled NDs (302A, 302C, 302F) which are part of the BIER domain may be referred to as "transit BFRs."
  • the receivers 303 are multicast clients that have subscribed to receive traffic of a multicast group.
  • ND 302R is operative to receive multicast data packets for the multicast group from the network 301 and transmit the multicast packets towards the receivers 303 through a path in the LLN.
  • The RPL DAO message is further propagated with an identifier of the latest traced BIER-enabled network device, such that when it reaches a BIER-enabled network device that has multicast receiver children which are non-BIER network devices, that network device can build an IP2IP/P2MP/MP2MP tunnel to transmit BIER packets.
  • ND 302G transmits a DAO message including a BIER header field that includes an identifier of a BIER domain.
  • the DAO message is constructed as described with reference to Figure 2B.
  • the DAO message is transmitted in response to the receipt of a DIO message advertised by the parent network device ND 302F.
  • the DIO message advertised by ND 302F is caused by ND 302F joining the RPL DODAG.
  • ND 302F determines whether the current ND (i.e., ND 302F) and the source ND (i.e., the ND from which the DAO message is received, here ND 302G) are BIER-enabled network devices.
  • ND 302F may determine that the source ND (ND 302G) is BIER-enabled by parsing the BIER header field of the received DAO message 351 and extracting BIER information related to ND 302G.
  • the two devices ND 302G and 302F are BIER enabled.
  • ND 302F adds, at operation 703, an entry in its forwarding table(s) for the multicast domain as well as an indication of the BIER capability of ND 302G.
  • additional information is added in the forwarding table(s) for each entry associated with a network device.
  • This additional information indicates for each ND whether the device is BIER-enabled or not.
  • a 1-bit flag is added to the table and the flag may be set to a value of 1 for indicating that the ND is BIER-enabled and to a value of 0 for indicating that the ND is non-BIER-enabled.
  • ND 302F identifies the latest BIER-enabled ND encountered, which is ND 302F in this example.
  • ND 302F encapsulates the DAO message 351 into an IP packet in which the source address is the IP address of the current ND 302F (which is the latest BIER-enabled network device encountered by the DAO message), and forwards the encapsulated DAO message 353 to its RPL parent (here ND 302E).
  • ND 302E determines, in response to the receipt of the updated DAO message 353, that the current ND (i.e., ND 302E) is non-BIER enabled and that ND 302F, which is the source of the DAO message 353, is BIER enabled.
  • ND 302E adds, at operation 707, an entry in its forwarding table(s) for the multicast domain as well as an indication of the BIER capability of ND 302F.
  • ND 302E identifies the latest BIER-enabled ND encountered, which is ND 302F in this example.
  • ND 302E encapsulates the received DAO message into an IP packet to form an IP encapsulated DAO message 355 in which the source address is the IP address of ND 302F (which is the latest BIER-enabled network device encountered by the DAO message), and forwards the IP encapsulated DAO message 355 to its RPL parents (here ND 302C and 302D).
  • ND 302D determines, in response to the receipt of the updated DAO message 355, that the current ND (i.e., ND 302D) and the source ND 302E are non-BIER enabled.
  • ND 302D adds, at operation 711, an entry in its forwarding table(s) for the multicast domain as well as an indication of the BIER capability of ND 302E.
  • ND 302D identifies the latest BIER-enabled ND encountered, which is ND 302F in this example.
  • ND 302D encapsulates the received DAO message into an IP packet to form an IP encapsulated DAO message 357 in which the source address is the IP address of ND 302F (which is the latest BIER-enabled network device encountered by the DAO message), and forwards the IP encapsulated DAO message 357 to its RPL parent (here ND 302A).
  • ND 302A determines that the current ND (i.e., ND 302A) is BIER enabled and that ND 302D, which is the source of the DAO message 357, is non-BIER enabled.
  • ND 302A adds, at operation 715, an entry in its forwarding table(s) for the multicast domain as well as an indication of the BIER capability of ND 302D.
  • Flow then moves to operation 716, at which ND 302A identifies the latest BIER-enabled ND encountered, which is ND 302A in this example.
  • ND 302A stores a flag indicating that an IP tunnel is to be established between the current node (ND 302A) and the previous BIER enabled ND encountered (ND 302F). Flow then moves to operation 718, at which ND 302A updates BIER forwarding tables according to the BIER information received in the BIER header field of the DAO message 357.
  • ND 302A encapsulates the received DAO message 357 into an IP packet to form an IP encapsulated DAO message 359 in which the source address is the IP address of ND 302A (which is the latest BIER-enabled network device encountered by the DAO message), and forwards the IP encapsulated DAO message 359 to its RPL parent (here ND 302R) which is the root of the DODAG and the ingress network device of the BIER domain.
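The DAO relaying walked through above (operations 703-719) follows one repeated decision at each hop. The sketch below is a hedged Python model of that logic; the node structure, attribute names, and return convention are illustrative assumptions, not the patent's message formats.

```python
# Illustrative model of per-hop DAO relaying: record the DAO source with a
# 1-bit BIER-capability flag, set a tunnel flag when a BIER-enabled node
# sits behind a non-BIER segment, and track the latest BIER-enabled ND
# whose address sources the re-encapsulated DAO.

class Node:
    def __init__(self, name, bier_enabled):
        self.name = name
        self.bier_enabled = bier_enabled
        self.forwarding_table = {}
        self.tunnel_flag = False
        self.tunnel_peer = None

def relay_dao(node, source, latest_bier):
    """Process a DAO arriving at `node` from `source`; return the name of
    the latest BIER-enabled ND to carry forward to the RPL parent."""
    # Add an entry for the source along with its BIER capability.
    node.forwarding_table[source.name] = {
        "next_hop": source.name,
        "bier_capable": 1 if source.bier_enabled else 0,
    }
    if node.bier_enabled:
        if not source.bier_enabled:
            # A non-BIER segment lies behind us: remember that an IP
            # tunnel back to the previous BIER-enabled ND is needed.
            node.tunnel_flag = True
            node.tunnel_peer = latest_bier
        latest_bier = node.name  # this node is now the latest BIER ND
    return latest_bier
```

Replaying the Figure 3 topology (302G → 302F → 302E → 302D → 302A) with this model leaves the tunnel flag set only at ND 302A, pointing back at ND 302F.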
  • the control plane of the LLN is set up dynamically.
  • the intermediate network devices (e.g., ND 302A) maintain tunnel states until the multicast data packets are forwarded onto the tunnel.
  • the DAO messages can be used to keep the tunnel and multicast forwarding alive until there are hosts requesting the multicast traffic.
  • DAO-ACK messages are used to initiate the tunnels.
  • a DAO-ACK message 361 is initiated by the ND 302R and is forwarded back to the source of the original DAO message 351 (i.e., ND 302G).
  • the DAO-ACK follows the best path from the root of the DODAG, ND 302R, to the receiver ND 302G as determined through the RPL protocol.
  • ND 302A determines at operation 721, that the tunnel flag is set indicating that an IP tunnel is to be established between the current node and the previous BIER enabled ND encountered (e.g., ND 302F).
  • ND 302A establishes an IP tunnel 310 at operation 722 between ND 302A and ND 302F, and forwards the DAO-ACK message 361 through the IP tunnel to the previous BIER enabled ND (e.g., toward ND 302F).
  • the DAO-ACK message is encapsulated within an IP header destined to the BIER-enabled ND 302F.
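The DAO-ACK handling at ND 302A reduces to a single check on the tunnel flag recorded during DAO propagation. The following is a minimal sketch under assumed structures (the state object and dictionary encapsulation stand in for real IP headers):

```python
# Minimal sketch of DAO-ACK handling at a BIER-enabled ND: when the tunnel
# flag recorded during DAO propagation is set, the DAO-ACK is encapsulated
# within an IP header destined to the previous BIER-enabled ND (e.g.,
# ND 302F), which establishes the IP tunnel across the non-BIER segment.

class TunnelState:
    def __init__(self, tunnel_flag=False, tunnel_peer=None):
        self.tunnel_flag = tunnel_flag
        self.tunnel_peer = tunnel_peer

def forward_dao_ack(state, dao_ack):
    if state.tunnel_flag:
        # IP-encapsulate toward the BIER peer; intermediate non-BIER
        # nodes simply route the outer IP packet.
        return {"outer_dst": state.tunnel_peer, "payload": dao_ack}
    return dao_ack  # no tunnel required; forward natively
```

Once the DAO-ACK has traversed the tunnel, both tunnel endpoints hold the state needed to carry BIER packets over the non-BIER segment.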
  • multicast traffic can start.
  • the multicast traffic is then forwarded from ND 302R to the receivers 303 passing through ND 302G according to the BIER protocol.
  • ND 302R encapsulates the data packets with a BIER header and indicates that the BFER is ND 302G.
  • Based on its BIER forwarding table, ND 302R knows that the data packets have to be routed through ND 302A.
  • ND 302A looks at the BitString of the packet and forwards the packet over the tunnel to 302F.
  • ND 302F then forwards the packet to 302G by looking up its forwarding table (BIFT).
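The data-plane lookups just described can be sketched as a simple F-BM match against the packet's BitString. The BFR-IDs, bit positions, and table contents below are assumed purely for the example:

```python
# Illustrative BIER data-plane lookup: a BFR compares the packet's
# BitString against the Forwarding Bit Mask (F-BM) of each BIFT entry and
# replicates the packet toward each matching next hop, clearing the bits
# it has already covered so no receiver gets a duplicate.

def bier_forward(bitstring, bift):
    """Return the next hops whose F-BM intersects the packet's BitString."""
    hops = []
    remaining = bitstring
    for next_hop, entry in bift.items():
        if remaining & entry["f_bm"]:
            hops.append(next_hop)
            remaining &= ~entry["f_bm"]  # bits now handled by this hop
    return hops

# Assumed example: ND 302G is BFR-ID 7, i.e., bit position 6 of the
# BitString; ND 302A's BIFT sends that bit over the tunnel toward 302F.
bift_302a = {"tunnel->302F": {"f_bm": 1 << 6}}
packet_bits = 1 << 6  # BitString with only ND 302G's bit set
```

Running `bier_forward(packet_bits, bift_302a)` selects the tunnel toward ND 302F, matching the forwarding behavior described above.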
  • When the gateway network device ND 302G is not BIER capable and cannot initiate the DAO message as extended with BIER, any network device that is intermediate on the path of the DAO message can initiate the DAO message extended with the BIER header field to be forwarded to the root of the RPL DODAG.
  • FIG. 4 illustrates exemplary operations performed at a first network device for enabling stateless multicast routing in an LLN in accordance with some embodiments.
  • The first network device (e.g., ND 102 or ND 302A) receives a destination advertisement object (DAO) message from a second network device (e.g., ND 104, ND 302C, or ND 302D).
  • the received DAO message includes a Bit Index Explicit Replication (BIER) header field including an identifier of a multicast domain.
  • the DAO message is constructed as described with reference to Figure 2B.
  • The first network device (e.g., ND 102 or ND 302A) updates a forwarding table entry to include an identifier of the second network device from which the DAO message is received, where the forwarding table entry causes the first network device to forward, to the second network device (e.g., ND 104, ND 302C, or ND 302D), BIER encapsulated packets of the multicast domain which are to be forwarded at the second network device towards one or more receivers of the multicast domain (e.g., receivers 103 or 303).
  • the DAO message is received with an identifier of a third network device (e.g., 302F) from the LLN, wherein the third network device (302F) is the closest BIER enabled network device of the LLN on a path coupling the first network device (302A) with one or more receivers of the multicast domain (303).
  • the first network device (302A) establishes an Internet Protocol (IP) tunnel 310 between the first network device (302A) and the third network device (302F) such that the first network device forwards BIER packets of the multicast domain through the IP tunnel 310 established between the two network devices.
  • the embodiments described herein present clear advantages when compared to standard multicast routing protocols in LLNs.
  • the LLN avoids building an explicit multicast tree.
  • there is no need to maintain per-flow multicast states at each network device thus conserving the network device's physical resources (e.g., memory, computing resources, and power source).
  • the embodiments present a stateless multicast protocol that overcomes the large memory requirements of standard multicast protocols providing Low-Power and Lossy networks with an efficient multicast mechanism which does not exhaust the limited physical resources of the network devices of an LLN (i.e., minimal memory, reduced CPU performance and limited power source).
  • An electronic device stores and transmits (internally and/or with other electronic devices over a network) code (which is composed of software instructions and which is sometimes referred to as computer program code or a computer program) and/or data using machine-readable media (also called computer-readable media), such as machine-readable storage media (e.g., magnetic disks, optical disks, read only memory (ROM), flash memory devices, phase change memory) and machine-readable transmission media (also called a carrier) (e.g., electrical, optical, radio, acoustical or other form of propagated signals - such as carrier waves, infrared signals).
  • An electronic device (e.g., a computer) includes hardware and software, such as a set of one or more processors coupled to one or more machine-readable storage media to store code for execution on the set of processors and/or to store data.
  • an electronic device may include non-volatile memory containing the code since the non-volatile memory can persist code/data even when the electronic device is turned off (when power is removed), and while the electronic device is turned on that part of the code that is to be executed by the processor(s) of that electronic device is typically copied from the slower nonvolatile memory into volatile memory (e.g., dynamic random access memory (DRAM), static random access memory (SRAM)) of that electronic device.
  • Typical electronic devices also include a set of one or more physical network interface(s) to establish network connections (to transmit and/or receive code and/or data using propagating signals) with other electronic devices.
  • One or more parts of an embodiment of the invention may be implemented using different combinations of software, firmware, and/or hardware.
  • each network device may include one or more network elements as described below with respect to Figure 5A.
  • Each network device may perform the operations described with reference to Figures 1-3E and 4.
  • each network device may include a plurality of network elements, where each network element is operative to receive and forward DIO and DAO messages extended to include BIER header fields and to propagate BIER domain information through the RPL protocol to enable a stateless multicast forwarding in LLNs.
  • a network device is an electronic device that communicatively interconnects other electronic devices on the network (e.g., other network devices, end-user devices).
  • Some network devices are "multiple services network devices" that provide support for multiple networking functions (e.g., routing, bridging, switching, Layer 2 aggregation, session border control, Quality of Service, and/or subscriber management), and/or provide support for multiple application services (e.g., data, voice, and video).
  • Figure 5A illustrates connectivity between network devices (NDs) within an exemplary network, as well as three exemplary implementations of the NDs, according to some embodiments of the invention.
  • Figure 5A shows NDs 500A-H, and their connectivity by way of lines between 500A-500B, 500B-500C, 500C-500D, 500D-500E, 500E-500F, 500F-500G, and 500A-500G, as well as between 500H and each of 500A, 500C, 500D, and 500G.
  • These NDs are physical devices, and the connectivity between these NDs can be wireless or wired (often referred to as a link).
  • An additional line extending from NDs 500A, 500E, and 500F illustrates that these NDs act as ingress and egress points for the network (and thus, these NDs are sometimes referred to as edge NDs; while the other NDs may be called core NDs).
  • Two of the exemplary ND implementations in Figure 5A are: 1) a special-purpose network device 502 that uses custom application-specific integrated-circuits (ASICs) and a special-purpose operating system (OS); and 2) a general purpose network device 504 that uses common off-the-shelf (COTS) processors and a standard OS.
  • the special-purpose network device 502 includes networking hardware 510 comprising compute resource(s) 512 (which typically include a set of one or more processors), forwarding resource(s) 514 (which typically include one or more ASICs and/or network processors), and physical network interfaces (NIs) 516 (sometimes called physical ports), as well as non-transitory machine readable storage media 518 having stored therein networking software 520.
  • a physical NI is hardware in a ND through which a network connection (e.g., wirelessly through a wireless network interface controller (WNIC) or through plugging in a cable to a physical port connected to a network interface controller (NIC)) is made, such as those shown by the connectivity between NDs 500A-H.
  • the networking software 520 may be executed by the networking hardware 510 to instantiate a set of one or more networking software instance(s) 522.
  • the networking software 520 includes an RPL BIER Routing element 523 which when instantiated as the instance 533A is operative to enable stateless multicast routing as described with reference to Figures 1-4.
  • Each of the networking software instance(s) 522, and that part of the networking hardware 510 that executes that network software instance (be it hardware dedicated to that networking software instance and/or time slices of hardware temporally shared by that networking software instance with others of the networking software instance(s) 522), form a separate virtual network element 530A-R.
  • Each of the virtual network element(s) (VNEs) 530A-R includes a control communication and configuration module 532A-R (sometimes referred to as a local control module or control communication module) and forwarding table(s) 534A-R, such that a given virtual network element (e.g., 530A) includes the control communication and configuration module (e.g., 532A), a set of one or more forwarding table(s) (e.g., 534A), and that portion of the networking hardware 510 that executes the virtual network element (e.g., 530A).
  • the special-purpose network device 502 is often physically and/or logically considered to include: 1) a ND control plane 524 (sometimes referred to as a control plane) comprising the compute resource(s) 512 that execute the control communication and configuration module(s) 532A-R; and 2) a ND forwarding plane 526 (sometimes referred to as a forwarding plane, a data plane, or a media plane) comprising the forwarding resource(s) 514 that utilize the forwarding table(s) 534A-R and the physical NIs 516.
  • the ND control plane 524 (the compute resource(s) 512 executing the control communication and configuration module(s) 532A-R) is typically responsible for participating in controlling how data (e.g., packets) is to be routed (e.g., the next hop for the data and the outgoing physical NI for that data) and storing that routing information in the forwarding table(s) 534A-R, and the ND forwarding plane 526 is responsible for receiving that data on the physical NIs 516 and forwarding that data out the appropriate ones of the physical NIs 516 based on the forwarding table(s) 534A-R.
  • Figure 5B illustrates an exemplary way to implement the special-purpose network device 502 according to some embodiments of the invention.
  • Figure 5B shows a special- purpose network device including cards 538 (typically hot pluggable). While in some embodiments the cards 538 are of two types (one or more that operate as the ND forwarding plane 526 (sometimes called line cards), and one or more that operate to implement the ND control plane 524 (sometimes called control cards)), alternative embodiments may combine functionality onto a single card and/or include additional card types (e.g., one additional type of card is called a service card, resource card, or multi-application card).
  • a service card can provide specialized processing (e.g., Layer 4 to Layer 7 services (e.g., firewall, Internet Protocol Security (IPsec), Secure Sockets Layer (SSL) / Transport Layer Security (TLS), Intrusion Detection System (IDS), peer-to-peer (P2P), Voice over IP (VoIP) Session Border Controller, Mobile Wireless Gateways (Gateway General Packet Radio Service (GPRS) Support Node (GGSN), Evolved Packet Core (EPC) Gateway)).
  • the general purpose network device 504 includes hardware 540 comprising a set of one or more processor(s) 542 (which are often COTS processors) and network interface controller(s) 544 (NICs; also known as network interface cards) (which include physical NIs 546), as well as non-transitory machine readable storage media 548 having stored therein software 550.
  • the software 550 includes an RPL BIER Routing element 523 which when instantiated is operative to enable stateless multicast routing as described with reference to Figures 1-4.
  • the processor(s) 542 execute the software 550 to instantiate one or more sets of one or more applications 564A-R.
  • the virtualization layer 554 represents the kernel of an operating system (or a shim executing on a base operating system) that allows for the creation of multiple instances 562A-R called software containers that may each be used to execute one (or more) of the sets of applications 564A-R; where the multiple software containers (also called virtualization engines, virtual private servers, or jails) are user spaces (typically a virtual memory space) that are separate from each other and separate from the kernel space in which the operating system is run; and where the set of applications running in a given user space, unless explicitly allowed, cannot access the memory of the other processes.
  • the virtualization layer 554 represents a hypervisor (sometimes referred to as a virtual machine monitor (VMM)) or a hypervisor executing on top of a host operating system, and each of the sets of applications 564A-R is run on top of a guest operating system within an instance 562A-R called a virtual machine (which may in some cases be considered a tightly isolated form of software container) that is run on top of the hypervisor - the guest operating system and application may not know they are running on a virtual machine as opposed to running on a "bare metal" host electronic device, or through para-virtualization the operating system and/or application may be aware of the presence of virtualization for optimization purposes.
  • one, some or all of the applications are implemented as unikernel(s), which can be generated by compiling directly with an application only a limited set of libraries (e.g., from a library operating system (LibOS) including drivers/libraries of OS services) that provide the particular OS services needed by the application.
  • A unikernel can be implemented to run directly on hardware 540, directly on a hypervisor (in which case the unikernel is sometimes described as running within a LibOS virtual machine), or in a software container.
  • embodiments can be implemented fully with unikernels running directly on a hypervisor represented by virtualization layer 554, unikernels running within software containers represented by instances 562A-R, or as a combination of unikernels and the above-described techniques (e.g., unikernels and virtual machines both run directly on a hypervisor, unikernels and sets of applications that are run in different software containers).
  • the instantiation of the one or more sets of one or more applications 564A-R, as well as virtualization if implemented, are collectively referred to as software instance(s) 552.
  • the virtual network element(s) 560A-R perform similar functionality to the virtual network element(s) 530A-R - e.g., similar to the control communication and configuration module(s) 532A and forwarding table(s) 534A (this virtualization of the hardware 540 is sometimes referred to as network function virtualization (NFV)).
  • While embodiments are described with each instance 562A-R corresponding to one VNE 560A-R, alternative embodiments may implement this correspondence at a finer level of granularity (e.g., line card virtual machines virtualize line cards, control card virtual machines virtualize control cards, etc.); it should be understood that the techniques described herein with reference to a correspondence of instances 562A-R to VNEs also apply to embodiments where such a finer level of granularity and/or unikernels are used.
  • the virtualization layer 554 includes a virtual switch that provides similar forwarding services as a physical Ethernet switch. Specifically, this virtual switch forwards traffic between instances 562A-R and the NIC(s) 544, as well as optionally between the instances 562A-R; in addition, this virtual switch may enforce network isolation between the VNEs 560A-R that by policy are not permitted to communicate with each other (e.g., by honoring virtual local area networks (VLANs)).
  • the third exemplary ND implementation in Figure 5A is a hybrid network device 506, which includes both custom ASICs/special-purpose OS and COTS processors/standard OS in a single ND or a single card within an ND.
  • A platform VM (i.e., a VM that implements the functionality of the special-purpose network device 502) could provide for para-virtualization to the networking hardware present in the hybrid network device 506.
  • each of the VNEs receives data on the physical NIs (e.g., 516, 546) and forwards that data out the appropriate ones of the physical NIs (e.g., 516, 546).
  • a VNE implementing IP router functionality forwards IP packets on the basis of some of the IP header information in the IP packet; where IP header information includes source IP address, destination IP address, source port, destination port (where "source port" and
  • FIG. 5C illustrates various exemplary ways in which VNEs may be coupled according to some embodiments of the invention.
  • Figure 5C shows VNEs 570A.1-570A.P (and optionally VNEs 570A.Q-570A.R) implemented in ND 500A and VNE 570H.1 in ND 500H.
  • VNEs 570A.1-P are separate from each other in the sense that they can receive packets from outside ND 500A and forward packets outside of ND 500A; VNE 570A.1 is coupled with VNE 570H.1, and thus they communicate packets between their respective NDs; VNE 570A.2-570A.3 may optionally forward packets between themselves without forwarding them outside of the ND 500A; and VNE 570A.P may optionally be the first in a chain of VNEs that includes VNE 570A.Q followed by VNE 570A.R (this is sometimes referred to as dynamic service chaining, where each of the VNEs in the series of VNEs provides a different service - e.g., one or more layer 4-7 network services). While Figure 5C illustrates various exemplary relationships between the VNEs, alternative embodiments may support other relationships (e.g., more/fewer VNEs, more/fewer dynamic service chains, multiple different dynamic service chains with some common VNEs and some different VNE
  • the NDs of Figure 5A may form part of the Internet or a private network; and other electronic devices (not shown; such as end user devices including workstations, laptops, netbooks, tablets, palm tops, mobile phones, smartphones, phablets, multimedia phones, Voice Over Internet Protocol (VOIP) phones, terminals, portable media players, GPS units, wearable devices, gaming systems, set-top boxes, Internet enabled household appliances) may be coupled to the network (directly or through other networks such as access networks) to communicate over the network (e.g., the Internet or virtual private networks (VPNs) overlaid on (e.g., tunneled through) the Internet) with each other (directly or through servers) and/or access content and/or services.
  • Such content and/or services are typically provided by one or more servers (not shown) belonging to a service/content provider or one or more end user devices (not shown) participating in a peer-to-peer (P2P) service, and may include, for example, public webpages (e.g., free content, store fronts, search services), private webpages (e.g.,
  • end user devices may be coupled (e.g., through customer premise equipment coupled to an access network (wired or wirelessly)) to edge NDs, which are coupled (e.g., through one or more core NDs) to other edge NDs, which are coupled to electronic devices acting as servers.
  • one or more of the electronic devices operating as the NDs in Figure 5A may also host one or more such servers (e.g., in the case of the general purpose network device 504, one or more of the software instances 562A-R may operate as servers; the same would be true for the hybrid network device 506; in the case of the special-purpose network device 502, one or more such servers could also be run on a virtualization layer executed by the compute resource(s) 512); in which case the servers are said to be co-located with the VNEs of that ND.
  • FIG. 5D illustrates a network with a single network element on each of the NDs of Figure 5A, and within this straightforward approach contrasts a traditional distributed approach (commonly used by traditional routers) with a centralized approach for maintaining reachability and forwarding information (also called network control), according to some embodiments of the invention.
  • Figure 5D illustrates network elements (NEs) 570A-H with the same connectivity as the NDs 500A-H of Figure 5A.
  • Figure 5D illustrates that the distributed approach 572 distributes responsibility for generating the reachability and forwarding information across the NEs 570A-H; in other words, the process of neighbor discovery and topology discovery is distributed.
  • the control communication and configuration module(s) 532A-R of the ND control plane 524 typically include a reachability and forwarding information module to implement one or more routing protocols (e.g., an exterior gateway protocol such as Border Gateway Protocol (BGP), Interior Gateway Protocol(s) (IGP) (e.g., Open Shortest Path First (OSPF), Intermediate System to Intermediate System (IS-IS), Routing Information Protocol (RIP)), Label Distribution Protocol (LDP), Resource Reservation Protocol (RSVP) (including RSVP-Traffic Engineering (TE): Extensions to RSVP for LSP Tunnels and Generalized Multi-Protocol Label Switching (GMPLS) Signaling RSVP-TE))
  • the NEs 570A-H perform their responsibility for participating in controlling how data (e.g., packets) is to be routed (e.g., the next hop for the data and the outgoing physical NI for that data) by distributively determining the reachability within the network and calculating their respective forwarding information.
  • Routes and adjacencies are stored in one or more routing structures (e.g., Routing Information Base (RIB), Label Information Base (LIB), one or more adjacency structures) on the ND control plane 524.
  • the ND control plane 524 programs the ND forwarding plane 526 with information (e.g., adjacency and route information) based on the routing structure(s). For example, the ND control plane 524 programs the adjacency and route information into one or more forwarding table(s) 534A-R (e.g., Forwarding Information Base (FIB), Label Forwarding Information Base (LFIB), and one or more adjacency structures) on the ND forwarding plane 526.
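The RIB-to-FIB programming step described above can be sketched as follows. This is an illustrative model, not the patent's implementation: the RIB holds candidate routes per prefix (e.g., learned from different protocols), and the control plane installs only the selected next hop into the FIB. All names and the use of administrative distance as the selection metric are hypothetical.

```python
# Hypothetical sketch: a control plane deriving a FIB from a RIB by
# selecting, per prefix, the best candidate route and installing its
# next hop as forwarding information.

def program_fib(rib):
    """rib: dict mapping prefix -> list of (admin_distance, next_hop)."""
    fib = {}
    for prefix, routes in rib.items():
        # Select the candidate with the lowest administrative distance.
        _, next_hop = min(routes)
        fib[prefix] = next_hop
    return fib

rib = {
    "10.0.0.0/8": [(110, "ge-0/0/1"), (20, "ge-0/0/2")],  # OSPF vs. BGP route
    "192.168.1.0/24": [(1, "ge-0/0/3")],                  # static route
}
fib = program_fib(rib)
# The BGP-learned route (distance 20) wins for 10.0.0.0/8.
```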
  • the ND can store one or more bridging tables that are used to forward data based on the layer 2 information in that data. While the above example uses the special-purpose network device 502, the same distributed approach 572 can be implemented on the general purpose network device 504 and the hybrid network device 506.
  • FIG. 5D illustrates a centralized approach 574 (also known as software defined networking (SDN)) that decouples the system that makes decisions about where traffic is sent from the underlying systems that forward traffic to the selected destination.
  • the illustrated centralized approach 574 has the responsibility for the generation of reachability and forwarding information in a centralized control plane 576 (sometimes referred to as a SDN control module, controller, network controller, OpenFlow controller, SDN controller, control plane node, network virtualization authority, or management control entity), and thus the process of neighbor discovery and topology discovery is centralized.
  • the centralized control plane 576 has a south bound interface 582 with a data plane 580 (sometimes referred to as the infrastructure layer, network forwarding plane, or forwarding plane (which should not be confused with a ND forwarding plane)) that includes the NEs 570A-H (sometimes referred to as switches, forwarding elements, data plane elements, or nodes).
  • the centralized control plane 576 includes a network controller 578, which includes a centralized reachability and forwarding information module 579 that determines the reachability within the network and distributes the forwarding information to the NEs 570A-H of the data plane 580 over the south bound interface 582 (which may use the OpenFlow protocol).
  • the network intelligence is centralized in the centralized control plane 576 executing on electronic devices that are typically separate from the NDs.
  • the network controller 578 includes RPL BIER Routing Control Element 581 which is operative to enable stateless multicast routing as described with reference to Figures 1-4.
  • each of the control communication and configuration module(s) 532A-R of the ND control plane 524 typically include a control agent that provides the VNE side of the south bound interface 582.
  • the ND control plane 524 (the compute resource(s) 512 executing the control communication and configuration module(s) 532A-R) performs its responsibility for participating in controlling how data (e.g., packets) is to be routed (e.g., the next hop for the data and the outgoing physical NI for that data) through the control agent communicating with the centralized control plane 576 to receive the forwarding information (and in some cases, the reachability information) from the centralized reachability and forwarding information module 579 (it should be understood that in some embodiments of the invention, the control communication and configuration module(s) 532A-R, in addition to communicating with the centralized control plane 576, may also play some role in determining reachability and/or calculating forwarding information - albeit less so than in the case of a distributed approach; such embodiments are generally considered to fall under the centralized approach 574, but may also be considered a hybrid approach).
  • the same centralized approach 574 can be implemented with the general purpose network device 504 (e.g., each of the VNE 560A-R performs its responsibility for controlling how data (e.g., packets) is to be routed (e.g., the next hop for the data and the outgoing physical NI for that data) by communicating with the centralized control plane 576 to receive the forwarding information (and in some cases, the reachability information) from the centralized reachability and forwarding information module 579; it should be understood that in some embodiments of the invention, the VNEs 560A-R, in addition to communicating with the centralized control plane 576, may also play some role in determining reachability and/or calculating forwarding information - albeit less so than in the case of a distributed approach) and the hybrid network device 506.
  • NFV is able to support SDN by providing an infrastructure upon which the SDN software can be run
  • NFV and SDN both aim to make use of commodity server hardware and physical switches.
  • Figure 5D also shows that the centralized control plane 576 has a north bound interface 584 to an application layer 586, in which resides application(s) 588.
  • the centralized control plane 576 has the ability to form virtual networks 592 (sometimes referred to as a logical forwarding plane, network services, or overlay networks (with the NEs 570A-H of the data plane 580 being the underlay network)) for the application(s) 588.
  • the centralized control plane 576 maintains a global view of all NDs and configured NEs/VNEs, and it maps the virtual networks to the underlying NDs efficiently (including maintaining these mappings as the physical network changes either through hardware (ND, link, or ND component) failure, addition, or removal).
  • Figure 5D shows the distributed approach 572 separate from the centralized approach 574
  • the effort of network control may be distributed differently or the two combined in certain embodiments of the invention.
  • For example: 1) embodiments may generally use the centralized approach (SDN) 574, but have certain functions delegated to the NEs (e.g., the distributed approach may be used to implement one or more of fault monitoring, performance monitoring, protection switching, and primitives for neighbor and/or topology discovery); or 2) embodiments of the invention may perform neighbor discovery and topology discovery via both the centralized control plane and the distributed protocols, and the results compared to raise exceptions where they do not agree.
  • Such embodiments are generally considered to fall under the centralized approach 574, but may also be considered a hybrid approach.
  • Figure 5D illustrates the simple case where each of the NDs 500A-H implements a single NE 570A-H
  • the network control approaches described with reference to Figure 5D also work for networks where one or more of the NDs 500A-H implement multiple VNEs (e.g., VNEs 530A-R, VNEs 560A-R, those in the hybrid network device 506).
  • the network controller 578 may also emulate the implementation of multiple VNEs in a single ND.
  • the network controller 578 may present the implementation of a VNE/NE in a single ND as multiple VNEs in the virtual networks 592 (all in the same one of the virtual network(s) 592, each in different ones of the virtual network(s) 592, or some combination).
  • the network controller 578 may cause an ND to implement a single VNE (a NE) in the underlay network, and then logically divide up the resources of that NE within the centralized control plane 576 to present different VNEs in the virtual network(s) 592 (where these different VNEs in the overlay networks are sharing the resources of the single VNE/NE implementation on the ND in the underlay network).
  • Figures 5E and 5F respectively illustrate exemplary abstractions of NEs and VNEs that the network controller 578 may present as part of different ones of the virtual networks 592.
  • Figure 5E illustrates the simple case of where each of the NDs 500A-H implements a single NE 570A-H (see Figure 5D), but the centralized control plane 576 has abstracted multiple of the NEs in different NDs (the NEs 570A-C and G-H) into (to represent) a single NE 570I in one of the virtual network(s) 592 of Figure 5D, according to some embodiments of the invention.
  • Figure 5E shows that in this virtual network, the NE 570I is coupled to NE 570D and 570F, which are both still coupled to NE 570E.
  • Figure 5F illustrates a case where multiple VNEs (VNE 570A.1 and VNE 570H.1) are implemented on different NDs (ND 500A and ND 500H) and are coupled to each other, and where the centralized control plane 576 has abstracted these multiple VNEs such that they appear as a single VNE 570T within one of the virtual networks 592 of Figure 5D, according to some embodiments of the invention.
  • the abstraction of a NE or VNE can span multiple NDs.
  • While some embodiments implement the centralized control plane 576 as a single entity (e.g., a single instance of software running on a single electronic device), alternative embodiments may spread the functionality across multiple entities for redundancy and/or scalability purposes (e.g., multiple instances of software running on different electronic devices).
  • the electronic device(s) running the centralized control plane 576, and thus the network controller 578 including the centralized reachability and forwarding information module 579, may be implemented in a variety of ways (e.g., a special purpose device, a general-purpose (e.g., COTS) device, or hybrid device).
  • FIG. 6 illustrates a general purpose control plane device 604 including hardware 640 comprising a set of one or more processor(s) 642 (which are often COTS processors) and network interface controller(s) 644 (NICs; also known as network interface cards) (which include physical NIs 646), as well as non-transitory machine readable storage media 648 having stored therein centralized control plane (CCP) software 650.
  • the processor(s) 642 typically execute software to instantiate a virtualization layer 654 (e.g., in one embodiment the virtualization layer 654 represents the kernel of an operating system (or a shim executing on a base operating system) that allows for the creation of multiple instances 662A-R called software containers (representing separate user spaces and also called virtualization engines, virtual private servers, or jails) that may each be used to execute a set of one or more applications; in another embodiment the virtualization layer 654 represents a hypervisor (sometimes referred to as a virtual machine monitor (VMM)) or a hypervisor executing on top of a host operating system, and an application is run on top of a guest operating system within an instance 662A-R called a virtual machine (which in some cases may be considered a tightly isolated form of software container) that is run by the hypervisor; in another embodiment, an application is implemented as a unikernel, which can be generated by compiling directly with an application only a limited set of libraries that provide the particular OS services needed by the application)
  • In some embodiments, an instance of the CCP software 650 (illustrated as CCP instance 676A) is executed (e.g., within the instance 662A) on the virtualization layer 654.
  • In other embodiments, the CCP instance 676A is executed, as a unikernel or on top of a host operating system, on the "bare metal" general purpose control plane device 604.
  • the instantiation of the CCP instance 676A, as well as the virtualization layer 654 and instances 662A-R if implemented, are collectively referred to as software instance(s) 652.
  • the CCP instance 676A includes a network controller instance 678.
  • the network controller instance 678 includes a centralized reachability and forwarding information module instance 679 (which is a middleware layer providing the context of the network controller 578 to the operating system and communicating with the various NEs), and a CCP application layer 680 (sometimes referred to as an application layer) over the middleware layer (providing the intelligence required for various network operations such as protocols, network situational awareness, and user interfaces).
  • this CCP application layer 680 within the centralized control plane 576 works with virtual network view(s) (logical view(s) of the network) and the middleware layer provides the conversion from the virtual networks to the physical view.
  • the centralized control plane 576 transmits relevant messages to the data plane 580 based on CCP application layer 680 calculations and middleware layer mapping for each flow.
  • a flow may be defined as a set of packets whose headers match a given pattern of bits; in this sense, traditional IP forwarding is also flow-based forwarding where the flows are defined by the destination IP address for example; however, in other implementations, the given pattern of bits used for a flow definition may include more fields (e.g., 10 or more) in the packet headers.
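The notion of a flow as "packets whose headers match a given pattern of bits" can be sketched with a simple field-level matcher; here `None` plays the role of a wildcarded field, and all field names are illustrative.

```python
# Hypothetical sketch: a flow defined as packets whose header fields
# match a pattern; None wildcards a field (matches any value).

def matches(flow_pattern, packet_headers):
    return all(v is None or packet_headers.get(k) == v
               for k, v in flow_pattern.items())

# Traditional IP forwarding as a degenerate flow: match on dst IP only.
ip_flow = {"dst_ip": "10.1.1.1", "src_ip": None, "tcp_dst": None}
pkt = {"dst_ip": "10.1.1.1", "src_ip": "10.2.2.2", "tcp_dst": 80}
# A richer flow definition simply adds more non-wildcard fields.
five_tuple_flow = {"dst_ip": "10.1.1.1", "src_ip": "10.2.2.2", "tcp_dst": 443}
```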
  • Different NDs/NEs/VNEs of the data plane 580 may receive different messages, and thus different forwarding information.
  • the data plane 580 processes these messages and programs the appropriate flow information and corresponding actions in the forwarding tables (sometime referred to as flow tables) of the appropriate NE/VNEs, and then the NEs/VNEs map incoming packets to flows represented in the forwarding tables and forward packets based on the matches in the forwarding tables.
  • Standards such as OpenFlow define the protocols used for the messages, as well as a model for processing the packets.
  • the model for processing packets includes header parsing, packet classification, and making forwarding decisions. Header parsing describes how to interpret a packet based upon a well-known set of protocols. Some protocol fields are used to build a match structure (or key) that will be used in packet classification (e.g., a first key field could be a source media access control (MAC) address, and a second key field could be a destination MAC address).
  • Packet classification involves executing a lookup in memory to classify the packet by determining which entry (also referred to as a forwarding table entry or flow entry) in the forwarding tables best matches the packet based upon the match structure, or key, of the forwarding table entries. It is possible that many flows represented in the forwarding table entries can correspond/match to a packet; in this case the system is typically configured to determine one forwarding table entry from the many according to a defined scheme (e.g., selecting a first forwarding table entry that is matched).
  • Forwarding table entries include both a specific set of match criteria (a set of values or wildcards, or an indication of what portions of a packet should be compared to a particular value/values/wildcards, as defined by the matching capabilities - for specific fields in the packet header, or for some other packet content), and a set of one or more actions for the data plane to take on receiving a matching packet. For example, an action may be to push a header onto the packet, forward the packet using a particular port, flood the packet, or simply drop the packet.
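The match-criteria-plus-actions structure of a forwarding table entry, and the "select a first matching entry" scheme mentioned above, can be sketched as below; the entry layout and action strings are illustrative assumptions, not an actual switch API.

```python
# Hypothetical sketch of forwarding-table lookup: each entry pairs match
# criteria (None = wildcard) with a list of actions; the first matching
# entry in table order wins.

def classify(table, pkt):
    for entry in table:
        if all(v is None or pkt.get(k) == v for k, v in entry["match"].items()):
            return entry["actions"]
    return ["punt_to_controller"]  # no entry matched: treat as a miss

table = [
    {"match": {"dst_mac": "aa:bb:cc:00:00:01"}, "actions": ["output:port2"]},
    {"match": {"dst_mac": None}, "actions": ["flood"]},  # catch-all entry
]
```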
  • When an unknown packet (for example, a "missed packet" or a "match-miss" as used in OpenFlow parlance) arrives at the data plane 580, the packet (or a subset of the packet header and content) is typically forwarded to the centralized control plane 576.
  • the centralized control plane 576 will then program forwarding table entries into the data plane 580 to accommodate packets belonging to the flow of the unknown packet. Once a specific forwarding table entry has been programmed into the data plane 580 by the centralized control plane 576, the next packet with matching credentials will match that forwarding table entry and take the set of actions associated with that matched entry.
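The punt-and-program cycle described above can be sketched as follows; the flow key (destination IP) and the controller callback are simplifying assumptions for illustration.

```python
# Hypothetical sketch of match-miss handling: the first packet of an
# unknown flow is punted to the centralized control plane, which returns
# an action that is programmed into the data plane's flow table so that
# subsequent packets of the flow are handled without the controller.

class DataPlane:
    def __init__(self, controller):
        self.flow_table = {}       # flow key -> action
        self.controller = controller

    def handle(self, pkt):
        key = pkt["dst_ip"]
        if key not in self.flow_table:
            action = self.controller(pkt)   # punt the miss
            self.flow_table[key] = action   # program the returned entry
        return self.flow_table[key]

punts = []
def controller(pkt):
    punts.append(pkt["dst_ip"])
    return "output:port1"

dp = DataPlane(controller)
dp.handle({"dst_ip": "10.0.0.5"})  # miss: punted and programmed
dp.handle({"dst_ip": "10.0.0.5"})  # hit: forwarded in the data plane
```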
  • a network interface may be physical or virtual; and in the context of IP, an interface address is an IP address assigned to a NI, be it a physical NI or virtual NI.
  • a virtual NI may be associated with a physical NI, with another virtual interface, or stand on its own (e.g., a loopback interface, a point-to-point protocol interface).
  • a loopback interface (and its loopback address) is a specific type of virtual NI (and IP address) of a NE/VNE (physical or virtual) often used for management purposes; where such an IP address is referred to as the nodal loopback address.
  • The IP address(es) assigned to the NI(s) of a ND are referred to as IP addresses of that ND; at a more granular level, the IP address(es) assigned to NI(s) assigned to a NE/VNE implemented on a ND can be referred to as IP addresses of that NE/VNE.
  • Some NDs include functionality for authentication, authorization, and accounting (AAA) protocols (e.g., RADIUS (Remote Authentication Dial-In User Service), Diameter, and/or TACACS+ (Terminal Access Controller Access Control System Plus)).
  • AAA can be provided through a client/server model, where the AAA client is implemented on a ND and the AAA server can be implemented either locally on the ND or on a remote electronic device coupled with the ND.
  • Authentication is the process of identifying and verifying a subscriber. For instance, a subscriber might be identified by a combination of a username and a password or through a unique key.
  • Authorization determines what a subscriber can do after being authenticated, such as gaining access to certain electronic device information resources (e.g., through the use of access control policies). Accounting is recording user activity.
  • end user devices may be coupled (e.g., through an access network) through an edge ND (supporting AAA processing) coupled to core NDs coupled to electronic devices implementing servers of service/content providers.
  • AAA processing is performed to identify for a subscriber the subscriber record stored in the AAA server for that subscriber.
  • a subscriber record includes a set of attributes (e.g., subscriber name, password, authentication information, access control information, rate-limiting information, policing information) used during processing of that subscriber's traffic.
  • Certain NDs internally represent end user devices (or sometimes customer premise equipment (CPE) such as a residential gateway (e.g., a router, modem)) using subscriber circuits.
  • a subscriber circuit uniquely identifies within the ND a subscriber session and typically exists for the lifetime of the session.
  • a ND typically allocates a subscriber circuit when the subscriber connects to that ND, and correspondingly deallocates that subscriber circuit when that subscriber disconnects.
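The allocate-on-connect, deallocate-on-disconnect lifecycle of subscriber circuits can be sketched as a minimal model; the circuit identifiers and class names here are hypothetical.

```python
# Hypothetical sketch: a ND allocating a subscriber circuit (a unique
# per-session identifier) on connect and deallocating it on disconnect.
import itertools

class NetworkDevice:
    def __init__(self):
        self._ids = itertools.count(1)
        self.circuits = {}            # circuit id -> subscriber

    def connect(self, subscriber):
        cid = next(self._ids)         # circuit exists for the session lifetime
        self.circuits[cid] = subscriber
        return cid

    def disconnect(self, cid):
        del self.circuits[cid]        # circuit is deallocated with the session

nd = NetworkDevice()
cid = nd.connect("subscriber-A")
```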
  • Each subscriber session represents a distinguishable flow of packets communicated between the ND and an end user device (or sometimes CPE such as a residential gateway or modem) using a protocol, such as the point-to-point protocol over another protocol (PPPoX) (e.g., where X is Ethernet or Asynchronous Transfer Mode (ATM)), Ethernet, 802.1Q Virtual LAN (VLAN), Internet Protocol, or ATM.
  • a subscriber session can be initiated using a variety of mechanisms (e.g., manual provisioning, dynamic host configuration protocol (DHCP), DHCP/client-less internet protocol service (CLIPS), or Media Access Control (MAC) address tracking).
  • When DHCP is used (e.g., for cable modem services), a username typically is not provided; but in such situations other information (e.g., information that includes the MAC address of the hardware in the end user device (or CPE)) is provided.
  • Each VNE (e.g., a virtual router or a virtual bridge, which may act as a virtual switch instance in a Virtual Private LAN Service (VPLS)) is typically independently administrable.
  • each of the virtual routers may share system resources but is separate from the other virtual routers regarding its management domain, AAA (authentication, authorization, and accounting) name space, IP address, and routing database(s).
  • Multiple VNEs may be employed in an edge ND to provide direct network access and/or different classes of services for subscribers of service and/or content providers.
  • interfaces that are independent of physical NIs may be configured as part of the VNEs to provide higher-layer protocol and service information (e.g., Layer 3 addressing).
  • the subscriber records in the AAA server identify, in addition to the other subscriber configuration requirements, to which context (e.g., which of the VNEs/NEs) the corresponding subscribers should be bound within the ND.
  • a binding forms an association between a physical entity (e.g., physical NI, channel) or a logical entity (e.g., circuit such as a subscriber circuit or logical circuit (a set of one or more subscriber circuits)) and a context's interface over which network protocols (e.g., routing protocols, bridging protocols) are configured for that context. Subscriber data flows on the physical entity when some higher- layer protocol interface is configured and associated with that physical entity.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

Methods and apparatuses are described for enabling stateless multicast routing in low power and lossy networks (LLNs). A first network device of an LLN receives a Destination Advertisement Object (DAO) message from a second network device of a plurality of other network devices of the LLN. The received DAO message includes a Bit Index Explicit Replication (BIER) header field that includes an identifier of a multicast domain. The first network device updates a forwarding table entry including an identifier of the second network device from which the DAO message is received, where the forwarding table entry causes the first network device to forward, toward the second network device, BIER-encapsulated packets of the multicast domain that are to be forwarded by the second network device on behalf of at least one receiver of the multicast domain.
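The forwarding-table update summarized in the abstract can be sketched as follows. This is a simplified illustration under assumed names and an assumed DAO layout, not the claimed message encoding: a parent records, per multicast domain, which BIER bit positions (egress routers) are reachable via each advertising child, and replicates a BIER-encapsulated packet only toward children whose recorded bitmask intersects the packet's bitstring.

```python
# Hypothetical sketch of the abstract's scheme: DAO messages carrying a
# BIER field populate a forwarding table; BIER packets are replicated
# only toward children serving bits set in the packet's bitstring.

forwarding_table = {}  # (multicast_domain, child) -> bitmask of reachable egress routers

def handle_dao(child, dao):
    """dao: {'domain': id, 'bitmask': int} taken from the BIER header field."""
    key = (dao["domain"], child)
    forwarding_table[key] = forwarding_table.get(key, 0) | dao["bitmask"]

def forward_bier_packet(domain, bitstring):
    """Return the children that should receive a copy of the packet."""
    return [child for (dom, child), mask in forwarding_table.items()
            if dom == domain and (mask & bitstring)]

handle_dao("nodeB", {"domain": 1, "bitmask": 0b0011})  # receivers 1,2 via B
handle_dao("nodeC", {"domain": 1, "bitmask": 0b0100})  # receiver 3 via C
```

Note that no per-flow multicast state is kept: the only state is the per-child bitmask, which is why the scheme is described as stateless.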
PCT/IB2016/050918 2016-02-19 2016-02-19 Stateless multicast protocol for low power and lossy networks WO2017141076A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/IB2016/050918 WO2017141076A1 (fr) 2016-02-19 2016-02-19 Stateless multicast protocol for low power and lossy networks

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/IB2016/050918 WO2017141076A1 (fr) 2016-02-19 2016-02-19 Stateless multicast protocol for low power and lossy networks

Publications (1)

Publication Number Publication Date
WO2017141076A1 true WO2017141076A1 (fr) 2017-08-24

Family

ID=55446841

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2016/050918 WO2017141076A1 (fr) 2016-02-19 2016-02-19 Stateless multicast protocol for low power and lossy networks

Country Status (1)

Country Link
WO (1) WO2017141076A1 (fr)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109561022A (zh) * 2017-09-27 2019-04-02 Huawei Technologies Co., Ltd. Multicast forwarding method and multicast router
CN110233709A (zh) * 2019-06-11 2019-09-13 China Southern Power Grid Science Research Institute Co., Ltd. RPL routing method and related apparatus
US10469278B2 (en) * 2017-10-24 2019-11-05 Cisco Technology, Inc. Method and device for multicast content delivery
EP3585016A1 (fr) * 2018-06-19 2019-12-25 Juniper Networks, Inc. Forwarding multicast data packets using bit index explicit replication (BIER) for BIER-incapable network devices
US10644900B2 (en) 2018-06-19 2020-05-05 Juniper Networks, Inc. Forwarding multicast data packets using bit index explicit replication (BIER) for BIER-incapable network devices
EP3641353A4 (fr) * 2017-07-11 2020-06-10 Huawei Technologies Co., Ltd. Multicast forwarding method and related device
US20210176172A1 (en) * 2017-05-23 2021-06-10 Zte Corporation Packet forwarding method, device and apparatus, and storage medium
CN113364695A (zh) * 2020-03-06 2021-09-07 FiberHome Telecommunication Technologies Co., Ltd. Unicast transmission method and system for BIER multicast within a BIER domain
CN113841363A (zh) * 2019-03-28 2021-12-24 Landis+Gyr Innovations, Inc. Systems and methods for establishing communication links between networks and devices with different routing protocols
US11349807B2 (en) 2020-04-02 2022-05-31 Cisco Technology, Inc. Directed multicast based on multi-dimensional addressing relative to identifiable LLN properties
US11394567B2 (en) 2020-04-09 2022-07-19 Cisco Technology, Inc. Multicast-only thin DODAG in low power and lossy network
CN114928395A (zh) * 2022-05-07 2022-08-19 Peng Cheng Laboratory BIER-based integrated space-ground multicast network communication method and system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150078379A1 (en) * 2013-09-17 2015-03-19 Cisco Technology, Inc. Bit Indexed Explicit Replication Using Internet Protocol Version 6
US20150131659A1 (en) * 2013-09-17 2015-05-14 Cisco Technology, Inc. Bit Indexed Explicit Replication Forwarding Optimization
US20150304118A1 (en) * 2012-03-07 2015-10-22 Commissariat A L'energie Atomique Et Aux Ene Alt Method for preselecting a router in an rpl network

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150304118A1 (en) * 2012-03-07 2015-10-22 Commissariat A L'energie Atomique Et Aux Ene Alt Method for preselecting a router in an rpl network
US20150078379A1 (en) * 2013-09-17 2015-03-19 Cisco Technology, Inc. Bit Indexed Explicit Replication Using Internet Protocol Version 6
US20150131659A1 (en) * 2013-09-17 2015-05-14 Cisco Technology, Inc. Bit Indexed Explicit Replication Forwarding Optimization

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
IJ WIJNANDS ET AL: "Multicast using Bit Index Explicit Replication; draft-ietf-bier-architecture-03.txt", MULTICAST USING BIT INDEX EXPLICIT REPLICATION; DRAFT-IETF-BIER-ARCHITECTURE-03.TXT, INTERNET ENGINEERING TASK FORCE, IETF; STANDARDWORKINGDRAFT, INTERNET SOCIETY (ISOC) 4, RUE DES FALAISES CH- 1205 GENEVA, SWITZERLAND, 19 January 2016 (2016-01-19), pages 1 - 36, XP015110735 *
MARK TOWNSLEY: "MPLS over IP-Tunnels", 21 February 2005 (2005-02-21), XP055313735, Retrieved from the Internet <URL:https://www.apricot.net/apricot2005/slides/T5-1_1.pdf> [retrieved on 20161025] *

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210176172A1 (en) * 2017-05-23 2021-06-10 Zte Corporation Packet forwarding method, device and apparatus, and storage medium
EP3641353A4 (fr) * 2017-07-11 2020-06-10 Huawei Technologies Co., Ltd. Multicast forwarding method and related device
US11258698B2 (en) 2017-07-11 2022-02-22 Huawei Technologies Co., Ltd. Multicast forwarding method and related device
US11190367B2 (en) 2017-09-27 2021-11-30 Huawei Technologies Co., Ltd. Multicast forwarding method and multicast router
CN109561022B (zh) * 2017-09-27 2020-09-08 Huawei Technologies Co., Ltd. Multicast forwarding method and multicast router
CN109561022A (zh) * 2017-09-27 2019-04-02 Huawei Technologies Co., Ltd. Multicast forwarding method and multicast router
US10469278B2 (en) * 2017-10-24 2019-11-05 Cisco Technology, Inc. Method and device for multicast content delivery
US10644900B2 (en) 2018-06-19 2020-05-05 Juniper Networks, Inc. Forwarding multicast data packets using bit index explicit replication (BIER) for BIER-incapable network devices
EP3585016A1 (fr) * 2018-06-19 2019-12-25 Juniper Networks, Inc. Forwarding multicast data packets using bit index explicit replication (BIER) for BIER-incapable network devices
US10841111B2 (en) 2018-06-19 2020-11-17 Juniper Networks, Inc. Forwarding multicast data packets using bit index explicit replication (BIER) for BIER-incapable network devices
CN113841363B (zh) * 2019-03-28 2022-11-04 Landis+Gyr Innovations, Inc. Systems and methods for establishing communication between networks and devices with different routing protocols
CN113841363A (zh) * 2019-03-28 2021-12-24 Landis+Gyr Innovations, Inc. Systems and methods for establishing communication links between networks and devices with different routing protocols
CN110233709A (zh) * 2019-06-11 2019-09-13 China Southern Power Grid Science Research Institute Co., Ltd. RPL routing method and related apparatus
CN113364695B (zh) * 2020-03-06 2022-03-01 FiberHome Telecommunication Technologies Co., Ltd. Unicast transmission method and system for BIER multicast within a BIER domain
CN113364695A (zh) * 2020-03-06 2021-09-07 FiberHome Telecommunication Technologies Co., Ltd. Unicast transmission method and system for BIER multicast within a BIER domain
US11349807B2 (en) 2020-04-02 2022-05-31 Cisco Technology, Inc. Directed multicast based on multi-dimensional addressing relative to identifiable LLN properties
US11777900B2 (en) 2020-04-02 2023-10-03 Cisco Technology, Inc. Directed multicast based on multi-dimensional addressing relative to identifiable LLN properties
US11394567B2 (en) 2020-04-09 2022-07-19 Cisco Technology, Inc. Multicast-only thin DODAG in low power and lossy network
US11909543B2 (en) 2020-04-09 2024-02-20 Cisco Technology, Inc. Multicast-only thin DODAG in low power and lossy network
CN114928395A (en) * 2022-05-07 2022-08-19 Peng Cheng Laboratory BIER-based space-ground integrated multicast network communication method and system
CN114928395B (en) * 2022-05-07 2023-09-26 Peng Cheng Laboratory BIER-based space-ground integrated multicast network communication method and system

Similar Documents

Publication Publication Date Title
EP3497893B1 (en) Segment routing based on maximum segment identifier depth
EP3318024B1 (en) Using border gateway protocol to expose maximum segment identifier depth to an external application
CN109075984B (en) Multipoint-to-multipoint trees for computed SPRING multicast
EP3417580B1 (en) Techniques for exposing node and/or link segment identifier depth utilizing IS-IS
WO2017141076A1 (en) Stateless multicast protocol for low power and lossy networks
US11038791B2 (en) Techniques for exposing maximum node and/or link segment identifier depth utilizing OSPF
WO2017037615A1 (en) Method and apparatus for modifying forwarding states in a network device of a software defined network
US9774504B2 (en) Route refresh mechanism for border gateway protocol link state
WO2019030552A1 (en) Scalable network path tracing
CN108604997B (en) Control plane method and apparatus for configuring monitoring of differentiated services code point (DSCP) and explicit congestion notification (ECN)
EP3808031A1 (en) Robust node failure detection mechanism for SDN controller cluster
CN108604999B (en) Data plane method and apparatus for monitoring differentiated services code point (DSCP) and explicit congestion notification (ECN)
US20220141761A1 (en) Dynamic access network selection based on application orchestration information in an edge cloud system
US20220247679A1 (en) Method and apparatus for layer 2 route calculation in a route reflector network device
EP3987714A1 (en) Method and system for forwarding broadcast, unknown unicast, or multicast traffic for multiple Ethernet virtual private network (EVPN) instances (EVIs)
WO2017144946A1 (en) Method and apparatus to support legacy networks for computed SPRING multicast
WO2017144945A1 (en) Method and apparatus for multicast in a multi-area SPRING network
WO2020152691A1 (en) Internet protocol version 6 (IPv6) duplicate address detection in multiple networks using Ethernet virtual private network (EVPN)
US11876881B2 (en) Mechanism to enable third party services and applications discovery in distributed edge computing environment
WO2020100150A1 (en) Routing protocol blobs for efficient route computations and route downloads
US20240007388A1 (en) Smart local mesh networks
US11451637B2 (en) Method for migration of session accounting to a different stateful accounting peer

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 16707218; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 16707218; Country of ref document: EP; Kind code of ref document: A1