WO2018150222A1 - Internet protocol (IP) address allocation over virtual Layer 2 networks - Google Patents

Info

Publication number
WO2018150222A1
Authority
WO
WIPO (PCT)
Prior art keywords: address, local, NEs, allocation, domain
Prior art date
Application number
PCT/IB2017/050828
Other languages
English (en)
Inventor
Vyshakh Krishnan C H
Faseela K
Vinayak Joshi
Original Assignee
Telefonaktiebolaget Lm Ericsson (Publ)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Telefonaktiebolaget Lm Ericsson (Publ) filed Critical Telefonaktiebolaget Lm Ericsson (Publ)
Priority to PCT/IB2017/050828
Publication of WO2018150222A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00: Data switching networks
    • H04L 12/28: Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L 12/46: Interconnection of networks
    • H04L 12/4641: Virtual LANs, VLANs, e.g. virtual private networks [VPN]
    • H04L 61/00: Network arrangements, protocols or services for addressing or naming
    • H04L 61/50: Address allocation
    • H04L 61/5007: Internet protocol [IP] addresses
    • H04L 61/5014: Internet protocol [IP] addresses using dynamic host configuration protocol [DHCP] or bootstrap protocol [BOOTP]
    • H04L 61/5046: Resolving address allocation conflicts; Testing of addresses
    • H04L 61/5084: Providing for device mobility
    • H04L 61/09: Mapping addresses
    • H04L 61/10: Mapping addresses of different types
    • H04L 61/103: Mapping addresses of different types across network layers, e.g. resolution of network layer into physical layer addresses or address resolution protocol [ARP]

Definitions

  • Embodiments of the invention relate to the field of packet networks; and more specifically, to the Internet protocol (IP) address allocation over virtual Layer 2 networks spanning across multiple data centers.
  • a network controller which can be deployed as a cluster of server nodes, has the role of the control plane and is coupled to one or more network elements (NEs) that have the role of the data plane.
  • Each network element may be implemented on one or multiple network devices (NDs).
  • a network device (ND) is an electronic device that communicatively interconnects other electronic devices on the network (e.g., other network devices, end-user devices).
  • the control connection between the network controller and network elements is generally a TCP/UDP based communication.
  • the network controller communicates with the network elements using an SDN protocol (e.g., OpenFlow, I2RS, etc.).
  • a datacenter is a physical infrastructure for hosting compute nodes in a room or building, which can be subdivided into different PODs (Performance Optimized DCs).
  • Physical DCs could be divided into virtual DCs (e.g. VPODs).
  • the SDN controller manages and facilitates the connectivity within and between datacenters.
  • Each datacenter communicates with external networks and/or remote data centers via one (or more) gateway network element (which may be referred to as a Data Center - Gateway (DC-GW)).
  • the connectivity between datacenters is the connectivity between their respective DC-GWs.
  • Extending L2 domains across DCs has become a very common feature in cloud solutions.
  • Border Gateway Protocol (BGP)-based Virtual Private Networks (VPNs) are increasingly used in the industry.
  • a Layer 2 domain is distributed over multiple DCs via BGP Ethernet VPN (EVPN).
  • SDN controller acts as a BGP speaker in addition to programming the forwarding plane.
  • BGP-EVPN network connectivity across the DCs may also be established via non-centralized (e.g., distributed) control approaches, for example using virtual control network elements such as a virtual router (vRouter).
  • a distributed architecture is used in multi-DC networks. That is, a DC is self-contained with its own cloud orchestrator (e.g., OpenStack), its own network controller (e.g., SDN controller), its own IP address allocator (e.g., Dynamic Host Configuration Protocol (DHCP) module), etc.
  • the IP address allocator pre-determines the Internet Protocol (IP) address and Media Access Control (MAC) address of each NE. When an NE requests an IP address, this pre-determined IP address is allocated to the NE by the IP address allocator.
  • the IP address allocator can reside within the SDN controller, or inside a separate network device (e.g., within a separate Virtualized Network Function (VNF) or OpenStack instance itself). Therefore, the IP address allocator does not need to lie within the L2 domain of the NE requesting the IP address. In this case, proper forwarding plane programming ensures that DHCP messages are exchanged between the clients (i.e., NEs requesting the IP addresses) and the DHCP module.
  • the connectivity between the IP address allocator and the NEs requesting address allocation is enabled by a network controller (e.g., SDN controller).
  • IP addresses are assigned from a static pool of IP addresses.
  • statically assigning IP addresses in different DCs is not recommended as some DCs will starve for IP addresses and some will have a lot of unused IP addresses.
  • a centralized IP address allocator for all the DCs is used.
  • having a centralized IP address allocator would be slow, as an IP address request would have to reach the centralized IP address allocator and return to the DC.
  • the IP address allocator would have to serve a higher quantity of requests (as a result of serving requests from multiple DCs).
  • This approach faces reliability issues, as a failure of the centralized IP address allocator would cause all the DCs to starve for IP address allocation (i.e., there is a single point of failure).
  • the IP address allocator itself can attempt to send out Address Resolution Protocol (ARP) probes to verify duplicate IP address allocation.
  • the NE that requested the IP address might time out before the IP address allocator receives a confirmation on the ARP probe.
  • NEs of a DC are virtual network elements (vNEs) (e.g., virtual machines (VMs), containers)
  • non-live migration is, in practice, a case of inter-DC NE migration.
  • When a NE migrates across DCs in a non-live manner, after coming up in the new DC it sends out an IP address allocation request. To retain the old IP address, the IP address allocator has to allocate the same IP address. Current IP address allocation approaches do not provide a solution for this scenario.
  • One general aspect includes a method for allocating an internet protocol (IP) address to a network element, the method including: learning a first IP address allocated to a remote network element (NE) by a remote address allocator as a result of a receipt of an advertisement message indicating a layer 2 (L2) route towards the remote NE within an L2 domain; removing the first IP address from a set of IP addresses available for allocation to local NEs, where the local NEs are part of the L2 domain; receiving a request for IP address allocation from a local NE that is part of the L2 domain; and allocating to the local NE a second IP address from the set of IP addresses available for allocation to local NEs, where the set of IP addresses available for allocation to local NEs does not include the first IP address.
  • One general aspect includes a network device for allocating an internet protocol (IP) address to a network element.
  • the network device including: a non-transitory computer readable medium to store instructions; and a processor coupled with the non-transitory computer readable medium to process the stored instructions to learn a first IP address allocated to a remote network element (NE) by a remote address allocator as a result of a receipt of an advertisement message indicating a layer 2 (L2) route towards the remote NE within an L2 domain.
  • the network device is further to remove the first IP address from a set of IP addresses available for allocation to local NEs, where the local NEs are part of the L2 domain.
  • the network device is further to receive a request for IP address allocation from a local NE that is part of the L2 domain; and to allocate to the local NE a second IP address from the set of IP addresses available for allocation to local NEs, where the set of IP addresses available for allocation to local NEs does not include the first IP address.
  • One general aspect includes a non-transitory computer readable storage medium that provide instructions, which when executed by a processor of a network device, cause said processor to perform operations including: learning a first internet protocol (IP) address allocated to a remote network element (NE) by a remote address allocator as a result of a receipt of an advertisement message indicating a layer 2 (L2) route towards the remote NE within an L2 domain; removing the first IP address from a set of IP addresses available for allocation to local NEs, where the local NEs are part of the L2 domain; receiving a request for IP address allocation from a local NE that is part of the L2 domain; and allocating to the local NE a second IP address from the set of IP addresses available for allocation to local NEs, where the set of IP addresses available for allocation to local NEs does not include the first IP address.
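  • The following is a minimal Python sketch, not part of the patent text, of the allocation logic summarized by the general aspects above: a per-L2-domain allocator keeps a pool of addresses that are free for local allocation and a MAC-keyed record of allocations, withdraws from the pool any address learnt from a remote advertisement, and serves local requests only from the addresses that remain. The class and method names are illustrative assumptions.

```python
from ipaddress import IPv4Address


class L2DomainAllocator:
    """Per-L2-domain IP address allocator (illustrative; names are assumptions)."""

    def __init__(self, pool):
        # Addresses still free for allocation to local NEs of this L2 domain.
        self.available = {IPv4Address(ip) for ip in pool}
        # MAC address -> IP address already bound to it, whether the binding
        # was made locally or learnt from a remote DC's advertisement.
        self.allocated = {}

    def learn_remote(self, mac, ip):
        """Record an address advertised for a remote NE (e.g., via a BGP EVPN
        RT-2 route) so it is never handed out to a local NE of the same domain."""
        ip = IPv4Address(ip)
        self.available.discard(ip)
        self.allocated.setdefault(mac, ip)

    def allocate(self, mac):
        """Serve a local allocation request. A MAC that already has a binding
        (e.g., a NE that migrated in from another DC) gets the same IP back;
        otherwise a free address is taken from the pool."""
        if mac in self.allocated:
            return self.allocated[mac]
        ip = min(self.available)  # deterministic pick; any policy would do
        self.available.remove(ip)
        self.allocated[mac] = ip
        return ip
```

  • The deterministic pick (here, the numerically lowest free address) only keeps the later examples reproducible; any selection policy over the remaining pool would satisfy the method described above.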
  • Figure 1A illustrates an exemplary block diagram of a network enabling IP address allocation to network elements of a Layer 2 domain spanning across multiple data centers, according to some embodiments of the invention.
  • Figure 1B illustrates an exemplary block diagram of a network enabling IP address allocation to network elements of a Layer 2 domain spanning across multiple data centers, according to some embodiments of the invention.
  • Figure 2A illustrates a block diagram of exemplary operations performed in a network when duplicate IP address allocation occurs, according to some embodiments of the invention.
  • Figure 2B illustrates a block diagram of exemplary operations performed in a network when duplicate IP address allocation occurs, according to some embodiments of the invention.
  • Figure 3A illustrates a flow diagram of exemplary operations performed in an IP address allocator for enabling IP address allocation to network elements of a Layer 2 domain spanning across multiple data centers, according to some embodiments of the invention.
  • Figure 3B illustrates a flow diagram of exemplary operations for enabling IP address allocation to network elements of a Layer 2 domain spanning across multiple data centers, according to some embodiments of the invention.
  • Figure 3C illustrates a flow diagram of exemplary operations for avoiding the occurrence of duplicate IP address allocation, in accordance with some embodiments.
  • Figure 4A illustrates connectivity between network devices (NDs) within an exemplary network, as well as three exemplary implementations of the NDs, according to some embodiments of the invention.
  • Figure 4B illustrates an exemplary way to implement a special-purpose network device according to some embodiments of the invention.
  • Figure 4C illustrates various exemplary ways in which virtual network elements (VNEs) may be coupled according to some embodiments of the invention.
  • Figure 4D illustrates a network with a single network element (NE) on each of the NDs, and within this straightforward approach contrasts a traditional distributed approach (commonly used by traditional routers) with a centralized approach for maintaining reachability and forwarding information (also called network control), according to some embodiments of the invention.
  • Figure 4E illustrates the simple case of where each of the NDs implements a single NE, but a centralized control plane has abstracted multiple of the NEs in different NDs into (to represent) a single NE in one of the virtual network(s), according to some embodiments of the invention.
  • Figure 4F illustrates a case where multiple VNEs are implemented on different NDs and are coupled to each other, and where a centralized control plane has abstracted these multiple VNEs such that they appear as a single VNE within one of the virtual networks, according to some embodiments of the invention.
  • Figure 5 illustrates a general purpose control plane device with centralized control plane (CCP) software 550, according to some embodiments of the invention.
  • references in the specification to "one embodiment,” “an embodiment,” “an example embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to affect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
  • Bracketed text and blocks with dashed borders may be used herein to illustrate optional operations that add additional features to embodiments of the invention. However, such notation should not be taken to mean that these are the only options or optional operations, and/or that blocks with solid borders are not optional in certain embodiments of the invention.
  • Coupled is used to indicate that two or more elements, which may or may not be in direct physical or electrical contact with each other, co-operate or interact with each other.
  • Connected is used to indicate the establishment of communication between two or more elements that are coupled with each other.
  • An electronic device stores and transmits (internally and/or with other electronic devices over a network) code (which is composed of software instructions and which is sometimes referred to as computer program code or a computer program) and/or data using machine-readable media (also called computer-readable media), such as machine-readable storage media (e.g., magnetic disks, optical disks, solid state drives, read only memory (ROM), flash memory devices, phase change memory) and machine-readable transmission media (also called a carrier) (e.g., electrical, optical, radio, acoustical or other form of propagated signals - such as carrier waves, infrared signals).
  • An electronic device (e.g., a computer) includes hardware and software, such as a set of one or more processors (e.g., a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application specific integrated circuit, field programmable gate array, other electronic circuitry, or a combination of one or more of the preceding) coupled to one or more machine-readable storage media to store code for execution on the set of processors and/or to store data.
  • an electronic device may include non-volatile memory containing the code since the non-volatile memory can persist code/data even when the electronic device is turned off (when power is removed), and while the electronic device is turned on that part of the code that is to be executed by the processor(s) of that electronic device is typically copied from the slower nonvolatile memory into volatile memory (e.g., dynamic random access memory (DRAM), static random access memory (SRAM)) of that electronic device.
  • Typical electronic devices also include a set of one or more physical network interface(s) (NI(s)) to establish network connections (to transmit and/or receive code and/or data using propagating signals) with other electronic devices.
  • a physical NI may comprise radio circuitry capable of receiving data from other electronic devices over a wireless connection and/or sending data out to other devices via a wireless connection.
  • This radio circuitry may include transmitter(s), receiver(s), and/or transceiver(s) suitable for radiofrequency communication.
  • the radio circuitry may convert digital data into a radio signal having the appropriate parameters (e.g., frequency, timing, channel, bandwidth, etc.). The radio signal may then be transmitted via antennas to the appropriate recipient(s).
  • the set of physical NI(s) may comprise network interface controller(s) (NICs), also known as a network interface card, network adapter, or local area network (LAN) adapter.
  • the NIC(s) may facilitate in connecting the electronic device to other electronic devices allowing them to communicate via wire through plugging in a cable to a physical port connected to a NIC.
  • One or more parts of an embodiment of the invention may be implemented using different combinations of software, firmware, and/or hardware.
  • a network device is an electronic device that communicatively interconnects other electronic devices on the network (e.g., other network devices, end-user devices).
  • Some network devices are "multiple services network devices" that provide support for multiple networking functions (e.g., routing, bridging, switching, Layer 2 aggregation, session border control, Quality of Service, and/or subscriber management), and/or provide support for multiple application services (e.g., data, voice, and video).
  • Overlay based connectivity is increasingly used in data centers. For example, overlay tunnels are established between the DC-GW and network elements of a data center.
  • the DC-GW is a network device that connects the compute nodes (i.e., network elements) of the data center (e.g., telco clusters, datacenters, server farms) with external networks.
  • a routing and reachability protocol is used to announce and receive prefixes of the network devices independently of the underlay network.
  • Border Gateway Protocol (BGP) Ethernet Virtual Private Network (EVPN) can be used to announce routes in the overlay network and enable tenant separation in addition to providing external connectivity to compute host of a data center.
  • BGP can be hosted on a Software Defined Networking (SDN) controller, or alternatively can be run inside a Virtual Router route processor (RP).
  • BGP-EVPN is used for extending L2 domains in a cloud outside the data center (across data centers, into enterprise premises etc.).
  • Multiprotocol Label Switching (MPLS) or VXLAN tunneling can be used for EVPN in the data plane.
  • Embodiments of the present invention disclose methods and apparatuses for allocating an Internet Protocol (IP) address to a network element that is part of a virtual Layer 2 domain which spans across multiple data centers.
  • the first IP address is learnt at a local IP address allocator of a second data center as a result of the receipt of the advertisement message indicating the L2 route towards the remote NE within the L2 domain.
  • Upon learning the first IP address, the local IP address allocator removes the first IP address from a set of IP addresses available for allocation to local NEs, where the local NEs are part of the L2 domain. Upon receiving a request for IP address allocation from a local NE that is part of the L2 domain, the local IP address allocator allocates a second IP address from the set of IP addresses available for allocation to local NEs, where the set of IP addresses available for allocation to local NEs does not include the first IP address.
  • A reachability and forwarding protocol (e.g., BGP) is used to advertise Layer 2 routes for network elements of a Layer 2 domain which spans across multiple data centers.
  • a network controller and a DC-GW of respective data centers act as BGP speakers, exchanging Multi-Protocol (MP)-BGP messages carrying EVPN routes.
  • the advertisement message is a BGP-EVPN Route of Type 2 (i.e., BGP EVPN RT-2), which may be referred to as a MAC/IP Advertisement Route.
  • the IP/MAC addresses learnt as a result of the receipt of the advertisement message are shared with the local IP address allocator of a DC and taken into consideration when an IP address request is received from a local NE.
  • the embodiments described herein present clear advantages with respect to prior IP address allocation approaches.
  • the solution enables a dynamic allocation of IP addresses across sites.
  • the solution avoids IP address conflicts across data centers in a distributed L2 domain.
  • the embodiments allow for a smooth migration of NEs across DCs by maintaining the allocation of the same IP address to a migrating NE.
  • the embodiments of the present invention make use of existing control plane infrastructure (e.g., BGP-EVPN routes of Type 2) without introducing any overhead.
  • Figure 1A illustrates an exemplary block diagram of a network enabling IP address allocation to network elements of a Layer 2 domain spanning across multiple data centers.
  • the network of Figure 1A includes a first network 103, coupled with a second network 109 through a third network 107.
  • the network 103 includes network element NE 111, and one or more NEs 101 A-N.
  • the network 103 includes a network controller 121 and an IP address allocator 131.
  • the network controller 121 is communicatively coupled with the NEs 101 A-N and NE 111, and further communicatively coupled with the IP address allocator 131.
  • Each one of the NEs 101A-N may be coupled with NE 111 through a virtual Layer 2 tunnel (e.g., VXLAN tunnel) implemented over an underlay network.
  • the underlay network can be an Internet Protocol (IP) Layer 3 protocol.
  • Each one of the NEs 101 A-N, and NE 111 can be implemented as described in further details below with reference to Figures 4A-F.
  • NE 101A may be one of multiple network elements (e.g., hosts) of a data center (DC1).
  • each NE may be a virtual machine (VM) coupled with a virtual switch executed on a physical network device, which is in communication with a DC-GW (NE 111) of the first data center.
  • NE 111 is a termination node of the virtual Layer 2 tunnel (e.g., it may be a termination node of a VXLAN tunnel and can be referred to as a VXLAN Tunnel End Point (VTEP)).
  • the NE 111 maps VLANs to VXLANs and handles the VXLAN encapsulation and decapsulation so that the non-virtualized resources do not need to support the VXLAN protocol.
  • NE 111 can be implemented as a gateway network device coupling each network element (e.g., host) of DC1 with external networks (e.g., other data centers, enterprise networks, VLANs, Internet, etc.) through an IP/MPLS network 107.
  • the NE 111 functions as a data center gateway— providing the interface to the IP/MPLS WAN for interworking Layer 2 and Layer 3 VPN services to remote centers and branch locations. These services provide seamless connectivity between multiple data centers on the same or different IP subnets using the same or different Layer 2 or Layer 3 encapsulation mechanisms. It also enables full integration of data center and VPN services for seamless connectivity between data center and branch locations.
  • NE 111 can be a Top of the Rack (ToR)/access switch or a switch higher up in the topology of data center, DC1 (e.g., it can be a core or Wide Area Network (WAN) edge device).
  • NE 111 can be a provider edge (PE) router that terminates VXLAN tunnels of a hybrid cloud environment.
  • the network 109 includes network element NE 112, and one or more NEs 102A-M.
  • the network 109 includes a network controller 122 and an IP address allocator 132.
  • the network controller 122 is communicatively coupled with the NEs 102A-M and NE 112, and further communicatively coupled with the IP address allocator 132.
  • Each one of the NEs 102A-M may be coupled with NE 112 through a virtual Layer 2 tunnel (e.g., VXLAN tunnel) implemented over an underlay network.
  • the underlay network can be an Internet Protocol (IP) Layer 3 protocol.
  • NE 102A may be one of multiple network elements (e.g., hosts) of a second data center (DC2).
  • each NE may be a virtual machine coupled with a virtual switch in communication with a DC-GW (NE 112) of the second data center.
  • NE 112 is a termination node of the virtual Layer 2 tunnel (e.g., it may be a termination node of a VXLAN tunnel and can be referred to as a VXLAN Tunnel End Point (VTEP)).
  • the NE 112 maps VLANs to VXLANs and handles the VXLAN encapsulation and decapsulation so that the non-virtualized resources do not need to support the VXLAN protocol.
  • NE 112 can be implemented as a gateway network device coupling each network element (e.g., host) of DC2 with external networks (e.g., other data centers (e.g., DCl), enterprise networks, VLANs, Internet, etc.) through an IP/MPLS network 107.
  • the NE 112 functions as a data center gateway— providing the interface to the IP/MPLS WAN for interworking Layer 2 and Layer 3 VPN services to remote centers and branch locations. These services provide seamless connectivity between multiple data centers on the same or different IP subnets using the same or different Layer 2 or Layer 3 encapsulation mechanisms. It also enables full integration of data center and VPN services for seamless connectivity between data center and branch locations.
  • NE 112 can be a Top of the Rack (ToR)/access switch or a switch higher up in the topology of data center, DC2 (e.g., it can be a core or Wide Area Network (WAN) edge device).
  • NE 112 can be a provider edge (PE) router that terminates VXLAN tunnels of a hybrid cloud environment.
  • the network controller (e.g., network controllers 121 and 122) is a centralized control plane, which has the responsibility for the generation of reachability and forwarding information (sometimes referred to as a SDN control module, controller, network controller, OpenFlow controller, SDN controller, control plane node, network virtualization authority, or management control entity), and thus enables the process of neighbor discovery and topology discovery in a centralized manner.
  • the network controller has a south bound interface with a data plane (sometimes referred to as the infrastructure layer, network forwarding plane, or forwarding plane) that includes the NEs (sometimes referred to as switches, forwarding elements, data plane elements, or nodes) (i.e., NEs 101 A-N and NE 111 for network controller 121 ; NEs 102A-M and NE 112 for network controller 122).
  • This network controller is operative to act as a speaker of a reachability and forwarding protocol (such as BGP).
  • the control plane (e.g., network controllers 121 and 122) may be implemented in a distributed manner and/or in a hybrid manner, as will be described in further detail below.
  • While the embodiments below illustrate a first set of NEs 101A-N and a second set of NEs 102A-M, any number of NEs can be included in either one of the network 103 and the network 109.
  • Each one of the IP address allocators 131 and 132 is operative to receive and respond to IP address requests from the NEs of the network 103 and network 109 respectively.
  • Each one of the IP address allocators can be included within a respective network controller (121, 122).
  • each IP address allocator may be implemented within a separate network device communicatively coupled with a respective one of the network controllers.
  • the network 103 is a first data center (DC1) coupled with the second data center (DC2) through an IP/MPLS network.
  • the network 103 includes NE 101A which is part of the same Virtual Layer 2 domain 114 (e.g., the same EVPN instance) as NE 102A from network 109.
  • IP address allocator 131 receives an IP address request from NE 101A.
  • NE 101A is associated with a Layer 2 address uniquely identifying the NE in the Layer 2 domain.
  • the NE 101A may be associated with a MAC address aa:aa:aa:aa:aa:aa; any address (e.g., IP address and/or MAC address) used in these examples is exemplary.
  • the IP address allocator determines, at operation 2a) whether the Layer 2 address is already associated with an IP address that was previously allocated to a network element within the given L2 domain.
  • Each one of the IP address allocators 131 and 132 maintains a database of already allocated IP addresses.
  • Each IP address allocated to a network element is stored in the database with a corresponding Layer 2 address of the network element and an identification of the Layer 2 domain (e.g., Route Distinguisher (RD) of an EVPN instance (EVI)) to which the NE belongs.
  • the IP address may further be associated with an indication of whether the NE is a local or a remote NE (i.e., a local NE belongs to the network that the IP address allocator serves; a remote NE belongs to a network that is served by a remote IP address allocator).
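  • One possible shape for a record in that database is sketched below for illustration; the field names are assumptions rather than the patent's terminology:

```python
from dataclasses import dataclass


@dataclass
class AllocationRecord:
    """One row of the allocator's database of already allocated IP addresses."""
    ip_address: str    # the allocated IP address
    mac_address: str   # Layer 2 address of the NE holding the address
    l2_domain_id: str  # e.g., the Route Distinguisher (RD) of the EVPN instance (EVI)
    is_local: bool     # True if the NE is served by this allocator, False if learnt remotely
```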
  • the IP address allocator 131 looks up the database of IP addresses already allocated and may determine whether the Layer 2 address of NE 101A is associated with an IP address already allocated. When it is determined that the NE 101A has previously been allocated an IP address at operation 2c) (i.e., determining that the Layer 2 address is associated with an allocated IP address), this same IP address is allocated to the NE 101A. This may, for example, be a case of a VNE migration from a data center (e.g., DC2) to the first data center DC1.
  • the IP address allocator is operative to allocate the same IP address to a NE that has migrated from a first DC to a new DC.
  • When the IP address allocator 131 determines, at operation 2b), that the Layer 2 address is not associated with any previously allocated IP address, it allocates a first IP address to the NE 101A from a set of available IP addresses. For example, as illustrated in Figure 1A, the IP address allocator allocates IP address 1.1.1.1 to NE 101A. Once the IP address is allocated, the network controller 121 causes, at operation 3), the NE 111 to install a virtual Layer 2 route. For example, when the network controller acts as a BGP speaker, it sends an EVPN route of type 2 update to NE 111 via BGP. This route update includes the IP address and the MAC address of the NE 101A.
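  • As a purely illustrative continuation of the L2DomainAllocator sketch given earlier, operations 2a) to 2c) for NE 101A could be exercised as follows (addresses and MAC values are the exemplary ones from Figure 1A):

```python
dc1 = L2DomainAllocator(pool=["1.1.1.1", "1.1.1.2", "1.1.1.3"])

# Operation 2a): the MAC of NE 101A is looked up; it has no prior allocation.
# Operation 2b): a fresh address is therefore allocated (1.1.1.1 in Figure 1A).
ip_101a = dc1.allocate("aa:aa:aa:aa:aa:aa")
assert str(ip_101a) == "1.1.1.1"

# Operation 2c): if the same MAC requests again (e.g., after a non-live
# migration back into DC1), the allocator returns the same address.
assert dc1.allocate("aa:aa:aa:aa:aa:aa") == ip_101a
```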
  • NE 111 transmits to the NE 112 an advertisement message indicating a virtual Layer 2 route towards the NE 101A.
  • the advertisement message includes the Layer 2 address, the newly allocated IP address of the NE 101A, as well as an identification of the virtual Layer 2 domain to which NE 101A belongs.
  • the advertisement message is an EVPN route of type 2.
  • EVPN Route type-2 (which is also referred to as MAC with IP advertisement route) is a route defined per-VLAN (i.e., per virtual L2 domain), therefore only NEs that are part of that domain need to receive the route.
  • EVPN allows an end host's IP and MAC addresses to be advertised within the EVPN Network Layer reachability information (NLRI). This allows for control plane learning of MAC addresses.
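  • For illustration only, the subset of RT-2 fields that this mechanism consumes can be modelled as the record below; the actual MAC/IP Advertisement Route NLRI encoding is defined in RFC 7432, and this in-memory form is an assumption:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class MacIpAdvertisementRoute:
    """Selected EVPN RT-2 fields consumed by the IP address allocator (illustrative)."""
    route_distinguisher: str  # identifies the EVPN instance / virtual L2 domain
    esi: str                  # Ethernet Segment Identifier (may be all zeroes)
    mac_address: str          # L2 address of the advertised NE
    ip_address: str           # IP address allocated to the advertised NE
```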
  • the route is punted to the local network controller 122 (acting as a BGP peer).
  • the network controller 122 informs the IP Address Allocator 132.
  • the IP address allocator 132 removes the IP address indicated in the advertisement message from a list of IP addresses available for allocation to local NEs.
  • the IP address can be marked in the local database of IP addresses of the IP address allocator 132 as already in use (i.e., already allocated).
  • this address is no longer available to be allocated to NEs from the local network 109, when those NEs belong to the same Virtual Layer 2 domain as NE 101A.
  • the embodiments described herein enable an IP address allocator of a local network 109 (e.g., DC2) to learn IP addresses allocated in remote networks (that belong to a virtual layer 2 domain that spans over the local and the remote networks) while ensuring reconciliation between the sets of IP addresses available for local allocation in each network. This enables the system to avoid allocation of duplicate IP addresses to different NEs within a same virtual layer 2 domain.
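  • Continuing the hypothetical sketches above (the L2DomainAllocator and MacIpAdvertisementRoute), the effect of operations 4) to 6) on the allocator of network 109 might look as follows; addresses and MAC values are the exemplary ones from Figures 1A-B:

```python
dc2 = L2DomainAllocator(pool=["1.1.1.1", "1.1.1.2", "1.1.1.3"])

# Operations 4) to 6): NE 112 receives the RT-2 route for NE 101A, the network
# controller 122 punts it to the IP address allocator 132, which withdraws the
# advertised address from its local pool.
route = MacIpAdvertisementRoute(
    route_distinguisher="RD-of-VL2-domain-114",
    esi="00:00:00:00:00:00:00:00:00:00",
    mac_address="aa:aa:aa:aa:aa:aa",
    ip_address="1.1.1.1",
)
dc2.learn_remote(route.mac_address, route.ip_address)

# A subsequent local request (e.g., from NE 102A) can no longer receive 1.1.1.1.
assert str(dc2.allocate("bb:bb:bb:bb:bb:bb")) == "1.1.1.2"
```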
  • Figure 1B illustrates an exemplary block diagram of a network enabling IP address allocation to network elements of a Layer 2 domain spanning across multiple data centers.
  • the IP address allocator 132 receives an IP address request for NE 102A. Similar to NE 101A, NE 102A is part of the same Virtual Layer 2 domain 114. NE 102A is associated with a Layer 2 address (e.g., MAC address bb:bb:bb:bb:bb:bb, which uniquely identifies the NE 102A within the VL2 Domain).
  • the IP address allocator 132 determines whether the Layer 2 address is associated with an IP address already allocated.
  • the IP address allocator 132 looks up the database of IP addresses already allocated to determine whether the Layer 2 address of NE 102A is associated with an IP address already allocated. When it is determined that the NE 102A has previously been allocated an IP address, this same IP address is allocated to the NE 102A at operation 8c). This may, for example, be a case of a VNE migration from the first data center (e.g., DC1) to the second data center DC2. Therefore, the IP address allocator is operative to allocate the same IP address to a NE that has migrated from a DC to a new DC.
  • the IP address allocator 132 allocates, at operation 8b), an IP address to NE 102A from a set of available IP addresses, where the set of available IP addresses does not include the previously allocated address of NE 101A. For example, IP address 1.1.1.2 is allocated to the NE 102A. In a symmetrical manner to the operations performed following the IP address allocation to the NE 101A in network 103, the newly allocated IP address is advertised through the network to peer network devices over advertisement messages.
  • Upon learning of the allocated IP address, the network controller 122 causes the NE 112 to install a virtual Layer 2 route with NE 102A.
  • the NE 112 transmits to the NE 111 an advertisement message indicating a virtual Layer 2 route towards the NE 102A.
  • the advertisement message includes the Layer 2 address, the newly allocated IP address of the NE 102A, as well as an identification of the virtual Layer 2 domain to which NE 102A belongs.
  • the advertisement message is an EVPN route of type 2. EVPN allows an end host's IP and MAC addresses to be advertised within the EVPN Network Layer Reachability Information (NLRI). This allows for control plane learning of MAC addresses and, in this embodiment, of the IP addresses.
  • the route is punted to the local network controller 121 (acting as a BGP peer).
  • the network controller 121 informs the IP Address Allocator 131, which at operation 12) removes the IP address 1.1.1.2 indicated in the advertisement message from a list of IP addresses available for allocation to local NEs.
  • the embodiments described herein enable an IP address allocator of a local network (e.g., networks 103 (DC1) or 109 (DC2)) to learn IP addresses allocated in remote networks (that belong to a virtual layer 2 domain that spans over the local and the remote networks) while ensuring reconciliation between the sets of IP addresses available for local allocation in each network.
  • In some scenarios, IP address allocation occurs at substantially the same time in network 103 and in network 109. In this case, the same IP address may be allocated to two different NEs, one in each network, while the NEs belong to the same Layer 2 domain, since each one of the allocations is performed prior to the receipt of the advertisement message for the route towards the remote NE (which includes the newly allocated IP address).
  • Figures 2A-B illustrate a block diagram of exemplary operations performed in a network when duplicate IP address allocation occurs.
  • Figures 2A-B will be described with reference to an exemplary scenario involving the first NE 101A, to which a first IP address (1.1.1.1) was allocated by IP address allocator 131 (e.g., as described with reference to Figure 1A), and a second NE 102B from network 109, to which the same IP address (1.1.1.1) is allocated by IP address allocator 132 prior to receiving the advertisement message for the route towards NE 101A.
  • NE 102B is also part of the same virtual Layer 2 domain (e.g., same VNI as identified by an RD of the EVPN route for NE 101A).
  • When an IP address is allocated to NE 101A (e.g., IP address 1.1.1.1), this IP address is learnt at the network controller 122 through the advertisement message received at the NE 112.
  • Responsive to receiving the advertisement message indicating the Layer 2 route towards the NE 101A (e.g., BGP EVPN Route Type 2), the network controller 122 informs the IP address allocator 132.
  • the IP address allocator 132 performs the following operations.
  • the IP address allocator 132 determines whether the IP address received in the advertisement message is part of a set of IP addresses available for allocation to local NEs.
  • the IP address allocator 132 removes the IP address indicated in the advertisement message from the list of IP addresses available for allocation to local NEs. For example, the IP address can be marked as a used IP address and will not be allocated to any local NE.
  • the IP address allocator 132 determines whether the local NE 102B associated with the IP address is to obtain a new IP address different from the IP address received in the advertisement message. While the embodiments herein are described with operations performed at the network 109, symmetrical operations are performed in the network 103 upon receipt of the advertisement message for the route towards the NE 102B and which includes the allocated IP address 1.1.1.1. Thus, the two networks determine based on a tie breaking mechanism which of the NEs is to keep the allocated IP address while the other NE is forced to request a new IP address.
  • tie breaking mechanisms may be used at the IP address allocator to determine if its local NE is to obtain a new IP address or not.
  • a tie breaking mechanism can make use of the Ethernet Segment identifier (ESI) included in an EVPN route of type 2.
  • the ESIs can be zeroes (i.e., the ESI associated with the EVPN route for NE 101A will be identical to the EVPN route for the NE 102B). In these scenarios the tie breaking mechanism may rely on the ESI as well as the MAC addresses of the two NEs associated with the same IP address.
  • the tie breaking mechanism may rely only on the MAC addresses of the two NEs.
  • the tie breaking parameter (ESI and/or MAC addresses) is used to compare the two NEs and to determine which one is to obtain a new address.
  • the IP address allocator may determine which NE is associated with the greater tie breaker parameter (e.g., which one has a greater MAC address, or which ESI is greater, or which combination ESI/MAC address is greater) in order to identify the one that is to obtain a new IP address.
  • the IP address allocator 132 determines that NE 102B is to obtain a new IP address.
  • Other comparison mechanisms can be used without departing from the scope of the present invention.
  • IP address allocator 131 makes the same determination and identifies the NE 102B as the network element that is to obtain a new IP address; therefore, there is no impact on the NE 101A, and this NE keeps the previously allocated IP address.
  • In response to determining that the local NE 102B associated with the IP address (1.1.1.1) is to obtain a new IP address different from the IP address received in the advertisement message, the IP address allocator 132 causes the local NE 102B to request a new IP address from the set of IP addresses available for allocation to local NEs. For example, the IP address allocator can send a message to the NE 102B to force the NE 102B to renew its IP address (FORCERENEW message).
  • the NE 102B may be forced to stop and restart (e.g., Openstack can stop and start the entire VM), or an IP address module within the NE 102B can be forced to stop and restart (e.g., the DHCP in the VM can be stopped and restarted), which will make the NE 102B ask for a new IP address.
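  • The tie-breaking and renewal step described above can be sketched as follows; the comparison rule (the greater (ESI, MAC) pair renews) mirrors the example in the text, and force_renew is a hypothetical callback standing in for whichever renewal trigger is used (a DHCP FORCERENEW message, restarting the NE, or restarting its DHCP client):

```python
def must_renew_locally(local_esi, local_mac, remote_esi, remote_mac):
    """Deterministic tie-break run independently by both allocators: the side
    whose (ESI, MAC) pair is greater gives up the address and renews."""
    return (local_esi, local_mac) > (remote_esi, remote_mac)


def resolve_duplicate(allocator, local_esi, local_mac, route, force_renew):
    """Handle an RT-2 route whose IP address was also allocated locally
    (illustrative only; 'route' is the MacIpAdvertisementRoute sketched earlier)."""
    # The duplicated address was already taken out of the free pool when it was
    # allocated locally; it stays out regardless of who wins the tie-break.
    if must_renew_locally(local_esi, local_mac, route.esi, route.mac_address):
        # This side lost: drop the binding and trigger a renewal, after which
        # the local NE will be given a different address (e.g., 1.1.1.3).
        del allocator.allocated[local_mac]
        force_renew(local_mac)
    # Otherwise the local NE keeps the address and the remote side renews.
```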
  • the NE 102B requests an IP address allocation, and obtains a new IP address 1.1.1.3 that is different from the previously allocated IP address 1.1.1.1.
  • the network controller 122 upon learning of the newly allocated IP address, causes the NE 112 to install a virtual Layer 2 route with NE 102B and causes the NE 112 to transmit, at operation 10b) an advertisement message indicating a virtual Layer 2 route towards the NE 102B including the newly allocated IP address 1.1.1.3 (e.g., BGP EVPN Route Type 2).
  • Figures 3A-3B illustrate flow diagrams of exemplary operations for enabling IP address allocation to network elements of a Layer 2 domain spanning across multiple data centers.
  • an IP address allocator (e.g., IP address allocator 132) learns (e.g., by determining/identifying from a received message or notification) a first IP address allocated to a remote NE (e.g., NE 101A) by a remote address allocator (131) as a result of a receipt of an advertisement message indicating a Layer 2 (L2) route towards the remote NE within an L2 domain.
  • the advertisement message is received, at operation 312, at the network controller 122.
  • the advertisement message can be an EVPN route of type 2 advertised by NE 112 and forwarded to the BGP peer (network controller 122).
  • the network controller causes an update of a forwarding table of an edge network device (NE 112) to include an entry for the first IP address.
  • Upon receipt of the advertisement message indicating the L2 route, the network controller 122 further causes, at operation 316, the local IP address allocator 132 to remove the first IP address from a set of IP addresses available for allocation to local NEs, where the local NEs are part of the Layer 2 domain.
  • the flow of operation then moves to operation 303, at which the IP address allocator 132 removes the first IP address from a set of IP addresses available for allocation to local NEs.
  • the local NEs are part of the L2 domain. This enables the IP address allocator to ensure that unique IP addresses are allocated to NEs (local and remote NEs) of a same Layer 2 domain even when the domain spans across multiple networks (e.g., across multiple data centers DC1 and DC2).
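  • A sketch of the controller-side handling of operations 312 to 316 is given below; on_rt2_route, program_l2_route, and the allocator interface are assumed names rather than the API of any particular SDN controller:

```python
class ControllerRt2Handler:
    """Illustrative glue between the BGP speaker, the edge NE, and the local IP
    address allocator; every interface here is an assumption."""

    def __init__(self, edge_ne, allocator):
        self.edge_ne = edge_ne      # handle used to program the DC-GW (e.g., NE 112)
        self.allocator = allocator  # local IP address allocator (e.g., allocator 132)

    def on_rt2_route(self, route):
        """Called when an RT-2 advertisement is punted to the controller
        (operation 312); 'route' is the MacIpAdvertisementRoute sketched earlier."""
        # Operation 314: install the L2 route towards the remote NE so traffic
        # can be forwarded over the overlay tunnel.
        self.edge_ne.program_l2_route(route.mac_address, route.ip_address,
                                      route.route_distinguisher)
        # Operation 316: withdraw the advertised address from the local pool so
        # it is never allocated to a local NE of the same L2 domain.
        self.allocator.learn_remote(route.mac_address, route.ip_address)
```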
  • the IP address allocator 132 receives a request for IP address allocation from a local NE that is part of the L2 domain.
  • the local NE is identified in the L2 domain with an L2 address at operation 305.
  • the IP address allocator determines, at operation 306, whether the L2 address of the NE is associated with an address from a set of IP addresses already allocated. Upon determining that the L2 address of the NE is already associated with a pre-allocated IP address, the same address is allocated to this NE (operation 308).
  • the operations 306 and 308 may be optional and can be skipped.
  • Upon determining that the L2 address of the NE is not associated with a pre-allocated IP address, the IP address allocator allocates, at operation 307, an IP address to the local NE from the set of IP addresses available for allocation. This set of available IP addresses does not include the IP address that was learnt from the advertisement message.
  • Figure 3C illustrates a flow diagram of exemplary operations for avoiding the occurrence of duplicate IP address allocation, in accordance with some embodiments.
  • the operations of Figure 3C are performed in an IP address allocator (e.g., IP address allocator 132), when it learns a new IP address allocated to a remote NE as a result of a receipt of an advertisement message for a Layer 2 route including the new IP address.
  • IP address allocator 132 determines whether a first IP address associated with (e.g., allocated to) the remote NE is part of a set of IP addresses available for allocation to local NEs.
  • When the first IP address is part of the set of IP addresses available for allocation to local NEs, the flow moves to operation 326, at which the IP address is removed from the set of IP addresses available for allocation to local NEs.
  • When the first IP address is not part of that set (i.e., it may already have been allocated to a local NE), the flow moves to operation 324, at which the IP address allocator 132 determines whether the local NE, to which the first IP address was allocated, is to obtain a new IP address different from the first IP address. In some embodiments, the determination is performed based on a comparison of the L2 address of the local NE and the L2 address of the remote NE associated with the first IP address (operation 325). Upon determination that the local NE is to obtain a new address, the IP address allocator 132 causes, at operation 327, the local NE to request a new IP address from the set of IP addresses available for allocation to local NEs.
  • Upon determination that the local NE is not to obtain a new address, the local NE retains the first IP address (e.g., the IP address allocator 132 causes the local NE to keep/retain the first IP address).
  • the embodiments of the present invention described herein enable an efficient IP address allocation mechanism.
  • the mechanisms described herein enable a streamlined allocation of IP addresses without requiring a common central pool of IP addresses, without any new message exchange, or any configuration overhead.
  • EVPN Route Type 2 (RT-2) messages are used to identify the IP addresses allocated in remote DCs, and the IP address allocation service (e.g., DHCP service) is made aware of these IP addresses; hence, overlapping IP address allocations across multiple decentralized networks do not occur.
  • the invention provides a tie breaking algorithm that addresses the cases where IP address allocation occurs simultaneously in multiple DCs, or where IP address allocation occurs before EVPN RT-2 messages are received at a given network.
  • the tie breaking mechanism enables a first DC to keep the allocated IP address and the other DC to request a new IP address allocation. Further, the embodiments present a mechanism for facilitating IP address allocation to migrating virtual elements such as VMs from a first DC to another DC.
  • the EVPN RT-2 messages are used to allocate the same IP address for a migrated VM (based on the MAC address), easing the VM migration across the DCs.
  • Figure 4A illustrates connectivity between network devices (NDs) within an exemplary network, as well as three exemplary implementations of the NDs, according to some embodiments of the invention.
  • Figure 4A shows NDs 400A-H, and their connectivity by way of lines between 400A-400B, 400B-400C, 400C-400D, 400D-400E, 400E-400F, 400F-400G, and 400A-400G, as well as between 400H and each of 400A, 400C, 400D, and 400G.
  • These NDs are physical devices, and the connectivity between these NDs can be wireless or wired (often referred to as a link).
  • An additional line extending from NDs 400A, 400E, and 400F illustrates that these NDs act as ingress and egress points for the network (and thus, these NDs are sometimes referred to as edge NDs; while the other NDs may be called core NDs).
  • Two of the exemplary ND implementations in Figure 4A are: 1) a special-purpose network device 402 that uses custom application-specific integrated circuits (ASICs) and a special-purpose operating system (OS); and 2) a general purpose network device 404 that uses common off-the-shelf (COTS) processors and a standard OS.
  • the special-purpose network device 402 includes networking hardware 410 comprising a set of one or more processor(s) 412, forwarding resource(s) 414 (which typically include one or more ASICs and/or network processors), and physical network interfaces (NIs) 416 (through which network connections are made, such as those shown by the connectivity between NDs 400 A-H), as well as non-transitory machine readable storage media 418 having stored therein networking software 420.
  • the networking software 420 may be executed by the networking hardware 410 to instantiate a set of one or more networking software instance(s) 422.
  • Each of the networking software instance(s) 422, and that part of the networking hardware 410 that executes that network software instance form a separate virtual network element 430A-R.
  • Each of the virtual network element(s) (VNEs) 430A-R includes a control communication and configuration module 432A-R (sometimes referred to as a local control module or control communication module) and forwarding table(s) 434A-R, such that a given virtual network element (e.g., 430A) includes the control communication and configuration module (e.g., 432A), a set of one or more forwarding table(s) (e.g., 434A), and that portion of the networking hardware 410 that executes the virtual network element (e.g., 430A).
  • the special-purpose network device 402 is often physically and/or logically considered to include: 1) a ND control plane 424 (sometimes referred to as a control plane) comprising the processor(s) 412 that execute the control communication and configuration module(s) 432A-R; and 2) a ND forwarding plane 426 (sometimes referred to as a forwarding plane, a data plane, or a media plane) comprising the forwarding resource(s) 414 that utilize the forwarding table(s) 434A-R and the physical NIs 416.
  • the ND control plane 424 (the processor(s) 412 executing the control communication and configuration module(s) 432A-R) is typically responsible for participating in controlling how data (e.g., packets) is to be routed (e.g., the next hop for the data and the outgoing physical NI for that data) and storing that routing information in the forwarding table(s) 434A-R, and the ND forwarding plane 426 is responsible for receiving that data on the physical NIs 416 and forwarding that data out the appropriate ones of the physical NIs 416 based on the forwarding table(s) 434A-R.
  • Figure 4B illustrates an exemplary way to implement the special-purpose network device 402 according to some embodiments of the invention.
  • Figure 4B shows a special- purpose network device including cards 438 (typically hot pluggable). While in some embodiments the cards 438 are of two types (one or more that operate as the ND forwarding plane 426 (sometimes called line cards), and one or more that operate to implement the ND control plane 424 (sometimes called control cards)), alternative embodiments may combine functionality onto a single card and/or include additional card types (e.g., one additional type of card is called a service card, resource card, or multi-application card).
  • a service card can provide specialized processing (e.g., Layer 4 to Layer 7 services (e.g., firewall, Internet Protocol Security (IPsec), Secure Sockets Layer (SSL) / Transport Layer Security (TLS), Intrusion Detection System (IDS), peer-to-peer (P2P), Voice over IP (VoIP) Session Border Controller, Mobile Wireless Gateways (Gateway General Packet Radio Service (GPRS) Support Node (GGSN), Evolved Packet Core (EPC) Gateway)).
  • the general purpose network device 404 includes hardware 440 comprising a set of one or more processor(s) 442 (which are often COTS processors) and physical NIs 446, as well as non-transitory machine readable storage media 448 having stored therein software 450.
  • the processor(s) 442 execute the software 450 to instantiate one or more sets of one or more applications 464A-R. While one embodiment does not implement virtualization, alternative embodiments may use different forms of virtualization.
  • the virtualization layer 454 represents the kernel of an operating system (or a shim executing on a base operating system) that allows for the creation of multiple instances 462A-R called software containers that may each be used to execute one (or more) of the sets of applications 464A-R; where the multiple software containers (also called virtualization engines, virtual private servers, or jails) are user spaces (typically a virtual memory space) that are separate from each other and separate from the kernel space in which the operating system is run; and where the set of applications running in a given user space, unless explicitly allowed, cannot access the memory of the other processes.
  • the virtualization layer 454 represents a hypervisor (sometimes referred to as a virtual machine monitor (VMM)) or a hypervisor executing on top of a host operating system, and each of the sets of applications 464A-R is run on top of a guest operating system within an instance 462A-R called a virtual machine (which may in some cases be considered a tightly isolated form of software container) that is run on top of the hypervisor - the guest operating system and application may not know they are running on a virtual machine as opposed to running on a "bare metal" host electronic device, or through para-virtualization the operating system and/or application may be aware of the presence of virtualization for optimization purposes.
  • one, some or all of the applications are implemented as unikernel(s), which can be generated by compiling directly with an application only a limited set of libraries (e.g., from a library operating system (LibOS) including drivers/libraries of OS services) that provide the particular OS services needed by the application.
  • unikernel can be implemented to run directly on hardware 440, directly on a hypervisor (in which case the unikernel is sometimes described as running within a LibOS virtual machine), or in a software container
  • embodiments can be implemented fully with unikernels running directly on a hypervisor represented by virtualization layer 454, unikernels running within software containers represented by instances 462A-R, or as a combination of unikernels and the above-described techniques (e.g. , unikernels and virtual machines both run directly on a hypervisor, unikernels and sets of applications that are run in different software containers).
  • the virtual network element(s) 460A-R perform similar functionality to the virtual network element(s) 430A-R - e.g., similar to the control communication and configuration module(s) 432A and forwarding table(s) 434A (this virtualization of the hardware 440 is sometimes referred to as network function virtualization (NFV)).
  • while embodiments are described in which each instance 462A-R corresponds to one VNE 460A-R, alternative embodiments may implement this correspondence at a finer level of granularity (e.g., line card virtual machines virtualize line cards, control card virtual machines virtualize control cards, etc.); it should be understood that the techniques described herein with reference to a correspondence of instances 462A-R to VNEs also apply to embodiments where such a finer level of granularity and/or unikernels are used.
  • the virtualization layer 454 includes a virtual switch that provides similar forwarding services as a physical Ethernet switch. Specifically, this virtual switch forwards traffic between instances 462A-R and the physical NI(s) 446, as well as optionally between the instances 462A-R; in addition, this virtual switch may enforce network isolation between the VNEs 460A-R that by policy are not permitted to communicate with each other (e.g., by honoring virtual local area networks (VLANs)).
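  • Purely as a sketch (not part of this application), the Python fragment below illustrates a MAC-learning virtual switch that floods only within a VLAN, mirroring the per-VLAN isolation behavior described above; the port names and VLAN IDs are invented for the example.

      class VirtualSwitch:
          def __init__(self):
              self.mac_table = {}   # (vlan, mac) -> port learned from received traffic
              self.port_vlan = {}   # access port -> vlan

          def add_port(self, port, vlan_id):
              self.port_vlan[port] = vlan_id

          def receive(self, in_port, src_mac, dst_mac):
              vlan = self.port_vlan[in_port]
              self.mac_table[(vlan, src_mac)] = in_port      # learn the source
              out_port = self.mac_table.get((vlan, dst_mac))
              if out_port is not None:
                  return [out_port]                          # known unicast
              # unknown destination: flood, but only to ports in the same VLAN
              return [p for p, v in self.port_vlan.items() if v == vlan and p != in_port]

      sw = VirtualSwitch()
      sw.add_port("vnic-a", 10); sw.add_port("vnic-b", 10); sw.add_port("vnic-c", 20)
      print(sw.receive("vnic-a", "aa:aa", "bb:bb"))  # ['vnic-b'] - VLAN 20 is never flooded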
  • the third exemplary ND implementation in Figure 4A is a hybrid network device 406, which includes both custom ASICs/special-purpose OS and COTS processors/standard OS in a single ND or a single card within an ND.
  • a platform VM (i.e., a VM that implements the functionality of the special-purpose network device 402) could provide for para-virtualization to the networking hardware present in the hybrid network device 406.
  • each of the VNEs receives data on the physical NIs (e.g., 416, 446) and forwards that data out the appropriate ones of the physical NIs (e.g., 416, 446).
  • a VNE implementing IP router functionality forwards IP packets on the basis of some of the IP header information in the IP packet; where IP header information includes source IP address, destination IP address, source port, destination port (where "source port" and "destination port" refer herein to protocol ports, as opposed to physical ports of a ND), transport protocol (e.g., user datagram protocol (UDP), Transmission Control Protocol (TCP)), and differentiated services code point (DSCP) values.
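  • The header fields just listed can be pictured as a flow key; the short Python sketch below is illustrative only, and its field names are assumptions rather than terminology from this application.

      from typing import NamedTuple

      class FlowKey(NamedTuple):
          src_ip: str
          dst_ip: str
          src_port: int   # protocol (e.g., UDP/TCP) port, not a physical port
          dst_port: int
          proto: str
          dscp: int

      def flow_key(headers):
          return FlowKey(headers["src_ip"], headers["dst_ip"],
                         headers["src_port"], headers["dst_port"],
                         headers["proto"], headers.get("dscp", 0))

      print(flow_key({"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
                      "src_port": 12345, "dst_port": 53, "proto": "udp"}))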
  • Figure 4C illustrates various exemplary ways in which VNEs may be coupled according to some embodiments of the invention.
  • Figure 4C shows VNEs 470A.1-470A.P (and optionally VNEs 470A.Q-470A.R) implemented in ND 400A and VNE 470H.1 in ND 400H.
  • VNEs 470A.1-P are separate from each other in the sense that they can receive packets from outside ND 400A and forward packets outside of ND 400A; VNE 470A.1 is coupled with VNE 470H.1, and thus they communicate packets between their respective NDs; VNE 470A.2-470A.3 may optionally forward packets between themselves without forwarding them outside of the ND 400A; and VNE 470A.P may optionally be the first in a chain of VNEs that includes VNE 470A.Q followed by VNE 470A.R (this is sometimes referred to as dynamic service chaining, where each of the VNEs in the series of VNEs provides a different service - e.g., one or more layer 4-7 network services).
  • while Figure 4C illustrates various exemplary relationships between the VNEs, alternative embodiments may support other relationships (e.g., more/fewer VNEs, more/fewer dynamic service chains, multiple different dynamic service chains with some common VNEs and some different VNEs).
  • the NDs of Figure 4A may form part of the Internet or a private network; and other electronic devices (not shown; such as end user devices including workstations, laptops, netbooks, tablets, palm tops, mobile phones, smartphones, phablets, multimedia phones, Voice Over Internet Protocol (VOIP) phones, terminals, portable media players, GPS units, wearable devices, gaming systems, set-top boxes, Internet enabled household appliances) may be coupled to the network (directly or through other networks such as access networks) to communicate over the network (e.g., the Internet or virtual private networks (VPNs) overlaid on (e.g., tunneled through) the Internet) with each other (directly or through servers) and/or access content and/or services.
  • Such content and/or services are typically provided by one or more servers (not shown) belonging to a service/content provider or one or more end user devices (not shown) participating in a peer-to-peer (P2P) service, and may include, for example, public webpages (e.g., free content, store fronts, search services), private webpages (e.g.,
  • end user devices may be coupled (e.g., through customer premise equipment coupled to an access network (wired or wirelessly)) to edge NDs, which are coupled (e.g., through one or more core NDs) to other edge NDs, which are coupled to electronic devices acting as servers.
  • one or more of the electronic devices operating as the NDs in Figure 4A may also host one or more such servers (e.g., in the case of the general purpose network device 404, one or more of the software instances 462A-R may operate as servers; the same would be true for the hybrid network device 406; in the case of the special-purpose network device 402, one or more such servers could also be run on a virtualization layer executed by the processor(s) 412); in which case the servers are said to be co-located with the VNEs of that ND.
  • a virtual network is a logical abstraction of a physical network (such as that in Figure 4A) that provides network services (e.g., L2 and/or L3 services).
  • a virtual network can be implemented as an overlay network (sometimes referred to as a network virtualization overlay) that provides network services (e.g., layer 2 (L2, data link layer) and/or layer 3 (L3, network layer) services) over an underlay network (e.g., an L3 network, such as an Internet Protocol (IP) network that uses tunnels (e.g., generic routing encapsulation (GRE), layer 2 tunneling protocol (L2TP), IPSec) to create the overlay network).
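  • As a hedged illustration of the overlay idea (not a real GRE/VXLAN encoding and not text from this application), the sketch below shows an NVE-style node prepending an outer underlay header and a virtual network identifier to an L2 frame before tunneling it to a peer; the addresses and the VNI value are assumptions.

      def encapsulate(frame, local_vtep, remote_vtep, vni):
          # the outer header carries the underlay addresses; the original L2 frame is the payload
          return {
              "outer_src": local_vtep,
              "outer_dst": remote_vtep,
              "vni": vni,
              "payload": frame,
          }

      pkt = encapsulate(b"\x00\x11\x22", "192.0.2.10", "192.0.2.20", vni=5001)
      print(pkt["outer_dst"], pkt["vni"])  # 192.0.2.20 5001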
  • a network virtualization edge (NVE) sits at the edge of the underlay network and participates in implementing the network virtualization; the network-facing side of the NVE uses the underlay network to tunnel frames to and from other NVEs; the outward-facing side of the NVE sends and receives data to and from systems outside the network.
  • a virtual network instance (VNI) is a specific instance of a virtual network on a NVE (e.g., a NE/VNE on an ND, a part of a NE/VNE on a ND where that NE/VNE is divided into multiple VNEs through emulation); one or more VNIs can be instantiated on an NVE (e.g., as different VNEs on an ND).
  • a virtual access point (VAP) is a logical connection point on the NVE for connecting external systems to a virtual network; a VAP can be a physical or virtual port identified through a logical interface identifier (e.g., a VLAN ID).
  • Examples of network services include: 1) an Ethernet LAN emulation service (an Ethernet-based multipoint service similar to an Internet Engineering Task Force (IETF) Multiprotocol Label Switching (MPLS) or Ethernet VPN (EVPN) service) in which external systems are interconnected across the network by a LAN environment over the underlay network (e.g., an NVE provides separate L2 VNIs (virtual switching instances) for different such virtual networks, and L3 (e.g., IP/MPLS) tunneling encapsulation across the underlay network); and 2) a virtualized IP forwarding service (similar to IETF IP VPN (e.g., Border Gateway Protocol (BGP)/MPLS IP VPN) from a service definition perspective) in which external systems are interconnected across the network by an L3 environment over the underlay network (e.g., an NVE provides separate L3 VNIs (forwarding and routing instances) for different such virtual networks, and L3 (e.g., IP/MPLS) tunneling encapsulation across the underlay network)).
  • Network services may also include quality of service capabilities (e.g., traffic classification marking, traffic conditioning and scheduling), security capabilities (e.g., filters to protect customer premises from network - originated attacks, to avoid malformed route announcements), and management capabilities (e.g., full detection and processing).
  • FIG. 4D illustrates a network with a single network element on each of the NDs of Figure 4A, and within this straightforward approach contrasts a traditional distributed approach (commonly used by traditional routers) with a centralized approach for maintaining reachability and forwarding information (also called network control), according to some embodiments of the invention.
  • Figure 4D illustrates network elements (NEs) 470A-H with the same connectivity as the NDs 400A-H of Figure 4A.
  • Figure 4D illustrates that the distributed approach 472 distributes responsibility for generating the reachability and forwarding information across the NEs 470A-H; in other words, the process of neighbor discovery and topology discovery is distributed.
  • the control communication and configuration module(s) 432A-R of the ND control plane 424 typically include a reachability and forwarding information module to implement one or more routing protocols (e.g., an exterior gateway protocol such as Border Gateway Protocol (BGP), Interior Gateway Protocol(s) (IGP) (e.g., Open Shortest Path First (OSPF), Intermediate System to Intermediate System (IS-IS), Routing Information Protocol (RIP), Label Distribution Protocol (LDP), Resource Reservation Protocol (RSVP) (including RSVP-Traffic Engineering (TE): Extensions to RSVP for LSP Tunnels and Generalized Multi-Protocol Label Switching (GMPLS) Signaling RSVP-TE))) that communicate with other NEs to exchange routes, and then select those routes based on one or more routing metrics.
  • the NEs 470A-H (e.g., the processor(s) 412 executing the control communication and configuration module(s) 432A-R) perform their responsibility for participating in controlling how data (e.g., packets) is to be routed (e.g., the next hop for the data and the outgoing physical NI for that data) by distributively determining the reachability and calculating their respective forwarding information.
  • Routes and adjacencies are stored in one or more routing structures (e.g., Routing Information Base (RIB), Label Information Base (LIB), one or more adjacency structures) on the ND control plane 424.
  • the ND control plane 424 programs the ND forwarding plane 426 with information (e.g., adjacency and route information) based on the routing structure(s).
  • the ND control plane 424 programs the adjacency and route information into one or more forwarding table(s) 434A-R (e.g., Forwarding Information Base (FIB), Label Forwarding Information Base (LFIB), and one or more adjacency structures) on the ND forwarding plane 426.
  • the ND can store one or more bridging tables that are used to forward data based on the layer 2 information in that data. While the above example uses the special-purpose network device 402, the same distributed approach 472 can be implemented on the general purpose network device 404 and the hybrid network device 406.
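  • The route-selection and FIB-programming step described above can be sketched as follows; this is a simplification with invented route attributes (administrative distance and metric), not this application's own procedure.

      rib = {
          "10.0.0.0/24": [
              {"next_hop": "192.0.2.1", "ad": 20, "metric": 100},     # e.g., learned via BGP
              {"next_hop": "198.51.100.1", "ad": 110, "metric": 10},  # e.g., learned via OSPF
          ],
      }

      def program_fib(rib):
          # keep only the best route per prefix and push it to the forwarding plane
          fib = {}
          for prefix, routes in rib.items():
              best = min(routes, key=lambda r: (r["ad"], r["metric"]))
              fib[prefix] = best["next_hop"]
          return fib

      print(program_fib(rib))  # {'10.0.0.0/24': '192.0.2.1'}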
  • FIG. 4D illustrates a centralized approach 474 (also known as software defined networking (SDN)) that decouples the system that makes decisions about where traffic is sent from the underlying systems that forward traffic to the selected destination.
  • the illustrated centralized approach 474 has the responsibility for the generation of reachability and forwarding information in a centralized control plane 476 (sometimes referred to as a SDN control module, controller, network controller, OpenFlow controller, SDN controller, control plane node, network virtualization authority, or management control entity), and thus the process of neighbor discovery and topology discovery is centralized.
  • the centralized control plane 476 has a south bound interface 482 with a data plane 480 (sometimes referred to as the infrastructure layer, network forwarding plane, or forwarding plane (which should not be confused with a ND forwarding plane)) that includes the NEs 470A-H (sometimes referred to as switches, forwarding elements, data plane elements, or nodes).
  • the centralized control plane 476 includes a network controller 478, which includes a centralized reachability and forwarding information module 479 that determines the reachability within the network and distributes the forwarding information to the NEs 470A-H of the data plane 480 over the south bound interface 482 (which may use the OpenFlow protocol), and an IP address allocator module 481.
  • the network intelligence is centralized in the centralized control plane 476 executing on electronic devices that are typically separate from the NDs.
  • the centralized reachability and forwarding information module 479 acts as a BGP speaker and is operative to communicate with the IP address allocator module 481 to perform the operations described with reference to Figures 1A-3C. While the IP address allocator module 481 is described herein as being part of the network controller 478, in other embodiments, the IP address allocator module 481 is executed on another electronic device separate from the network controller and operative to communicate with the network controller to perform the operations described with reference to Figures 1A-3C.
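  • As a minimal sketch of the interaction described above (an assumption-laden simplification, not this application's implementation), the Python fragment below shows an IP address allocator removing an address learned from a remote L2 route advertisement from the pool available to local NEs, so that a later local allocation can never hand out that address; the class and method names are invented.

      import ipaddress

      class IPAddressAllocator:
          def __init__(self, subnet):
              self.available = set(ipaddress.ip_network(subnet).hosts())
              self.allocated = {}

          def on_remote_l2_route(self, remote_ip):
              # called by the reachability/BGP module when a remote NE's L2 route is learned
              self.available.discard(ipaddress.ip_address(remote_ip))

          def allocate(self, local_ne_id):
              ip = min(self.available)   # deterministic pick, just for the sketch
              self.available.discard(ip)
              self.allocated[local_ne_id] = ip
              return str(ip)

      allocator = IPAddressAllocator("10.10.0.0/29")
      allocator.on_remote_l2_route("10.10.0.1")   # learned via an L2 route notification
      print(allocator.allocate("local-ne-1"))     # never returns 10.10.0.1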
  • each of the control communication and configuration module(s) 432A-R of the ND control plane 424 typically includes a control agent that provides the VNE side of the south bound interface 482.
  • the ND control plane 424 (the processor(s) 412 executing the control communication and configuration module(s) 432A-R) performs its responsibility for participating in controlling how data (e.g., packets) is to be routed (e.g., the next hop for the data and the outgoing physical NI for that data) through the control agent communicating with the centralized control plane 476 to receive the forwarding information (and in some cases, the reachability information) from the centralized reachability and forwarding information module 479 (it should be understood that in some embodiments of the invention, the control communication and configuration module(s) 432A-R, in addition to communicating with the centralized control plane 476, may also play some role in determining reachability and/or calculating forwarding information - albeit less so than in the case of a distributed approach; such embodiments are generally considered to fall under the centralized approach 474, but may also be considered a hybrid approach).
  • the same centralized approach 474 can be implemented with the general purpose network device 404 (e.g., each of the VNE 460A-R performs its responsibility for controlling how data (e.g., packets) is to be routed (e.g., the next hop for the data and the outgoing physical NI for that data) by communicating with the centralized control plane 476 to receive the forwarding information (and in some cases, the reachability information) from the centralized reachability and forwarding information module 479; it should be understood that in some embodiments of the invention, the VNEs 460A-R, in addition to communicating with the centralized control plane 476, may also play some role in determining reachability and/or calculating forwarding information - albeit less so than in the case of a distributed approach) and the hybrid network device 406.
  • NFV is able to support SDN by providing an infrastructure upon which the SDN software can be run, and NFV and SDN both aim to make use of commodity server hardware and physical switches.
  • Figure 4D also shows that the centralized control plane 476 has a north bound interface 484 to an application layer 486, in which resides application(s) 488.
  • the centralized control plane 476 has the ability to form virtual networks 492 (sometimes referred to as a logical forwarding plane, network services, or overlay networks (with the NEs 470A-H of the data plane 480 being the underlay network)) for the application(s) 488.
  • the centralized control plane 476 maintains a global view of all NDs and configured NEs/VNEs, and it maps the virtual networks to the underlying NDs efficiently (including maintaining these mappings as the physical network changes either through hardware (ND, link, or ND component) failure, addition, or removal).
  • while Figure 4D shows the distributed approach 472 separate from the centralized approach 474, the effort of network control may be distributed differently or the two combined in certain embodiments of the invention.
  • for example: 1) embodiments may generally use the centralized approach (SDN) 474, but have certain functions delegated to the NEs (e.g., the distributed approach may be used to implement one or more of fault monitoring, performance monitoring, protection switching, and primitives for neighbor and/or topology discovery); or 2) embodiments of the invention may perform neighbor discovery and topology discovery via both the centralized control plane and the distributed protocols, and the results compared to raise exceptions where they do not agree.
  • Such embodiments are generally considered to fall under the centralized approach 474, but may also be considered a hybrid approach.
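  • The comparison mentioned in option 2) above can be sketched as follows; the topology data and node names are invented for illustration.

      def compare_topologies(centralized, distributed):
          # return (node, differing neighbors) wherever the two views disagree
          exceptions = []
          for node in set(centralized) | set(distributed):
              c = centralized.get(node, set())
              d = distributed.get(node, set())
              if c != d:
                  exceptions.append((node, sorted(c ^ d)))
          return exceptions

      central = {"NE-A": {"NE-B", "NE-C"}, "NE-B": {"NE-A"}}
      distrib = {"NE-A": {"NE-B"}, "NE-B": {"NE-A"}}
      print(compare_topologies(central, distrib))  # [('NE-A', ['NE-C'])]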
  • while Figure 4D illustrates the simple case where each of the NDs 400A-H implements a single NE 470A-H, the network control approaches described with reference to Figure 4D also work for networks where one or more of the NDs 400A-H implement multiple VNEs (e.g., VNEs 430A-R, VNEs 460A-R, those in the hybrid network device 406).
  • the network controller 478 may also emulate the implementation of multiple VNEs in a single ND.
  • the network controller 478 may present the implementation of a VNE/NE in a single ND as multiple VNEs in the virtual networks 492 (all in the same one of the virtual network(s) 492, each in different ones of the virtual network(s) 492, or some combination).
  • the network controller 478 may cause an ND to implement a single VNE (a NE) in the underlay network, and then logically divide up the resources of that NE within the centralized control plane 476 to present different VNEs in the virtual network(s) 492 (where these different VNEs in the overlay networks are sharing the resources of the single VNE/NE implementation on the ND in the underlay network).
  • Figures 4E and 4F respectively illustrate exemplary abstractions of NEs and VNEs that the network controller 478 may present as part of different ones of the virtual networks 492.
  • Figure 4E illustrates the simple case where each of the NDs 400A-H implements a single NE 470A-H (see Figure 4D), but the centralized control plane 476 has abstracted multiple of the NEs in different NDs (the NEs 470A-C and G-H) into (to represent) a single NE 470I in one of the virtual network(s) 492 of Figure 4D, according to some embodiments of the invention.
  • Figure 4E shows that in this virtual network, the NE 470I is coupled to NE 470D and 470F, which are both still coupled to NE 470E.
  • Figure 4F illustrates a case where multiple VNEs (VNE 470A.1 and VNE 470H.1) are implemented on different NDs (ND 400A and ND 400H) and are coupled to each other, and where the centralized control plane 476 has abstracted these multiple VNEs such that they appear as a single VNE 470T within one of the virtual networks 492 of Figure 4D, according to some embodiments of the invention.
  • the abstraction of a NE or VNE can span multiple NDs.
  • the electronic device(s) running the centralized control plane 476 may be implemented in a variety of ways (e.g., a special purpose device, a general-purpose (e.g., COTS) device, or a hybrid device). These electronic device(s) would similarly include processor(s), a set of one or more physical NIs, and a non-transitory machine-readable storage medium having stored thereon the centralized control plane software.
  • Figure 5 illustrates a general purpose control plane device 504 including hardware 540 comprising a set of one or more processor(s) 542 (which are often COTS processors) and physical NIs 546, as well as non-transitory machine readable storage media 548 having stored therein centralized control plane (CCP) software 550.
  • the processor(s) 542 typically execute software to instantiate a virtualization layer 554 (e.g., in one embodiment the virtualization layer 554 represents the kernel of an operating system (or a shim executing on a base operating system) that allows for the creation of multiple instances 562A-R called software containers (representing separate user spaces and also called virtualization engines, virtual private servers, or jails) that may each be used to execute a set of one or more applications; in another embodiment the virtualization layer 554 represents a hypervisor (sometimes referred to as a virtual machine monitor (VMM)) or a hypervisor executing on top of a host operating system, and an application is run on top of a guest operating system within an instance 562A-R called a virtual machine (which in some cases may be considered a tightly isolated form of software container) that is run by the hypervisor; in another embodiment, an application is implemented as a unikernel, which can be generated by compiling directly with an application only a limited set of libraries (e.g., from a library operating system (LibOS) including drivers/libraries of OS services) that provide the particular OS services needed by the application).
  • an instance of the CCP software 550 (illustrated as CCP instance 576A) is executed (e.g., within the instance 562A) on the virtualization layer 554.
  • CCP instance 576A is executed, as a unikernel or on top of a host operating system, on the "bare metal" general purpose control plane device 504.
  • the instantiation of the CCP instance 576A, as well as the virtualization layer 554 and instances 562A-R if implemented, are collectively referred to as software instance(s) 552.
  • the CCP instance 576A includes a network controller instance 578.
  • the network controller instance 578 includes a centralized reachability and forwarding information module instance 579 (which is a middleware layer providing the context of the network controller 478 to the operating system and communicating with the various NEs), a CCP application layer 580 (sometimes referred to as an application layer) over the middleware layer (providing the intelligence required for various network operations such as protocols, network situational awareness, and user interfaces), and an IP address allocator module 581.
  • this CCP application layer 580 within the centralized control plane 476 works with virtual network view(s) (logical view(s) of the network) and the middleware layer provides the conversion from the virtual networks to the physical view.
  • the centralized reachability and forwarding information module 579 acts as a BGP speaker and is operative to communicate with the IP address allocator module 581 to perform the operations described with reference to Figures 1A-3C. While the IP address allocator module 581 is described herein as being part of the network controller instance 578, in other embodiments, the IP address allocator module 581 is executed on another electronic device separate from the network controller and operative to communicate with the network controller to perform the operations described with reference to Figures 1A-3C.
  • the centralized control plane 476 transmits relevant messages to the data plane 480 based on CCP application layer 580 calculations and middleware layer mapping for each flow.
  • a flow may be defined as a set of packets whose headers match a given pattern of bits; in this sense, traditional IP forwarding is also flow-based forwarding where the flows are defined by the destination IP address for example; however, in other implementations, the given pattern of bits used for a flow definition may include more fields (e.g., 10 or more) in the packet headers.
  • Different NDs/NEs/VNEs of the data plane 480 may receive different messages, and thus different forwarding information.
  • the data plane 480 processes these messages and programs the appropriate flow information and corresponding actions in the forwarding tables (sometimes referred to as flow tables) of the appropriate NE/VNEs, and then the NEs/VNEs map incoming packets to flows represented in the forwarding tables and forward packets based on the matches in the forwarding tables.
  • Standards such as OpenFlow define the protocols used for the messages, as well as a model for processing the packets.
  • the model for processing packets includes header parsing, packet classification, and making forwarding decisions. Header parsing describes how to interpret a packet based upon a well-known set of protocols. Some protocol fields are used to build a match structure (or key) that will be used in packet classification (e.g., a first key field could be a source media access control (MAC) address, and a second key field could be a destination MAC address).
  • Packet classification involves executing a lookup in memory to classify the packet by determining which entry (also referred to as a forwarding table entry or flow entry) in the forwarding tables best matches the packet based upon the match structure, or key, of the forwarding table entries. It is possible that many flows represented in the forwarding table entries can correspond/match to a packet; in this case the system is typically configured to determine one forwarding table entry from the many according to a defined scheme (e.g., selecting a first forwarding table entry that is matched).
  • Forwarding table entries include both a specific set of match criteria (a set of values or wildcards, or an indication of what portions of a packet should be compared to a particular value/values/wildcards, as defined by the matching capabilities - for specific fields in the packet header, or for some other packet content), and a set of one or more actions for the data plane to take on receiving a matching packet. For example, an action may be to push a header onto the packet, forward the packet using a particular port, flood the packet, or simply drop the packet.
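  • A toy version of such entries (illustrative only; the match fields, wildcard handling, and action strings are assumptions) could look like the following, with the first matching entry winning as in the selection scheme mentioned above.

      WILDCARD = None

      flow_table = [
          {"match": {"dst_mac": "aa:bb:cc:dd:ee:ff", "vlan": 10}, "actions": ["output:2"]},
          {"match": {"dst_mac": WILDCARD, "vlan": 10}, "actions": ["flood"]},
          {"match": {}, "actions": ["drop"]},   # catch-all / table-miss style entry
      ]

      def apply(packet):
          for entry in flow_table:
              if all(v is WILDCARD or packet.get(k) == v for k, v in entry["match"].items()):
                  return entry["actions"]       # first match wins
          return []

      print(apply({"dst_mac": "11:22:33:44:55:66", "vlan": 10}))  # ['flood']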
  • when an unknown packet (for example, a "missed packet" or a "match-miss" as used in OpenFlow parlance) arrives at the data plane 480, the packet (or a subset of the packet header and content) is typically forwarded to the centralized control plane 476.
  • the centralized control plane 476 will then program forwarding table entries into the data plane 480 to accommodate packets belonging to the flow of the unknown packet. Once a specific forwarding table entry has been programmed into the data plane 480 by the centralized control plane 476, the next packet with matching credentials will match that forwarding table entry and take the set of actions associated with that matched entry.
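  • The miss handling described above can be sketched as below; the controller decision function is a placeholder assumption standing in for whatever the centralized control plane computes.

      def handle_miss(packet, flow_table, controller_decision):
          # "packet-in": the data plane hands the unmatched packet to the control plane,
          # which returns a match and actions that are then programmed as a new entry
          match, actions = controller_decision(packet)
          flow_table.insert(0, {"match": match, "actions": actions})

      table = []
      handle_miss({"dst_ip": "10.0.0.5"}, table,
                  lambda p: ({"dst_ip": p["dst_ip"]}, ["output:7"]))
      print(table)  # [{'match': {'dst_ip': '10.0.0.5'}, 'actions': ['output:7']}]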
  • a network interface (NI) may be physical or virtual; and in the context of IP, an interface address is an IP address assigned to a NI, be it a physical NI or virtual NI.
  • a virtual NI may be associated with a physical NI, with another virtual interface, or stand on its own (e.g., a loopback interface, a point-to-point protocol interface).
  • a loopback interface (and its loopback address) is a specific type of virtual NI (and IP address) of a NE/VNE (physical or virtual) often used for management purposes; where such an IP address is referred to as the nodal loopback address.
  • the IP address(es) assigned to the NI(s) of a ND are referred to as IP addresses of that ND; at a more granular level, the IP address(es) assigned to NI(s) assigned to a NE/VNE implemented on a ND can be referred to as IP addresses of that NE/VNE.
  • Some NDs include functionality for authentication, authorization, and accounting (AAA) protocols (e.g., RADIUS (Remote Authentication Dial-In User Service), Diameter, and/or TACACS+ (Terminal Access Controller Access Control System Plus)).
  • AAA can be provided through a client/server model, where the AAA client is implemented on a ND and the AAA server can be implemented either locally on the ND or on a remote electronic device coupled with the ND.
  • Authentication is the process of identifying and verifying a subscriber. For instance, a subscriber might be identified by a combination of a username and a password or through a unique key.
  • Authorization determines what a subscriber can do after being authenticated, such as gaining access to certain electronic device information resources (e.g., through the use of access control policies). Accounting is recording user activity.
  • end user devices may be coupled (e.g., through an access network) through an edge ND (supporting AAA processing) coupled to core NDs coupled to electronic devices implementing servers of service/content providers.
  • AAA processing is performed to identify for a subscriber the subscriber record stored in the AAA server for that subscriber.
  • a subscriber record includes a set of attributes (e.g., subscriber name, password, authentication information, access control information, rate-limiting information, policing information) used during processing of that subscriber's traffic.
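  • A subscriber record of this kind might be represented as in the sketch below; the attribute names and values are assumptions for illustration, not a format defined by this application.

      subscriber_records = {
          "alice@example.net": {
              "password_hash": "<hash>",        # checked during authentication
              "access_control": ["vpn-blue"],   # contexts the subscriber may be bound to
              "rate_limit_kbps": 10000,         # used when policing the subscriber's traffic
          },
      }

      def lookup_subscriber(subscriber_id):
          # AAA processing fetches the record stored for this subscriber
          return subscriber_records.get(subscriber_id)

      print(lookup_subscriber("alice@example.net")["rate_limit_kbps"])  # 10000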
  • Certain NDs internally represent end user devices (or sometimes customer premise equipment (CPE) such as a residential gateway (e.g., a router, modem)) using subscriber circuits.
  • a subscriber circuit uniquely identifies within the ND a subscriber session and typically exists for the lifetime of the session.
  • a ND typically allocates a subscriber circuit when the subscriber connects to that ND, and correspondingly deallocates that subscriber circuit when that subscriber disconnects.
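  • The allocate-on-connect / release-on-disconnect lifetime described above can be sketched as follows; the session key format and circuit identifiers are invented for the example.

      import itertools

      class SubscriberCircuits:
          def __init__(self):
              self._ids = itertools.count(1)
              self.active = {}   # session key -> subscriber circuit id

          def connect(self, session_key):
              # allocate a circuit when the subscriber connects to the ND
              circuit = next(self._ids)
              self.active[session_key] = circuit
              return circuit

          def disconnect(self, session_key):
              # deallocate the circuit when the subscriber disconnects
              self.active.pop(session_key, None)

      circuits = SubscriberCircuits()
      cid = circuits.connect("pppoe:00:11:22:33:44:55")
      print(cid)                                    # 1
      circuits.disconnect("pppoe:00:11:22:33:44:55")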
  • Each subscriber session represents a distinguishable flow of packets communicated between the ND and an end user device (or sometimes CPE such as a residential gateway or modem) using a protocol, such as the point-to-point protocol over another protocol (PPPoX) (e.g., where X is Ethernet or Asynchronous Transfer Mode (ATM)), Ethernet, 802.1Q Virtual LAN (VLAN), Internet Protocol, or ATM.
  • a subscriber session can be initiated using a variety of mechanisms (e.g., manual provisioning, dynamic host configuration protocol (DHCP), DHCP/client-less internet protocol service (CLIPS), or Media Access Control (MAC) address tracking).
  • When DHCP is used (e.g., for cable modem services), a username typically is not provided; but in such situations other information (e.g., information that includes the MAC address of the hardware in the end user device (or CPE)) is provided.
  • Each VNE (e.g., a virtual router, a virtual bridge (which may act as a virtual switch instance in a Virtual Private LAN Service (VPLS))) is typically independently administrable.
  • each of the virtual routers may share system resources but is separate from the other virtual routers regarding its management domain, AAA (authentication, authorization, and accounting) name space, IP address, and routing database(s).
  • Multiple VNEs may be employed in an edge ND to provide direct network access and/or different classes of services for subscribers of service and/or content providers.
  • interfaces that are independent of physical NIs may be configured as part of the VNEs to provide higher-layer protocol and service information (e.g., Layer 3 addressing).
  • the subscriber records in the AAA server identify, in addition to the other subscriber configuration requirements, to which context (e.g., which of the VNEs/NEs) the corresponding subscribers should be bound within the ND.
  • a binding forms an association between a physical entity (e.g., physical NI, channel) or a logical entity (e.g., circuit such as a subscriber circuit or logical circuit (a set of one or more subscriber circuits)) and a context's interface over which network protocols (e.g., routing protocols, bridging protocols) are configured for that context. Subscriber data flows on the physical entity when some higher-layer protocol interface is configured and associated with that physical entity.
  • Some NDs provide support for implementing VPNs (Virtual Private Networks) (e.g., Layer 2 VPNs and/or Layer 3 VPNs).
  • the NDs where a provider's network and a customer's network are coupled are respectively referred to as PEs (Provider Edge) and CEs (Customer Edge).
  • Layer 2 VPN forwarding typically is performed on the CE(s) on either end of the VPN and traffic is sent across the network (e.g., through one or more PEs coupled by other NDs).
  • Layer 2 circuits are configured between the CEs and PEs (e.g., an Ethernet port, an ATM permanent virtual circuit (PVC), a Frame Relay PVC).
  • in a Layer 3 VPN, routing typically is performed by the PEs.
  • an edge ND that supports multiple VNEs may be deployed as a PE; and a VNE may be configured with a VPN protocol

Abstract

A method and apparatus for Internet protocol (IP) address allocation over virtual Layer 2 networks spanning multiple data centers are described. A first IP address, allocated to a remote network element (NE) by a remote address allocator, is learned upon receipt of a notification message indicating a Layer 2 (L2) route towards the remote NE in an L2 domain. The first IP address is removed from a set of IP addresses available for allocation to local NEs that are part of the L2 domain. Upon receipt of an IP address allocation request from a local NE that is part of the L2 domain, a second IP address is allocated to the local NE from the set of IP addresses available for allocation to local NEs, the set of IP addresses available for allocation to local NEs not including the first IP address.
PCT/IB2017/050828 2017-02-14 2017-02-14 Allocation d'adresse de protocole internet (ip) sur des réseaux virtuels de couche 2 WO2018150222A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/IB2017/050828 WO2018150222A1 (fr) 2017-02-14 2017-02-14 Allocation d'adresse de protocole internet (ip) sur des réseaux virtuels de couche 2

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/IB2017/050828 WO2018150222A1 (fr) 2017-02-14 2017-02-14 Allocation d'adresse de protocole internet (ip) sur des réseaux virtuels de couche 2

Publications (1)

Publication Number Publication Date
WO2018150222A1 true WO2018150222A1 (fr) 2018-08-23

Family

ID=58191503

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2017/050828 WO2018150222A1 (fr) 2017-02-14 2017-02-14 Allocation d'adresse de protocole internet (ip) sur des réseaux virtuels de couche 2

Country Status (1)

Country Link
WO (1) WO2018150222A1 (fr)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020212998A1 (fr) * 2019-04-17 2020-10-22 Telefonaktiebolaget Lm Ericsson (Publ) Attribution d'adresse de réseau dans un domaine de couche 2 virtuel s'étendant sur de multiples grappes de conteneurs
WO2021043314A1 (fr) * 2019-09-06 2021-03-11 华为技术有限公司 Procédé de communication pour un environnement en nuage hybride, passerelle, et procédé et appareil de gestion
US20220303156A1 (en) * 2021-03-16 2022-09-22 At&T Intellectual Property I, L.P. Virtual Router Instantiation on Public Clouds
CN115348238A (zh) * 2022-08-16 2022-11-15 中国联合网络通信集团有限公司 Dhcp中继的方法、vtep网关、电子设备及介质

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2482524A1 (fr) * 2009-09-23 2012-08-01 ZTE Corporation Procédé de distribution d'adresse, dispositif et système pour ce procédé
US20130179580A1 (en) * 2011-07-08 2013-07-11 Robert Dunham Short Dynamic vpn address allocation

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2482524A1 (fr) * 2009-09-23 2012-08-01 ZTE Corporation Procédé de distribution d'adresse, dispositif et système pour ce procédé
US20130179580A1 (en) * 2011-07-08 2013-07-11 Robert Dunham Short Dynamic vpn address allocation

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ALI SAJASSI SAMER SALAM KEYUR PATEL CISCO NABIL BITAR VERIZON WIM HENDERICKX ALCATEL-LUCENT: "A Network Virtualization Overlay Solution using E-VPN; draft-sajassi-nvo3-evpn-overlay-01.txt", A NETWORK VIRTUALIZATION OVERLAY SOLUTION USING E-VPN; DRAFT-SAJASSI-NVO3-EVPN-OVERLAY-01.TXT, INTERNET ENGINEERING TASK FORCE, IETF; STANDARDWORKINGDRAFT, INTERNET SOCIETY (ISOC) 4, RUE DES FALAISES CH- 1205 GENEVA, SWITZERLAND, 23 October 2012 (2012-10-23), pages 1 - 16, XP015088511 *
DROMS BUCKNELL UNIVERSITY R COLE AT&T MNS R: "An Inter-server Protocol for DHCP; draft-ietf-dhc-interserver-01.txt", AN INTER-SERVER PROTOCOL FOR DHCP; DRAFT-IETF-DHC-INTERSERVER-01.TXT, INTERNET ENGINEERING TASK FORCE, IETF; STANDARDWORKINGDRAFT, INTERNET SOCIETY (ISOC) 4, RUE DES FALAISES CH- 1205 GENEVA, SWITZERLAND, vol. dhc, no. 1, 1 March 1997 (1997-03-01), XP015017067 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020212998A1 (fr) * 2019-04-17 2020-10-22 Telefonaktiebolaget Lm Ericsson (Publ) Attribution d'adresse de réseau dans un domaine de couche 2 virtuel s'étendant sur de multiples grappes de conteneurs
WO2021043314A1 (fr) * 2019-09-06 2021-03-11 华为技术有限公司 Procédé de communication pour un environnement en nuage hybride, passerelle, et procédé et appareil de gestion
US11888809B2 (en) 2019-09-06 2024-01-30 Huawei Technologies Co., Ltd. Communication method, gateway, and management method and apparatus in hybrid cloud environment
US20220303156A1 (en) * 2021-03-16 2022-09-22 At&T Intellectual Property I, L.P. Virtual Router Instantiation on Public Clouds
US11456892B1 (en) * 2021-03-16 2022-09-27 At&T Intellectual Property I, L.P. Virtual router instantiation on public clouds
CN115348238A (zh) * 2022-08-16 2022-11-15 中国联合网络通信集团有限公司 Dhcp中继的方法、vtep网关、电子设备及介质

Similar Documents

Publication Publication Date Title
US10924389B2 (en) Segment routing based on maximum segment identifier depth
US10581726B2 (en) Method and apparatus for supporting bidirectional forwarding (BFD) over multi-chassis link aggregation group (MC-LAG) in internet protocol (IP) multiprotocol label switching (MPLS) networks
US9923781B2 (en) Designated forwarder (DF) election and re-election on provider edge (PE) failure in all-active redundancy topology
US9629037B2 (en) Handover of a mobile device in an information centric network
US10841207B2 (en) Method and apparatus for supporting bidirectional forwarding (BFD) over multi-chassis link aggregation group (MC-LAG) in internet protocol (IP) networks
EP3580897B1 (fr) Procédé et appareil de chaînage de service dynamique avec routage de segment pour bng
WO2018109536A1 (fr) Procédé et appareil pour surveiller un tunnel de réseau local extensible virtuel (vxlan) avec une infrastructure de réseau privé virtuel ethernet (evpn) - protocole de passerelle frontière (bgp)
CN109691026B (zh) 更新多个多协议标签切换双向转发检测会话的方法和装置
WO2020212998A1 (fr) Attribution d'adresse de réseau dans un domaine de couche 2 virtuel s'étendant sur de multiples grappes de conteneurs
EP3935814B1 (fr) Sélection de réseau d'accès dynamique sur la base d'informations d'orchestration d'application dans un système de nuage de périphérie
WO2017221050A1 (fr) Gestion efficace de trafic multi-destination dans des réseaux privés virtuels ethernet à hébergements multiples (evpn)
US11343332B2 (en) Method for seamless migration of session authentication to a different stateful diameter authenticating peer
WO2018150222A1 (fr) Allocation d'adresse de protocole internet (ip) sur des réseaux virtuels de couche 2
WO2018065813A1 (fr) Procédé et système de distribution de trafic virtuel de couche 2 vers de multiples dispositifs de réseau d'accès
US20220247679A1 (en) Method and apparatus for layer 2 route calculation in a route reflector network device
US20220311643A1 (en) Method and system to transmit broadcast, unknown unicast, or multicast (bum) traffic for multiple ethernet virtual private network (evpn) instances (evis)
WO2020152691A1 (fr) Détection d'adresse en double du protocole internet version 6 (ipv6) dans des réseaux multiples en utilisant un réseau privé virtuel ethernet (evpn)
US11669256B2 (en) Storage resource controller in a 5G network system
US11451637B2 (en) Method for migration of session accounting to a different stateful accounting peer

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17707956

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17707956

Country of ref document: EP

Kind code of ref document: A1