WO2019130327A1 - Medium access control (mac) address allocation across multiple data centers - Google Patents

Medium access control (MAC) address allocation across multiple data centers

Info

Publication number
WO2019130327A1
Authority
WO
WIPO (PCT)
Prior art keywords
mac address
data center
allocated
mac
evpn
Prior art date
Application number
PCT/IN2017/050629
Other languages
French (fr)
Inventor
Vyshakh Krishnan C H
Faseela K
Original Assignee
Telefonaktiebolaget Lm Ericsson (Publ)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Telefonaktiebolaget Lm Ericsson (Publ) filed Critical Telefonaktiebolaget Lm Ericsson (Publ)
Priority to PCT/IN2017/050629 priority Critical patent/WO2019130327A1/en
Publication of WO2019130327A1 publication Critical patent/WO2019130327A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 Data switching networks
    • H04L 12/28 Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L 12/46 Interconnection of networks
    • H04L 12/4641 Virtual LANs, VLANs, e.g. virtual private networks [VPN]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 Routing or path finding of packets in data switching networks
    • H04L 45/58 Association of routers
    • H04L 45/586 Association of routers of virtual routers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 Routing or path finding of packets in data switching networks
    • H04L 45/74 Address processing for routing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 61/00 Network arrangements, protocols or services for addressing or naming
    • H04L 61/50 Address allocation
    • H04L 61/5038 Address allocation for local use, e.g. in LAN or USB networks, or in a controller area network [CAN]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 61/00 Network arrangements, protocols or services for addressing or naming
    • H04L 61/50 Address allocation
    • H04L 61/5046 Resolving address allocation conflicts; Testing of addresses
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G06F 2009/45562 Creating, deleting, cloning virtual machine instances
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 2101/00 Indexing scheme associated with group H04L61/00
    • H04L 2101/60 Types of network addresses
    • H04L 2101/618 Details of network addresses
    • H04L 2101/622 Layer-2 addresses, e.g. medium access control [MAC] addresses
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 Routing or path finding of packets in data switching networks
    • H04L 45/50 Routing or path finding of packets in data switching networks using label swapping, e.g. multi-protocol label switch [MPLS]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 63/00 Network architectures or network communication protocols for network security
    • H04L 63/02 Network architectures or network communication protocols for network security for separating internal from external traffic, e.g. firewalls
    • H04L 63/0272 Virtual private networks

Definitions

  • the RFC 7432 specification for BGP MPLS-based EVPNs defines EVPN Network Layer Reachability Information (NLRI) that includes a Route Type field, a Route Type-Specific field, and a value indicating the length of the Route Type-Specific field.
  • EVPN Route Type 2 (RT-2 - MAC/IP Advertisement Route) is used to exchange advertisements of MAC/IP addresses between the BGP peers (e.g., DC-GWs).
  • the Route Type-Specific field of an exemplary EVPN RT-2 message is shown in Figure 5a. This field includes a Route Distinguisher (RD), an Ethernet Segment Identifier (ESI), an Ethernet Tag ID, the respective MAC/IP addresses and their respective lengths, and an MPLS label.
  • An EVPN instance requires a Route Distinguisher (RD) that is unique per MAC-Virtual Routing and Forwarding (VRF) table and one or more globally unique Route Targets (RTs).
  • Each Ethernet segment within the EVPN (e.g., respective segments in DC1 and DC2 of Figure 4) will have a unique ESI.
  • Ethernet Tag ID comprises either a 12-bit or 24-bit identifier that identifies a particular broadcast domain (e.g., a virtual LAN or VLAN) in an EVPN.
  • An EVPN instance comprises one or more broadcast domains.
  • the EVPN RT-2 message can be carried, for example, in the “Optional Parameters” field of a BGP “OPEN” message, as defined in RFC 4271 and illustrated in Figure 5b.
  • the OPEN message also includes a BGP Identifier, which can be the IP address of the sender, as explained in more detail below.
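  • For illustration only, the Route Type-Specific field of Figure 5a can be sketched as a byte layout in Python (matching the sample code given later in this document). The field widths follow RFC 7432; the function name, the zeroed RD/ESI values in the example call, and the IPv4-only handling are illustrative assumptions rather than part of the present disclosure:

import struct

def pack_rt2_route_type_specific(rd, esi, eth_tag_id, mac, ipv4, mpls_label):
    """Pack the RT-2 Route Type-Specific field of Figure 5a / RFC 7432:
    RD (8 octets), ESI (10), Ethernet Tag ID (4), MAC length (1), MAC (6),
    IP length (1), IPv4 address (4), MPLS label (3)."""
    assert len(rd) == 8 and len(esi) == 10
    mac_bytes = bytes(int(part, 16) for part in mac.split(':'))
    ip_bytes = bytes(int(octet) for octet in ipv4.split('.'))
    field = rd + esi
    field += struct.pack('!I', eth_tag_id)           # Ethernet Tag ID
    field += struct.pack('!B', 48) + mac_bytes       # MAC length (bits) + MAC
    field += struct.pack('!B', 32) + ip_bytes        # IP length (bits) + IPv4
    field += struct.pack('!I', mpls_label << 4)[1:]  # 20-bit label in 3 octets
    return field

# Example: advertise the MAC/IP pair of a newly-booted VM.
nlri = pack_rt2_route_type_specific(b'\x00' * 8, b'\x00' * 10, 0,
                                    'fa:16:3e:01:01:01', '1.1.1.2', 100)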
  • the DC1 Cloud Orchestrator will inform the SDN Controller of the allocation, causing the SDN Controller (as a BGP Speaker) to send an EVPN RT-2 message comprising the newly allocated MAC/IP addresses to DC-GW (e.g., encapsulated in an OPEN message).
  • the RT-2 message can include a Route Target (RT) field and a source/next-hop field, which in this case identifies vSwitch TEP1 that booted the new VM.
  • Figure 7 illustrates subsequent operations where the DC-GW of DC1 sends the received RT- 2 message to the DC-GW of DC2, indicating the MAC/IP addresses of the newly-booted VM in DC1.
  • the DC-GW of DC1 can also append an MPLS label to the RT-2 message as needed.
  • Figure 8 illustrates subsequent operations where the EVPN RT-2 message from DC1 is passed, via the two DC-GWs, to the SDN Controller of DC2.
  • Figure 8 also illustrates that a VM is booted in DC2 and assigned MAC address xx.xx.xx.bb.bb.bb and IP address 1.1.1.3.
  • the BGP speaker sends an EVPN RT-2 message comprising this MAC/IP address pair to its DC-GW, indicating TEP5 as the vSwitch that booted the new VM.
  • This message, which can be encapsulated in a BGP OPEN message, then traverses the EVPN in a manner similar to the RT-2 message from DC1 to DC2, described above.
  • a BGP speaker in a DC that uses BGP EVPN for establishing multi-DC MAC domains can be aware of the MAC/IP addresses of all VMs that are on those L2 domains (every L2 domain would be an EVPN instance in EVPN terminology).
  • a MAC allocator in a particular DC (e.g., the Cloud Orchestrator in DC1 of Figure 4) can use this information, which can be stored, e.g., in a database, for allocating MAC addresses while booting VMs in that particular DC.
  • when the Cloud Orchestrator wants to allocate a randomly-generated MAC address portion, it consults this database to determine if the randomly-generated MAC address portion is already allocated in another remote DC on the EVPN. In this manner, exemplary embodiments of the present disclosure can avoid most of the MAC address conflicts and overlap that can occur in EVPNs.
  • in case of a conflict, a tie-breaker can be used to determine which VM can retain the conflicting MAC address, and which DC(s) must force their VMs to relinquish the conflicting MAC addresses and obtain new, non-conflicting ones. As such, all DCs except one (i.e., the tie-breaker winner) force the local VMs with the conflicting MACs to renew their MAC addresses, such that non-overlapping MAC addresses are allocated to these local VMs.
  • Various exemplary methods to force this reallocation are described in more detail hereinbelow.
  • the tie-breaking algorithm requires no additional BGP message exchanges.
  • in one problematic scenario, the Cloud Orchestrators in DC1 and DC2 allocate MAC addresses from the same pool, such that VMs in each DC can be allocated the same MAC address aa:aa:aa:aa:aa:aa. Even though EVPN RT-2 messages are provided between DCs, the DCs do not act upon information included in such messages and the conflicting VMs will be blocked, causing the problems discussed above.
  • in contrast, according to exemplary embodiments, the Cloud Orchestrator in DC1 can allocate MAC address aa:aa:aa:aa:aa:aa for a VM and send this information in an EVPN RT-2 message to DC2, whose SDN Controller will mark the received MAC address as already used or, if already allocated, determine whether reallocation of a conflicting MAC address is required.
  • Figure 9 shows a flow diagram of an exemplary method and/or procedure for allocating MAC addresses to VMs in a first data center (DC) configured to communicate with a second DC in an EVPN, according to one or more exemplary embodiments of the present disclosure.
  • the exemplary method illustrated in Figure 9 can be implemented, for example, in one or more data centers configured according to Figure 11 (described below).
  • Although the method is illustrated by blocks in the particular order of Figure 9, this order is merely exemplary, and the steps of the method may be performed in a different order than shown by Figure 9, and may be combined and/or divided into blocks having different functionality.
  • the exemplary method and/or procedure shown in Figure 9 is complementary to, and can be used in conjunction with, the exemplary method and/or procedure shown in Figure 10 to provide improvements and/or solutions to problems described herein.
  • the first data center can receive a request to create a first VM local to the first data center.
  • the data center can allocate a first MAC address to the first VM.
  • the allocation of the first MAC address can be performed, e.g., by a Cloud Orchestrator that is part of the first data center.
  • the first MAC address can be allocated by assigning an organizationally unique identifier (OUI) as a first portion of the first MAC address, and assigning a first randomly-selected identifier as a second portion of the first MAC address.
  • the first data center can determine whether the first MAC address is allocated to another VM within the EVPN. This determination can be performed, e.g., by the Cloud Orchestrator that is part of the first data center. In some exemplary embodiments, the determination in block 930 can comprise comparing the first randomly-selected identifier to corresponding randomly-selected identifiers of one or more further MAC addresses stored by a local datastore 900 or, alternatively, a remote datastore accessible by the first data center.
  • if it is determined in block 930 that the first MAC address is already allocated, the first data center can allocate a second MAC address, instead of the first MAC address, to the first VM.
  • the second MAC address can be allocated by assigning an organizationally unique identifier (OUI) as a first portion of the second MAC address, and assigning a second randomly-selected identifier as a second portion of the second MAC address.
  • the exemplary method and/or procedure of Figure 9 can also include rebooting the first VM if the second MAC address is allocated to the first VM.
  • Block 950 of the exemplary method and/or procedure of Figure 9 is reached after completion of block 940, or if it is determined (in block 930) that the first MAC address has not been allocated to another VM within the EVPN.
  • the first data center updates the local datastore 900 (or alternatively, a remote datastore accessible by the first data center) with an entry associating the first VM with the MAC address allocated to the first VM, i.e., either the first or the second MAC address according to block 930 or block 940, respectively.
  • the first data center sends a message to a second data center indicating allocation of either the first or the second MAC address, as the case may be, to the first VM.
  • the first and second data centers can be connected to a WAN via respective first and second gateways, and the message can be a Border Gateway Protocol (BGP) message (e.g., OPEN message) comprising EVPN Route-Type-2 (RT-2) Network-Layer Reachability Information (NLRI) advertising a MAC/IP address pair of the first VM.
  • the second MAC address might also be determined to be allocated to another VM within the EVPN (in a second iteration through block 930).
  • the first data center can allocate a third (different) MAC address, instead of the second MAC address, to the first VM (in a second iteration through block 940).
  • the loop from blocks 930 through 950 can be traversed repeatedly until the first data center allocates a unique (within the EVPN) MAC address to the first VM.
  • This unique MAC address is indicated in the message sent to the second data center at block 960.
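  • A minimal Python sketch of this allocate-check-reallocate loop (blocks 920 through 960) is given below; an in-memory dictionary stands in for local datastore 900 and a send_rt2 callback stands in for the RT-2 advertisement of block 960, both being illustrative assumptions rather than the disclosed implementation:

import random

OUI = (0xfa, 0x16, 0x3e)  # fixed first portion of every allocated MAC

def allocate_mac(vm_id, datastore, send_rt2):
    """Blocks 920-960 of Figure 9: allocate a MAC address, reallocate on
    conflict, record the result, and advertise it to the peer data center."""
    nic = tuple(random.randint(0x00, 0xff) for _ in range(3))      # block 920
    while nic in datastore.values():          # block 930: already allocated?
        nic = tuple(random.randint(0x00, 0xff) for _ in range(3))  # block 940
    datastore[vm_id] = nic                    # block 950: update the datastore
    mac = ':'.join('%02x' % octet for octet in OUI + nic)
    send_rt2(vm_id, mac)                      # block 960: EVPN RT-2 message
    return mac

# Example: allocate_mac('VM1', {}, lambda vm, mac: None)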
  • Figure 10 shows a flow diagram of another exemplary method and/or procedure for allocating MAC addresses to VMs in a first data center (DC) configured to communicate with a second DC in an EVPN, according to one or more exemplary embodiments of the present disclosure.
  • the exemplary method illustrated in Figure 10 can be implemented, for example, in one or more data centers configured according to Figure 11 (described below).
  • Although the method is illustrated by blocks in the particular order of Figure 10, this order is merely exemplary, and the steps of the method may be performed in a different order than shown by Figure 10, and may be combined and/or divided into blocks having different functionality.
  • the exemplary method and/or procedure shown in Figure 10 is complementary to, and can be used in conjunction with, the exemplary method and/or procedure shown in Figure 9 to provide improvements and/or solutions to problems described herein.
  • the first data center can receive a message from a second data center of the EVPN indicating allocation of a first MAC address to a first VM within the second data center.
  • the first and second data centers can be connected to a WAN via respective first and second gateways, and the message can be a Border Gateway Protocol (BGP) message (e.g., OPEN message) comprising EVPN Route-Type-2 (RT-2) Network-Layer Reachability Information (NLRI) advertising a MAC/IP address pair of the first VM.
  • the first MAC address can comprise an organizationally unique identifier (OUI) and a first randomly-selected identifier.
  • the first data center can determine whether the first MAC address has been allocated to a second VM within the first data center. This determination can be performed, e.g., by the Cloud Orchestrator that is part of the first data center.
  • the determination in block 1020 can comprise extracting the first MAC address from the received message (e.g., from the NLRI of the received message), and/or comparing the first randomly-selected identifier to corresponding randomly-selected identifiers of one or more further MAC addresses stored by a local datastore 1000 or, alternatively, a remote datastore accessible by the first data center.
  • if it is determined that the first MAC address has not been allocated to a second VM, operation proceeds to block 1070 where the local datastore 1000 (or alternatively, a remote datastore accessible by the first data center) is updated with an entry associating the first VM (i.e., in the second/remote data center) with the first MAC address.
  • otherwise, operation proceeds to block 1030 where it is determined whether the second VM needs to be allocated a different MAC address than the first MAC address.
  • This operation can be referred to, for example, as a “tie-breaker” between conflicting MAC addresses.
  • the operations of block 1030 can include comparing first and second values of an identification parameter, the first value associated with the first data center and the second value associated with the second data center.
  • the identification parameter can be a BGP (e.g., router) identifier received from the second data center in a BGP message (e.g., OPEN message).
  • upon a determination that the first value is not greater than the second value, the exemplary method and/or procedure ends without updating the local datastore 1000 with an entry associating the first MAC address with the first VM in the second (remote) data center. Instead, the first data center notifies (e.g., by sending a BGP message such as an OPEN message) the second data center that a new (different) MAC address needs to be allocated to the first VM.
  • otherwise, a second MAC address is allocated to the second VM in block 1040.
  • In other exemplary embodiments, the opposite may be true. That is, if it is determined that the first value is greater than the second value, then a second MAC address is not allocated to the second VM (i.e., the second VM retains the first MAC address) and the first data center may notify the second data center that a new (different) MAC address needs to be allocated to the first VM.
  • allocating a second MAC address in block 1040 can include assigning an organizationally unique identifier (OUI) as a first portion of the second MAC address, and assigning a second randomly-selected identifier as a second portion of the second MAC address.
  • the exemplary method and/or procedure of Figure 10 can also include rebooting the second VM after allocating the second MAC address.
  • if a second MAC address is allocated to the second VM in block 1040, operation proceeds to block 1050 where the local datastore 1000 (or alternatively, a remote datastore accessible by the first data center) is updated with an entry associating the second VM with the newly-allocated second MAC address.
  • the first data center sends a message to the second data center indicating allocation of the second MAC address to the second VM within the first data center.
  • the message can be a BGP message (e.g., OPEN message) comprising EVPN RT-2 NLRI advertising the new MAC/IP address pair of the second VM.
  • the local datastore 1000 (or alternatively, a remote datastore accessible by the first data center) can be updated with an entry associating the first VM (in the second/remote data center) with the first MAC address. Similar to the discussion above for Figure 9, in some exemplary embodiments, the loop from blocks 1020 through 1050 can be traversed repeatedly until the first data center allocates a unique (within the EVPN) MAC address to the second VM. This unique MAC address is indicated in the message sent to the second data center at block 1060.
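  • The receive-side handling of Figure 10 (blocks 1010 through 1070), including the tie-breaker of block 1030, can be sketched as follows. Here the datastore maps MAC addresses to VM identifiers, plain integers stand in for the compared identification-parameter values, and the allocate_mac, send_rt2, and notify_peer callbacks are illustrative stand-ins for the mechanisms described above:

def handle_rt2(mac, remote_vm, local_id, remote_id,
               datastore, allocate_mac, send_rt2, notify_peer):
    """Blocks 1010-1070 of Figure 10: process an RT-2 message advertising
    allocation of `mac` to a VM in the remote (second) data center."""
    local_vm = datastore.get(mac)
    if local_vm is None:                  # block 1020: no local conflict
        datastore[mac] = remote_vm        # block 1070: record the remote VM
        return
    if local_id > remote_id:              # block 1030: tie-breaker
        new_mac = allocate_mac(local_vm)  # block 1040: local VM yields
        datastore[new_mac] = local_vm     # block 1050: record the new MAC
        datastore[mac] = remote_vm        # the remote VM keeps the first MAC
        send_rt2(local_vm, new_mac)       # block 1060: advertise the change
    else:
        notify_peer(remote_vm)            # remote VM must reallocate instead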
  • FIG. 11 shows a block diagram of an exemplary data center 1100 utilizing certain embodiments of the present disclosure, including those described above with reference to other figures.
  • data center 1100 can comprise an SDN Controller configured, e.g., as part of an OpenDaylight (ODL) HA cluster.
  • Data center 1100 can comprise one or more processing units 1110 that can be operably connected to one or more memories 1120.
  • processing units 1110 can comprise multiple individual processors (not shown), each of which can implement and/or provide a portion of the functionality described above. In such case, multiple individual processors may be commonly connected to memories 1120, or individually connected to multiple individual memories.
  • data center 1100 may be implemented in many different combinations of hardware and software including, but not limited to, application processors, signal processors, general-purpose processors, multi-core processors, ASICs, fixed digital circuitry, programmable digital circuitry, analog baseband circuitry, radio-frequency circuitry, software, firmware, and middleware.
  • connection(s) between processing units 1110 and memories 1120 can comprise parallel address and data buses, serial ports, or other methods and/or structures known to those of ordinary skill in the art.
  • Memories 1120 can comprise non-volatile memory (e.g., flash memory, hard disk, etc.), volatile memory (e.g., static or dynamic RAM), network-based (e.g., “cloud”) storage, or a combination thereof.
  • data center 1100 comprises a communications interface 1130 usable to communicate with various devices within data center 1100 as well as with other data centers, as shown in other figures herein.
  • communications interface 1130 is described as a single “interface,” this is for convenience only and skilled persons will recognize that communications interface 1130 can comprise a plurality of interfaces, each for communication with external network devices and/or nodes as desired.
  • communications interface 1130 can comprise one or more Gigabit Ethernet interfaces, optical network interfaces, etc.
  • Memories 1120 can comprise program memory usable to store software code (e.g., program instructions) executed by processing units 1110 that can configure and/or facilitate data center 1100 to perform exemplary methods and/or procedures described herein.
  • memories 1120 can comprise software code executed by processing units 1110 that can facilitate and specifically configure data center 1100 to perform the functions of one or more SDN Controllers as described above. Such functionality is illustrated in Figure 11 as SDN Controller 1160.
  • memories 1120 can comprise software code executed by processing units 1110 that can facilitate and specifically configure data center 1100 to perform the functions of a Cloud Orchestrator as described above. Such functionality is illustrated in Figure 11 as Cloud Orchestrator 1170.
  • memories 1120 can comprise software code executed by processing units 1110 that can facilitate and specifically configure data center 1100 to perform the functions of a BGP Gateway, as described above, in conjunction with communication interface 1130. Such functionality is illustrated in Figure 11 as DC-GW 1140.
  • processing units 1110 and memories 1120 can be used to provide data-plane functionality.
  • one or more processing units 1110 can be configured as VMs as needed and/or desired. Such functionality is illustrated in Figure 11 as VM(s) 1150.
  • one or more processing units 1110 can be used to provide and/or facilitate virtual switching functionality among the VMs and DC- GW. Such functionality is illustrated in Figure 11 as vSwitch 1170.
  • Data Center 1100 can comprise other processing units (not shown) that can be dedicated to providing data-plane functionality including VM(s) 1150 and/or vSwitch 1170, as needed and/or desired.
  • Memories 1120 can also comprise data memory usable for permanent, semi-permanent, and/or temporary storage of information for further processing and/or communication by processing units 1110.
  • memories 1120 can comprise a portion usable for local storage of MAC database information, which is illustrated in Figure 11 as local datastore 1180.
  • device and/or apparatus can be represented by a semiconductor chip, a chipset, or a (hardware) module comprising such chip or chipset; this, however, does not exclude the possibility that a functionality of a device or apparatus, instead of being hardware implemented, be implemented as a software module such as a computer program or a computer program product comprising executable software code portions for execution or being run on a processor.
  • functionality of a device or apparatus can be implemented by any combination of hardware and software.
  • a device or apparatus can also be regarded as an assembly of multiple devices and/or apparatuses, whether functionally in cooperation with or independently of each other.
  • devices and apparatuses can be implemented in a distributed fashion throughout a system, so long as the functionality of the device or apparatus is preserved. Such and similar principles are considered as known to a skilled person.

Abstract

Exemplary embodiments include methods and/or procedures performed in a first data center configured to communicate with a second data center in an Ethernet virtual private network (EVPN) using a common pool of medium access control (MAC) addresses, including: receiving a message from the second data center indicating allocation of a first MAC address to a first VM; determining if the first MAC address has been allocated to a second VM and, if so, determining whether the second VM should be allocated a different MAC address. If it is determined that the second VM should be allocated a different MAC address, allocating a second MAC address to the second VM and updating a local data store with an entry associating the second VM with the second MAC address. Exemplary embodiments also include data centers configured to perform, and computer-readable media comprising instructions embodying, the exemplary procedural operations.

Description

MEDIUM ACCESS CONTROL (MAC) ADDRESS ALLOCATION ACROSS
MULTIPLE DATA CENTERS
TECHNICAL FIELD
[0001] The present application relates generally to the field of networking, and more specifically to Ethernet Virtual Private Networks (EVPN) between remote data centers or other computing environments including, but not limited to, software-defined networking (SDN) environments where the packet-forwarding functionality (e.g., data plane) is separated from the packet routing or switching process (e.g., control plane).
BACKGROUND
[0002] Software-defined networking (SDN) is an architecture addressing the goals and requirements of various modern high-bandwidth applications by providing dynamic, manageable, cost-effective, and adaptable networking configurations. In general, SDN architectures decouple network control functions - also referred to as “control plane” - and packet switching and/or forwarding functions, also referred to as “data plane.” This separation enables network control to be directly programmable and the underlying infrastructure to be abstracted from applications and network services.
[0003] The primary components of an SDN network are controller nodes (also referred to as “SDN controllers”) and data-plane nodes (DPNs, also referred to as “switches” or, collectively, a “datapath”) that handle the switching and forwarding of the data traffic under direction of the SDN controllers. Furthermore, SDN controllers are often logically-centralized entities that translate requirements of higher-layer applications into configuration of the DPNs that they control, while providing a simpler, more abstract view of the datapath to these applications. The interface to the SDN applications is often referred to as the SDN controller’s “northbound interface.” An exemplary northbound controller interface is OpenStack.
[0004] Similarly, the logical interface between an SDN controller and the controlled DPNs or switches is often referred to as the “southbound interface.” Various standardized southbound interfaces are available, including the OpenFlow (OF) protocol standardized and published by the Open Networking Foundation (ONF). Within the OF protocol, a Logical Switch consists of one or more flow tables and a group table, which collectively perform packet lookups and forwarding from input ports to output ports; and one or more OF channels to a controller. Via these channels, the controller can configure and/or manage the switch, such as by adding, updating, and deleting flow entries in flow tables, both reactively (e.g., responsive to packets) and proactively. A controller can also receive events from the switch and send packets out to the switch via OF channels. A switch’s control channel may support a single OF channel with a single controller or, in some implementations, multiple OF channels enabling multiple controllers to share management of a single switch.
[0005] For example, multiple controllers can be configured in a “high-availability” (HA) cluster, whereby one controller serves as a “master” of the connection from a switch to the cluster, and one or more other controllers are connection “slaves.” In such a configuration, SDN controller nodes in the cluster can be front-ended by a load balancer proxy, which exposes a single virtual Internet Protocol (VIP) address used by the switches or DPNs to connect to the controller cluster. The proxy also can distribute incoming switch connections to controller nodes of the cluster based on some predetermined policy, such as round-robin.
[0006] One popular SDN application is in a data center (DC), which is a physical infrastructure for hosting compute nodes in a room or building, which can be subdivided into different performance-optimized DCs (PODs). Each POD can be a modular, portable, self-contained environment that can be deployed as a high-capacity, scalable DC with low operational cost. Moreover, physical DCs can also be divided into virtual DCs (vDCs) or, similarly, PODs can be divided into virtual PODs (vPODs). This document uses the terms DC and vDCs interchangeably, and the terms POD and vPOD interchangeably.
[0007] Figure 1 shows a block diagram of an exemplary DC comprising an SDN controller, a Cloud Orchestrator, a virtual switch (vSwitch), and one or more virtual computing machines (e.g., VM1 as shown). The vSwitch connects VM1 with any other VMs in the DC via local-area network (LAN) functionality, such as Ethernet. As such, when a new VM (e.g., VM1) is booted, the Cloud Orchestrator allocates a medium access control (MAC) address to the VM. All subsequent communication to and from the VM will use this unique MAC address. An exemplary Cloud Orchestrator is OpenStack, in which the “Nova” component pre-determines MAC and Internet Protocol (IP) addresses of VMs in the DC.
[0008] Figure 2 shows an exemplary MAC address allocation in which MAC addresses are 48-bit (six-byte or six-octet) values comprising a three-byte organizationally unique identifier (OUI) that uniquely identifies a vendor, manufacturer, or other organization associated with the DC. The remaining three bytes of the MAC address (i.e., Network Interface Controller (NIC) Specific in Figure 2), which correspond to the particular VM, are generated randomly by the Cloud Orchestrator. An exemplary algorithm for random MAC address generation is given below:
import random

def generate_mac_address():
    """Generate an Ethernet MAC address."""
    # NOTE(vish): We would prefer to use 0xfe here to ensure that linux
    # bridge mac addresses don't change, but it appears to
    # conflict with libvirt, so we use the next highest octet
    # that has the unicast and locally administered bits set
    # properly: 0xfa.
    # Discussion: https://bugs.launchpad.net/nova/+bug/921838
    mac = [0xfa, 0x16, 0x3e,
           random.randint(0x00, 0xff),
           random.randint(0x00, 0xff),
           random.randint(0x00, 0xff)]
    return ':'.join(map(lambda x: "%02x" % x, mac))
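For illustration, every address this routine returns shares the fixed fa:16:3e prefix (the OUI); only the three trailing NIC-specific octets vary between calls, and only those octets can collide between independently-operating orchestrators:

mac1 = generate_mac_address()   # e.g. 'fa:16:3e:4d:0a:91' (random suffix)
mac2 = generate_mac_address()
assert mac1[:8] == mac2[:8] == 'fa:16:3e'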
[0009] In the DC architecture shown in Figure 1, since the Cloud Orchestrator allocates MAC addresses for all the VMs in a DC, even if the random MAC address portion generated for a particular VM is identical to an existing MAC address portion for another VM, the Cloud Orchestrator can allocate a new MAC address portion for the particular VM. In SDN cloud networks, an SDN controller can manage and facilitate connectivity both within its own DC and with other DCs. Figure 3 shows an exemplary DC configured for operation in an SDN cloud network. In addition to the components shown in Figure 1, the DC shown in Figure 3 includes an additional vSwitch (i.e., vSwitch2) and two additional VMs, such that VMs 1-3 are connected to vSwitches 1-2. Each of VMs 1-3 has a MAC address comprising three identical bytes (OUI) and a non-identical MAC address portion comprising three bytes (assigned, in this example, sequentially rather than randomly). In addition, the DC of Figure 3 communicates externally via one (or more) DC gateways (DC-GW). For example, the DC of Figure 3 can communicate with a peer DC (not illustrated) via the respective DC-GWs interconnecting the two DCs. As such, communication between DCs and communication between DC-GWs will be used interchangeably herein.
[0010] Figure 4 is a block diagram illustrating two DCs, DC1 and DC2, configured to communicate via respective DC-GWs. A common protocol for inter-DC communication via DC-GWs is Border Gateway Protocol (BGP), as described in Request for Comments (RFC) 4271 published by the Internet Engineering Task Force (IETF). In cloud networks - not necessarily SDN-based cloud networks - there has been interest in extending enterprise-level MAC-layer (e.g., private layer 2 or L2) domains across DCs. One approach is known as Ethernet VPN (EVPN), which is described in RFC 7209, published by the IETF. Similarly, EVPNs utilizing BGP and multiprotocol label switching (MPLS) are described in RFC 7432, also published by IETF. In Figure 4, this is illustrated by running EVPN using MPLS over BGP between the gateways of DC1 and DC2. In SDN-based cloud networks, the SDN controller acts as a BGP “speaker” in addition to controlling the data plane using southbound protocols such as OpenFlow. However, SDN is not mandatory for establishing BGP-EVPN network connectivity across the DCs. As shown in DC2 of Figure 4, a virtual router (vRouter) with Dynamic Host Configuration Protocol (DHCP) allocator can also be used.
[0011] Such multi-DC layer-2 domains are not without problems, however. For example, the respective Cloud Orchestrators in the DCs shown in Figure 4 will operate independently with respect to allocation of MAC addresses. This creates the possibility of the same MAC address - involving the same OUI and two random selections of the same NIC-specific portion - being assigned to an active VM in each DC. This is illustrated in Figure 4, where MAC address FA:16:3E:01:01:01 has been assigned to VM1 in DC1 and VM2 in DC2.
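The likelihood of such a collision grows quickly with the number of VMs. As an illustrative birthday-paradox estimate (not part of the original disclosure): with the NIC-specific portion drawn uniformly from 2^24 values, the probability of at least one conflict among n independently-booted VMs is approximately 1 - exp(-n(n-1)/(2 * 2^24)), which already exceeds 50% at roughly 5,000 VMs:

import math

def mac_conflict_probability(n, space=2 ** 24):
    """Birthday-paradox estimate of at least one NIC-specific collision
    among n VMs drawing uniformly from `space` possible values."""
    return 1.0 - math.exp(-n * (n - 1) / (2.0 * space))

# mac_conflict_probability(1000) ~= 0.03; mac_conflict_probability(5000) ~= 0.52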
[0012] Unlike in a single DC, it is not possible to detect such MAC address overlap or conflict across multiple DCs. These MAC address conflicts can cause problems such as MAC-layer lockup and/or VM unreachability (due to repetitive MAC address changes), packet loops, and unstable control of vSwitches due to changing MAC address information. One approach to address these problems is to assign static, non-overlapping pools of MAC addresses for random selection in each DC. Even so, it is difficult to predict MAC address demand across multiple DCs in an EVPN. In some cases, a highly-used DC can starve for MAC addresses while other DC(s) have unused MAC addresses.
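A minimal sketch of this static-pool alternative, under the illustrative assumption of an even split of the 24-bit NIC-specific space across DCs, shows why the partitioning is rigid; each DC's pool is fixed at provisioning time regardless of actual demand:

def nic_pool_for_dc(dc_index, num_dcs, space=2 ** 24):
    """Carve the NIC-specific space into equal, non-overlapping,
    statically-assigned per-DC pools."""
    size = space // num_dcs
    return range(dc_index * size, (dc_index + 1) * size)

# In a two-DC EVPN, DC1 may draw only from 0x000000-0x7fffff, even if
# DC2 leaves most of its half 0x800000-0xffffff unused.
pool_dc1 = nic_pool_for_dc(0, 2)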
[0013] Another approach to address these problems is a centralized MAC allocator for all DCs in the EVPN. While this approach avoids unused MAC addresses, its drawbacks include increased delay for MAC address allocation due to communication delay with the DCs and/or processing delay of the centralized MAC allocator. In addition, the centralized MAC allocator presents a single point of failure that may not be suitable for high-availability applications.
[0014] Accordingly, it can be beneficial to address these problems with a solution that provides robust MAC address allocation across a multi-DC EVPN without over- or under-utilizing a limited pool of MAC addresses, incurring excess allocation delay, or creating a single point of failure in the EVPN.
SUMMARY
[0015] Accordingly, to address at least some of such issues and/or problems, certain exemplary embodiments of systems, devices, methods/procedures, and computer-readable media according to the present disclosure can facilitate conflict-free allocation of MAC addresses across multiple data centers in an Ethernet VPN (as implemented, e.g., by a Software-Defined Networking (SDN) network), without requiring additional traffic in the EVPN. As such, these exemplary embodiments can outperform conventional methods, techniques, and systems in various known applications, including exemplary applications discussed herein.
[0016] Certain exemplary embodiments include methods and/or procedures for allocating medium access control (MAC) addresses to virtual computing machines (VMs) in a first data center configured to communicate with a second data center in an EVPN. The exemplary methods and/or procedures can include receiving a request to create a first VM local to the first data center. The exemplary methods and/or procedures can also include allocating a first MAC address to the first VM. In some exemplary embodiments the first MAC address can be allocated by assigning an organizationally unique identifier (OUI) as a first portion of the first MAC address, and assigning a first randomly-selected identifier as a second portion of the first MAC address.
[0017] The exemplary methods and/or procedures can also include determining whether the first MAC address has been allocated to another VM within the EVPN. In some exemplary embodiments, this determination can comprise comparing the first randomly-selected identifier to corresponding randomly-selected identifiers of one or more further MAC addresses stored by a local datastore. If it is determined that the first MAC address has been allocated to another VM, the exemplary methods and/or procedures can include allocating a second MAC address, instead of the first MAC address, to the first VM. In some exemplary embodiments, the second MAC address can be allocated by assigning an organizationally unique identifier (OUI) as a first portion of the second MAC address, and assigning a second randomly-selected identifier as a second portion of the second MAC address. In some embodiments, the exemplary methods and/or procedures can also include rebooting the first VM if the second MAC address is allocated to the first VM.
[0018] The exemplary methods and/or procedures can also include updating the local datastore with an entry associating the first VM with the MAC address allocated to the first VM, i.e., either the first or the second MAC address; and sending a message to the second data center indicating allocation of either the first or the second MAC address, as the case may be. In some exemplary embodiments, the first and second data centers can be connected to a WAN via respective first and second gateways, and the message can be a Border Gateway Protocol (BGP) message (e.g., OPEN message) comprising EVPN Route-Type-2 (RT-2) Network-Layer Reachability Information (NLRI) advertising a MAC/IP address pair of the first VM.
[0019] Other exemplary methods and/or procedures can be provided for allocating medium access control (MAC) addresses to virtual computing machines (VMs) in a first data center configured to communicate with a second data center in an Ethernet virtual private network (EVPN). These exemplary methods and/or procedures can include receiving a message from the second data center indicating an allocation of a first MAC address to a first VM in the second data center. The exemplary methods and/or procedures can also include determining if the first MAC address has been allocated to a second VM in the first data center, based on the contents of a datastore local to the first data center.
[0020] If it is determined that the first MAC address has been allocated to the second VM, the exemplary methods and/or procedures can also include determining whether the second VM should be allocated a different MAC address than the first MAC address. In some exemplary embodiments, this can comprise comparing first and second values of an identification parameter, the first value associated with the first data center and the second value associated with the second data center; and if the first value is greater than the second value, determining that the second VM should be allocated a different MAC address. In some exemplary embodiments, the identification parameter is a router identifier, and the second value is received from the second data center in a Border Gateway Protocol (BGP) message.
[0021] If it is determined that the second VM should be allocated a different MAC address, the exemplary methods and/or procedures can also include: allocating a second MAC address to the second VM; updating the local datastore with an entry associating the second VM with the second MAC address; and sending a message to the second data center indicating an allocation of the second MAC address to the second VM. In some exemplary embodiments, the first and second data centers can be connected to a WAN via respective first and second gateways, and the message can be a Border Gateway Protocol (BGP) message (e.g., OPEN message) comprising EVPN Route-Type-2 (RT-2) Network-Layer Reachability Information (NLRI) advertising a MAC/IP address pair. In some exemplary embodiments, determining if the first MAC address has been allocated to the second VM comprises extracting the first MAC address from the NLRI of the message received from the second data center.
[0022] In some exemplary embodiments, the first and second MAC addresses comprise an organizationally unique identifier (OUI) and respective first and second randomly-selected identifiers. In some exemplary embodiments, determining if the first MAC address has been allocated to the second VM comprises comparing the first randomly-selected identifier to corresponding randomly-selected identifiers of one or more further MAC addresses stored by the local datastore.
[0023] The exemplary methods and/or procedures can also include updating the local datastore with an entry associating the first VM with the first MAC address, based on determining that the first MAC address has not been allocated to the second VM, and/or that the second VM should be allocated a different MAC address.
[0024] Other exemplary embodiments include data centers comprising components such as memories and processors that configure the data center to perform operations corresponding to the exemplary methods and/or procedures described above. Other exemplary embodiments include non-transitory, computer-readable media storing program instructions that, when executed by at least one processor, configure a data center to perform operations corresponding to the exemplary methods and/or procedures described above.
[0025] These and other objects, features and advantages of the exemplary embodiments of the present disclosure will become apparent upon reading the following detailed description of the exemplary embodiments of the present disclosure, when taken in conjunction with the appended claims.
BRIEF DESCRIPTION OF THE DRAWINGS
[0026] Further objects, features and advantages of the present disclosure will become apparent from the following detailed description taken in conjunction with the accompanying Figures showing illustrative embodiments, in which:
[0027] Figure 1 is a block diagram of an exemplary data center (DC);
[0028] Figure 2 illustrates exemplary techniques for allocating MAC addresses in a local area network (LAN), e.g., within a DC;
[0029] Figure 3 is a block diagram of an exemplary DC configured for operation in an SDN cloud network;
[0030] Figure 4 is a block diagram of two DCs configured to communicate via respective DC gateways using EVPN techniques, according to one or more exemplary embodiments of the present disclosure;
[0031] Figures 5a-b illustrate the structure of an exemplary EVPN Route Type-2 (RT-2) message and a Border Gateway Protocol (BGP) message that can encapsulate the exemplary RT-2 messages, according to one or more exemplary embodiments of the present disclosure;
[0032] Figure 6 is a network block diagram illustrating flow of RT-2 messages within a DC configured to communicate using EVPN techniques, according to one or more exemplary embodiments of the present disclosure;
[0033] Figure 7 is a network block diagram illustrating flow of RT-2 messages across a wide-area network (WAN) between two DCs configured to communicate using EVPN techniques, according to one or more exemplary embodiments of the present disclosure;
[0034] Figure 8 is a network block diagram illustrating flow of RT-2 messages within and between two DCs configured to communicate using EVPN techniques over a WAN, according to one or more exemplary embodiments of the present disclosure;
[0035] Figure 9 is a flow diagram of an exemplary method and/or procedure for allocating MAC addresses to VMs in a first DC configured to communicate with a second DC in an EVPN, according to one or more exemplary embodiments of the present disclosure;
[0036] Figure 10 is a flow diagram of another exemplary method and/or procedure for allocating MAC addresses to VMs in a first DC configured to communicate with a second DC in an EVPN, according to one or more exemplary embodiments of the present disclosure; and

[0037] Figure 11 is a block diagram of an exemplary DC according to one or more exemplary embodiments of the present disclosure.
[0038] While the present disclosure will now be described in detail with reference to the figures, it is done so in connection with the illustrative embodiments and is not limited by the particular embodiments illustrated in the figure(s) or in the appended claims.
DETAILED DESCRIPTION
[0039] The RFC 7432 specification for BGP MPLS-based EVPNs defines EVPN Network Layer Reachability Information (NLRI) that includes a Route Type field, a Route Type-Specific field, and a value indicating the length of the Route Type-Specific field. EVPN Route Type 2 (RT-2 - MAC/IP Advertisement Route) is used to exchange advertisements of MAC/IP addresses between the BGP peers (e.g., DC-GWs). The Route Type-Specific field of an exemplary EVPN RT-2 message is shown in Figure 5a. This field includes a Route Distinguisher (RD), an Ethernet Segment Identifier (ESI), an Ethernet Tag ID, the respective MAC/IP addresses and their respective lengths, and an MPLS label. An EVPN instance requires a Route Distinguisher (RD) that is unique per MAC-Virtual Routing and Forwarding (VRF) table and one or more globally unique Route Targets (RTs). Each Ethernet segment within the EVPN (e.g., respective segments in DC1 and DC2 of Figure 4) will have a unique ESI. The Ethernet Tag ID comprises either a 12-bit or 24-bit identifier that identifies a particular broadcast domain (e.g., a virtual LAN or VLAN) in an EVPN. An EVPN instance comprises one or more broadcast domains. The EVPN RT-2 message can be carried, for example, in the "Optional Parameters" field of a BGP "OPEN" message, as defined in RFC 4271 and illustrated in Figure 5b. The OPEN message also includes a BGP Identifier, which can be the IP address of the sender, as explained in more detail below.
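To make the field layout concrete, the following Python sketch packs the Route Type-Specific field with the widths given in RFC 7432 (section 7.2, MAC/IP Advertisement Route); the function name and byte-string arguments are illustrative assumptions, not part of the disclosure:

    import struct

    def pack_rt2_route_specific(rd: bytes, esi: bytes, eth_tag: int,
                                mac: bytes, ip: bytes, label1: bytes) -> bytes:
        """Pack an RT-2 Route Type-Specific field: RD (8 octets), ESI (10),
        Ethernet Tag ID (4), MAC length (1, in bits), MAC (6),
        IP length (1, in bits), IP (0/4/16), MPLS Label1 (3)."""
        assert len(rd) == 8 and len(esi) == 10 and len(mac) == 6
        assert len(ip) in (0, 4, 16) and len(label1) == 3
        buf = rd + esi + struct.pack("!I", eth_tag)  # network byte order
        buf += bytes([len(mac) * 8]) + mac           # MAC address length in bits
        buf += bytes([len(ip) * 8]) + ip             # IP address length in bits
        return buf + label1                          # MPLS label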
[0040] This is further illustrated by Figures 6 through 8. Figure 6 illustrates data center DC1, where a Cloud Orchestrator receives a new VM boot request and allocates the VM MAC address based on the three-byte OUI and a three-byte random value, e.g., MAC = aa:aa:aa:aa:aa:aa. The DC1 Cloud Orchestrator will inform the SDN Controller of the allocation, causing the SDN Controller (as a BGP Speaker) to send an EVPN RT-2 message comprising the newly allocated MAC/IP addresses to DC-GW (e.g., encapsulated in an OPEN message). In addition, the RT-2 message can include a Route Target (RT) field and a source/next-hop field, which in this case identifies vSwitch TEP1 that booted the new VM. Figure 7 illustrates subsequent operations where the DC-GW of DC1 sends the received RT-2 message to the DC-GW of DC2, indicating the MAC/IP addresses of the newly-booted VM in DC1. The DC-GW of DC1 can also append an MPLS label to the RT-2 message as needed.
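A minimal sketch of such an allocator (the function name is an assumption; Python's secrets module supplies the random three bytes):

    import secrets

    def allocate_mac(oui: bytes) -> str:
        """Build a MAC address from a fixed 3-byte OUI plus a 3-byte
        randomly-selected identifier, as the Cloud Orchestrator does above."""
        assert len(oui) == 3
        nic = secrets.token_bytes(3)  # the randomly-selected identifier
        return ":".join(f"{octet:02x}" for octet in oui + nic)

For example, allocate_mac(bytes.fromhex("aaaaaa")) yields addresses of the form aa:aa:aa:xx:xx:xx.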
[0041] Figure 8 illustrates subsequent operations where the EVPN RT-2 message from DC1 is passed, via the two DC-GWs, to the SDN Controller of DC2. Figure 8 also illustrates that a VM is booted in DC2 and assigned MAC address xx:xx:xx:bb:bb:bb and IP address 1.1.1.3. After being informed by the DC2 Cloud Orchestrator, the BGP speaker (e.g., SDN Controller) sends an EVPN RT-2 message comprising this MAC/IP address pair to its DC-GW, indicating TEP5 as the vSwitch that booted the new VM. This message, which can be encapsulated in a BGP OPEN message, then traverses the EVPN in a manner similar to the RT-2 message from DC1 to DC2, described above.
[0042] As such, a BGP speaker (e.g., SDN Controller) in a DC that uses BGP EVPN for establishing multi-DC MAC domains can be aware of the MAC/IP addresses of all VMs that are on those L2 domains (every L2 domain would be an EVPN instance in EVPN terminology). In exemplary embodiments of the present disclosure, a MAC allocator in a particular DC (e.g., Cloud Orchestrator in DC1 of Figure 4) can use this information - which can be stored, e.g., in a database - for allocating MAC addresses while booting VMs in that particular DC. For example, when the Cloud Orchestrator wants to allocate a randomly-generated MAC address portion, it consults this database to determine if the randomly-generated MAC address portion is already allocated in another remote DC on the EVPN. In this manner, exemplary embodiments of the present disclosure can avoid most of the MAC address conflicts and overlap that can occur in EVPNs.
[0043] Nevertheless, even before the arrival of an EVPN RT-2 message advertising a MAC/IP address of a VM in a remote DC, the local DC may need to allocate a MAC address to a booting VM. This demand can lead to overlap or inconsistent MAC address allocation. Exemplary embodiments of the present disclosure address this potential issue as follows. When a BGP speaker receives the routes later, a tie-breaker can be used to determine which VM can retain the conflicting MAC address, and which DC(s) must force their VMs to relinquish the conflicting MAC addresses and obtain new, non-conflicting ones. As such, all DCs except one (i.e., the tie-breaker winner) force the local VMs with the conflicting MACs to renew their MAC addresses, such that non-overlapping MAC addresses are allocated to these local VMs. Various exemplary methods to force this reallocation are described in more detail hereinbelow. In these exemplary embodiments, the tie-breaking algorithm requires no additional BGP message exchanges.
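One way to realize such a tie-breaker without extra message exchanges is to compare the BGP Identifiers already carried in the peers' OPEN messages, as in the sketch below; the function name and the "greater identifier renews" convention are assumptions consistent with block 1030 of Figure 10 (paragraph [0054] notes the opposite convention is equally possible):

    import ipaddress

    def local_vm_must_renew(local_bgp_id: str, remote_bgp_id: str) -> bool:
        """Deterministic tie-breaker: BGP Identifiers are 4-octet values,
        conventionally written as IPv4 addresses; the DC with the greater
        identifier forces its local VM to relinquish the conflicting MAC."""
        return (int(ipaddress.IPv4Address(local_bgp_id)) >
                int(ipaddress.IPv4Address(remote_bgp_id)))

Because both peers apply the same comparison to the same pair of identifiers, they reach opposite conclusions about their own VMs, so exactly one VM keeps the conflicting address.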
[0044] If the network shown in Figure 8 is implemented without the exemplary embodiments disclosed herein, the Cloud Orchestrators in DC1 and DC2 allocate MAC addresses from the same pool, such that VMs in each DC can be allocated the same MAC address aa:aa:aa:aa:aa:aa. Even though EVPN RT-2 messages are provided between DCs, the DCs do not act upon information included in such messages and the conflicting VMs will be blocked, causing the problems discussed above. In contrast, if the network shown in Figure 8 is implemented using exemplary embodiments disclosed herein, the Cloud Orchestrator in DC1 can allocate MAC address aa:aa:aa:aa:aa:aa for a VM and send this information in an EVPN RT-2 message to DC2, whose SDN Controller will mark the received MAC address as already used or, if already allocated, determine whether reallocation of a conflicting MAC address is required.
[0045] Figure 9 shows a flow diagram of an exemplary method and/or procedure for allocating MAC addresses to VMs in a first data center (DC) configured to communicate with a second DC in an EVPN, according to one or more exemplary embodiments of the present disclosure. The exemplary method illustrated in Figure 9 can be implemented, for example, in one or more data centers configured according to Figure 11 (described below). Although the method is illustrated by blocks in the particular order of Figure 9, this order is merely exemplary, and the steps of the method may be performed in a different order than shown by Figure 9, and may be combined and/or divided into blocks having different functionality. Furthermore, the exemplary method and/or procedure shown in Figure 9 is complementary to, and can be used in conjunction with, the exemplary method and/or procedure shown in Figure 10 to provide improvements and/or solutions to problems described herein.
[0046] For example, in block 910, the first data center can receive a request to create a first VM local to the first data center. In block 920, the data center can allocate a first MAC address to the first VM. The allocation of the first MAC address can be performed, e.g., by a Cloud Orchestrator that is part of the first data center. In some exemplary embodiments, the first MAC address can be allocated by assigning an organizationally unique identifier (OUI) as a first portion of the first MAC address, and assigning a first randomly-selected identifier as a second portion of the first MAC address.
[0047] In block 930, the first data center can determine whether the first MAC address is allocated to another VM within the EVPN. This determination can be performed, e.g., by the Cloud Orchestrator that is part of the first data center. In some exemplary embodiments, the determination in block 930 can comprise comparing the first randomly-selected identifier to corresponding randomly-selected identifiers of one or more further MAC addresses stored by a local datastore 900 or, alternatively, a remote datastore accessible by the first data center.
[0048] If it is determined that the first MAC address has been allocated to another VM, operation proceeds to block 940, where the first data center can allocate a second MAC address, instead of the first MAC address, to the first VM. In some exemplary embodiments, the second MAC address can be allocated by assigning an organizationally unique identifier (OUI) as a first portion of the second MAC address, and assigning a second randomly-selected identifier as a second portion of the second MAC address. In some embodiments, the exemplary method and/or procedure of Figure 9 can also include rebooting the first VM if the second MAC address is allocated to the first VM.
[0049] Block 950 of the exemplary method and/or procedure of Figure 9 is reached after completion of block 940, or if it is determined (in block 930) that the first MAC address has not been allocated to another VM within the EVPN. In block 950, the first data center updates the local datastore 900 (or alternatively, a remote datastore accessible by the first data center) with an entry associating the first VM with the MAC address allocated to the first VM, i.e., either the first or the second MAC address according to block 930 or block 940, respectively. In block 960, the first data center sends a message to a second data center indicating allocation of either the first or the second MAC address, as the case may be, to the first VM. In some exemplary embodiments, the first and second data centers can be connected to a WAN via respective first and second gateways, and the message can be a Border Gateway Protocol (BGP) message (e.g., OPEN message) comprising EVPN Route-Type-2 (RT-2) Network-Layer Reachability Information (NLRI) advertising a MAC/IP address pair of the first VM. In some exemplary embodiments, the second MAC address might also be determined to be allocated to another VM within the EVPN (in a second iteration through block 930). In this case, the first data center can allocate a third (different) MAC address, instead of the second MAC address, to the first VM (in a second iteration through block 940). Thus, the loop from blocks 930 through 950 can be traversed repeatedly until the first data center allocates a unique (within the EVPN) MAC address to the first VM. This unique MAC address is indicated in the message sent to the second data center at block 960.
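Combined with the allocator sketched earlier, the Figure 9 flow reduces to a retry loop; a minimal sketch, where the dict-based datastore and the advertise callback (standing in for the RT-2 advertisement of block 960) are assumptions:

    def allocate_unique_mac(datastore: dict, oui: bytes, vm_id: str, advertise) -> str:
        """Sketch of Figure 9, blocks 920-960. `datastore` maps MAC -> VM
        for addresses learned locally and via EVPN RT-2 messages."""
        mac = allocate_mac(oui)            # block 920: OUI + random identifier
        while mac in datastore:            # block 930: already used in the EVPN?
            mac = allocate_mac(oui)        # block 940: pick another random MAC
        datastore[mac] = vm_id             # block 950: record the allocation
        advertise(mac, vm_id)              # block 960: send RT-2 to peer DCs
        return mac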
[0050] Figure 10 shows a flow diagram of another exemplary method and/or procedure for allocating MAC addresses to VMs in a first data center (DC) configured to communicate with a second DC in an EVPN, according to one or more exemplary embodiments of the present disclosure. The exemplary method illustrated in Figure 10 can be implemented, for example, in one or more data centers configured according to Figure 11 (described below). Although the method is illustrated by blocks in the particular order of Figure 10, this order is merely exemplary, and the steps of the method may be performed in a different order than shown by Figure 10, and may be combined and/or divided into blocks having different functionality. Furthermore, the exemplary method and/or procedure shown in Figure 10 is complementary to, and can be used in conjunction with, the exemplary method and/or procedure shown in Figure 9 to provide improvements and/or solutions to problems described herein.
[0051] For example, in block 1010, the first data center can receive a message from a second data center of the EVPN indicating allocation of a first MAC address to a first VM within the second data center. In some exemplary embodiments, the first and second data centers can be connected to a WAN via respective first and second gateways, and the message can be a Border Gateway Protocol (BGP) message (e.g., OPEN message) comprising EVPN Route-Type-2 (RT-2) Network-Layer Reachability Information (NLRI) advertising a MAC/IP address pair of the first VM. In some exemplary embodiments, the first MAC address can comprise an organizationally unique identifier (OUI) and a first randomly-selected identifier.
[0052] In block 1020, the first data center can determine whether the first MAC address has been allocated to a second VM within the first data center. This determination can be performed, e.g., by the Cloud Orchestrator that is part of the first data center. In some exemplary embodiments, the determination in block 1020 can comprise extracting the first MAC address from the received message (e.g., from the NLRI of the received message), and/or comparing the first randomly-selected identifier to corresponding randomly-selected identifiers of one or more further MAC addresses stored by a local datastore 1000 or, alternatively, a remote datastore accessible by the first data center. If it is determined that the first MAC address has not been allocated to a second VM, operation proceeds to block 1070 where the local datastore 1000 (or alternatively, a remote datastore accessible by the first data center) is updated with an entry associating the first VM (i.e., in the second/remote data center) with the first MAC address.
[0053] If it is determined that the first MAC address has been allocated to the second VM, operation proceeds to block 1030 where it is determined whether the second VM needs to be allocated a different MAC address than the first MAC address. This operation can be referred to, for example, as a "tie-breaker" between conflicting MAC addresses. In some exemplary embodiments, the operations of block 1030 can include comparing first and second values of an identification parameter, the first value associated with the first data center and the second value associated with the second data center. In some exemplary embodiments, the identification parameter can be a BGP (e.g., router) identifier received from the second data center in a BGP message (e.g., OPEN message). If the first value is not greater than the second value, then a second MAC address is not allocated to the second VM (i.e., the second VM retains the first MAC address). In such case, the exemplary method and/or procedure ends without updating the local datastore 1000 with an entry associating the first MAC address with the first VM in the second (remote) data center. In some exemplary embodiments, upon a determination that the first value is not greater than the second value, the first data center notifies (e.g., by sending a BGP message such as an OPEN message) the second data center that a new (different) MAC address needs to be allocated to the first VM.
[0054] Otherwise, in block 1030, if the first value is greater than the second value, it is determined that the second VM should be allocated a different MAC address. Operation then proceeds to block 1040 where a second MAC address is allocated to the second VM. In some exemplary embodiments, the opposite may be true. That is, if it is determined that the first value is greater than the second value, then a second MAC address is not allocated to the second VM (i.e., the second VM retains the first MAC address) and the first data center may notify the second data center that a new (different) MAC address needs to be allocated to the first VM. In some exemplary embodiments, allocating a second MAC address in block 1040 can include assigning an organizationally unique identifier (OUI) as a first portion of the second MAC address, and assigning a second randomly-selected identifier as a second portion of the second MAC address. In some embodiments, the exemplary method and/or procedure of Figure 10 can also include rebooting the second VM after allocating the second MAC address.
[0055] If a second MAC address is allocated to the second VM in block 1040, operation proceeds to block 1050 where the local datastore 1000 (or alternatively, a remote datastore accessible by the first data center) is updated with an entry associating the second VM with the newly-allocated second MAC address. In block 1060, the first data center sends a message to the second data center indicating allocation of the second MAC address to the second VM within the first data center. In some exemplary embodiments, the message can be a BGP message (e.g., OPEN message) comprising EVPN RT-2 NLRI advertising the new MAC/IP address pair of the second VM. In block 1070, the local datastore 1000 (or alternatively, a remote datastore accessible by the first data center) can be updated with an entry associating the first VM (in the second/remote data center) with the first MAC address. Similar to the discussion above for Figure 9, in some exemplary embodiments, the loop from blocks 1020 through 1050 can be traversed repeatedly until the first data center allocates a unique (within the EVPN) MAC address to the second VM. This unique MAC address is indicated in the message sent to the second data center at block 1060.
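Tying the pieces together, a sketch of the Figure 10 handler, reusing local_vm_must_renew and allocate_unique_mac from the earlier sketches (all names and the dict-based datastore remain assumptions; the sketch tracks only local VM entries for brevity):

    def handle_remote_rt2(datastore: dict, mac: str, remote_vm: str,
                          local_bgp_id: str, remote_bgp_id: str,
                          oui: bytes, advertise) -> None:
        """Sketch of Figure 10, blocks 1010-1070."""
        local_vm = datastore.get(mac)                      # block 1020
        if local_vm is not None:
            if not local_vm_must_renew(local_bgp_id, remote_bgp_id):
                return  # block 1030: local VM keeps the MAC; remote must renew
            del datastore[mac]                             # block 1030: we lost the tie
            allocate_unique_mac(datastore, oui, local_vm,
                                advertise)                 # blocks 1040-1060
        datastore[mac] = remote_vm                         # block 1070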
[0056] Although various embodiments were described above in terms of exemplary methods and/or procedures, the person of ordinary skill will readily comprehend that such methods can be embodied by various combinations of hardware and software in various systems, communication devices, computing devices, control devices, apparatuses, network nodes, components, non-transitory computer-readable media, virtualized nodes and/or components, etc. Figure 11 shows a block diagram of an exemplary data center 1100 utilizing certain embodiments of the present disclosure, including those described above with reference to other figures. In some exemplary embodiments, data center 1100 can comprise an SDN Controller configured, e.g., as part of an OpenDaylight (ODL) HA cluster.
[0057] Data center 1100 can comprise one or more processing units 1110 that can be operably connected to one or more memories 1120. Persons of ordinary skill in the art will recognize that processing units 1110 can comprise multiple individual processors (not shown), each of which can implement and/or provide a portion of the functionality described above. In such case, multiple individual processors may be commonly connected to memories 1120, or individually connected to multiple individual memories. More generally, persons of ordinary skill in the art will recognize that various protocols and other functions of data center 1100 may be implemented in many different combinations of hardware and software including, but not limited to, application processors, signal processors, general-purpose processors, multi-core processors, ASICs, fixed digital circuitry, programmable digital circuitry, analog baseband circuitry, radio-frequency circuitry, software, firmware, and middleware.
[0058] The connection(s) between processing units 1110 and memories 1120 can comprise parallel address and data buses, serial ports, or other methods and/or structures known to those of ordinary skill in the art. Memories 1120 can comprise non-volatile memory (e.g., flash memory, hard disk, etc.), volatile memory (e.g., static or dynamic RAM), network-based (e.g., "cloud") storage, or a combination thereof. In addition, data center 1100 comprises a communications interface 1130 usable to communicate with various devices within data center 1100 as well as with other data centers, as shown in other figures herein. Although communications interface 1130 is described as a single "interface," this is for convenience only and skilled persons will recognize that communications interface 1130 can comprise a plurality of interfaces, each for communication with external network devices and/or nodes as desired. For example, communications interface 1130 can comprise one or more Gigabit Ethernet interfaces, optical network interfaces, etc.
[0059] Memories 1120 can comprise program memory usable to store software code (e.g., program instructions) executed by processing units 1110 that can configure and/or facilitate data center 1100 to perform exemplary methods and/or procedures described herein. For example, memories 1120 can comprise software code executed by processing units 1110 that can facilitate and specifically configure data center 1100 to perform the functions of one or more SDN Controllers as described above. Such functionality is illustrated in Figure 11 as SDN Controller 1160. Likewise, memories 1120 can comprise software code executed by processing units 1110 that can facilitate and specifically configure data center 1100 to perform the functions of a Cloud Orchestrator as described above. Such functionality is illustrated in Figure 11 as Cloud Orchestrator 1170. Similarly, memories 1120 can comprise software code executed by processing units 1110 that can facilitate and specifically configure data center 1100 to perform the functions of a BGP Gateway, as described above, in conjunction with communication interface 1130. Such functionality is illustrated in Figure 11 as DC-GW 1140.
[0060] Although the above description of processing units 1110 and memories 1120 has focused on control-plane functionality, a portion of processing units 1110 can be used to provide data-plane functionality. In some exemplary embodiments, one or more processing units 1110 can be configured as VMs as needed and/or desired. Such functionality is illustrated in Figure 11 as VM(s) 1150. Similarly, one or more processing units 1110 can be used to provide and/or facilitate virtual switching functionality among the VMs and DC-GW. Such functionality is illustrated in Figure 11 as vSwitch 1170. Alternately, Data Center 1100 can comprise other processing units (not shown) that can be dedicated to providing data-plane functionality including VM(s) 1150 and/or vSwitch 1170, as needed and/or desired.
[0061] Memories 1120 can also comprise data memory usable for permanent, semi-permanent, and/or temporary storage of information for further processing and/or communication by processing units 1110. For example, memories 1120 can comprise a portion usable for local storage of MAC database information, which is illustrated in Figure 11 as local datastore 1180.
[0062] As described herein, device and/or apparatus can be represented by a semiconductor chip, a chipset, or a (hardware) module comprising such chip or chipset; this, however, does not exclude the possibility that a functionality of a device or apparatus, instead of being hardware implemented, be implemented as a software module such as a computer program or a computer program product comprising executable software code portions for execution or being run on a processor. Furthermore, functionality of a device or apparatus can be implemented by any combination of hardware and software. A device or apparatus can also be regarded as an assembly of multiple devices and/or apparatuses, whether functionally in cooperation with or independently of each other. Moreover, devices and apparatuses can be implemented in a distributed fashion throughout a system, so long as the functionality of the device or apparatus is preserved. Such and similar principles are considered as known to a skilled person.
[0063] The foregoing merely illustrates the principles of the disclosure. Various modifications and alterations to the described embodiments will be apparent to those skilled in the art in view of the teachings herein. It will thus be appreciated that those skilled in the art will be able to devise numerous systems, arrangements, and procedures that, although not explicitly shown or described herein, embody the principles of the disclosure and can be thus within the spirit and scope of the disclosure. Various different exemplary embodiments can be used together with one another, as well as interchangeably therewith, as should be understood by those having ordinary skill in the art. In addition, certain terms used in the present disclosure, including the specification, drawings and claims thereof, can be used synonymously in certain instances, including, but not limited to, e.g., data and information. It should be understood that, while these words, and/or other words that can be synonymous to one another, can be used synonymously herein, that there can be instances when such words can be intended to not be used synonymously. Further, to the extent that the prior art knowledge has not been explicitly incorporated by reference herein above, it is explicitly incorporated herein in its entirety. All publications referenced are incorporated herein by reference in their entireties.

CLAIMS:
1. A method for allocating medium access control (MAC) addresses to virtual computing machines (VMs) (1150) in a first data center (1100) configured to communicate with a second data center in an Ethernet virtual private network (EVPN), the method comprising:
receiving a request to create a first VM in the first data center (910);
allocating a first MAC address to the first VM (920);
determining if the first MAC address has been allocated to a second VM in the
EVPN (930), based on the contents of a datastore (900) local to the first data center;
if it is determined that the first MAC address has been allocated to the second VM, allocating a second MAC address, instead of the first MAC address, to the first VM (940); and
updating the local datastore with an entry associating the first VM with the first or the second MAC address (950).
2. The method of claim 1, further comprising sending a message to the second data center indicating the allocation of the first or the second MAC address to the first VM (960).
3. The method of claim 2, wherein:
the first and second data centers are connected to a wide-area network (WAN) via respective first and second gateways (1140); and
the message is a Border Gateway Protocol (BGP) message comprising EVPN Route-Type-2 (RT-2) Network-Layer Reachability Information (NLRI).
4. The method of claim 1, further comprising rebooting the first VM if the second MAC address is allocated to the first VM.
5. The method of claim 1, wherein allocating the first and the second MAC addresses comprises assigning an organizationally unique identifier (OUI) as a first portion of the first and the second MAC addresses.
6. The method of claim 5, wherein:
allocating the first MAC address further comprises assigning a first randomly-selected identifier as a second portion of the first MAC address; and allocating the second MAC address further comprises assigning a second randomly-selected identifier as a second portion of the second MAC address.
7. The method of claim 6, wherein determining if the first MAC address has been allocated to a second VM in the EVPN comprises comparing the first randomly-selected identifier to corresponding randomly-selected identifiers of one or more further MAC addresses stored by the local datastore.
8. A method for allocating medium access control (MAC) addresses to virtual computing machines (VMs) in a first data center (1100) configured to communicate with a second data center in an Ethernet virtual private network (EVPN), the method comprising:
receiving a message from the second data center indicating an allocation of a first MAC address to a first VM in the second data center (1010);
determining if the first MAC address has been allocated to a second VM in the first data center (1020), based on the contents of a datastore (1000) local to the first data center;
if it is determined that the first MAC address has been allocated to the second VM, determining whether the second VM should be allocated a different MAC address than the first MAC address (1030); and
if it is determined that the second VM should be allocated a different MAC address:
allocating a second MAC address to the second VM (1040); and
updating the local datastore with an entry associating the second VM with the second MAC address (1050).
9. The method of claim 8, further comprising: if it is determined that the second VM should be allocated a different MAC address, sending a message to the second data center indicating an allocation of the second MAC address to the second VM (1060).
10. The method of claim 9, wherein:
the first and second data centers are connected to a wide-area network (WAN) via respective first and second gateways (1140); and
the messages received from and sent to the second data center are Border Gateway Protocol (BGP) messages comprising EVPN Route-Type-2 Network-Layer Reachability Information (NLRI).
11. The method of claim 10, wherein determining if the first MAC address has been allocated to the second VM comprises extracting the first MAC address from the NLRI of the message received from the second data center.
12. The method of claim 8, wherein the first MAC address comprises an
organizationally unique identifier (OUI) and a first randomly-selected identifier.
13. The method of claim 12, wherein determining if the first MAC address has been allocated to the second VM comprises comparing the first randomly-selected identifier to corresponding randomly-selected identifiers of one or more further MAC addresses stored by the local datastore.
14. The method of claim 8, wherein determining whether the second VM should be allocated a different MAC address comprises:
comparing first and second values of an identification parameter, the first value associated with the first data center and the second value associated with the second data center; and
if the first value is greater than the second value, determining that the second VM should be allocated a different MAC address.
15. The method of claim 14, wherein the identification parameter is a router identifier, and wherein the first data center receives the second value of the identification parameter from the second data center in a Border Gateway Protocol (BGP) message.
16. The method of claim 8, wherein allocating the second MAC address comprises assigning an organizationally unique identifier (OUI) as a first portion of the second MAC address; and assigning a second randomly-selected identifier as a second portion of the second MAC address.
17. The method of claim 8, further comprising: updating the local datastore with an entry associating the first VM with the first MAC address (1070), based on determining at least one of the following: the first MAC address has not been allocated to the second VM; and
the second VM should be allocated a different MAC address.
18. A first data center (1100) configured to communicate with a second data center in an Ethernet virtual private network (EVPN) having a common pool of medium access control (MAC) addresses, the first data center comprising:
a local datastore (1180) for storing MAC address information;
at least one processing unit (1110); and
at least one memory (1120) storing computer-executable instructions that, when executed by the at least one processing unit, configure the first data center to: receive a request to create a first virtual computing machine (VM) (1150) in the first data center;
allocate a first MAC address to the first VM;
determine if the first MAC address has been allocated to a second VM in the EVPN, based on the contents of the local datastore;
if it is determined that the first MAC address has been allocated to the second VM, allocate a second MAC address, instead of the first MAC address, to the first VM; and
update the local datastore with an entry associating the first VM with the first or the second MAC address.
19. The first data center of claim 18, wherein execution of the program instructions further configures the first data center to: send a message to the second data center indicating the allocation of the first or the second MAC address to the first VM.
20. The first data center of claim 19, wherein:
the first and second data centers are connected to a wide-area network (WAN) via respective first and second gateways (1140); and
the message is a Border Gateway Protocol (BGP) message comprising EVPN Route-Type-2 (RT-2) Network-Layer Reachability Information (NLRI).
21. The first data center of claim 18, wherein execution of the program instructions further configures the first data center to: reboot the first VM if the second MAC address is allocated to the first VM.
22. The first data center of claim 18, wherein execution of the instructions configures the first data center to allocate the first and the second MAC addresses by assigning an organizationally unique identifier (OUI) as a first portion of the first and the second MAC addresses.
23. The first data center of claim 22, wherein execution of the instructions configures the first data center to:
allocate the first MAC address by assigning a first randomly- selected identifier as a second portion of the first MAC address; and
allocate the second MAC address by assigning a second randomly-selected identifier as a second portion of the second MAC address.
24. The first data center of claim 23, wherein execution of the instructions configures the first data center to determine if the first MAC address has been allocated to a second VM in the EVPN by comparing the first randomly-selected identifier to corresponding randomly-selected identifiers of one or more further MAC addresses stored by the local datastore.
25. A first data center (1100) configured to communicate with a second data center in an Ethernet virtual private network (EVPN) having a common pool of medium access control (MAC) addresses, the first data center comprising:
a local datastore (1180) for storing MAC address information;
at least one processing unit (1110); and
at least one memory (1120) storing computer-executable instructions that, when executed by the at least one processing unit, configure the first data center to:
receive a message from the second data center indicating an allocation of a first MAC address to a first virtual computing machine (VM) in the second data center;
determine if the first MAC address has been allocated to a second VM (1150) in the first data center, based on the contents of the local datastore;
if it is determined that the first MAC address has been allocated to the second VM, determine whether the second VM should be allocated a different MAC address than the first MAC address; and
if it is determined that the second VM should be allocated a different MAC address:
allocate a second MAC address to the second VM; and
update the local datastore with an entry associating the second VM with the second MAC address.
26. The first data center of claim 25, wherein execution of the instructions further configures the first data center to: if it is determined that the second VM should be allocated a different MAC address, send a message to the second data center indicating an allocation of the second MAC address to the second VM.
27. The first data center of claim 26, wherein:
the first and second data centers are connected to a wide-area network (WAN) via respective first and second gateways (1140); and
the messages received from and sent to the second data center are Border Gateway Protocol (BGP) messages comprising EVPN Route-Type-2 (RT-2) Network- Layer Reachability Information (NLRI).
28. The first data center of claim 27, wherein execution of the instructions configures the first data center to determine if the first MAC address has been allocated to the second VM by extracting the first MAC address from the NLRI of the message received from the second data center.
29. The first data center of claim 25, wherein the first MAC address comprises an organizationally unique identifier (OUI) and a first randomly-selected identifier.
30. The first data center of claim 29, wherein execution of the instructions configures the first data center to determine if the first MAC address has been allocated to the second VM by comparing the first randomly-selected identifier to corresponding randomly-selected identifiers of one or more further MAC addresses stored by the local datastore.
31. The first data center of claim 25, wherein execution of the instructions configures the first data center to determine whether the second VM should be allocated a different MAC address by:
comparing first and second values of an identification parameter, the first value associated with the first data center and the second value associated with the second data center; and
if the first value is greater than the second value, determining that the second VM should be allocated a different MAC address.
32. The first data center of claim 31, wherein the identification parameter is a router identifier, and wherein execution of the instructions further configures the first data center to receive the second value of the identification parameter from the second data center in a Border Gateway Protocol (BGP) message.
33. The first data center of claim 25, wherein execution of the instructions configures the first data center to allocate the second MAC address by: assigning an organizationally unique identifier (OUI) as a first portion of the second MAC address; and assigning a second randomly-selected identifier as a second portion of the second MAC address.
34. The first data center of claim 25, wherein execution of the instructions further configures the first data center to update the local datastore with an entry associating the first VM with the first MAC address, based on determining at least one of the following: the first MAC address has not been allocated to the second VM; and the second VM should be allocated a different MAC address.
35. A non-transitory, computer-readable medium storing computer-executable instructions executable by at least one processor of a first data center arranged to communicate with a second data center in an Ethernet virtual private network (EVPN) having a common pool of medium access control (MAC) addresses, wherein execution of the instructions configures the first data center to:
receive a request to create a first virtual computing machine (VM) in the first data center;
allocate a first MAC address to the first VM;
determine if the first MAC address has been allocated to a second VM in the EVPN, based on the contents of a datastore local to the first data center;
if it is determined that the first MAC address has been allocated to the second VM, allocate a second MAC address, instead of the first MAC address, to the first VM; and
update the local datastore with an entry associating the first VM with the first or the second MAC address.
36. A non-transitory, computer-readable medium storing computer-executable instructions executable by at least one processor of a first data center arranged to communicate with a second data center in an Ethernet virtual private network (EVPN) having a common pool of medium access control (MAC) addresses, wherein execution of the instructions configures the first data center to:
receive a message from the second data center indicating an allocation of a first MAC address to a first VM in the second data center;
determine if the first MAC address has been allocated to a second VM in the first data center, based on the contents of a datastore local to the first data center;
if it is determined that the first MAC address has been allocated to the second VM, determine whether the second VM should be allocated a different MAC address than the first MAC address; and
if it is determined that the second VM should be allocated a different MAC address: allocate a second MAC address to the second VM; and
update the local datastore with an entry associating the second VM with the second MAC address.