US11729094B2 - Source-based routing for virtual datacenters - Google Patents

Source-based routing for virtual datacenters

Info

Publication number
US11729094B2
Authority
US
United States
Prior art keywords: edge gateway, data messages, workloads, router, network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
US17/366,676
Other versions
US20230006920A1 (en)
Inventor
Ganes Kumar Arumugam
Vijai Coimbatore Natarajan
Harish KANAKARAJU
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
VMware LLC
Original Assignee
VMware LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by VMware LLC filed Critical VMware LLC
Priority to US17/366,676
Assigned to VMWARE, INC. reassignment VMWARE, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: NATARAJAN, VIJAI COIMBATORE, ARUMUGAM, GANES KUMAR, KANAKARAJU, HARISH
Publication of US20230006920A1
Application granted
Publication of US11729094B2
Assigned to VMware LLC reassignment VMware LLC CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: VMWARE, INC.
Status: Active
Anticipated expiration

Classifications

    • H: ELECTRICITY; H04: ELECTRIC COMMUNICATION TECHNIQUE; H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/34: Source routing (under H04L 45/00, Routing or path finding of packets in data switching networks)
    • H04L 12/4641: Virtual LANs, VLANs, e.g. virtual private networks [VPN] (under H04L 12/46, Interconnection of networks)
    • H04L 45/74: Address processing for routing
    • H04L 47/12: Avoiding congestion; Recovering from congestion (under H04L 47/10, Flow control; Congestion control)
    • H04L 61/25: Mapping addresses of the same type (under H04L 61/09, Mapping addresses)
    • H04L 61/2514: Translation of Internet protocol [IP] addresses between local and global IP addresses

Definitions

  • FIG. 1 conceptually illustrates a tenant logical network 100 for a virtual datacenter 105 of some embodiments.
  • the logical network 100 of the virtual datacenter is defined to include a tier-0 (T0) logical router 110 , a management tier-1 (T1) logical router 115 , and a compute tier-1 (T1) logical router 120 .
  • the T0 logical router 110 (implemented in part as an edge gateway) handles traffic entering and exiting the virtual datacenter 105 (e.g., traffic sent to and from an on-premises datacenter, traffic sent to and from services provided by the public cloud provider, traffic between workloads at the virtual datacenter 105 and client devices connected to the public Internet, etc.).
  • the T0 logical router 110 has multiple connections (i.e., multiple uplinks) to the underlay network 180 of the public cloud.
  • these different uplinks are used as interfaces for communicating different types of traffic with the public cloud underlay (i.e., traffic to and from different external entities).
  • some or all traffic between the management portion of the logical network 100 and the network endpoint workload data compute nodes (DCNs) is sent through the T0 logical router 110 (e.g., as a logical path between the management T1 logical router 115 and the compute T1 logical router 120 ).
  • the T0 logical router 110 may also be defined to provide certain services for data traffic that it processes (e.g., with different services defined for traffic with different external entities by defining different services on different uplinks).
  • the management and compute T1 logical routers 115 and 120 are sometimes referred to as the management gateway and compute gateway.
  • a typical virtual datacenter is defined with these two T1 logical routers connected to a T0 logical router, which segregates the management network segments from the compute network segments to which workload DCNs connect.
  • public network traffic from external client devices would not be allowed to connect to the management network but would (in certain cases) be allowed to connect to the compute network (e.g., if the compute network includes web servers for a public-facing application).
  • Each of the T1 logical routers 115 and 120 may also apply services to traffic that it processes, whether that traffic is received from the T0 logical router 110 or received from one of the network segments underneath the T1 logical router.
  • the virtual datacenter logical network 100 includes three management logical switches 125 - 135 (also referred to as network segments) and two compute logical switches 140 - 145 .
  • One or more compute manager DCNs 150 connect to the first management logical switch 125 and one or more network manager and controller DCNs 155 connect to the second management logical switch 130 .
  • the DCNs shown here may be implemented in the public cloud as virtual machines (VMs), containers, or other types of machines, in different embodiments.
  • multiple compute manager DCNs 150 form a compute manager cluster connected to the logical switch 125
  • multiple network manager DCNs 155 form a management plane cluster
  • multiple network controller DCNs 155 form a control plane cluster (both of which are connected to the same logical switch 130 ).
  • the virtual datacenter 105 also includes workload DCNs 160 .
  • These DCNs can host applications that are accessed by users (e.g., employees of the enterprise that owns and manages the virtual datacenter 105 ), external client devices (e.g., individuals accessing a web server through a public network), other DCNs (e.g., in the same virtual datacenter or different datacenters, such as an on-premises datacenter), or services provided by the public cloud.
  • the workload DCNs 160 in this example connect to two logical switches 140 and 145 (e.g., because they implement different tiers of an application, or different applications altogether). These DCNs 160 can communicate with each other, with workload DCNs in other datacenters, etc.
  • the workload DCNs 160 include a separate interface (e.g., in a different subnet) that connects to a management logical switch 135 .
  • the workload DCNs 160 communicate with the compute and network management DCNs 150 and 155 via this logical switch 135 , without requiring this control traffic to be sent through the T0 logical router 110 .
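  • As a purely illustrative aid (not part of the patent disclosure), the FIG. 1 topology described above can be sketched as a simple Python structure; all of the names below are hypothetical placeholders:

```python
# A minimal sketch of the FIG. 1 topology, assuming a simple dict-based
# representation. Names are illustrative placeholders only.
virtual_datacenter_logical_network = {
    "t0_router": {                       # edge gateway; uplinks to the underlay
        "uplinks": ["on-prem", "provider-services", "internet"],
        "downlinks": ["mgmt_t1", "compute_t1"],
    },
    "mgmt_t1": {                         # management gateway
        "segments": {
            "mgmt-segment-1": ["compute-manager-cluster"],
            "mgmt-segment-2": ["network-manager-and-controller-cluster"],
            "mgmt-segment-3": ["workload-management-interfaces"],
        },
    },
    "compute_t1": {                      # compute gateway
        "segments": {
            "compute-segment-1": ["workload-dcn-a", "workload-dcn-b"],
            "compute-segment-2": ["workload-dcn-c"],
        },
    },
}
```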
  • FIG. 2 conceptually illustrates the physical implementation of the virtual datacenter 105 in a public cloud datacenter 200 according to some embodiments.
  • some embodiments implement a virtual datacenter such as that shown in FIG. 1 within a VPC of a public cloud datacenter.
  • This figure shows that the virtual datacenter 105 is implemented in an isolated VPC 205 of the public cloud datacenter 200 .
  • this VPC 205 is allocated not just a set of VMs or other DCNs that execute on host computers managed by the public cloud provider and potentially shared with other tenants of the public cloud, but rather a set of host computers 210 - 220 of the public cloud datacenter 200 .
  • the entire virtual datacenter may be implemented on a single host computer of the public datacenter 200 (which may host many VMs, containers, or other DCNs) or multiple different host computers.
  • at least two host computers 210 and 215 execute workload and/or management VMs.
  • Another host computer 220 executes an edge gateway 225 (e.g., as a VM, within a datapath, etc.).
  • this edge gateway 225 implements a centralized component of the T0 logical router, such that all traffic between any external networks and the virtual datacenter is processed by the gateway datapath 225 . Additional details regarding logical routers and their implementation can be found in U.S. Pat. No. 9,787,605, which is incorporated herein by reference.
  • the edge gateway 225 connects to a local router that is implemented in the hypervisor of the host computer 220 in some embodiments. This local router provides routing between the edge gateway uplinks and the public cloud underlay network 180 . Though not shown, in some embodiments both an active edge gateway and a standby edge gateway are implemented in the virtual datacenter (e.g., on two different host computers).
  • FIG. 3 conceptually illustrates a host computer 300 hosting an edge gateway 305 in more detail.
  • the virtual datacenter of some embodiments includes two edge gateways (e.g., on two separate host computers), with one as an active edge gateway and the other as a standby edge gateway.
  • the host computers on which the active and standby edge gateways execute have a similar structure to that shown in the figure.
  • the edge gateway 305 is a VM, container, etc. in some embodiments. This edge gateway 305 may execute a set of virtual switches and/or virtual routers, a DPDK-based datapath, a single flow-based forwarding element (e.g., Open vSwitch), or other forwarding element that enables the edge gateway to perform forwarding for multiple logical elements in addition to performing any services associated with those logical elements (e.g., firewall, VPN, NAT, etc.). As shown, the edge gateway 305 includes four interfaces: one downlink 310 and three uplinks 315 - 325 . The downlink is used for communication with virtual datacenter workloads (e.g., network management components, logical network endpoints, etc.) on the same host computer 300 or other host computers of the virtual datacenter.
  • Each of the uplinks 315 - 325 is used for sending data messages to and receiving data messages from a different external entity.
  • the first uplink 315 processes data traffic between the virtual datacenter workloads and one or more on-premises datacenters owned and/or managed by the public cloud tenant that owns the virtual datacenter.
  • the second uplink 320 processes data traffic between the virtual datacenter workloads and one or more cloud provider services, such as storage services, authentication services, deep packet inspection services, and/or other services provided by the public cloud.
  • the third uplink 325 processes data traffic between the virtual datacenter workloads and the public Internet. This could be traffic sent through the public Internet from various client devices or be used by the virtual datacenter workloads to contact other domains (e.g., other applications in other datacenters).
  • Some embodiments configure the multiple different uplinks for the edge gateway so that the edge gateway can perform different sets of services on the incoming and outgoing data traffic depending on the external entity with which the virtual datacenter workloads are communicating. For instance, VPN services might be used for communication with on-premises datacenter workloads, but not for communication with the provider services or Internet sources. Different firewall rules might be applied for the different external entities, and some embodiments perform source NAT (to convert the virtual datacenter workload network address to a public network address) for Internet traffic but not for cloud provider services data traffic or on-premises data traffic.
  • Each of the edge gateway uplinks 315 - 325 has a different network address and is on a different subnet in some embodiments.
  • the on-premises uplink 315 is configured with an IP address of 10.10.1.2 and is connected to the subnet 10.10.1.0/24.
  • the provider services uplink 320 is configured with an IP address of 10.10.2.2 and is connected to the subnet 10.10.2.0/24.
  • the Internet traffic uplink 325 is configured with an IP address of 10.10.3.2 and is connected to the subnet 10.10.3.0/24.
  • a local software router 330 also executes on the host computer 300 (e.g., in virtualization software of the host computer 300 ) and processes data traffic between the edge gateway 305 and the public cloud underlay network 335 .
  • the public cloud underlay network 335 is a layer-3 network (i.e., does not handle L2 traffic such as address resolution protocol (ARP) traffic, gratuitous ARP (GARP) traffic, etc.).
  • a software router executing on the host computer 300 handles this communication.
  • the edge gateway 305 does not communicate directly with the underlay 335 because the uplinks 315 - 325 are on their own respective subnets separate from the underlay and the underlay 335 would not necessarily be able to differentiate between traffic for the different uplinks.
  • the local software router 330 is configured to route traffic from different external entities to the different edge gateway uplinks 315 - 325 .
  • the local software router 330 has three interfaces 340 - 350 that correspond to the three edge gateway uplinks 315 - 325 . Each of these interfaces is on the same subnet as the corresponding edge gateway interface: the on-premises interface 340 of the router 330 is configured with an IP address of 10.10.1.1, the cloud provider services interface 345 is configured with an IP address of 10.10.2.1, and the public Internet traffic interface 350 is configured with an IP address of 10.10.3.1.
  • the local software router 330 also has a single interface 355 to the public cloud underlay 335 , which is on the same subnet as that underlay (i.e., as the underlay router(s)). In this example, the underlay interface 355 is configured with an IP address of 10.10.4.2, with the underlay having a subnet 10.10.4.0/24.
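  • The example addressing above can be summarized in a short sketch (Python, purely illustrative): each edge gateway uplink and its corresponding local-router interface share a /24, and, as discussed above for FIG. 3, a different set of services is attached to each uplink. The service assignments shown are examples consistent with the text, not a required configuration:

```python
# Illustrative sketch of the FIG. 3 addressing and per-uplink services.
# The IP addresses are the example values from the description; the
# service lists are plausible examples, not mandated by the patent.
EDGE_GATEWAY_UPLINKS = {
    "on-prem":           {"ip": "10.10.1.2", "subnet": "10.10.1.0/24",
                          "services": ["firewall", "vpn"]},
    "provider-services": {"ip": "10.10.2.2", "subnet": "10.10.2.0/24",
                          "services": ["firewall"]},
    "internet":          {"ip": "10.10.3.2", "subnet": "10.10.3.0/24",
                          "services": ["firewall", "source-nat"]},
}

LOCAL_ROUTER_INTERFACES = {
    "on-prem":           "10.10.1.1",   # same subnet as the on-prem uplink
    "provider-services": "10.10.2.1",   # same subnet as the provider-services uplink
    "internet":          "10.10.3.1",   # same subnet as the Internet uplink
    "underlay":          "10.10.4.2",   # on the underlay subnet 10.10.4.0/24
}
```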
  • the virtual datacenter workloads, on-premises datacenter workloads, and public cloud provider services each have their own separate associated sets of network addresses.
  • the logical network within the virtual datacenter has one or more subnets (e.g., a different subnet for each logical switch within the logical network) that are used to communicate with the public cloud provider services and the on-premises datacenter workloads.
  • the on-premises datacenter workloads have their own subnets and the cloud provider services are also typically assigned network addresses in one or more separate subnets. Network addresses outside of these subnets can be assumed to be associated with public Internet traffic.
  • When the edge gateway 305 receives traffic from the virtual datacenter workloads directed to any of these various external entities, the edge gateway 305 routes the traffic to the local router via the interface 315 - 325 associated with the destination address. That is, the edge gateway routing table routes data messages having destination addresses associated with the virtual datacenter workloads via the downlink 310 (i.e., to a particular host computer in the virtual datacenter via the downlink 310 ), data messages having destination addresses associated with the on-premises datacenter via the uplink 315 (i.e., to the local router interface 340 as a next hop via the uplink 315 ), data messages having destination addresses associated with the cloud provider services to the uplink 320 (i.e., to the local router interface 345 as a next hop via the uplink 320 ), and data messages destined for the public Internet (e.g., using a default route) via the uplink 325 (i.e., to the local router interface 350 as a next hop via the uplink 325 ).
  • By routing each type of outgoing data traffic through the associated uplink 315 - 325 , the edge gateway 305 applies the appropriate set of services (e.g., firewall, source network address translation (NAT), VPN, etc.) to the traffic.
  • the local router 330 uses its own routing table to route the outgoing traffic to the cloud provider underlay network via the interface 355 .
  • the edge gateway 305 applies source NAT to traffic sent from the virtual datacenter to endpoints via the public Internet (i.e., the uplink 325 is configured to perform source NAT on this traffic), but no such address translation is applied to traffic sent to either the on-premises datacenter or public cloud provider services (i.e., the uplinks 315 and 320 are not configured to perform source NAT on their traffic). For routing of the outbound traffic, this is not an issue.
  • the data messages are sent to the local router 330 , which routes them to the public cloud underlay 335 via the interface 355 based on their destination addresses.
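  • For the outbound direction, the edge gateway's routing table is therefore ordinary destination-based routing. A minimal Python sketch follows; the workload, on-premises, and provider-service prefixes are assumed placeholders (the patent does not specify them), and the uplink names match the earlier sketch:

```python
import ipaddress

# Destination-based routes at the edge gateway 305 (sketch only).
EDGE_GATEWAY_ROUTES = [
    ("192.168.0.0/16", "downlink"),           # virtual datacenter workloads (assumed)
    ("172.16.0.0/16",  "on-prem"),            # on-premises subnets (assumed)
    ("100.64.0.0/16",  "provider-services"),  # cloud provider services (assumed)
    ("0.0.0.0/0",      "internet"),           # default route for Internet traffic
]

def edge_gateway_interface_for(dst_ip: str) -> str:
    """Longest-prefix match on the destination address only."""
    dst = ipaddress.ip_address(dst_ip)
    matches = [(ipaddress.ip_network(prefix), iface)
               for prefix, iface in EDGE_GATEWAY_ROUTES
               if dst in ipaddress.ip_network(prefix)]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

# e.g. traffic to an on-premises address leaves via the on-prem uplink,
# while traffic to an arbitrary public address takes the default route:
assert edge_gateway_interface_for("172.16.9.1") == "on-prem"
assert edge_gateway_interface_for("198.51.100.7") == "internet"
```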
  • When these external entities send return traffic to the virtual datacenter, however, the data messages may have the same destination address (or sets of destination addresses) but need to be routed to different interfaces of the edge gateway 305 so that the edge gateway can apply the appropriate set of services to different data messages. That is, the public cloud provider service and an on-premises workload might both send data messages to the same virtual datacenter workload using the same destination network address.
  • the public cloud underlay would route this traffic to the interface 355 of the local software router 330 , but this router would not be able to differentiate as to which of the edge gateway uplink interfaces should receive which data message using standard routing based on the destination network address.
  • some embodiments configure the local router to route some data messages (specifically, incoming data messages directed to the virtual datacenter workloads) based on the source address of the data messages rather than only the destination address.
  • These routes specify that any data message received at the local router that (i) has a destination address associated with the virtual datacenter workloads and (ii) has a source address associated with a particular external entity is routed to the edge gateway interface associated with that external entity.
  • FIG. 4 conceptually illustrates a portion of the routing table 400 configured for the local router 330 in some embodiments.
  • some of the routes in the routing table 400 are policy-based routes (i.e., routes that do not match only on destination network address) that match at least partly on the source network address of a received data message.
  • the first route specifies that any data message received at the local router that (i) has a source address associated with the cloud provider services and (ii) has a destination address associated with the virtual datacenter workloads is routed to a next hop of 10.10.2.2 (i.e., the edge gateway interface address associated with cloud provider services traffic).
  • the routing table also specifies that any traffic directed to 10.10.2.0/24 is output via the cloud provider services interface 345 . It should be noted that this may represent multiple routes if there are multiple addresses or subnets associated with the cloud provider services and/or the virtual datacenter workloads.
  • the second route specifies that any data message received at the local router that (i) has a source address associated with the on-premises datacenter and (ii) has a destination address associated with the virtual datacenter workloads is routed to a next hop of 10.10.1.2 (i.e., the edge gateway interface address associated with on-premises datacenter traffic).
  • the routing table also specifies that any traffic directed to 10.10.1.0/24 is output via the on-premises interface 340 .
  • this second route may represent multiple routes if there are multiple addresses or subnets associated with the on-premises datacenter and/or the virtual datacenter workloads.
  • the third route specifies that any data message received at the local router that has a destination address associated with the virtual datacenter workloads and any source address is routed to a next hop of 10.10.3.2 (i.e., the edge gateway interface address associated with public Internet traffic). Though not shown, in some embodiments the routing table also specifies that any traffic directed to 10.10.3.0/24 is output via the Internet traffic interface 350 .
  • This third route may also represent multiple routes if there are multiple addresses or subnets associated with the virtual datacenter workloads. In some embodiments, this is a different destination address than used for the first two routes because traffic from public Internet sources is sent to a public IP address and translated by the edge gateway.
  • the third route is configured with a lower priority than the first two routes so that data traffic directed to the workloads will be sent to one of the first two interfaces (for cloud provider services traffic or on-premises traffic) if matching either of those source addresses and to the public Internet interface for any other source addresses.
  • data traffic from different sources can be directed to the appropriate edge gateway interface and therefore have the appropriate services (firewall, NAT, VPN, etc.) applied.
  • the routing table 400 includes a default route routing data messages with any other destination addresses to a next hop of 10.10.4.1 (i.e., to be output via the interface 355 to a next hop of a router on the public cloud underlay network). This is the output for any traffic sent from the virtual datacenter workloads to the various external entities.
  • the local router 330 operates like a multiplexer for outgoing traffic from multiple different interfaces of the edge gateway and a demultiplexer for incoming traffic (spreading this traffic to the different interfaces of the edge gateway).
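  • The source-based (policy) lookup described for the routing table 400 can be sketched in Python as follows. The next-hop addresses are the edge gateway uplink addresses from FIG. 3; the workload, on-premises, and provider-service prefixes are assumed placeholders; and, for simplicity, the sketch uses a single workload destination prefix even though the patent notes that Internet-sourced traffic may actually be addressed to a separate public (NATed) address:

```python
import ipaddress

VDC_WORKLOADS = ipaddress.ip_network("192.168.0.0/16")   # assumed placeholder
ON_PREM       = ipaddress.ip_network("172.16.0.0/16")    # assumed placeholder
PROVIDER_SVCS = ipaddress.ip_network("100.64.0.0/16")    # assumed placeholder

# Policy routes are evaluated in priority order; a source of None means
# "any source", so the Internet route only matches when the first two do not.
POLICY_ROUTES = [
    (PROVIDER_SVCS, VDC_WORKLOADS, "10.10.2.2"),  # provider-services uplink
    (ON_PREM,       VDC_WORKLOADS, "10.10.1.2"),  # on-premises uplink
    (None,          VDC_WORKLOADS, "10.10.3.2"),  # any other source: Internet uplink
]
DEFAULT_NEXT_HOP = "10.10.4.1"   # underlay router, used for egressing traffic

def local_router_next_hop(src_ip: str, dst_ip: str) -> str:
    src, dst = ipaddress.ip_address(src_ip), ipaddress.ip_address(dst_ip)
    for src_prefix, dst_prefix, next_hop in POLICY_ROUTES:
        if dst in dst_prefix and (src_prefix is None or src in src_prefix):
            return next_hop
    return DEFAULT_NEXT_HOP

# Return traffic from an on-premises workload, a provider service, and an
# Internet client to the same workload address reaches different uplinks:
assert local_router_next_hop("172.16.5.9",  "192.168.1.10") == "10.10.1.2"
assert local_router_next_hop("100.64.0.7",  "192.168.1.10") == "10.10.2.2"
assert local_router_next_hop("203.0.113.4", "192.168.1.10") == "10.10.3.2"
```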
  • FIG. 5 conceptually illustrates a process 500 for configuring a virtual datacenter and the routing for data traffic that the virtual datacenter exchanges with external entities.
  • the process 500 is performed in part by the network manager and controller clusters (as well as the compute cluster) in the virtual datacenter in some embodiments. It should be understood that the process 500 is a conceptual process, and that (i) the operations shown may not be performed in the order shown and (ii) different network and/or compute managers or controllers may perform some of the different operations.
  • the edge gateway (and the routing table in the edge gateway) are configured by a first component (e.g., a cloud service application) while the local software router (and the routing table of the local software router) are configured by a second component (e.g., an agent running on the host computer on which the edge gateway and local software router execute).
  • the second component also configures the underlay router(s).
  • These components may be network management system components (e.g., they receive the configuration information from the network manager and/or controllers in the virtual datacenter).
  • the process 500 begins by configuring (at 505 ) virtual datacenter workloads on host computers of a VPC allocated to the tenant (i.e., the VPC allocated for hosting the virtual datacenter).
  • these host computers are segregated and allocated solely for the virtual datacenter in some embodiments, such that the compute and/or network manager/controller applications have access to the virtualization software of the host computers.
  • a compute controller configures various VMs and/or other workloads to execute on the host computers.
  • a network manager configures the virtualization software on these host computers to perform logical networking for data traffic sent to and from these workloads.
  • the process 500 also configures (at 510 ) an edge gateway to execute on a particular host computer of the VPC.
  • this edge gateway is a VM in some embodiments, but may also operate as a container or a datapath on a bare metal computer in other embodiments.
  • the edge gateway executes on a separate host computer from the other virtual datacenter workloads, while in other embodiments the edge gateway executes on the same host computer as at least a subset of the workloads.
  • While the process 500 describes the configuration of a single edge gateway, it should be understood that in some embodiments one active edge gateway and one or more standby edge gateways are actually configured, often on different host computers for redundancy.
  • the active and standby gateways are mostly configured in the same manner, but other forwarding elements (e.g., on the other host computers) are configured to forward the relevant traffic ingressing and egressing the virtual datacenter to the host computer with the active edge gateway.
  • the process 500 also identifies (at 515 ) services for each external entity with which the virtual datacenter workloads communicate.
  • these services are determined based on user-specified configuration of different uplinks for the edge gateway.
  • the services can include VPN, firewall, NAT, load balancing, or other services.
  • the external entities may include one or more sets of cloud provider services (e.g., storage, etc.).
  • the process 500 then configures (at 520 ) a separate uplink interface of the edge gateway on a separate subnet for each external entity.
  • These subnets are not the subnet of the external entity, but rather are independent private subnets used by the edge gateway to forward data traffic to and receive data traffic from the local router that is also configured on the particular host computer with the edge gateway.
  • the process 500 also configures (at 525 ) the edge gateway to perform the identified services for data traffic ingressing and egressing via each uplink interface.
  • the process 500 configures (at 530 ) a routing table for the edge gateway to route data traffic directed to each external entity to a local software router via the corresponding interface.
  • these are standard routes that match on the destination network address, rather than the policy-based routes used for the return direction traffic at the local software router.
  • Each route for a set of network addresses associated with a particular external entity routes data traffic via one of the uplink interfaces of the edge gateway so that the edge gateway can perform the appropriate set of services on the data traffic.
  • the process 500 configures (at 535 ) a local software router (e.g., a virtual router) on the particular host computer to have (i) an interface with the public cloud underlay and (ii) separate interfaces corresponding to each edge gateway uplink interface.
  • Each of these latter interfaces is configured to be on the same subnet as the corresponding edge gateway uplink, while the former interface is configured to be on the same subnet as the public cloud underlay router to which the particular host computer connects.
  • the process 500 configures (at 540 ) the routing table of the local software router to route data messages from external entities (i.e., received from the public cloud underlay) to the different edge gateway interfaces at least in part based on source addresses of the data messages.
  • the routing table 400 shown in FIG. 4 illustrates examples of these routes that match on (i) the destination address mapping to the virtual datacenter workloads and (ii) different source addresses corresponding to the different external entities.
  • the routing table is also configured to route egressing traffic to the public cloud underlay network (e.g., using a default route).
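  • A runnable, self-contained sketch of the configuration that process 500 produces is shown below, using a plain dict representation rather than any real management API; the entity subnets and the dict layout are assumptions for illustration:

```python
# Sketch of the output of process 500 (operations 505-540), assuming a
# simple dict-based configuration model. No real management API is implied.
def build_virtual_datacenter_config(external_entities):
    """external_entities: name -> {"subnet": <prefix, optional>, "services": [...]}."""
    uplinks, policy_routes = {}, []
    for i, (name, info) in enumerate(external_entities.items(), start=1):
        uplinks[name] = {                          # 520: one uplink per entity,
            "subnet":    f"10.10.{i}.0/24",        #      each on its own subnet
            "edge_ip":   f"10.10.{i}.2",
            "router_ip": f"10.10.{i}.1",
            "services":  info["services"],         # 515/525: per-uplink services
        }
        policy_routes.append({                     # 540: source-based return route
            "src": info.get("subnet", "any"),
            "dst": "virtual-datacenter-workloads",
            "next_hop": f"10.10.{i}.2",
        })
    return {
        "edge_gateway": {
            "uplinks": uplinks,
            # 530: destination-based routes for outgoing traffic
            "routes": {info.get("subnet", "0.0.0.0/0"): name
                       for name, info in external_entities.items()},
        },
        "local_router": {
            "underlay_ip": "10.10.4.2",
            "policy_routes": policy_routes,
            "default_next_hop": "10.10.4.1",
        },
    }

example = build_virtual_datacenter_config({
    "on-prem":           {"subnet": "172.16.0.0/16", "services": ["vpn", "firewall"]},
    "provider-services": {"subnet": "100.64.0.0/16", "services": ["firewall"]},
    "internet":          {"services": ["firewall", "source-nat"]},
})
```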
  • FIG. 6 conceptually illustrates an electronic system 600 with which some embodiments of the invention are implemented.
  • the electronic system 600 may be a computer (e.g., a desktop computer, personal computer, tablet computer, server computer, mainframe, a blade computer, etc.), phone, PDA, or any other sort of electronic device.
  • Such an electronic system includes various types of computer readable media and interfaces for various other types of computer readable media.
  • Electronic system 600 includes a bus 605 , processing unit(s) 610 , a system memory 625 , a read-only memory 630 , a permanent storage device 635 , input devices 640 , and output devices 645 .
  • the bus 605 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of the electronic system 600 .
  • the bus 605 communicatively connects the processing unit(s) 610 with the read-only memory 630 , the system memory 625 , and the permanent storage device 635 .
  • the processing unit(s) 610 retrieve instructions to execute and data to process in order to execute the processes of the invention.
  • the processing unit(s) may be a single processor or a multi-core processor in different embodiments.
  • the read-only-memory (ROM) 630 stores static data and instructions that are needed by the processing unit(s) 610 and other modules of the electronic system.
  • the permanent storage device 635 is a read-and-write memory device. This device is a non-volatile memory unit that stores instructions and data even when the electronic system 600 is off. Some embodiments of the invention use a mass-storage device (such as a magnetic or optical disk and its corresponding disk drive) as the permanent storage device 635 .
  • the system memory 625 is a read-and-write memory device. However, unlike storage device 635 , the system memory is a volatile read-and-write memory, such as a random-access memory.
  • the system memory stores some of the instructions and data that the processor needs at runtime.
  • the invention's processes are stored in the system memory 625 , the permanent storage device 635 , and/or the read-only memory 630 . From these various memory units, the processing unit(s) 610 retrieve instructions to execute and data to process in order to execute the processes of some embodiments.
  • the bus 605 also connects to the input and output devices 640 and 645 .
  • the input devices enable the user to communicate information and select commands to the electronic system.
  • the input devices 640 include alphanumeric keyboards and pointing devices (also called “cursor control devices”).
  • the output devices 645 display images generated by the electronic system.
  • the output devices include printers and display devices, such as cathode ray tubes (CRT) or liquid crystal displays (LCD). Some embodiments include devices such as a touchscreen that function as both input and output devices.
  • bus 605 also couples electronic system 600 to a network 665 through a network adapter (not shown).
  • the computer can be a part of a network of computers (such as a local area network (“LAN”), a wide area network (“WAN”), or an Intranet, or a network of networks, such as the Internet). Any or all components of electronic system 600 may be used in conjunction with the invention.
  • Some embodiments include electronic components, such as microprocessors, storage and memory that store computer program instructions in a machine-readable or computer-readable medium (alternatively referred to as computer-readable storage media, machine-readable media, or machine-readable storage media).
  • computer-readable media include RAM, ROM, read-only compact discs (CD-ROM), recordable compact discs (CD-R), rewritable compact discs (CD-RW), read-only digital versatile discs (e.g., DVD-ROM, dual-layer DVD-ROM), a variety of recordable/rewritable DVDs (e.g., DVD-RAM, DVD-RW, DVD+RW, etc.), flash memory (e.g., SD cards, mini-SD cards, micro-SD cards, etc.), magnetic and/or solid state hard drives, read-only and recordable Blu-Ray® discs, ultra-density optical discs, any other optical or magnetic media, and floppy disks.
  • the computer-readable media may store a computer program that is executable by at least one processing unit and includes sets of instructions for performing various operations.
  • Examples of computer programs or computer code include machine code, such as is produced by a compiler, and files including higher-level code that are executed by a computer, an electronic component, or a microprocessor using an interpreter.
  • In some embodiments, integrated circuits such as application specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs) execute instructions that are stored on the circuit itself.
  • the terms “computer”, “server”, “processor”, and “memory” all refer to electronic or other technological devices. These terms exclude people or groups of people.
  • display or displaying means displaying on an electronic device.
  • the terms “computer readable medium,” “computer readable media,” and “machine readable medium” are entirely restricted to tangible, physical objects that store information in a form that is readable by a computer. These terms exclude any wireless signals, wired download signals, and any other ephemeral signals.
  • Data compute nodes (DCNs), also referred to as addressable nodes, may include non-virtualized physical hosts, virtual machines, containers that run on top of a host operating system without the need for a hypervisor or separate operating system, and hypervisor kernel network interface modules.
  • VMs in some embodiments, operate with their own guest operating systems on a host using resources of the host virtualized by virtualization software (e.g., a hypervisor, virtual machine monitor, etc.).
  • Some containers are constructs that run on top of a host operating system without the need for a hypervisor or separate guest operating system.
  • the host operating system uses name spaces to isolate the containers from each other and therefore provides operating-system level segregation of the different groups of applications that operate within different containers.
  • This segregation is akin to the VM segregation that is offered in hypervisor-virtualized environments that virtualize system hardware, and thus can be viewed as a form of virtualization that isolates different groups of applications that operate in different containers.
  • Such containers are more lightweight than VMs.
  • A hypervisor kernel network interface module, in some embodiments, is a non-VM DCN that includes a network stack with a hypervisor kernel network interface and receive/transmit threads.
  • One example of a hypervisor kernel network interface module is the vmknic module that is part of the ESXi™ hypervisor of VMware, Inc.
  • The examples given in this document could be any type of DCN, including physical hosts, VMs, non-VM containers, and hypervisor kernel network interface modules.
  • the example networks could include combinations of different types of DCNs in some embodiments.

Abstract

Some embodiments provide a method that configures a virtual datacenter that includes a set of workloads executing on hosts in a public cloud and an edge gateway executing on a particular host for handling data traffic between the workloads and different external entities having different sets of network addresses. The method configures a router to execute on the particular host to route data messages between the edge gateway and an underlay network of the public cloud. The router has at least two different interfaces for exchanging data messages with the edge gateway, each router interface corresponding to an interface of the edge gateway. The edge gateway interfaces enable the edge gateway to perform different sets of services on data messages between the workloads and the external entities. The method configures the router to route traffic received from the external entities and addressed to the workloads based on source network addresses.

Description

BACKGROUND
More and more enterprises have moved or are in the process of moving large portions of their computing workloads into various public clouds (e.g., Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, etc.). Some of these public clouds enable a tenant to configure a virtual datacenter within the public cloud, with a portion of the public cloud (i.e., a set of host computers) fully allocated to that tenant's virtual datacenter. The tenant will often want this virtual datacenter to be able to connect with various external components, such as the Internet, an on-premises network, etc. However, if different services are applied to traffic sent to and received from different external components, then the virtual datacenter will need to be able to differentiate between different types of traffic sent to the same destinations.
BRIEF SUMMARY
Some embodiments provide a method for configuring a virtual datacenter on a set of host computers in a public cloud, such that the virtual datacenter can communicate with different external entities. In some embodiments, a router between an edge gateway of the virtual datacenter and an underlay network of the public cloud (e.g., executing on the same host computer as the edge gateway) is configured to route different incoming data messages to different interfaces of the edge gateway based on different source addresses of the incoming data messages that are associated with different external entities. The edge gateway is configured to apply different sets of services to data messages received at these different interfaces, thereby enabling the edge gateway to apply different sets of services to data traffic received from different external entities.
In some embodiments, the virtual datacenter is defined within virtual private clouds (VPCs) of the public cloud. A VPC, in some embodiments, is a set of workloads that are allocated to the tenant of the public cloud (e.g., an enterprise) and that are isolated from workloads of other tenants. In some embodiments, for a virtual datacenter, the tenant VPC is allocated a set of physical host computers of the public cloud that only host workload data compute nodes (e.g., virtual machines (VMs), containers, etc.) that are part of the tenant virtual datacenter (i.e., the physical host computers are not shared with other tenants of the public cloud). Within the VPC, a tenant logical network is defined, to which both the management components and the endpoint workloads connect.
The workloads of the virtual datacenter, in some embodiments, include a set of network management components (e.g., network manager(s) and/or controller(s), compute manager(s), etc.), a set of network endpoints (e.g., on which applications operate), and an edge gateway (e.g., a virtual machine) that executes on a particular host computer and processes data traffic between the workloads of the virtual datacenter and the external entities. The network management components of the virtual datacenter manage the network endpoint workloads and configure the host computers (e.g., managed forwarding elements executing on the host computers) to implement a logical network for communication between the network endpoint workloads.
As the virtual datacenter operates on a set of host computers in a public cloud, the edge gateway of the virtual datacenter is an interface between the virtual datacenter and an underlay network of the public cloud (i.e., the public cloud physical network). The virtual datacenter workloads communicate with various external entities through this underlay network. Some embodiments configure a router between the edge gateway and the public cloud underlay network, executing on the same particular host computer as the edge gateway (e.g., in the virtualization software of that host computer).
As mentioned, the edge gateway is configured with multiple different interfaces corresponding to multiple different external entities with which the virtual datacenter workloads exchange traffic. Some embodiments configure the edge gateway to perform different sets of services on data traffic depending on which of these interfaces the traffic is being sent out (for outgoing traffic) of or received through (for incoming traffic). That is, the edge gateway performs different sets of services for traffic sent to or received from different external entities. These external entities can include the workloads of one or more on-premises datacenters (e.g., enterprise datacenters managed by the public cloud tenant), public cloud provider services (e.g., storage, etc.), public Internet traffic, and/or other entities.
In general, the virtual datacenter workloads, on-premises datacenter workloads, and public cloud provider services will each have their own separate associated sets of network addresses (i.e., one or more associated subnets), with network addresses outside of these subnets assumed to be associated with the public Internet traffic. When the edge gateway receives traffic from the virtual datacenter workloads directed to any of these external entities, the edge gateway routes the traffic to the local router via the interface associated with the destination address (e.g., a first interface associated with on-premises traffic, a second interface associated with cloud provider services traffic, and a third interface associated with Internet traffic). The edge gateway can then perform the correct set of services (e.g., firewall, network address translation (NAT), VPN, etc.) on the traffic. This traffic is sent to the local router through the correct interface, which uses its own routing table to route the outgoing traffic to the cloud provider underlay network.
In many cases, while the edge gateway applies source NAT to traffic directed to the public Internet, no such address translation is applied to traffic being sent to either the on-premises datacenter workloads or to public cloud provider services. While this is not a problem for routing outbound traffic, the local router needs to be able to route traffic from the on-premises datacenter workloads and the public cloud provider services to different interfaces of the edge gateway even when these data messages have the same destination address (i.e., a virtual datacenter workload address), so that the edge gateway receives the traffic through the appropriate interface and therefore applies the appropriate set of services to each data message.
Thus, some embodiments configure the local router to use source address-based routing for these incoming data messages. These routes, in some embodiments, specify that any data message received at the local router that (i) has a destination address associated with the virtual datacenter workloads and (ii) has a source address associated with a particular external entity is routed to the edge gateway interface associated with that external entity. That is, traffic directed to the virtual datacenter workloads having a source address associated with the on-premises datacenter is routed by the local router to the on-premises interface of the edge gateway, while traffic directed to the virtual datacenter workloads having a source address associated with the public cloud provider services is routed by the local router to the public cloud provider services interface of the edge gateway. Traffic received from the public Internet does not have a specific associated set of network addresses, in some embodiments, and thus any traffic directed to the virtual datacenter destination address whose source address is not associated with any of the other interfaces is routed to the public Internet interface of the edge gateway. The edge gateway can then apply the correct set of services to the incoming data messages based on the interface through which these data messages are received.
The preceding Summary is intended to serve as a brief introduction to some embodiments of the invention. It is not meant to be an introduction or overview of all inventive subject matter disclosed in this document. The Detailed Description that follows and the Drawings that are referred to in the Detailed Description will further describe the embodiments described in the Summary as well as other embodiments. Accordingly, to understand all the embodiments described by this document, a full review of the Summary, Detailed Description and the Drawings is needed. Moreover, the claimed subject matters are not to be limited by the illustrative details in the Summary, Detailed Description and the Drawings, but rather are to be defined by the appended claims, because the claimed subject matters can be embodied in other specific forms without departing from the spirit of the subject matters.
BRIEF DESCRIPTION OF THE DRAWINGS
The novel features of the invention are set forth in the appended claims. However, for purpose of explanation, several embodiments of the invention are set forth in the following figures.
FIG. 1 conceptually illustrates a tenant logical network for a virtual datacenter of some embodiments.
FIG. 2 conceptually illustrates the physical implementation of the virtual datacenter in a public cloud datacenter according to some embodiments.
FIG. 3 conceptually illustrates a host computer hosting an edge gateway.
FIG. 4 conceptually illustrates a portion of the routing table configured for a local router in some embodiments.
FIG. 5 conceptually illustrates a process for configuring a virtual datacenter and the routing for data traffic that the virtual datacenter exchanges with external entities.
FIG. 6 conceptually illustrates an electronic system with which some embodiments of the invention are implemented.
DETAILED DESCRIPTION
In the following detailed description of the invention, numerous details, examples, and embodiments of the invention are set forth and described. However, it will be clear and apparent to one skilled in the art that the invention is not limited to the embodiments set forth and that the invention may be practiced without some of the specific details and examples discussed.
Some embodiments provide a method for configuring a virtual datacenter on a set of host computers in a public cloud, such that the virtual datacenter can communicate with different external entities. In some embodiments, a router between an edge gateway of the virtual datacenter and an underlay network of the public cloud (e.g., executing on the same host computer as the edge gateway) is configured to route different incoming data messages to different interfaces of the edge gateway based on different source addresses of the incoming data messages that are associated with different external entities. The edge gateway is configured to apply different sets of services to data messages received at these different interfaces, thereby enabling the edge gateway to apply different sets of services to data traffic received from different external entities.
In some embodiments, the virtual datacenter is defined within virtual private clouds (VPCs) of the public cloud. A VPC, in some embodiments, is a set of workloads that are allocated to the tenant of the public cloud (e.g., an enterprise) and that are isolated from workloads of other tenants. In some embodiments, for a virtual datacenter, the tenant VPC is allocated a set of physical host computers of the public cloud that only host workload data compute nodes (e.g., virtual machines (VMs), containers, etc.) that are part of the tenant virtual datacenter (i.e., the physical host computers are not shared with other tenants of the public cloud). Within the VPC, a tenant logical network is defined, to which both the management components and the endpoint workloads connect.
The workloads of the virtual datacenter, in some embodiments, include a set of network management components (e.g., network manager(s) and/or controller(s), compute manager(s), etc.), a set of network endpoints (e.g., on which applications operate), and an edge gateway (e.g., a virtual machine) that executes on a particular host computer and processes data traffic between the workloads of the virtual datacenter and the external entities. The network management components of the virtual datacenter manage the network endpoint workloads and configure the host computers (e.g., managed forwarding elements executing on the host computers) to implement a logical network for communication between the network endpoint workloads.
FIG. 1 conceptually illustrates a tenant logical network 100 for a virtual datacenter 105 of some embodiments. As shown, the logical network 100 of the virtual datacenter is defined to include a tier-0 (T0) logical router 110, a management tier-1 (T1) logical router 115, and a compute tier-1 (T1) logical router 120. The T0 logical router 110 (implemented in part as an edge gateway) handles traffic entering and exiting the virtual datacenter 105 (e.g., traffic sent to and from an on-premises datacenter, traffic sent to and from services provided by the public cloud provider, traffic between workloads at the virtual datacenter 105 and client devices connected to the public Internet, etc.). As shown, the T0 logical router 110 has multiple connections (i.e., multiple uplinks) to the underlay network 180 of the public cloud. In some embodiments, these different uplinks are used as interfaces for communicating different types of traffic with the public cloud underlay (i.e., traffic to and from different external entities).
In addition, in some embodiments, some or all traffic between the management portion of the logical network 100 and the network endpoint workload data compute nodes (DCNs) is sent through the T0 logical router 110 (e.g., as a logical path between the management T1 logical router 115 and the compute T1 logical router 120). The T0 logical router 110 may also be defined to provide certain services for data traffic that it processes (e.g., with different services defined for traffic with different external entities by defining different services on different uplinks).
The management and compute T1 logical routers 115 and 120 are sometimes referred to as the management gateway and compute gateway. In some embodiments, a typical virtual datacenter is defined with these two T1 logical routers connected to a T0 logical router, which segregates the management network segments from the compute network segments to which workload DCNs connect. In general, public network traffic from external client devices would not be allowed to connect to the management network but would (in certain cases) be allowed to connect to the compute network (e.g., if the compute network includes web servers for a public-facing application). Each of the T1 logical routers 115 and 120 may also apply services to traffic that it processes, whether that traffic is received from the T0 logical router 110 or received from one of the network segments underneath the T1 logical router.
In this example, the virtual datacenter logical network 100 includes three management logical switches 125-135 (also referred to as network segments) and two compute logical switches 140-145. One or more compute manager DCNs 150 connect to the first management logical switch 125 and one or more network manager and controller DCNs 155 connect to the second management logical switch 130. The DCNs shown here may be implemented in the public cloud as virtual machines (VMs), containers, or other types of machines, in different embodiments. In some embodiments, multiple compute manager DCNs 150 form a compute manager cluster connected to the logical switch 125, while multiple network manager DCNs 155 form a management plane cluster and multiple network controller DCNs 155 form a control plane cluster (both of which are connected to the same logical switch 130).
The virtual datacenter 105 also includes workload DCNs 160. These DCNs can host applications that are accessed by users (e.g., employees of the enterprise that owns and manages the virtual datacenter 105), external client devices (e.g., individuals accessing a web server through a public network), other DCNs (e.g., in the same virtual datacenter or different datacenters, such as an on-premises datacenter), or services provided by the public cloud. The workload DCNs 160 in this example connect to two logical switches 140 and 145 (e.g., because they implement different tiers of an application, or different applications altogether). These DCNs 160 can communicate with each other, with workload DCNs in other datacenters, etc. via the interfaces connected to these compute logical switches 140 and 145. In addition, in some embodiments (and as shown in this example), the workload DCNs 160 include a separate interface (e.g., in a different subnet) that connects to a management logical switch 135. The workload DCNs 160 communicate with the compute and network management DCNs 150 and 155 via this logical switch 135, without requiring this control traffic to be sent through the T0 logical router 110.
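Purely as an illustrative aid, the topology of FIG. 1 could be modeled with simple data structures such as the following Python sketch; the class names and segment labels are invented here and do not appear in the figure.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class LogicalSwitch:
    name: str
    kind: str                     # "management" or "compute" segment

@dataclass
class Tier1Router:
    name: str
    segments: List[LogicalSwitch]

@dataclass
class Tier0Router:
    name: str
    uplinks: List[str]            # one uplink per class of external traffic
    tier1_routers: List[Tier1Router]

# Rough model of logical network 100: management and compute T1 gateways
# behind a T0 with three uplinks toward the public cloud underlay.
logical_network = Tier0Router(
    name="virtual-dc-T0",
    uplinks=["on-prem", "provider-services", "internet"],
    tier1_routers=[
        Tier1Router("management-gateway", [
            LogicalSwitch("compute-manager-segment", "management"),
            LogicalSwitch("network-manager-segment", "management"),
            LogicalSwitch("workload-management-segment", "management"),
        ]),
        Tier1Router("compute-gateway", [
            LogicalSwitch("workload-segment-1", "compute"),
            LogicalSwitch("workload-segment-2", "compute"),
        ]),
    ],
)
```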
FIG. 2 conceptually illustrates the physical implementation of the virtual datacenter 105 in a public cloud datacenter 200 according to some embodiments. As mentioned above, some embodiments implement a virtual datacenter such as that shown in FIG. 1 within a VPC of a public cloud datacenter. This figure shows that the virtual datacenter 105 is implemented in an isolated VPC 205 of the public cloud datacenter 200. In some embodiments, this VPC 205 is allocated not just a set of VMs or other DCNs that execute on host computers managed by the public cloud provider and potentially shared with other tenants of the public cloud, but rather a set of host computers 210-220 of the public cloud datacenter 200. This allows the management DCNs to manage the hypervisors and other software of the host computers 210-220 (e.g., so that these hypervisors implement the virtual datacenter logical network 100).
In different embodiments, the entire virtual datacenter may be implemented on a single host computer of the public datacenter 200 (which may host many VMs, containers, or other DCNs) or multiple different host computers. As shown, in this example, at least two host computers 210 and 215 execute workload and/or management VMs. Another host computer 220 executes an edge gateway 225 (e.g., as a VM, within a datapath, etc.). In some embodiments, this edge gateway 225 implements a centralized component of the T0 logical router, such that all traffic between any external networks and the virtual datacenter is processed by the gateway datapath 225. Additional details regarding logical routers and their implementation can be found in U.S. Pat. No. 9,787,605, which is incorporated herein by reference.
The edge gateway 225 connects to a local router that is implemented in the hypervisor of the host computer 220 in some embodiments. This local router provides routing between the edge gateway uplinks and the public cloud underlay network 180. Though not shown, in some embodiments both an active edge gateway and a standby edge gateway are implemented in the virtual datacenter (e.g., on two different host computers).
FIG. 3 conceptually illustrates a host computer 300 hosting an edge gateway 305 in more detail. As noted, the virtual datacenter of some embodiments includes two edge gateways (e.g., on two separate host computers), with one as an active edge gateway and the other as a standby edge gateway. In some such embodiments, the host computers on which the active and standby edge gateways execute have a similar structure to that shown in the figure.
The edge gateway 305 is a VM, container, etc. in some embodiments. This edge gateway 305 may execute a set of virtual switches and/or virtual routers, a DPDK-based datapath, a single flow-based forwarding element (e.g., Open vSwitch), or other forwarding element that enables the edge gateway to perform forwarding for multiple logical elements in addition to performing any services associated with those logical elements (e.g., firewall, VPN, NAT, etc.). As shown, the edge gateway 305 includes four interfaces: one downlink 310 and three uplinks 315-325. The downlink is used for communication with virtual datacenter workloads (e.g., network management components, logical network endpoints, etc.) on the same host computer 300 or other host computers of the virtual datacenter.
Each of the uplinks 315-325 is used for sending data messages to and receiving data messages from a different external entity. Specifically, the first uplink 315 processes data traffic between the virtual datacenter workloads and one or more on-premises datacenters owned and/or managed by the public cloud tenant that owns the virtual datacenter. The second uplink 320 processes data traffic between the virtual datacenter workloads and one or more cloud provider services, such as storage services, authentication services, deep packet inspection services, and/or other services provided by the public cloud. Finally, the third uplink 325 processes data traffic between the virtual datacenter workloads and the public Internet. This could be traffic sent through the public Internet from various client devices, or traffic sent by the virtual datacenter workloads to contact other domains (e.g., other applications in other datacenters).
Some embodiments configure the multiple different uplinks for the edge gateway so that the edge gateway can perform different sets of services on the incoming and outgoing data traffic depending on the external entity with which the virtual datacenter workloads are communicating. For instance, VPN services might be used for communication with on-premises datacenter workloads, but not for communication with the provider services or Internet sources. Different firewall rules might be applied for the different external entities, and some embodiments perform source NAT (to convert the virtual datacenter workload network address to a public network address) for Internet traffic but not for cloud provider services data traffic or on-premises data traffic.
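A hypothetical per-uplink service mapping following these examples might look like the sketch below; the service names and the exact composition of each set are design choices assumed for illustration, not requirements of the embodiments.

```python
# Which services the edge gateway applies per uplink (illustrative only):
UPLINK_SERVICES = {
    "on-prem":           ["firewall", "vpn"],           # no source NAT
    "provider-services": ["firewall"],                   # no source NAT, no VPN
    "internet":          ["firewall", "source-nat"],     # translate workload addresses
}

def services_for_uplink(uplink: str) -> list:
    """Return the ordered list of services applied to data messages that
    ingress or egress through the given edge gateway uplink."""
    return UPLINK_SERVICES.get(uplink, [])
```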
Each of the edge gateway uplinks 315-325 has a different network address and is on a different subnet in some embodiments. As shown, in this example, the on-premises uplink 315 is configured with an IP address of 10.10.1.2 and is connected to the subnet 10.10.1.0/24. The provider services uplink 320 is configured with an IP address of 10.10.2.2 and is connected to the subnet 10.10.2.0/24. Finally, the Internet traffic uplink 325 is configured with an IP address of 10.10.3.2 and is connected to the subnet 10.10.3.0/24.
A local software router 330 also executes on the host computer 300 (e.g., in virtualization software of the host computer 300) and processes data traffic between the edge gateway 305 and the public cloud underlay network 335. In some embodiments, the public cloud underlay network 335 is a layer-3 network (i.e., does not handle L2 traffic such as address resolution protocol (ARP) traffic, gratuitous ARP (GARP) traffic, etc.). As such, rather than a software virtual switch connecting to the underlay, a software router executing on the host computer 300 handles this communication. The edge gateway 305 does not communicate directly with the underlay 335 because the uplinks 315-325 are on their own respective subnets separate from the underlay and the underlay 335 would not necessarily be able to differentiate between traffic for the different uplinks. The local software router 330, as will be described, is configured to route traffic from different external entities to the different edge gateway uplinks 315-325.
The local software router 330 has three interfaces 340-350 that correspond to the three edge gateway uplinks 315-325. Each of these interfaces is on the same subnet as the corresponding edge gateway interface: the on-premises interface 340 of the router 330 is configured with an IP address of 10.10.1.1, the cloud provider services interface 345 is configured with an IP address of 10.10.2.1, and the public Internet traffic interface 350 is configured with an IP address of 10.10.3.1. The local software router 330 also has a single interface 355 to the public cloud underlay 335, which is on the same subnet as that underlay (i.e., as the underlay router(s)). In this example, the underlay interface 355 is configured with an IP address of 10.10.4.2, with the underlay having a subnet 10.10.4.0/24.
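The addressing in this example can be summarized in a short, runnable sketch; the interface and subnet values are those given above for FIG. 3, while the dictionary layout itself is merely illustrative.

```python
import ipaddress

# Each edge gateway uplink and its corresponding local router interface share
# a private /24; the router's underlay interface sits on the underlay subnet.
EDGE_UPLINKS = {
    "on-prem":           {"gateway_if": "10.10.1.2", "router_if": "10.10.1.1", "subnet": "10.10.1.0/24"},
    "provider-services": {"gateway_if": "10.10.2.2", "router_if": "10.10.2.1", "subnet": "10.10.2.0/24"},
    "internet":          {"gateway_if": "10.10.3.2", "router_if": "10.10.3.1", "subnet": "10.10.3.0/24"},
}
UNDERLAY = {"router_if": "10.10.4.2", "subnet": "10.10.4.0/24"}

# Sanity check: every gateway/router interface pair is on its declared subnet.
for cfg in EDGE_UPLINKS.values():
    net = ipaddress.ip_network(cfg["subnet"])
    assert ipaddress.ip_address(cfg["gateway_if"]) in net
    assert ipaddress.ip_address(cfg["router_if"]) in net
assert ipaddress.ip_address(UNDERLAY["router_if"]) in ipaddress.ip_network(UNDERLAY["subnet"])
```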
Typically, the virtual datacenter workloads, on-premises datacenter workloads, and public cloud provider services each have their own separate associated sets of network addresses. The logical network within the virtual datacenter has one or more subnets (e.g., a different subnet for each logical switch within the logical network) that are used to communicate with the public cloud provider services and the on-premises datacenter workloads. The on-premises datacenter workloads have their own subnets and the cloud provider services are also typically assigned network addresses in one or more separate subnets. Network addresses outside of these subnets can be assumed to be associated with public Internet traffic.
When the edge gateway 305 receives traffic from the virtual datacenter workloads directed to any of these various external entities, the edge gateway 305 routes the traffic to the local router via the interface 315-325 associated with the destination address. That is, the edge gateway routing table routes data messages having destination addresses associated with the virtual datacenter workloads via the downlink 310 (i.e., to a particular host computer in the virtual datacenter via the downlink 310), data messages having destination addresses associated with the on-premises datacenter via the uplink 315 (i.e., to the local router interface 340 as a next hop via the uplink 315), data messages having destination addresses associated with the cloud provider services to the uplink 320 (i.e., to the local router interface 345 as a next hop via the uplink 320), and data messages destined for the public Internet (e.g., using a default route) via the uplink 325 (i.e., to the local router interface 350 as a next hop via the uplink 325). By routing each type of outgoing data through the associated uplink 315-325, the edge gateway 305 applies the appropriate set of services (e.g., firewall, source network address translation (NAT), VPN, etc.) on the traffic. The local router 330 then uses its own routing table to route the outgoing traffic to the cloud provider underlay network via the interface 355.
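A minimal sketch of this outbound, destination-based routing follows; the next hops are the local router interface addresses from FIG. 3, while the workload, on-premises, and provider-services prefixes are placeholders assumed for illustration (the description does not fix them).

```python
import ipaddress

EDGE_GATEWAY_ROUTES = [
    # (destination prefix, egress interface,  next hop)
    ("192.168.10.0/24",    "downlink",        None),          # virtual datacenter workloads (assumed prefix)
    ("172.16.0.0/16",      "uplink-on-prem",  "10.10.1.1"),   # on-premises datacenter (assumed prefix)
    ("100.64.0.0/16",      "uplink-provider", "10.10.2.1"),   # cloud provider services (assumed prefix)
    ("0.0.0.0/0",          "uplink-internet", "10.10.3.1"),   # default route: public Internet
]

def route_outbound(dst_ip: str):
    """Standard longest-prefix match on the destination address only."""
    dst = ipaddress.ip_address(dst_ip)
    best = None
    for prefix, iface, next_hop in EDGE_GATEWAY_ROUTES:
        net = ipaddress.ip_network(prefix)
        if dst in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, iface, next_hop)
    return best[1], best[2]

assert route_outbound("172.16.1.10") == ("uplink-on-prem", "10.10.1.1")
assert route_outbound("8.8.8.8") == ("uplink-internet", "10.10.3.1")
```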
As noted above, it is often the case that the edge gateway 305 applies source NAT to traffic sent from the virtual datacenter to endpoints via the public Internet (i.e., the uplink 325 is configured to perform source NAT on this traffic), but no such address translation is applied to traffic sent to either the on-premises datacenter or public cloud provider services (i.e., the uplinks 315 and 320 are not configured to perform source NAT on their traffic). For routing of the outbound traffic, this is not an issue. The data messages are sent to the local router 330, which routes them to the public cloud underlay 335 via the interface 355 based on their destination addresses. However, when these external entities send return traffic to the virtual datacenter, these data messages may have the same destination address (or set of destination addresses) but need to be routed to different interfaces of the edge gateway 305 so that the edge gateway can apply the appropriate set of services to different data messages. That is, a public cloud provider service and an on-premises workload might both send data messages to the same virtual datacenter workload using the same destination network address. The public cloud underlay would route this traffic to the interface 355 of the local software router 330, but the local router would not be able to determine which of the edge gateway uplink interfaces should receive which data message using standard routing based only on the destination network address.
Thus, some embodiments configure the local router to route some data messages (specifically, incoming data messages directed to the virtual datacenter workloads) based on the source address of the data messages rather than only the destination address. These routes, in some embodiments, specify that any data message received at the local router that (i) has a destination address associated with the virtual datacenter workloads and (ii) has a source address associated with a particular external entity is routed to the edge gateway interface associated with that external entity.
FIG. 4 conceptually illustrates a portion of the routing table 400 configured for the local router 330 in some embodiments. As shown, some of the routes in the routing table 400 are policy-based routes (i.e., routes that do not match only on destination network address) that match at least partly on the source network address of a received data message. The first route specifies that any data message received at the local router that (i) has a source address associated with the cloud provider services and (ii) has a destination address associated with the virtual datacenter workloads is routed to a next hop of 10.10.2.2 (i.e., the edge gateway interface address associated with cloud provider services traffic). Though not shown, in some embodiments the routing table also specifies that any traffic directed to 10.10.2.0/24 is output via the cloud provider services interface 345. It should be noted that this may represent multiple routes if there are multiple addresses or subnets associated with the cloud provider services and/or the virtual datacenter workloads.
The second route specifies that any data message received at the local router that (i) has a source address associated with the on-premises datacenter and (ii) has a destination address associated with the virtual datacenter workloads is routed to a next hop of 10.10.1.2 (i.e., the edge gateway interface address associated with on-premises datacenter traffic). Though not shown, in some embodiments the routing table also specifies that any traffic directed to 10.10.1.0/24 is output via the on-premises interface 340. Like the first route, this second route may represent multiple routes if there are multiple addresses or subnets associated with the on-premises datacenter and/or the virtual datacenter workloads.
The third route specifies that any data message received at the local router that has a destination address associated with the virtual datacenter workloads, regardless of its source address, is routed to a next hop of 10.10.3.2 (i.e., the edge gateway interface address associated with public Internet traffic). Though not shown, in some embodiments the routing table also specifies that any traffic directed to 10.10.3.0/24 is output via the Internet traffic interface 350. This third route may also represent multiple routes if there are multiple addresses or subnets associated with the virtual datacenter workloads. In some embodiments, this is a different destination address than the one used for the first two routes because traffic from public Internet sources is sent to a public IP address and translated by the edge gateway. In addition, though priorities are not shown in the routing table, it should be understood that the third route is configured with a lower priority than the first two routes, so that data traffic directed to the workloads is sent to one of the first two interfaces (for cloud provider services traffic or on-premises traffic) if its source address matches either of those routes, and to the public Internet interface for any other source addresses. By using source-based routing, data traffic from different sources can be directed to the appropriate edge gateway interface and therefore have the appropriate services (firewall, NAT, VPN, etc.) applied.
Finally, the routing table 400 includes a default route routing data messages with any other destination addresses to a next hop of 10.10.4.1 (i.e., to be output via the interface 355 to a next hop of a router on the public cloud underlay network). This is the output for any traffic sent from the virtual datacenter workloads to the various external entities. In this way, the local router 330 operates like a multiplexer for outgoing traffic from multiple different interfaces of the edge gateway and a demultiplexer for incoming traffic (spreading this traffic to the different interfaces of the edge gateway).
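The behavior of routing table 400 can be captured in the following sketch; the next hops are the edge gateway and underlay addresses given above, while the workload, on-premises, and provider-services prefixes are the same assumed placeholders used in the earlier sketch.

```python
import ipaddress

# Assumed prefixes (not specified in the description):
VDC_WORKLOADS     = ipaddress.ip_network("192.168.10.0/24")  # virtual datacenter workloads
ON_PREM           = ipaddress.ip_network("172.16.0.0/16")    # on-premises datacenter
PROVIDER_SERVICES = ipaddress.ip_network("100.64.0.0/16")    # cloud provider services

def local_router_next_hop(src_ip: str, dst_ip: str) -> str:
    """Route an incoming data message the way routing table 400 would: by
    source address when the destination is a virtual datacenter workload,
    and via the default route toward the underlay otherwise."""
    src, dst = ipaddress.ip_address(src_ip), ipaddress.ip_address(dst_ip)
    if dst in VDC_WORKLOADS:
        if src in PROVIDER_SERVICES:
            return "10.10.2.2"    # provider services uplink of the edge gateway
        if src in ON_PREM:
            return "10.10.1.2"    # on-premises uplink of the edge gateway
        return "10.10.3.2"        # lower-priority route: public Internet uplink
    return "10.10.4.1"            # default route: underlay router next hop

# Same workload destination, different sources, different edge gateway uplinks.
assert local_router_next_hop("172.16.5.9", "192.168.10.7") == "10.10.1.2"
assert local_router_next_hop("100.64.2.3", "192.168.10.7") == "10.10.2.2"
assert local_router_next_hop("198.51.100.4", "192.168.10.7") == "10.10.3.2"
```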
FIG. 5 conceptually illustrates a process 500 for configuring a virtual datacenter and the routing for data traffic that the virtual datacenter exchanges with external entities. The process 500 is performed in part by the network manager and controller clusters (as well as the compute cluster) in the virtual datacenter in some embodiments. It should be understood that the process 500 is a conceptual process, and that (i) the operations shown may not be performed in the order shown and (ii) different network and/or compute managers or controllers may perform some of the different operations. For instance, in some embodiments the edge gateway (and the routing table in the edge gateway) are configured by a first component (e.g., a cloud service application) while the local software router (and the routing table of the local software router) are configured by a second component (e.g., an agent running on the host computer on which the edge gateway and local software router execute). In some such embodiments, the second component also configures the underlay router(s). These components may be network management system components (e.g., they receive the configuration information from the network manager and/or controllers in the virtual datacenter).
As shown, the process 500 begins by configuring (at 505) virtual datacenter workloads on host computers of a VPC allocated to the tenant (i.e., the VPC allocated for hosting the virtual datacenter). As described above, these host computers are segregated and allocated solely for the virtual datacenter in some embodiments, such that the compute and/or network manager/controller applications have access to the virtualization software of the host computers. In some embodiments, a compute controller configures various VMs and/or other workloads to execute on the host computers. In addition, in some embodiments, a network manager configures the virtualization software on these host computers to perform logical networking for data traffic sent to and from these workloads.
The process 500 also configures (at 510) an edge gateway to execute on a particular host computer of the VPC. As indicated, this edge gateway is a VM in some embodiments, but may also operate as a container or a datapath on a bare metal computer in other embodiments. In some embodiments, the edge gateway executes on a separate host computer from the other virtual datacenter workloads, while in other embodiments the edge gateway executes on the same host computer as at least a subset of the workloads. In addition, while the process 500 describes the configuration of a single edge gateway, it should be understood that in some embodiments one active edge gateway and one or more standby edge gateways are actually configured, often on different host computers for redundancy. In this case, the active and standby gateways are mostly configured in the same manner, but other forwarding elements (e.g., on the other host computers) are configured to forward the relevant traffic ingressing and egressing the virtual datacenter to the host computer with the active edge gateway.
The process 500 also identifies (at 515) services for each external entity with which the virtual datacenter workloads communicate. In some embodiments, these services are determined based on user-specified configuration of different uplinks for the edge gateway. As noted, the services can include VPN, firewall, NAT, load balancing, or other services. The external entities may include one or more sets of cloud provider services (e.g., storage, etc.).
The process 500 then configures (at 520) a separate uplink interface of the edge gateway on a separate subnet for each external entity. These subnets, as shown in FIG. 3 , are not the subnet of the external entity, but rather are independent private subnets used by the edge gateway to forward data traffic to and receive data traffic from the local router that is also configured on the particular host computer with the edge gateway. The process 500 also configures (at 525) the edge gateway to perform the identified services for data traffic ingressing and egressing via each uplink interface.
The process 500 configures (at 530) a routing table for the edge gateway to route data traffic directed to each external entity to a local software router via the corresponding interface. In some embodiments, these are standard routes that match on the destination network address, rather than the policy-based routes used for the return direction traffic at the local software router. Each route for a set of network addresses associated with a particular external entity routes data traffic via one of the uplink interfaces of the edge gateway so that the edge gateway can perform the appropriate set of services on the data traffic.
Next, the process 500 configures (at 535) a local software router (e.g., a virtual router) on the particular host computer to have (i) an interface with the public cloud underlay and (ii) separate interfaces corresponding to each edge gateway uplink interface. Each of these latter interfaces is configured to be on the same subnet as the corresponding edge gateway uplink, while the former interface is configured to be on the same subnet as the public cloud underlay router to which the particular host computer connects.
Finally, the process 500 configures (at 540) the routing table of the local software router to route data messages from external entities (i.e., received from the public cloud underlay) to the different edge gateway interfaces at least in part based on source addresses of the data messages. The routing table 400 shown in FIG. 4 illustrates examples of these routes that match on (i) the destination address mapping to the virtual datacenter workloads and (ii) different source addresses corresponding to the different external entities. The routing table is also configured to route egressing traffic to the public cloud underlay network (e.g., using a default route).
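To tie these operations together, the following self-contained toy sketch applies the per-entity portions of operations 515-540 of process 500 (operations 505 and 510, as well as the interface creation of 535, are elided); every class, field, and value here is invented for illustration and does not represent an actual management-plane API.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple

@dataclass
class Uplink:
    name: str
    subnet: str
    address: str
    services: List[str]

@dataclass
class EdgeGateway:
    uplinks: Dict[str, Uplink] = field(default_factory=dict)
    routes: List[Tuple[str, str]] = field(default_factory=list)              # (dst prefix, uplink)

@dataclass
class LocalRouter:
    policy_routes: List[Tuple[str, str, str]] = field(default_factory=list)  # (src, dst, next hop)
    default_next_hop: Optional[str] = None

def configure(entities: Dict[str, dict], workload_prefix: str, underlay_next_hop: str):
    edge, router = EdgeGateway(), LocalRouter()
    for name, cfg in entities.items():
        # 515/520/525: one uplink per external entity, with its identified services
        edge.uplinks[name] = Uplink(name, cfg["subnet"], cfg["gw_addr"], cfg["services"])
        # 530: destination-based route on the edge gateway
        edge.routes.append((cfg["remote_prefix"], name))
        # 540: source-based (policy) route on the local router toward that uplink
        router.policy_routes.append((cfg["remote_prefix"], workload_prefix, cfg["gw_addr"]))
    router.default_next_hop = underlay_next_hop
    return edge, router

edge, router = configure(
    {"on-prem":           {"subnet": "10.10.1.0/24", "gw_addr": "10.10.1.2",
                           "remote_prefix": "172.16.0.0/16", "services": ["firewall", "vpn"]},
     "provider-services": {"subnet": "10.10.2.0/24", "gw_addr": "10.10.2.2",
                           "remote_prefix": "100.64.0.0/16", "services": ["firewall"]},
     "internet":          {"subnet": "10.10.3.0/24", "gw_addr": "10.10.3.2",
                           "remote_prefix": "0.0.0.0/0", "services": ["firewall", "source-nat"]}},
    workload_prefix="192.168.10.0/24",
    underlay_next_hop="10.10.4.1")
```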
FIG. 6 conceptually illustrates an electronic system 600 with which some embodiments of the invention are implemented. The electronic system 600 may be a computer (e.g., a desktop computer, personal computer, tablet computer, server computer, mainframe, a blade computer, etc.), phone, PDA, or any other sort of electronic device. Such an electronic system includes various types of computer readable media and interfaces for various other types of computer readable media. Electronic system 600 includes a bus 605, processing unit(s) 610, a system memory 625, a read-only memory 630, a permanent storage device 635, input devices 640, and output devices 645.
The bus 605 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of the electronic system 600. For instance, the bus 605 communicatively connects the processing unit(s) 610 with the read-only memory 630, the system memory 625, and the permanent storage device 635.
From these various memory units, the processing unit(s) 610 retrieve instructions to execute and data to process in order to execute the processes of the invention. The processing unit(s) may be a single processor or a multi-core processor in different embodiments.
The read-only-memory (ROM) 630 stores static data and instructions that are needed by the processing unit(s) 610 and other modules of the electronic system. The permanent storage device 635, on the other hand, is a read-and-write memory device. This device is a non-volatile memory unit that stores instructions and data even when the electronic system 600 is off. Some embodiments of the invention use a mass-storage device (such as a magnetic or optical disk and its corresponding disk drive) as the permanent storage device 635.
Other embodiments use a removable storage device (such as a floppy disk, flash drive, etc.) as the permanent storage device. Like the permanent storage device 635, the system memory 625 is a read-and-write memory device. However, unlike storage device 635, the system memory is a volatile read-and-write memory, such as a random-access memory. The system memory stores some of the instructions and data that the processor needs at runtime. In some embodiments, the invention's processes are stored in the system memory 625, the permanent storage device 635, and/or the read-only memory 630. From these various memory units, the processing unit(s) 610 retrieve instructions to execute and data to process in order to execute the processes of some embodiments.
The bus 605 also connects to the input and output devices 640 and 645. The input devices enable the user to communicate information and select commands to the electronic system. The input devices 640 include alphanumeric keyboards and pointing devices (also called “cursor control devices”). The output devices 645 display images generated by the electronic system. The output devices include printers and display devices, such as cathode ray tubes (CRT) or liquid crystal displays (LCD). Some embodiments include devices such as a touchscreen that function as both input and output devices.
Finally, as shown in FIG. 6 , bus 605 also couples electronic system 600 to a network 665 through a network adapter (not shown). In this manner, the computer can be a part of a network of computers (such as a local area network (“LAN”), a wide area network (“WAN”), or an Intranet), or a network of networks, such as the Internet. Any or all components of electronic system 600 may be used in conjunction with the invention.
Some embodiments include electronic components, such as microprocessors, storage and memory that store computer program instructions in a machine-readable or computer-readable medium (alternatively referred to as computer-readable storage media, machine-readable media, or machine-readable storage media). Some examples of such computer-readable media include RAM, ROM, read-only compact discs (CD-ROM), recordable compact discs (CD-R), rewritable compact discs (CD-RW), read-only digital versatile discs (e.g., DVD-ROM, dual-layer DVD-ROM), a variety of recordable/rewritable DVDs (e.g., DVD-RAM, DVD-RW, DVD+RW, etc.), flash memory (e.g., SD cards, mini-SD cards, micro-SD cards, etc.), magnetic and/or solid state hard drives, read-only and recordable Blu-Ray® discs, ultra-density optical discs, any other optical or magnetic media, and floppy disks. The computer-readable media may store a computer program that is executable by at least one processing unit and includes sets of instructions for performing various operations. Examples of computer programs or computer code include machine code, such as is produced by a compiler, and files including higher-level code that are executed by a computer, an electronic component, or a microprocessor using an interpreter.
While the above discussion primarily refers to microprocessor or multi-core processors that execute software, some embodiments are performed by one or more integrated circuits, such as application specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs). In some embodiments, such integrated circuits execute instructions that are stored on the circuit itself.
As used in this specification, the terms “computer”, “server”, “processor”, and “memory” all refer to electronic or other technological devices. These terms exclude people or groups of people. For the purposes of the specification, the terms display or displaying means displaying on an electronic device. As used in this specification, the terms “computer readable medium,” “computer readable media,” and “machine readable medium” are entirely restricted to tangible, physical objects that store information in a form that is readable by a computer. These terms exclude any wireless signals, wired download signals, and any other ephemeral signals.
This specification refers throughout to computational and network environments that include virtual machines (VMs). However, virtual machines are merely one example of data compute nodes (DCNs) or data compute end nodes, also referred to as addressable nodes. DCNs may include non-virtualized physical hosts, virtual machines, containers that run on top of a host operating system without the need for a hypervisor or separate operating system, and hypervisor kernel network interface modules.
VMs, in some embodiments, operate with their own guest operating systems on a host using resources of the host virtualized by virtualization software (e.g., a hypervisor, virtual machine monitor, etc.). The tenant (i.e., the owner of the VM) can choose which applications to operate on top of the guest operating system. Some containers, on the other hand, are constructs that run on top of a host operating system without the need for a hypervisor or separate guest operating system. In some embodiments, the host operating system uses name spaces to isolate the containers from each other and therefore provides operating-system level segregation of the different groups of applications that operate within different containers. This segregation is akin to the VM segregation that is offered in hypervisor-virtualized environments that virtualize system hardware, and thus can be viewed as a form of virtualization that isolates different groups of applications that operate in different containers. Such containers are more lightweight than VMs.
A hypervisor kernel network interface module, in some embodiments, is a non-VM DCN that includes a network stack with a hypervisor kernel network interface and receive/transmit threads. One example of a hypervisor kernel network interface module is the vmknic module that is part of the ESXi™ hypervisor of VMware, Inc.
It should be understood that while the specification refers to VMs, the examples given could be any type of DCNs, including physical hosts, VMs, non-VM containers, and hypervisor kernel network interface modules. In fact, the example networks could include combinations of different types of DCNs in some embodiments.
While the invention has been described with reference to numerous specific details, one of ordinary skill in the art will recognize that the invention can be embodied in other specific forms without departing from the spirit of the invention. In addition, a number of the figures (including FIG. 5 ) conceptually illustrate processes. The specific operations of these processes may not be performed in the exact order shown and described. The specific operations may not be performed in one continuous series of operations, and different specific operations may be performed in different embodiments. Furthermore, the process could be implemented using several sub-processes, or as part of a larger macro process. Thus, one of ordinary skill in the art would understand that the invention is not to be limited by the foregoing illustrative details, but rather is to be defined by the appended claims.

Claims (21)

We claim:
1. A method comprising:
configuring a virtual datacenter on a set of host computers in a public cloud, the virtual datacenter comprising (i) a set of workloads executing on the host computers and (ii) an edge gateway executing on a particular host computer for handling data traffic between the workloads and at least two different external entities having different sets of network addresses, wherein the set of workloads uses a same set of network addresses to communicate with the at least two different external entities;
configuring a router to execute on the particular host computer to route data messages between the edge gateway and an underlay network of the public cloud, the router having at least two different respective interfaces for exchanging data messages with the edge gateway, wherein each respective router interface corresponds to a respective interface of the edge gateway, the respective edge gateway interfaces enabling the edge gateway to perform different respective sets of services on data messages between the workloads and the respective external entities; and
configuring the router to route data messages received from the external entities via the underlay network and addressed to the workloads based on source network addresses of the data messages.
2. The method of claim 1, wherein the virtual datacenter workloads comprise (i) a set of network management components and (ii) a set of network endpoints connected by a logical network that is managed by the network management components of the virtual datacenter.
3. The method of claim 1, wherein:
the set of workloads is a first set of workloads; and
the external entities comprise (i) a second set of workloads at an on-premises datacenter and (ii) a set of cloud provider services.
4. The method of claim 3, wherein the external entities further comprise a set of public internet endpoints.
5. The method of claim 1, wherein:
the network addresses of the set of workloads are not translated for communication with first and second external entities; and
the network addresses of the set of workloads are translated by the edge gateway for communication with a third external entity.
6. The method of claim 1, wherein (i) data messages received from a first external entity directed to a particular workload and (ii) data messages received from a second external entity directed to the particular workload have a same destination address.
7. The method of claim 1, wherein the router is configured to (i) route data messages sent from the first external entity to a first interface of the edge gateway based on said data messages having source addresses associated with the first external entity and (ii) route data messages sent from the second external entity to a second interface of the edge gateway based on said data messages having source addresses associated with the second external entity.
8. The method of claim 7, wherein the router is configured to route data messages sent from a third external entity to a third interface of the edge gateway using a default route when (i) source addresses of said data messages are not associated with either the first or second external entities and (ii) destination addresses of said data messages are associated with the set of workloads.
9. The method of claim 7, wherein the edge gateway performs a first set of services on the data messages routed to the first interface and a second set of services on the data messages routed to the second interface.
10. The method of claim 1, wherein a set of services associated with a first edge gateway interface comprises at least one of firewall services and virtual private network (VPN) services.
11. The method of claim 1, wherein:
configuring the virtual datacenter comprises directing a first network management application to configure the edge gateway; and
configuring the router comprises directing a second network management application to configure the router.
12. A method comprising:
configuring a virtual datacenter on a set of host computers in a public cloud, the virtual datacenter comprising (i) a set of workloads executing on the host computers, (ii) an active edge gateway executing on a first host computer for handling data traffic between the workloads and at least two different external entities having different sets of network addresses, and (iii) a standby edge gateway executing on a second host computer for handling data traffic between the workloads and the different external entities if the active edge gateway fails;
configuring a first router to execute on the first host computer to route data messages between the active edge gateway and an underlay network of the public cloud, the first router having at least two different respective interfaces for exchanging data messages with the active edge gateway, wherein each respective router interface corresponds to a respective interface of the active edge gateway, the respective active edge gateway interfaces enabling the active edge gateway to perform different respective sets of services on data messages between the workloads and the respective external entities;
configuring the first router to route data messages received from the external entities via the underlay network and addressed to the workloads based on source network addresses of the data messages;
configuring a second router to execute on the second host computer to route data messages between the standby edge gateway and the underlay network of the public cloud, the second router having at least two different respective interfaces for exchanging data messages with the standby edge gateway, wherein each respective interface of the second router corresponds to a respective interface of the standby edge gateway; and
configuring the second router to route data messages received from the external entities via the underlay network and addressed to the workloads based on source network addresses of the data messages.
13. The method of claim 12, wherein:
the underlay network is configured to route data messages addressed to the workloads to the first router executing on the first host computer; and
if the active edge gateway fails, the underlay network is subsequently configured to route data messages addressed to the workloads to the second router executing on the second host computer.
14. A non-transitory machine-readable medium storing a program for execution by at least one processing unit, the program comprising sets of instructions for:
configuring a virtual datacenter on a set of host computers in a public cloud, the virtual datacenter comprising (i) a set of workloads executing on the host computers and (ii) an edge gateway executing on a particular host computer for handling data traffic between the workloads and at least two different external entities having different sets of network addresses, wherein the set of workloads uses a same set of network addresses to communicate with the at least two different external entities;
configuring a router to execute on the particular host computer to route data messages between the edge gateway and an underlay network of the public cloud, the router having at least two different respective interfaces for exchanging data messages with the edge gateway, wherein each respective router interface corresponds to a respective interface of the edge gateway, the respective edge gateway interfaces enabling the edge gateway to perform different respective sets of services on data messages between the workloads and the respective external entities; and
configuring the router to route data messages received from the external entities via the underlay network and addressed to the workloads based on source network addresses of the data messages.
15. The non-transitory machine-readable medium of claim 14, wherein:
the set of workloads is a first set of workloads; and
the external entities comprise (i) a second set of workloads at an on-premises datacenter, (ii) a set of cloud provider services, and (iii) a set of public internet endpoints.
16. The non-transitory machine-readable medium of claim 14, wherein:
the network addresses of the set of workloads are not translated for communication with first and second external entities; and
the network addresses of the set of workloads are translated by the edge gateway for communication with a third external entity.
17. The non-transitory machine-readable medium of claim 16, wherein:
data messages received from the first external entity directed to a particular workload and data messages received from the second external entity directed to the particular workload have a same destination address; and
the router is configured to (i) route data messages sent from the first external entity to a first interface of the edge gateway based on said data messages having source addresses associated with the first external entity and (ii) route data messages sent from the second external entity to a second interface of the edge gateway based on said data messages having source addresses associated with the second external entity.
18. The non-transitory machine-readable medium of claim 17, wherein the router is configured to route data messages sent from a third external entity to a third interface of the edge gateway using a default route when (i) source addresses of said data messages are not associated with either the first or second external entities and (ii) destination addresses of said data messages are associated with the set of workloads.
19. The non-transitory machine-readable medium of claim 17, wherein the edge gateway performs a first set of services on the data messages routed to the first interface and a second set of services on the data messages routed to the second interface.
20. A non-transitory machine-readable medium storing a program for execution by at least one processing unit, the program comprising sets of instructions for:
configuring a virtual datacenter on a set of host computers in a public cloud, the virtual datacenter comprising (i) a set of workloads executing on the host computers, (ii) an active edge gateway executing on a first host computer for handling data traffic between the workloads and at least two different external entities having different sets of network addresses, and (iii) a standby edge gateway executing on a second host computer for handling data traffic between the workloads and the different external entities if the active edge gateway fails;
configuring a first router to execute on the first host computer to route data messages between the active edge gateway and an underlay network of the public cloud, the first router having at least two different respective interfaces for exchanging data messages with the active edge gateway, wherein each respective router interface corresponds to a respective interface of the active edge gateway, the respective active edge gateway interfaces enabling the active edge gateway to perform different respective sets of services on data messages between the workloads and the respective external entities;
configuring the first router to route data messages received from the external entities via the underlay network and addressed to the workloads based on source network addresses of the data messages;
configuring a second router to execute on the second host computer to route data messages between the standby edge gateway and the underlay network of the public cloud, the second router having at least two different respective interfaces for exchanging data messages with the standby edge gateway, wherein each respective interface of the second router corresponds to a respective interface of the standby edge gateway; and
configuring the second router to route data messages received from the external entities via the underlay network and addressed to the workloads based on source network addresses of the data messages.
21. The non-transitory machine-readable medium of claim 20, wherein:
the underlay network is configured to route data messages addressed to the workloads to the first router executing on the first host computer; and
if the active edge gateway fails, the underlay network is subsequently configured to route data messages addressed to the workloads to the second router executing on the second host computer.
US17/366,676 2021-07-02 2021-07-02 Source-based routing for virtual datacenters Active US11729094B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/366,676 US11729094B2 (en) 2021-07-02 2021-07-02 Source-based routing for virtual datacenters

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US17/366,676 US11729094B2 (en) 2021-07-02 2021-07-02 Source-based routing for virtual datacenters

Publications (2)

Publication Number Publication Date
US20230006920A1 US20230006920A1 (en) 2023-01-05
US11729094B2 true US11729094B2 (en) 2023-08-15

Family

ID=84785700

Family Applications (1)

Application Number Title Priority Date Filing Date
US17/366,676 Active US11729094B2 (en) 2021-07-02 2021-07-02 Source-based routing for virtual datacenters

Country Status (1)

Country Link
US (1) US11729094B2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11962493B2 (en) 2022-06-21 2024-04-16 VMware LLC Network address translation in active-active edge cluster

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11171878B1 (en) 2020-09-21 2021-11-09 Vmware, Inc. Allocating additional bandwidth to resources in a datacenter through deployment of dedicated gateways
US11805051B2 (en) 2021-05-24 2023-10-31 Vmware, Inc. Allocating additional bandwidth to resources in a datacenter through deployment of dedicated gateways

Citations (49)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110126197A1 (en) * 2009-11-25 2011-05-26 Novell, Inc. System and method for controlling cloud and virtualized data centers in an intelligent workload management system
US20110131338A1 (en) 2009-11-30 2011-06-02 At&T Mobility Ii Llc Service-based routing for mobile core network
US20120054624A1 (en) * 2010-08-27 2012-03-01 Owens Jr Kenneth Robert Systems and methods for a multi-tenant system providing virtual data centers in a cloud configuration

Patent Citations (76)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160170809A1 (en) 2006-04-17 2016-06-16 Vmware, Inc. Executing a multicomponent software application on a virtualized computer platform
US9876672B2 (en) 2007-09-26 2018-01-23 Nicira, Inc. Network operating system for managing and securing networks
US20110126197A1 (en) * 2009-11-25 2011-05-26 Novell, Inc. System and method for controlling cloud and virtualized data centers in an intelligent workload management system
US20110131338A1 (en) 2009-11-30 2011-06-02 At&T Mobility Ii Llc Service-based routing for mobile core network
US20120110651A1 (en) * 2010-06-15 2012-05-03 Van Biljon Willem Robert Granting Access to a Cloud Computing Environment Using Names in a Virtual Computing Infrastructure
US20120054624A1 (en) * 2010-08-27 2012-03-01 Owens Jr Kenneth Robert Systems and methods for a multi-tenant system providing virtual data centers in a cloud configuration
US10193708B2 (en) 2011-08-17 2019-01-29 Nicira, Inc. Multi-domain interconnect
US10931481B2 (en) 2011-08-17 2021-02-23 Nicira, Inc. Multi-domain interconnect
WO2013026050A1 (en) 2011-08-17 2013-02-21 Nicira, Inc. Hierarchical controller clusters for interconnecting different logical domains
US20130044751A1 (en) 2011-08-17 2013-02-21 Martin Casado Packet processing in managed interconnection switching elements
US20130044764A1 (en) 2011-08-17 2013-02-21 Martin Casado Generating flows for managed interconnection switches
US20130044761A1 (en) 2011-08-17 2013-02-21 Teemu Koponen Hierarchical controller clusters for interconnecting two or more logical datapath sets
US20130142203A1 (en) 2011-08-17 2013-06-06 Nicira, Inc. Multi-domain interconnect
US20130044762A1 (en) 2011-08-17 2013-02-21 Martin Casado Packet processing in managed interconnection switching elements
US20130044763A1 (en) 2011-08-17 2013-02-21 Teemu Koponen Packet processing in federated network
US8830835B2 (en) 2011-08-17 2014-09-09 Nicira, Inc. Generating flows for managed interconnection switches
US20210184898A1 (en) 2011-08-17 2021-06-17 Nicira, Inc. Flow generation from second level controller to first level controller to managed switching element
US20130044641A1 (en) 2011-08-17 2013-02-21 Teemu Koponen Federating interconnection switching element network to two or more levels
US20190149360A1 (en) 2011-08-17 2019-05-16 Nicira, Inc. Multi-domain interconnect
US8964767B2 (en) 2011-08-17 2015-02-24 Nicira, Inc. Packet processing in federated network
US20130044752A1 (en) 2011-08-17 2013-02-21 Teemu Koponen Flow generation from second level controller to first level controller to managed switching element
US9444651B2 (en) 2011-08-17 2016-09-13 Nicira, Inc. Flow generation from second level controller to first level controller to managed switching element
US9137052B2 (en) 2011-08-17 2015-09-15 Nicira, Inc. Federating interconnection switching element network to two or more levels
US9209998B2 (en) 2011-08-17 2015-12-08 Nicira, Inc. Packet processing in managed interconnection switching elements
US9288081B2 (en) 2011-08-17 2016-03-15 Nicira, Inc. Connecting unmanaged segmented networks by managing interconnection switching elements
US10091028B2 (en) 2011-08-17 2018-10-02 Nicira, Inc. Hierarchical controller clusters for interconnecting two or more logical datapath sets
US9935880B2 (en) 2012-01-12 2018-04-03 Telefonaktiebolaget Lm Ericsson (Publ) Systems and methods for scalable and resilient load balancing
US20130185413A1 (en) 2012-01-14 2013-07-18 International Business Machines Corporation Integrated Metering of Service Usage for Hybrid Clouds
US20130283364A1 (en) * 2012-04-24 2013-10-24 Cisco Technology, Inc. Distributed virtual switch architecture for a hybrid cloud
US20150113146A1 (en) 2012-08-17 2015-04-23 Hangzhou H3C Technologies Co., Ltd. Network Management with Network Virtualization based on Modular Quality of Service Control (MQC)
US20140282525A1 (en) 2013-03-15 2014-09-18 Gravitant, Inc. Creating, provisioning and managing virtual data centers
US20140334495A1 (en) 2013-05-07 2014-11-13 Equinix, Inc. Direct Connect Virtual Private Interface for a One to Many Connection with Multiple Virtual Private Clouds
US20140376367A1 (en) 2013-06-24 2014-12-25 Vmware, Inc. System and method for distribution of policy enforcement point
US20160127202A1 (en) 2013-09-06 2016-05-05 Vmware, Inc. Placing a virtual edge gateway appliance on a host computing system
US9755960B2 (en) 2013-09-30 2017-09-05 Juniper Networks, Inc. Session-aware service chaining within computer networks
US20150193246A1 (en) * 2014-01-06 2015-07-09 Siegfried Luft Apparatus and method for data center virtualization
US20160105392A1 (en) 2014-10-13 2016-04-14 Vmware, Inc. Central namespace controller for multi-tenant cloud environments
US20160182336A1 (en) 2014-12-22 2016-06-23 Vmware, Inc. Hybrid cloud network monitoring system for tenant use
US20160234161A1 (en) 2015-02-07 2016-08-11 Vmware, Inc. Multi-subnet participation for network gateway in a cloud environment
US20170033924A1 (en) 2015-07-31 2017-02-02 Nicira, Inc. Distributed VPN Service
US11005710B2 (en) 2015-08-18 2021-05-11 Microsoft Technology Licensing, Llc Data center resource tracking
US11005963B2 (en) 2015-08-28 2021-05-11 Vmware, Inc. Pre-fetch cache population for WAN optimization
US20170063673A1 (en) 2015-08-28 2017-03-02 Vmware, Inc. Data center wan aggregation to optimize hybrid cloud connectivity
US20170195517A1 (en) 2015-12-30 2017-07-06 Wipro Limited Methods and systems for increasing quality and reliability of fax communications
US20180332001A1 (en) 2016-02-08 2018-11-15 Miguel Redondo Ferrero Federated virtual datacenter apparatus
US20170353351A1 (en) 2016-06-02 2017-12-07 Alibaba Group Holding Limited Method and network infrastructure for a direct public traffic connection within a datacenter
US20180287902A1 (en) 2017-03-29 2018-10-04 Juniper Networks, Inc. Multi-cluster dashboard for distributed virtualization infrastructure element monitoring and policy control
US20180295036A1 (en) 2017-04-07 2018-10-11 Nicira, Inc. Application/context-based management of virtual networks using customizable workflows
US10754696B1 (en) 2017-07-20 2020-08-25 EMC IP Holding Company LLC Scale out capacity load-balancing for backup appliances
US20190104051A1 (en) 2017-10-02 2019-04-04 Nicira, Inc. Measurement based routing through multiple public clouds
US10735263B1 (en) 2017-10-19 2020-08-04 Atlassian Pty Ltd Systems and methods for automatically configuring virtual networks
US20190149463A1 (en) 2017-11-14 2019-05-16 Versa Networks, Inc. Method and system for providing automatic router assignment in a virtual environment
US20190327112A1 (en) 2018-04-24 2019-10-24 Microsoft Technology Licensing, Llc Dynamic scaling of virtual private network connections
US20190342179A1 (en) * 2018-05-07 2019-11-07 Servicenow, Inc. Discovery and Management of Devices
US20210075727A1 (en) 2018-09-26 2021-03-11 Amazon Technologies, Inc. Multi-account gateway
US11240203B1 (en) 2018-12-07 2022-02-01 Amazon Technologies, Inc. Network segmentation by automatically generated security groups
US20210067439A1 (en) 2019-08-26 2021-03-04 Vmware, Inc. Forwarding element with physical and virtual data planes
US11212238B2 (en) 2019-08-27 2021-12-28 Vmware, Inc. Providing recommendations for implementing virtual networks
US20210067375A1 (en) 2019-08-27 2021-03-04 Vmware, Inc. Having a remote device use a shared virtual network to access a dedicated virtual network defined over public clouds
US20210067468A1 (en) 2019-08-27 2021-03-04 Vmware, Inc. Alleviating congestion in a virtual network deployed over public clouds for an entity
US20210112034A1 (en) 2019-10-15 2021-04-15 Cisco Technology, Inc. Dynamic discovery of peer network devices across a wide area network
US20210126860A1 (en) 2019-10-28 2021-04-29 Vmware, Inc. Managing forwarding elements at edge nodes connected to a virtual network
US20210136140A1 (en) 2019-10-30 2021-05-06 Vmware, Inc. Using service containers to implement service chains
CN111478850A (en) 2020-02-29 2020-07-31 新华三大数据技术有限公司 Gateway adjusting method and device
US20210314388A1 (en) 2020-04-01 2021-10-07 Vmware, Inc. Virtual load-balanced service object
US20210359948A1 (en) 2020-05-15 2021-11-18 Equinix, Inc. Virtual gateways in a cloud exchange
US20220311714A1 (en) 2020-09-21 2022-09-29 Vmware, Inc. Allocating additional bandwidth to resources in a datacenter through deployment of dedicated gateways
US20220094666A1 (en) 2020-09-21 2022-03-24 Vmware, Inc. Allocating additional bandwidth to resources in a datacenter through deployment of dedicated gateways
WO2022060464A1 (en) 2020-09-21 2022-03-24 Vmware, Inc. Allocating additional bandwidth to resources in a datacenter through deployment of dedicated gateways
US11362992B2 (en) 2020-09-21 2022-06-14 Vmware, Inc. Allocating additional bandwidth to resources in a datacenter through deployment of dedicated gateways
US11171878B1 (en) 2020-09-21 2021-11-09 Vmware, Inc. Allocating additional bandwidth to resources in a datacenter through deployment of dedicated gateways
US20220311707A1 (en) 2021-03-25 2022-09-29 Vmware, Inc. Connectivity between virtual datacenters
WO2022250735A1 (en) 2021-05-24 2022-12-01 Vmware, Inc. Allocating additional bandwidth to resources in a datacenter through deployment of dedicated gateways
US20220377020A1 (en) 2021-05-24 2022-11-24 Vmware, Inc. Allocating additional bandwidth to resources in a datacenter through deployment of dedicated gateways
US20220377009A1 (en) 2021-05-24 2022-11-24 Vmware, Inc. Allocating additional bandwidth to resources in a datacenter through deployment of dedicated gateways
US20220377021A1 (en) 2021-05-24 2022-11-24 Vmware, Inc. Allocating additional bandwidth to resources in a datacenter through deployment of dedicated gateways

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Non-Published Commonly Owned U.S. Appl. No. 17/091,734 (H053.02), filed Nov. 6, 2020, 42 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 17/212,662 (H063), filed Mar. 25, 2021, 37 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 17/344,956 (H061.01), filed Jun. 11, 2021, 61 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 17/344,958 (H061.02), filed Jun. 11, 2021, 62 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 17/344,959 (H061.03), filed Jun. 11, 2021, 61 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 17/365,960, filed Jul. 1, 2021, 32 pages, VMware, Inc.

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11962493B2 (en) 2022-06-21 2024-04-16 VMware LLC Network address translation in active-active edge cluster

Also Published As

Publication number Publication date
US20230006920A1 (en) 2023-01-05

Similar Documents

Publication Title
US10862753B2 (en) High availability for stateful services in public cloud logical networks
US10601705B2 (en) Failover of centralized routers in public cloud logical networks
US10911397B2 (en) Agent for implementing layer 2 communication on layer 3 underlay network
US11347537B2 (en) Logical processing for containers
US10868760B2 (en) System and method for managing public IP addresses for virtual data centers
US20240031307A1 (en) Provisioning network services in a software defined data center
US10567482B2 (en) Accessing endpoints in logical networks and public cloud service providers native networks using a single network interface and a single routing table
US11736391B2 (en) Defining routing domain for distributed packet processing
US11695591B2 (en) In-band management interface with user space datapath
EP3669504B1 (en) High availability for stateful services in public cloud logical networks
US11729094B2 (en) Source-based routing for virtual datacenters
US20200084179A1 (en) Method of translating a logical switch into a set of network addresses
US11606290B2 (en) Connectivity between virtual datacenters
US9832112B2 (en) Using different TCP/IP stacks for different hypervisor services
US9729679B2 (en) Using different TCP/IP stacks for different tenants on a multi-tenant host
US11736436B2 (en) Identifying routes with indirect addressing in a datacenter
US10091125B2 (en) Using different TCP/IP stacks with separately allocated resources
US9940180B2 (en) Using loopback interfaces of multiple TCP/IP stacks for communication between processes
US11496437B2 (en) Selective ARP proxy
US11962564B2 (en) Anycast address for network address translation at edge
US20240007386A1 (en) Route aggregation for virtual datacenter gateway
US11962493B2 (en) Network address translation in active-active edge cluster

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: VMWARE, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ARUMUGAM, GANES KUMAR;NATARAJAN, VIJAI COIMBATORE;KANAKARAJU, HARISH;SIGNING DATES FROM 20210825 TO 20210920;REEL/FRAME:058199/0870

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: VMWARE LLC, CALIFORNIA

Free format text: CHANGE OF NAME;ASSIGNOR:VMWARE, INC.;REEL/FRAME:067102/0395

Effective date: 20231121