CN116648691A - Layer 2 network using access control lists in virtualized cloud environments
- Publication number: CN116648691A (application CN202180088348.9A)
- Authority: CN (China)
- Prior art keywords: layer, vcn, virtual, virtual network, nvd
- Legal status: Pending
Abstract
Techniques for communication in an L2 virtual network are described. In an example, an L2 virtual network includes a plurality of L2 compute instances hosted on a set of host machines and a plurality of L2 virtual network interfaces and L2 virtual switches hosted on a set of network virtualization devices. The L2 virtual network interface emulates an L2 port of an L2 virtual network. Access Control List (ACL) information applicable to the L2 port is sent to the network virtualization device hosting the L2 virtual network interface.
Description
Cross Reference to Related Applications
This international patent application claims priority to U.S. patent application Ser. No. 17/494,720, entitled "LAYER-2 NETWORKING USING ACCESS CONTROL LISTS IN A VIRTUALIZED CLOUD ENVIRONMENT," filed on October 5, 2021, which claims the benefit of U.S. provisional patent application Ser. No. 63/132,377, entitled "LAYER-2 NETWORKING IN A VIRTUALIZED CLOUD ENVIRONMENT," filed in December 2020, the contents of which are incorporated herein by reference in their entirety for all purposes.
Background
Cloud computing provides on-demand availability of computing resources. Cloud computing may be based on a data center accessible to users via the internet. Cloud computing may provide infrastructure as a service (IaaS). A virtual network may be created for use by a user. However, these virtual networks have limitations that restrict their functionality and value. Thus, further improvements are desired.
Disclosure of Invention
The present disclosure relates to virtualized cloud environments. Techniques to provide layer 2 networking functionality in virtualized cloud environments are described. Layer 2 functionality is provided in addition to and along with layer 3 networking functionality provided by the virtualized cloud environment.
Some embodiments of the present disclosure relate to providing layer 2 Virtual Local Area Networks (VLANs) to customers in a private network, such as a customer's Virtual Cloud Network (VCN). Different compute instances are connected in a layer 2 VLAN. From the customer's perspective, a single switch connects the computing instances. In fact, this emulated switch is implemented as an infinitely scalable distributed switch that includes a collection of local switches. More specifically, each computing instance executes on a host machine connected to a Network Virtualization Device (NVD). For each computing instance on a host connected to the NVD, the NVD hosts a layer 2 Virtual Network Interface Card (VNIC) and a local switch associated with the computing instance. The layer 2 VNIC represents a port of the computing instance on the layer 2 VLAN. The local switch connects the VNIC to other VNICs (e.g., other ports) associated with other compute instances of the layer 2 VLAN. Various layer 2 network services are supported, including, for example, the use of layer 2 Access Control Lists (ACLs).
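By way of illustration only (this sketch is not part of the disclosed embodiments, and the class names, MAC addresses, and NVD identifiers are hypothetical), the following Python snippet shows one way per-port ACL rules could be modeled and collected for distribution to the NVD that hosts each L2 virtual network interface:

    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class AclRule:
        # Hypothetical layer 2 ACL rule: match on source/destination MAC plus an action.
        src_mac: str          # "*" matches any source MAC
        dst_mac: str          # "*" matches any destination MAC
        action: str           # "allow" or "deny"

    @dataclass
    class L2Port:
        # Emulated L2 port of the distributed switch, backed by an L2 VNIC hosted on an NVD.
        mac: str
        nvd_id: str
        acl: list = field(default_factory=list)

    def rules_for_nvd(ports, nvd_id):
        """Collect the ACL rules that would be pushed to a given NVD, i.e., the
        rules attached to the L2 ports whose VNICs that NVD hosts."""
        return {p.mac: p.acl for p in ports if p.nvd_id == nvd_id}

    ports = [
        L2Port("02:00:00:00:00:01", "nvd-1", [AclRule("*", "02:00:00:00:00:02", "deny")]),
        L2Port("02:00:00:00:00:02", "nvd-2", [AclRule("*", "*", "allow")]),
    ]
    print(rules_for_nvd(ports, "nvd-1"))

In this sketch, only the rules attached to ports hosted on a given NVD are returned, mirroring the idea of sending the ACL information applicable to an L2 port to the NVD hosting the corresponding L2 virtual network interface.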
Various embodiments are described herein, including methods, systems, non-transitory computer-readable storage media storing programs, code, or instructions executable by one or more processors, and the like.
Drawings
FIG. 1 is a high-level diagram of a distributed environment, illustrating a virtual or overlay cloud network hosted by a cloud service provider infrastructure, according to some embodiments.
Fig. 2 depicts a simplified architectural diagram of physical components in a physical network within a CSPI, in accordance with some embodiments.
FIG. 3 illustrates an example arrangement within a CSPI in which a host machine is connected to multiple Network Virtualization Devices (NVDs), in accordance with certain embodiments.
FIG. 4 depicts connectivity between a host machine and an NVD for providing I/O virtualization to support multi-tenancy, in accordance with certain embodiments.
Fig. 5 depicts a simplified block diagram of a physical network provided by a CSPI, in accordance with some embodiments.
FIG. 6 is a schematic diagram of a computing network, according to some embodiments.
Fig. 7 is a logical and hardware schematic of a VLAN according to some embodiments.
Fig. 8 is a logical schematic of a plurality of connected L2 VLANs, in accordance with some embodiments.
Fig. 9 is a logical schematic of a plurality of connected L2 VLANs and subnets 900 according to some embodiments.
Fig. 10 is a schematic diagram of intra-VLAN communication and learning within a VLAN, in accordance with some embodiments.
Fig. 11 is a schematic diagram of a VLAN according to some embodiments.
Fig. 12 is a flow diagram illustrating a process 1200 for intra-VLAN communication in accordance with some embodiments.
FIG. 13 illustrates an example environment suitable for defining ACLs for L2 virtual networks, in accordance with certain embodiments.
Fig. 14 illustrates an example ACL technique in a VLAN according to some embodiments.
Fig. 15 is a flow diagram illustrating a process for distributing ACL information in a layer 2 virtual network, in accordance with some embodiments.
Fig. 16 is a flow diagram illustrating a process for determining the applicability of ACL information to an L2 VNIC, in accordance with some embodiments.
FIG. 17 is a flow chart illustrating a process for enforcing an ACL, in accordance with some embodiments.
FIG. 18 is a flow chart illustrating a process for enforcing an ACL, in accordance with some embodiments.
Fig. 19 is a block diagram illustrating one mode for implementing cloud infrastructure as a service system in accordance with at least one embodiment.
Fig. 20 is a block diagram illustrating another mode for implementing cloud infrastructure as a service system in accordance with at least one embodiment.
Fig. 21 is a block diagram illustrating another mode for implementing a cloud infrastructure as a service system in accordance with at least one embodiment.
Fig. 22 is a block diagram illustrating another mode for implementing cloud infrastructure as a service system in accordance with at least one embodiment.
FIG. 23 is a block diagram illustrating an example computer system in accordance with at least one embodiment.
Detailed Description
In the following description, for purposes of explanation, specific details are set forth in order to provide a thorough understanding of certain embodiments. It may be evident, however, that the various embodiments may be practiced without these specific details. The drawings and description are not intended to be limiting. The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment or design described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments or designs.
A. Example Virtual Networking Architecture
The term cloud service is generally used to refer to services provided by a Cloud Service Provider (CSP) to users or customers on demand (e.g., via a subscription model) using systems and infrastructure (cloud infrastructure) provided by the CSP. Typically, the servers and systems that make up the CSP's infrastructure are separate from the customer's own on-premise servers and systems. Thus, customers can utilize cloud services provided by the CSP without purchasing separate hardware and software resources for the services. Cloud services are designed to provide subscribing customers with simple, scalable access to applications and computing resources without requiring the customers to invest in the infrastructure used to provide the services.
There are several cloud service providers that offer various types of cloud services. There are various different types or models of cloud services, including software as a service (SaaS), platform as a service (PaaS), infrastructure as a service (IaaS), and the like.
A customer may subscribe to one or more cloud services provided by the CSP. The customer may be any entity, such as an individual, an organization, a business, and the like. When a customer subscribes to or registers for a service provided by the CSP, a tenancy or account is created for the customer. The customer can then, via this account, access the subscribed cloud resources associated with the account.
As described above, infrastructure as a service (IaaS) is a specific type of cloud computing service. In the IaaS model, CSPs provide infrastructure (referred to as cloud service provider infrastructure or CSPI) that can be used by customers to build their own customizable networks and deploy customer resources. Thus, the customer's resources and network are hosted in a distributed environment by the CSP's provided infrastructure. This is in contrast to traditional computing, where the customer's resources and network are hosted by the customer's provided infrastructure.
The CSPI may include high-performance computing resources, including various host machines, memory resources, and network resources, that form a physical network, also referred to as a substrate network or an underlay network. Resources in the CSPI may be spread across one or more data centers, which may be geographically spread across one or more geographic regions. Virtualization software may be executed by these physical resources to provide a virtualized distributed environment. The virtualization creates an overlay network (also referred to as a software-based network, a software-defined network, or a virtual network) on top of the physical network. The CSPI physical network provides the underlying foundation for creating one or more overlay or virtual networks over the physical network. The virtual or overlay networks may include one or more Virtual Cloud Networks (VCNs). Virtual networks are implemented using software virtualization techniques (e.g., a hypervisor, functions performed by a Network Virtualization Device (NVD) (e.g., a smartNIC), a top-of-rack (TOR) switch, a smart TOR that implements one or more functions performed by the NVD, and other mechanisms) to create a layer of network abstraction that can run over the physical network. Virtual networks may take many forms, including peer-to-peer networks, IP networks, and the like. A virtual network is typically either a layer 3 IP network or a layer 2 VLAN. This method of virtual or overlay networking is often referred to as virtual or overlay layer 3 networking. Examples of protocols developed for virtual networks include IP-in-IP (or Generic Routing Encapsulation (GRE)), Virtual Extensible LAN (VXLAN - IETF RFC 7348), Virtual Private Networks (VPNs) (e.g., MPLS layer 3 virtual private networks (RFC 4364)), VMware's NSX, GENEVE, and the like.
For IaaS, the infrastructure provided by the CSP (CSPI) may be configured to provide virtualized computing resources over a public network (e.g., the internet). In the IaaS model, a cloud computing service provider may host the infrastructure components (e.g., servers, storage devices, network nodes (e.g., hardware), deployment software, platform virtualization (e.g., a hypervisor layer), etc.). In some cases, the IaaS provider may also offer various services to accompany those infrastructure components (e.g., billing, monitoring, logging, security, load balancing, clustering, etc.). Thus, as these services may be policy driven, IaaS users may be able to implement policies to drive load balancing to maintain application availability and performance. CSPI provides a collection of infrastructure and complementary cloud services that enable customers to build and run a wide range of applications and services in a highly available hosted distributed environment. CSPI provides high-performance computing resources and capabilities as well as storage capacity in flexible virtual networks that are securely accessible from a variety of networked locations, such as from a customer's on-premise network. When a customer subscribes to or registers for an IaaS service provided by the CSP, the tenancy created for that customer is a secure and isolated partition within the CSPI in which the customer can create, organize, and manage their cloud resources.
Customers may build their own virtual networks using the computing, memory, and networking resources provided by the CSPI. One or more customer resources or workloads, such as computing instances, may be deployed on these virtual networks. For example, a customer may use resources provided by the CSPI to build one or more customizable and private virtual networks, referred to as Virtual Cloud Networks (VCNs). A customer may deploy one or more customer resources, such as computing instances, on a customer VCN. The computing instances may take the form of virtual machines, bare metal instances, and the like. Thus, the CSPI provides a collection of infrastructure and complementary cloud services that enable customers to build and run a wide range of applications and services in a highly available virtual hosted environment. Customers do not manage or control the underlying physical resources provided by the CSPI, but may control the operating systems, storage, and deployed applications, and may have limited control over selected networking components (e.g., firewalls).
The CSP may provide a console that enables customers and network administrators to use CSPI resources to configure, access, and manage resources deployed in the cloud. In some embodiments, the console provides a web-based user interface that may be used to access and manage the CSPI. In some implementations, the console is a web-based application provided by the CSP.
The CSPI may support single-tenancy or multi-tenancy architectures. In a single-tenancy architecture, a software component (e.g., an application, a database) or a hardware component (e.g., a host machine or a server) serves a single customer or tenant. In a multi-tenancy architecture, software or hardware components serve multiple customers or tenants. Thus, in a multi-tenancy architecture, CSPI resources are shared among multiple customers or tenants. In the multi-tenancy case, precautions are taken and safeguards are implemented in the CSPI to ensure that each tenant's data is isolated and remains invisible to other tenants.
In a physical network, a network endpoint (endpoint) refers to a computing device or system that connects to and communicates back and forth with the physical network to which it is connected. Network endpoints in a physical network may be connected to a Local Area Network (LAN), a Wide Area Network (WAN), or another type of physical network. Examples of traditional endpoints in a physical network include modems, hubs, bridges, switches, routers and other network devices, physical computers (or host machines), and the like. Each physical device in the physical network has a fixed network address that can be used to communicate with the device. This fixed network address may be a layer 2 address (e.g., a MAC address), a fixed layer 3 address (e.g., an IP address), and the like. In a virtualized environment or virtual network, endpoints may include various virtual endpoints, such as virtual machines hosted by components of the physical network (e.g., by physical host machines). These endpoints in the virtual network are addressed by overlay addresses, such as overlay layer 2 addresses (e.g., overlay MAC addresses) and overlay layer 3 addresses (e.g., overlay IP addresses). Network overlays enable flexibility by allowing a network administrator to move the overlay addresses associated with network endpoints using software management (e.g., via software implementing a control plane for the virtual network). Thus, unlike a physical network, in a virtual network an overlay address (e.g., an overlay IP address) may be moved from one endpoint to another endpoint using network management software. Because the virtual network is built on top of the physical network, communication between components in the virtual network involves both the virtual network and the underlying physical network. To facilitate such communications, components of the CSPI are configured to learn and store mappings that map overlay addresses in the virtual network to actual physical addresses in the substrate network, and vice versa. These mappings are then used to facilitate communications. Customer traffic is encapsulated to facilitate routing in the virtual network.
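As a purely illustrative sketch (not part of the disclosed embodiments; the table contents, addresses, and function name are hypothetical), the following Python snippet shows the kind of overlay-to-substrate mapping lookup and encapsulation step described above:

    # Hypothetical overlay-to-substrate mapping table, as a control plane might
    # distribute it to network virtualization devices (NVDs).
    OVERLAY_TO_SUBSTRATE = {
        "10.0.0.2": "192.168.10.5",   # overlay IP -> physical IP of the hosting NVD/host
        "10.0.0.3": "192.168.10.7",
    }

    def encapsulate(overlay_packet: dict) -> dict:
        """Wrap a customer (overlay) packet in an outer header addressed to the
        substrate IP that currently hosts the overlay destination."""
        substrate_ip = OVERLAY_TO_SUBSTRATE[overlay_packet["dst_overlay_ip"]]
        return {
            "outer_dst_ip": substrate_ip,   # routed by the physical (substrate) network
            "payload": overlay_packet,      # original overlay packet carried as payload
        }

    print(encapsulate({"src_overlay_ip": "10.0.0.3", "dst_overlay_ip": "10.0.0.2"}))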
Thus, a physical address (e.g., a physical IP address) is associated with a component in the physical network, and an overlay address (e.g., an overlay IP address) is associated with an entity in the virtual network. Both the physical IP address and the overlay IP address are types of real IP addresses. These are separate from the virtual IP addresses, which map to multiple real IP addresses. The virtual IP address provides a one-to-many mapping between the virtual IP address and a plurality of real IP addresses.
The cloud infrastructure or CSPI is physically hosted in one or more data centers in one or more regions of the world. The CSPI may include components in a physical or substrate network and virtualized components (e.g., virtual networks, computing instances, virtual machines, etc.) located in a virtual network built upon the physical network components. In certain embodiments, the CSPI is organized and hosted in realms, regions, and availability domains. A region is typically a localized geographic area containing one or more data centers. Regions are generally independent of each other and can be far apart, e.g., across countries or even continents. For example, a first region may be in Australia, another in Japan, another in India, etc. The CSPI resources are divided between regions such that each region has its own independent subset of CSPI resources. Each region may provide a set of core infrastructure services and resources, such as compute resources (e.g., bare metal servers, virtual machines, containers and related infrastructure, etc.); storage resources (e.g., block volume storage, file storage, object storage, archive storage); networking resources (e.g., Virtual Cloud Networks (VCNs), load balancing resources, connections to on-premise networks); database resources; edge networking resources (e.g., DNS); and access management and monitoring resources, among others. Each region typically has multiple paths connecting it to other regions in the realm.
In general, an application is deployed in the region where it is most frequently used (i.e., on the infrastructure associated with that region), because using nearby resources is faster than using distant resources. Applications may also be deployed in different regions for various reasons, such as redundancy to mitigate the risk of region-wide events (such as large weather systems or earthquakes), to meet different requirements of legal jurisdictions, tax domains, and other business or social criteria, and so forth.
Data centers within a region may be further organized and subdivided into Availability Domains (ADs). The availability domain may correspond to one or more data centers located within the region. A region may be comprised of one or more availability domains. In such a distributed environment, the CSPI resources are either region-specific, such as a Virtual Cloud Network (VCN), or availability domain-specific, such as computing instances.
ADs within a region are isolated from each other, are fault tolerant, and are configured such that they are highly unlikely to fail simultaneously. This is achieved by the ADs not sharing critical infrastructure resources (such as networking, physical cables, cable paths, cable entry points, etc.), so that a failure at one AD within a region is less likely to affect the availability of other ADs within the same region. ADs within the same region may be connected to each other through low-latency, high-bandwidth networks, which makes it possible to provide high-availability connectivity to other networks (e.g., the internet, customers' on-premise networks, etc.) and to build replication systems in multiple ADs to achieve high availability and disaster recovery. Cloud services use multiple ADs to ensure high availability and to protect against resource failure. As the infrastructure provided by the IaaS provider grows, more regions and ADs and additional capacity can be added. Traffic between availability domains is typically encrypted.
In some embodiments, regions are grouped into realms. A realm is a logical collection of regions. Realms are isolated from each other and do not share any data. Regions in the same realm may communicate with each other, but regions in different realms cannot. The customer's tenancy or account with the CSP exists in a single realm and may be spread across one or more regions belonging to that realm. Typically, when a customer subscribes to an IaaS service, a tenancy or account is created for the customer in a region designated by the customer within the realm (referred to as the "home" region). The customer may extend the customer's tenancy to one or more other regions within the realm. The customer cannot access regions that are not in the realm of the customer's tenancy.
The IaaS provider may provide multiple realms, each realm catering to a particular set of customers or users. For example, a commercial realm may be provided for commercial customers. As another example, a realm may be provided for a particular country, for customers within that country. As yet another example, a government realm may be provided for governments, and the like. For example, a government realm may cater to a particular government and may have a higher level of security than a commercial realm. For example, Oracle Cloud Infrastructure (OCI) currently provides a realm for commercial regions and two realms (e.g., FedRAMP-authorized and IL5-authorized) for government cloud regions.
In some embodiments, an AD may be subdivided into one or more fault domains. A fault domain is a grouping of infrastructure resources within an AD to provide anti-affinity. Fault domains allow the distribution of computing instances such that they are not located on the same physical hardware within a single AD. This is called anti-affinity. A fault domain refers to a collection of hardware components (computers, switches, etc.) that share a single point of failure. The compute pool is logically divided into fault domains. Thus, a hardware failure or compute hardware maintenance event affecting one fault domain does not affect instances in other fault domains. The number of fault domains for each AD may vary depending on the embodiment. For example, in some embodiments, each AD contains three fault domains. A fault domain acts as a logical data center within the AD.
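For illustration only (the fault domain names, instance names, and placement policy below are hypothetical and not part of the disclosure), the following Python sketch spreads instances across fault domains in a round-robin fashion to approximate the anti-affinity behavior described above:

    from itertools import cycle

    # Hypothetical fault domains within one availability domain.
    FAULT_DOMAINS = ["FD-1", "FD-2", "FD-3"]

    def spread_instances(instance_names):
        """Place instances round-robin across fault domains so that consecutive
        instances do not share the same single point of failure (anti-affinity)."""
        placement = {}
        for name, fd in zip(instance_names, cycle(FAULT_DOMAINS)):
            placement[name] = fd
        return placement

    print(spread_instances(["web-1", "web-2", "web-3", "web-4"]))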
When a customer subscribes to the IaaS service, resources from the CSPI are provisioned to the customer and associated with the customer's tenancy. The customer can use these provisioned resources to build private networks and deploy resources on these networks. Customer networks hosted in the cloud by the CSPI are referred to as Virtual Cloud Networks (VCNs). A customer may set up one or more Virtual Cloud Networks (VCNs) using the CSPI resources allocated for the customer. A VCN is a virtual or software-defined private network. Customer resources deployed in a customer's VCN may include computing instances (e.g., virtual machines, bare metal instances) and other resources. These computing instances may represent various customer workloads, such as applications, load balancers, databases, and the like. Computing instances deployed on a VCN may communicate with publicly accessible endpoints ("public endpoints") through a public network such as the internet, with other instances in the same VCN or other VCNs (e.g., other VCNs of the customer or VCNs not belonging to the customer), with the customer's on-premise data centers or networks, and with service endpoints and other types of endpoints.
The CSP may use the CSPI to provide various services. In some cases, customers of the CSPI may themselves act as service providers and provide services using CSPI resources. A service provider may expose a service endpoint characterized by identifying information (e.g., an IP address, a DNS name, and a port). A customer's resources (e.g., computing instances) may use a particular service by accessing the service endpoint exposed by the service for that particular service. These service endpoints are typically endpoints that a user can publicly access via a public communications network, such as the internet, using a public IP address associated with the endpoint. Publicly accessible network endpoints are sometimes referred to as public endpoints.
In some embodiments, a service provider may expose a service via an endpoint for the service (sometimes referred to as a service endpoint). The customer of the service may then use this service endpoint to access the service. In some embodiments, a service endpoint that provides a service may be accessed by multiple clients that intend to consume the service. In other embodiments, a dedicated service endpoint may be provided for a customer such that only the customer may use the dedicated service endpoint to access a service.
In some embodiments, when the VCN is created, it is associated with a private overlay classless inter-domain routing (CIDR) address space, which is a range of private overlay IP addresses (e.g., 10.0/16) assigned to the VCN. The VCN includes associated subnets, routing tables, and gateways. A VCN resides within a single region, but may span one or more or all of the region's availability domains. A gateway is a virtual interface configured for the VCN and enables communication of traffic between the VCN and one or more endpoints external to the VCN. One or more different types of gateways may be configured for the VCN to enable communications to and from different types of endpoints.
The VCN may be subdivided into one or more sub-networks, such as one or more subnets. Thus, a subnet is a unit of configuration or a subdivision that can be created within a VCN. The VCN may have one or more subnets. Each subnet within a VCN is associated with a contiguous range of overlay IP addresses (e.g., 10.0.0.0/24 and 10.0.1.0/24) that do not overlap with other subnets in the VCN and represent a subset of the address space within the address space of the VCN.
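As an illustrative aside (not part of the disclosed embodiments), the two subnet constraints described above can be checked with Python's standard ipaddress module; the CIDR values below reuse the example ranges from this description:

    import ipaddress

    vcn_cidr = ipaddress.ip_network("10.0.0.0/16")
    subnets = [ipaddress.ip_network("10.0.0.0/24"), ipaddress.ip_network("10.0.1.0/24")]

    # Each subnet must be a subset of the VCN's address space...
    assert all(s.subnet_of(vcn_cidr) for s in subnets)

    # ...and subnets must not overlap one another.
    assert not any(
        a.overlaps(b) for i, a in enumerate(subnets) for b in subnets[i + 1:]
    )
    print("subnet layout is valid")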
Each computing instance is associated with a Virtual Network Interface Card (VNIC), which enables the computing instance to participate in a subnet of the VCN. A VNIC is a logical representation of a physical Network Interface Card (NIC). Generally, a VNIC is an interface between an entity (e.g., a computing instance, a service) and a virtual network. A VNIC exists in a subnet and has one or more associated IP addresses and associated security rules or policies. A VNIC is the equivalent of a layer 2 port on a switch. A VNIC is attached to a computing instance and to a subnet within the VCN. The VNIC associated with a computing instance enables the computing instance to be part of a subnet of the VCN and to communicate (e.g., send and receive packets) with endpoints that are on the same subnet as the computing instance, with endpoints in different subnets in the VCN, or with endpoints that are external to the VCN. Thus, the VNIC associated with a computing instance determines how the computing instance connects with endpoints internal and external to the VCN. When a computing instance is created and added to a subnet within the VCN, a VNIC for the computing instance is created and associated with the computing instance. For a subnet that includes a set of computing instances, the subnet contains the VNICs corresponding to the set of computing instances, each VNIC attached to a computing instance within the set of computing instances.
Each computing instance is assigned a private overlay IP address via the VNIC associated with the computing instance. This private overlay network IP address is assigned to the VNIC associated with the computing instance when the computing instance is created and is used to route traffic to and from the computing instance. All VNICs in a given subnetwork use the same routing table, security list, and DHCP options. As described above, each subnet within a VCN is associated with a contiguous range of overlay IP addresses (e.g., 10.0.0.0/24 and 10.0.1.0/24) that do not overlap with other subnets in the VCN and represent a subset of the address space within the address space of the VCN. For a VNIC on a particular subnet of a VCN, the private overlay IP address assigned to that VNIC is an address from a contiguous range of overlay IP addresses allocated for the subnet.
In some embodiments, in addition to the private overlay IP address, the computing instance may optionally be assigned additional overlay IP addresses, such as, for example, one or more public IP addresses if it is in a public subnet. These multiple addresses are assigned either on the same VNIC or on multiple VNICs associated with the computing instance. However, each instance has a primary VNIC that is created during instance launch and is associated with the overlay private IP address assigned to the instance; this primary VNIC cannot be deleted. Additional VNICs, referred to as secondary VNICs, may be added to an existing instance in the same availability domain as the primary VNIC. All VNICs are in the same availability domain as the instance. A secondary VNIC may be located in a subnet in the same VCN as the primary VNIC, or in a different subnet in the same VCN or a different VCN.
If the computing instance is in a public subnet, it may optionally be assigned a public IP address. When creating a subnet, the subnet may be designated as either a public subnet or a private subnet. A private subnet means that resources (e.g., compute instances) and associated VNICs in the subnet cannot have public overlay IP addresses. A public subnet means that resources in the subnet and their associated VNICs may have public IP addresses. A customer may specify that a subnet exists in a single availability domain or spans multiple availability domains in a region or realm.
As described above, the VCN may be subdivided into one or more subnets. In some embodiments, a Virtual Router (VR) configured for the VCN (referred to as the VCN VR or simply VR) enables communication between the subnets of the VCN. For a subnet within a VCN, the VR represents a logical gateway for that subnet that enables the subnet (i.e., the computing instances on that subnet) to communicate with endpoints on other subnets within the VCN as well as other endpoints outside the VCN. The VCN VR is a logical entity configured to route traffic between VNICs in the VCN and virtual gateways ("gateways") associated with the VCN. Gateways are further described below with respect to FIG. 1. The VCN VR is a layer 3/IP layer concept. In one embodiment, there is one VCN VR for the VCN, where the VCN VR has a potentially unlimited number of ports addressed by IP addresses, with one port for each subnet of the VCN. In this way, the VCN VR has a different IP address for each subnet in the VCN to which the VCN VR is attached. The VR is also connected to the various gateways configured for the VCN. In some embodiments, a particular overlay IP address from the overlay IP address range for a subnet is reserved for a port of the VCN VR for that subnet. Consider, for example, that a VCN has two subnets, with associated address ranges of 10.0/16 and 10.1/16, respectively. For the first subnet in the VCN with an address range of 10.0/16, an address within this range is reserved for a port of the VCN VR for that subnet. In some cases, the first IP address within the range may be reserved for the VCN VR. For example, for the subnet covering the IP address range 10.0/16, the IP address 10.0.0.1 may be reserved for the port of the VCN VR for that subnet. For the second subnet in the same VCN with an address range of 10.1/16, the VCN VR may have a port for the second subnet with IP address 10.1.0.1. The VCN VR has a different IP address for each subnet in the VCN.
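For illustration only (a minimal sketch using Python's standard ipaddress module; the helper name is hypothetical), the reserved VCN VR port address in this example, i.e., the first IP address of a subnet's range, can be computed as follows:

    import ipaddress

    def vcn_vr_port_ip(subnet_cidr: str) -> str:
        """Return the first usable address of the subnet, which in this example
        is reserved for the VCN VR's port on that subnet (e.g., 10.0.0.1)."""
        subnet = ipaddress.ip_network(subnet_cidr)
        return str(subnet.network_address + 1)

    print(vcn_vr_port_ip("10.0.0.0/16"))  # 10.0.0.1
    print(vcn_vr_port_ip("10.1.0.0/16"))  # 10.1.0.1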
In some other embodiments, each subnet within the VCN may have its own associated VR that is addressable by the subnet using a reserved or default IP address associated with the VR. For example, the reserved or default IP address may be the first IP address in the range of IP addresses associated with the subnet. The VNICs in the subnet may use this default or reserved IP address to communicate (e.g., send and receive packets) with the VR associated with the subnet. In such an embodiment, the VR is the ingress/egress point of the subnet. The VR associated with a subnet within the VCN may communicate with other VRs associated with other subnets within the VCN. The VRs may also communicate with gateways associated with the VCN. The VR function of a subnet runs on, or is performed by, one or more NVDs that perform VNIC functionality for the VNICs in that subnet.
The VCN may be configured with routing tables, security rules, and DHCP options. The routing table is a virtual routing table for the VCN and includes rules for routing traffic from a subnet within the VCN to a destination outside the VCN through a gateway or specially configured instance. The routing tables of the VCNs may be customized to control how packets are forwarded/routed to and from the VCNs. DHCP options refer to configuration information that is automatically provided to an instance at instance start-up.
The security rules configured for the VCN represent overlay firewall rules for the VCN. Security rules may include ingress and egress rules and specify the types of traffic (e.g., based on protocol and port) that are allowed into and out of the VCN's instances. The customer may choose whether a given rule is stateful or stateless. For example, a customer may allow incoming SSH traffic from anywhere to a set of instances by setting up a stateful ingress rule with source CIDR 0.0.0.0/0 and destination TCP port 22. Security rules may be implemented using network security groups or security lists. A network security group consists of a set of security rules that apply only to the resources in the group. A security list, on the other hand, includes rules that apply to all resources in any subnet that uses the security list. The VCN may be provided with a default security list with default security rules. The DHCP options configured for the VCN provide configuration information that is automatically provided to the instances in the VCN at instance start-up.
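As a purely illustrative sketch (not part of the disclosed embodiments; the rule representation and function name are hypothetical), the ingress rule from the example above, allowing SSH from source CIDR 0.0.0.0/0 to destination TCP port 22, could be modeled and evaluated as follows:

    import ipaddress

    # Hypothetical representation of the example rule: allow SSH (TCP/22) from anywhere.
    RULES = [
        {"direction": "ingress", "protocol": "tcp",
         "source": ipaddress.ip_network("0.0.0.0/0"), "dst_port": 22,
         "stateful": True},
    ]

    def ingress_allowed(src_ip: str, protocol: str, dst_port: int) -> bool:
        """Return True if any ingress rule permits the packet; default deny."""
        src = ipaddress.ip_address(src_ip)
        return any(
            r["direction"] == "ingress"
            and r["protocol"] == protocol
            and r["dst_port"] == dst_port
            and src in r["source"]
            for r in RULES
        )

    print(ingress_allowed("203.0.113.7", "tcp", 22))   # True: matches the SSH rule
    print(ingress_allowed("203.0.113.7", "tcp", 443))  # False: no matching rule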
In some embodiments, configuration information for the VCN is determined and stored by the VCN control plane. For example, configuration information for a VCN may include information about: address ranges associated with the VCN, subnets and associated information within the VCN, one or more VRs associated with the VCN, computing instances in the VCN and associated VNICs, NVDs (e.g., VNICs, VRs, gateways) that perform various virtualized network functions associated with the VCN, status information for the VCN, and other VCN related information. In certain embodiments, the VCN distribution service publishes configuration information stored by the VCN control plane or portion thereof to the NVD. The distributed information may be used to update information (e.g., forwarding tables, routing tables, etc.) stored and used by the NVD to forward packets to or from computing instances in the VCN.
In some embodiments, the creation of VCNs and subnets is handled by the VCN Control Plane (CP) and the launching of compute instances is handled by the compute control plane. The compute control plane is responsible for allocating physical resources for the compute instance and then invoking the VCN control plane to create and attach the VNICs to the compute instance. The VCN CP also sends the VCN data mappings to a VCN data plane configured to perform packet forwarding and routing functions. In some embodiments, the VCN CP provides a distribution service responsible for providing updates to the VCN data plane. Examples of VCN control planes are also depicted in fig. 19, 20, 21, and 22 (see reference numerals 1916, 2016, 2116, and 2216) and described below.
A customer may create one or more VCNs using resources hosted by the CSPI. Computing instances deployed on a client VCN may communicate with different endpoints. These endpoints may include endpoints hosted by the CSPI and endpoints external to the CSPI.
Various different architectures for implementing cloud-based services using CSPI are depicted in fig. 1, 2, 3, 4, 5, 19, 20, 21, and 22 and described below. Fig. 1 is a high-level diagram of a distributed environment 100, illustrating an overlay or customer VCN hosted by a CSPI, in accordance with certain embodiments. The distributed environment depicted in fig. 1 includes a plurality of components in an overlay network. The distributed environment 100 depicted in FIG. 1 is only an example and is not intended to unduly limit the scope of the claimed embodiments. Many variations, alternatives, and modifications are possible. For example, in some embodiments, the distributed environment depicted in fig. 1 may have more or fewer systems or components than those shown in fig. 1, may combine two or more systems, or may have different system configurations or arrangements.
As shown in the example depicted in fig. 1, distributed environment 100 includes CSPI 101, which provides services and resources that customers can subscribe to and use to build their Virtual Cloud Networks (VCNs). In some embodiments, CSPI 101 provides IaaS services to subscribing customers. Data centers within CSPI 101 may be organized into one or more regions. An example region, "Region US" 102, is shown in fig. 1. The customer has configured a customer VCN 104 for the region 102. A customer may deploy various computing instances on the VCN 104, where the computing instances may include virtual machine or bare metal instances. Examples of instances include applications, databases, load balancers, and the like.
In the embodiment depicted in fig. 1, customer VCN 104 includes two subnets, namely, "subnet-1" and "subnet-2," each having its own CIDR IP address range. In FIG. 1, the overlay IP address range for subnet-1 is 10.0/16 and the address range for subnet-2 is 10.1/16. VCN virtual router 105 represents a logical gateway for the VCN that enables communication between the subnetworks of VCN 104 and with other endpoints external to the VCN. The VCN VR 105 is configured to route traffic between the VNICs in the VCN 104 and gateways associated with the VCN 104. The VCN VR 105 provides a port for each subnet of the VCN 104. For example, VR 105 may provide a port for subnet-1 with IP address 10.0.0.1 and a port for subnet-2 with IP address 10.1.0.1.
Multiple computing instances may be deployed on each subnet, where the computing instances may be virtual machine instances and/or bare metal instances. Computing instances in a subnet may be hosted by one or more host machines within CSPI 101. The computing instance participates in the subnet via the VNIC associated with the computing instance. For example, as shown in fig. 1, computing instance C1 becomes part of subnet-1 via the VNIC associated with the computing instance. Likewise, computing instance C2 becomes part of subnet-1 via the VNIC associated with C2. In a similar manner, multiple computing instances (which may be virtual machine instances or bare metal instances) may be part of subnet-1. Each computing instance is assigned a private overlay IP address and a MAC address via its associated VNIC. For example, in fig. 1, the overlay IP address of computing instance C1 is 10.0.0.2 and its MAC address is M1, and the private overlay IP address of computing instance C2 is 10.0.0.3 and its MAC address is M2. Each compute instance in subnet-1 (including compute instances C1 and C2) has a default route to VCN VR 105 using IP address 10.0.0.1, which is the IP address for the port of VCN VR 105 for subnet-1.
Multiple computing instances may be deployed on subnet-2, including virtual machine instances and/or bare metal instances. For example, as shown in fig. 1, computing instances D1 and D2 become part of subnet-2 via VNICs associated with the respective computing instances. In the embodiment shown in fig. 1, the overlay IP address of computing instance D1 is 10.1.0.2 and its MAC address is MM1, and the private overlay IP address of computing instance D2 is 10.1.0.3 and its MAC address is MM2. Each compute instance in subnet-2 (including compute instances D1 and D2) has a default route to VCN VR 105 using IP address 10.1.0.1, which is the IP address for the port of VCN VR 105 for subnet-2.
The VCN a 104 may also include one or more load balancers. For example, a load balancer may be provided for a subnet and may be configured to load balance traffic across multiple compute instances on the subnet. A load balancer may also be provided to load balance traffic across subnets in the VCN.
A particular computing instance deployed on VCN 104 may communicate with a variety of different endpoints. These endpoints may include endpoints hosted by CSPI 101 and endpoints external to CSPI 101. Endpoints hosted by CSPI 101 may include: endpoints on the same subnet as a particular computing instance (e.g., communications between two computing instances in subnet-1); endpoints located on different subnets but within the same VCN (e.g., communications between a compute instance in subnet-1 and a compute instance in subnet-2); endpoints in different VCNs in the same region (e.g., communications between a compute instance in subnet-1 and an endpoint in a VCN in the same region 106 or 110, communications between a compute instance in subnet-1 and an endpoint in service network 110 in the same region); or endpoints in VCNs in different regions (e.g., communications between computing instances in subnet-1 and endpoints in a VCN in a different region 108). Computing instances in a subnet hosted by CSPI 101 may also communicate with endpoints that are not hosted by CSPI 101 (i.e., external to CSPI 101). These external endpoints include endpoints in the customer's on-premise network 116, endpoints in other remote cloud-hosted networks 118, public endpoints 114 accessible via a public network (such as the internet), and other endpoints.
Communication between computing instances on the same subnet is facilitated using VNICs associated with the source computing instance and the destination computing instance. For example, compute instance C1 in subnet-1 may want to send a packet to compute instance C2 in subnet-1. For a packet that originates from a source computing instance and whose destination is another computing instance in the same subnet, the packet is first processed by the VNIC associated with the source computing instance. The processing performed by the VNIC associated with the source computing instance may include determining destination information for the packet from a packet header, identifying any policies (e.g., security lists) configured for the VNIC associated with the source computing instance, determining a next hop for the packet, performing any packet encapsulation/decapsulation functions as needed, and then forwarding/routing the packet to the next hop for the purpose of facilitating communication of the packet to its intended destination. When the destination computing instance and the source computing instance are located in the same subnet, the VNIC associated with the source computing instance is configured to identify the VNIC associated with the destination computing instance and forward the packet to that VNIC for processing. The functionality of the VNIC associated with the destination computing instance is then executed, and that VNIC forwards the packet to the destination computing instance.
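For illustration only (the table contents, MAC addresses, and function name are hypothetical, and policy evaluation is omitted), the following Python sketch outlines the per-packet steps described above for intra-subnet traffic handled by the VNIC associated with the source compute instance:

    # Hypothetical tables that the NVD hosting the source VNIC might consult.
    MAC_TO_VNIC = {"M2": "vnic-C2"}                   # destination MAC -> destination VNIC
    VNIC_POLICIES = {"vnic-C1": ["security-list-1"]}  # policies configured per VNIC

    def process_outbound(packet, src_vnic="vnic-C1"):
        """Per-packet steps for intra-subnet traffic: read destination info from
        the header, look up the source VNIC's policies (evaluation omitted here),
        pick the next hop, and forward."""
        policies = VNIC_POLICIES.get(src_vnic, [])     # policies that would be applied
        next_hop = MAC_TO_VNIC.get(packet["dst_mac"])  # same-subnet next hop is the destination VNIC
        if next_hop is None:
            return ("drop", src_vnic, policies)
        return ("forward", next_hop, policies)

    print(process_outbound({"dst_mac": "M2", "dst_ip": "10.0.0.3"}))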
For packets to be transmitted from computing instances in a subnet to endpoints in different subnets in the same VCN, communication is facilitated by VNICs associated with source and destination computing instances and VCN VR. For example, if computing instance C1 in subnet-1 in FIG. 1 wants to send a packet to computing instance D1 in subnet-2, then the packet is first processed by the VNIC associated with computing instance C1. The VNIC associated with computing instance C1 is configured to route packets to VCN VR 105 using a default route or port 10.0.0.1 of the VCN VR. The VCN VR 105 is configured to route packets to subnet-2 using port 10.1.0.1. The VNIC associated with D1 then receives and processes the packet and the VNIC forwards the packet to computing instance D1.
For packets to be communicated from a computing instance in VCN 104 to an endpoint external to VCN 104, communication is facilitated by a VNIC associated with the source computing instance, VCN VR 105, and a gateway associated with VCN 104. One or more types of gateways may be associated with VCN 104. A gateway is an interface between a VCN and another endpoint that is external to the VCN. The gateway is a layer 3/IP layer concept and enables the VCN to communicate with endpoints external to the VCN. Thus, the gateway facilitates traffic flow between the VCN and other VCNs or networks. Various different types of gateways may be configured for the VCN to facilitate different types of communications with different types of endpoints. Depending on the gateway, the communication may be through a public network (e.g., the internet) or through a private network. Various communication protocols may be used for these communications.
For example, computing instance C1 may want to communicate with endpoints external to VCN 104. The packet may be first processed by the VNIC associated with the source computing instance C1. The VNIC processing determines that the destination of the packet is outside of subnet-1 of C1. The VNIC associated with C1 may forward the packet to the VCN VR 105 for VCN 104. The VCN VR 105 then processes the packet and, as part of the processing, determines a particular gateway associated with the VCN 104 as the next hop for the packet based on the destination of the packet. The VCN VR 105 may then forward the packet to the particular identified gateway. For example, if the destination is an endpoint within the customer's on-premise network, the packet may be forwarded by the VCN VR 105 to a Dynamic Routing Gateway (DRG) 122 configured for the VCN 104. The packet may then be forwarded from the gateway to the next hop to facilitate delivery of the packet to its final intended destination.
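As an illustrative sketch only (the route entries, including a hypothetical 192.168.0.0/16 on-premise range reached via the DRG, and the gateway names are assumptions, not the patent's implementation), the VCN VR's choice of gateway based on the packet's destination can be thought of as a longest-prefix-match lookup over a route table:

    import ipaddress

    # Hypothetical VCN route table: destination CIDR -> next-hop gateway.
    ROUTE_TABLE = [
        ("192.168.0.0/16", "DRG-122"),   # customer's on-premise network via the DRG
        ("0.0.0.0/0", "IGW-120"),        # everything else via the internet gateway
    ]

    def next_hop_gateway(dst_ip: str) -> str:
        """Longest-prefix match over the route table, as a VCN VR might perform it."""
        dst = ipaddress.ip_address(dst_ip)
        matches = [
            (ipaddress.ip_network(cidr).prefixlen, gw)
            for cidr, gw in ROUTE_TABLE
            if dst in ipaddress.ip_network(cidr)
        ]
        return max(matches)[1]  # the most specific (longest) prefix wins

    print(next_hop_gateway("192.168.5.9"))  # DRG-122
    print(next_hop_gateway("8.8.8.8"))      # IGW-120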
Various different types of gateways may be configured for the VCN. An example of a gateway that may be configured for a VCN is depicted in fig. 1 and described below. Examples of gateways associated with VCNs are also depicted in fig. 17, 18, 19, and 20 (e.g., gateways referenced by reference numerals 1734, 1736, 1738, 1834, 1836, 1838, 1934, 1936, 1938, 2034, 2036, and 2038) and described below. As shown in the embodiment depicted in fig. 1, a Dynamic Routing Gateway (DRG) 122 may be added to or associated with customer VCN 104 and provides a path for private network traffic communications between customer VCN 104 and another endpoint, which may be the customer's on-premise network 116, a VCN 108 in a different region of CSPI 101, or another remote cloud network 118 not hosted by CSPI 101. The customer on-premise network 116 may be a customer network or customer data center built using the customer's resources. Access to the customer on-premise network 116 is typically very limited. For a customer having both an on-premise network 116 and one or more VCNs 104 deployed or hosted in the cloud by CSPI 101, the customer may want their on-premise network 116 and their cloud-based VCNs 104 to be able to communicate with each other. This enables the customer to build an extended hybrid environment that includes the customer's VCNs 104 hosted by CSPI 101 and their on-premise network 116. DRG 122 enables such communication. To enable such communications, a communication channel 124 is provided in which one endpoint of the channel is located in the customer's on-premise network 116 and the other endpoint is located in CSPI 101 and connected to customer VCN 104. The communication channel 124 may be over a public communication network (such as the internet) or a private communication network. Various different communication protocols may be used, such as IPsec VPN technology over a public communication network (such as the internet), Oracle's FastConnect technology that uses a private network instead of a public network, and the like. The devices or equipment in the customer's on-premise network 116 that form one endpoint of the communication channel 124 are referred to as Customer Premise Equipment (CPE), such as CPE 126 depicted in fig. 1. On the CSPI 101 side, the endpoint may be a host machine executing DRG 122.
In some embodiments, a Remote Peering Connection (RPC) may be added to the DRG, which allows a customer to peer one VCN with another VCN in a different region. Using such an RPC, customer VCN 104 may connect with VCN 108 in another region using DRG 122. DRG 122 may also be used to communicate with other remote cloud networks 118 (such as a Microsoft Azure cloud, an Amazon AWS cloud, etc.) that are not hosted by CSPI 101.
As shown in fig. 1, the customer VCN 104 may be configured with an Internet Gateway (IGW) 120 that enables computing instances on the VCN 104 to communicate with public endpoints 114 that are accessible over a public network, such as the internet. IGW 120 is a gateway that connects the VCN to a public network such as the internet. IGW 120 enables a public subnet within a VCN, such as VCN 104, where resources in the public subnet have public overlay IP addresses, to directly access public endpoints 112 on public network 114, such as the internet. Using IGW 120, a connection may be initiated from a subnet within VCN 104 or from the internet.
A Network Address Translation (NAT) gateway 128 may be configured for the customer's VCN 104 and enables cloud resources in the customer's VCN that do not have dedicated public overlay IP addresses to access the internet, and to do so without exposing those resources to direct incoming internet connections (e.g., L4-L7 connections). This enables private subnets within the VCN (such as private subnet-1 in VCN 104) to privately access public endpoints on the internet. With a NAT gateway, connections to the public internet can only be initiated from the private subnet, and not from the internet.
In some embodiments, a Service Gateway (SGW) 126 may be configured for the customer VCN 104 and provides a path for private network traffic between the VCN 104 and service endpoints supported in the services network 110. In some embodiments, the services network 110 may be provided by the CSP and may provide various services. An example of such a services network is the Oracle Services Network, which provides various services available to customers. For example, a computing instance (e.g., a database system) in a private subnet of the customer VCN 104 may back up data to a service endpoint (e.g., object storage) without requiring a public IP address or access to the internet. In some embodiments, a VCN may have only one SGW, and connections may be initiated only from a subnet within the VCN and not from the services network 110. If a VCN is peered with another VCN, resources in the other VCN typically cannot access the SGW. Resources in an on-premise network that connect to a VCN using FastConnect or VPN Connect may also use the service gateway configured for that VCN.
In some embodiments, SGW 126 uses the concept of a service Classless Inter-Domain Routing (CIDR) label, which is a string that represents all the regional public IP address ranges for a service or group of services of interest. Customers use the service CIDR label when they configure the SGW and associated routing rules to control traffic to the service. Customers can optionally use the service CIDR label when configuring security rules, so that if the public IP addresses of the service change in the future, those rules do not have to be adjusted.
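For illustration only, the following Python sketch shows one way a service CIDR label could be thought of as resolving to a set of regional prefixes when a rule is evaluated; the label name and prefixes are made up, and this is not the SGW's actual implementation.

    import ipaddress

    # Hypothetical mapping of a service CIDR label to the regional public
    # prefixes it currently stands for; in practice the CSP manages these ranges.
    SERVICE_CIDR_LABELS = {
        "all-region1-services": ["203.0.113.0/24", "198.51.100.0/25"],
    }

    def matches_service_label(dest_ip: str, label: str) -> bool:
        """Return True if dest_ip falls inside any prefix behind the label."""
        addr = ipaddress.ip_address(dest_ip)
        return any(addr in ipaddress.ip_network(cidr)
                   for cidr in SERVICE_CIDR_LABELS.get(label, []))

    # Rules reference the label, so a later change to the underlying prefixes
    # does not require editing the rules themselves.
    print(matches_service_label("203.0.113.10", "all-region1-services"))  # True

Because the routing and security rules name the label rather than literal prefixes, only the label-to-prefix mapping has to change when the service's address ranges change.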
A Local Peering Gateway (LPG) 132 is a gateway that may be added to a customer VCN 104 and enables the VCN 104 to peer with another VCN in the same region. Peering means that the VCNs communicate using private IP addresses, without the traffic being routed through a public network (such as the internet) or through the customer's on-premise network 116. In preferred embodiments, a VCN has a separate LPG for each peering it establishes. Local peering or VCN peering is a common practice for establishing network connectivity between different applications or infrastructure management functions.
A service provider, such as the provider of a service in the services network 110, may provide access to the service using different access models. According to the public access model, services may be exposed as public endpoints publicly accessible by computing instances in the client VCN via a public network (such as the internet), and/or may be privately accessible via SGW 126. The service may be accessed as a private IP endpoint in a private subnet in the client's VCN according to a particular private access model. This is known as Private Endpoint (PE) access and enables a service provider to expose its services as instances in a customer's private network. The private endpoint resources represent services within the customer's VCN. Each PE appears as a VNIC (referred to as a PE-VNIC, having one or more private IPs) in a subnet selected by the customer in the customer's VCN. Thus, the PE provides a way to use the VNIC to present services in a private customer VCN subnet. Since the endpoints are exposed as VNICs, all features associated with the VNICs (such as routing rules, security lists, etc.) may now be used for the PE VNICs.
Service providers may register their services to enable access through the PE. The provider may associate policies with the service that limit the visibility of the service to customer leases. A provider may register multiple services under a single virtual IP address (VIP), especially for multi-tenant services. There may be multiple such private endpoints (in multiple VCNs) representing the same service.
The computing instance in the private subnet may then access the service using the private IP address or service DNS name of the PE VNIC. The computing instance in the client VCN may access the service by sending traffic to the private IP address of the PE in the client VCN. The Private Access Gateway (PAGW) 130 is a gateway resource that may be attached to a service provider VCN (e.g., a VCN in the service network 110) that acts as an ingress/egress point for all traffic from/to the customer subnet private endpoint. The PAGW 130 enables the provider to extend the number of PE connections without utilizing its internal IP address resources. The provider need only configure one PAGW for any number of services registered in a single VCN. The provider may represent the service as a private endpoint in multiple VCNs of one or more customers. From the customer's perspective, the PE VNICs are not attached to the customer's instance, but rather appear to be attached to the service with which the customer wishes to interact. Traffic destined for the private endpoint is routed to the service via the PAGW 130. These are called customer-to-service private connections (C2S connections).
The PE concept can also be used to extend private access for services to customer's internal networks and data centers by allowing traffic to flow through the FastConnect/IPsec links and private endpoints in the customer's VCN. Private access to services can also be extended to the customer's peer VCN by allowing traffic to flow between LPG 132 and PEs in the customer's VCN.
The customer may control routing in the VCN at the subnet level, so the customer may specify which subnets in the customer's VCN (such as VCN 104) use each gateway. The routing tables of the VCN are used to decide whether to allow traffic to leave the VCN through a particular gateway. For example, in a particular instance, a routing table for a public subnet within customer VCN 104 may send non-local traffic through IGW 120. A routing table for a private subnet within the same customer VCN 104 may send traffic destined for CSP services through SGW 126. All remaining traffic may be sent via NAT gateway 128. Routing tables only control traffic going out of the VCN.
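The following sketch illustrates the kind of subnet-level route selection described above, assuming simplified rule shapes and illustrative gateway names; it is not the actual routing implementation.

    import ipaddress

    # Illustrative route table for a private subnet: (destination CIDR, target).
    PRIVATE_SUBNET_ROUTES = [
        ("10.0.0.0/16",     "local"),        # stay inside the VCN
        ("198.51.100.0/24", "SGW"),          # CSP service prefixes
        ("0.0.0.0/0",       "NAT_GATEWAY"),  # everything else leaves via NAT
    ]

    def next_hop(dest_ip: str, routes) -> str:
        """Pick the most specific (longest-prefix) matching route."""
        addr = ipaddress.ip_address(dest_ip)
        matches = [(ipaddress.ip_network(cidr), target)
                   for cidr, target in routes
                   if addr in ipaddress.ip_network(cidr)]
        return max(matches, key=lambda m: m[0].prefixlen)[1]

    print(next_hop("10.0.3.7", PRIVATE_SUBNET_ROUTES))  # local
    print(next_hop("8.8.8.8", PRIVATE_SUBNET_ROUTES))   # NAT_GATEWAY

A public subnet's table would instead point its default route at the IGW, which is the distinction the example in the preceding paragraph draws.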
Security lists associated with the VCN are used to control traffic entering the VCN via a gateway over inbound connections. All resources in a subnet use the same routing table and security lists. Security lists may be used to control the specific types of traffic that are allowed into and out of instances in a subnet of the VCN. Security list rules may include ingress (inbound) and egress (outbound) rules. For example, an ingress rule may specify an allowed source address range, while an egress rule may specify an allowed destination address range. Security rules may specify a particular protocol (e.g., TCP, ICMP), a particular port (e.g., 22 for SSH, 3389 for Windows RDP), and so on. In some implementations, the operating system of an instance may enforce its own firewall rules that are aligned with the security list rules. Rules may be stateful (e.g., a connection is tracked and the response is automatically allowed without an explicit security list rule for the response traffic) or stateless.
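A minimal sketch of how such ingress rules might be evaluated for a stateless security list is shown below; the rule shapes and addresses are illustrative assumptions rather than the actual security list format.

    import ipaddress
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class IngressRule:
        source_cidr: str                 # allowed source address range
        protocol: str                    # e.g. "TCP", "ICMP"
        dst_port: Optional[int] = None   # None means any destination port

    # Illustrative stateless rules: SSH from one range, Windows RDP from another.
    SECURITY_LIST = [
        IngressRule("192.0.2.0/24", "TCP", 22),
        IngressRule("203.0.113.0/24", "TCP", 3389),
    ]

    def ingress_allowed(src_ip: str, protocol: str, dst_port: int) -> bool:
        src = ipaddress.ip_address(src_ip)
        for rule in SECURITY_LIST:
            if (src in ipaddress.ip_network(rule.source_cidr)
                    and protocol == rule.protocol
                    and (rule.dst_port is None or rule.dst_port == dst_port)):
                return True
        return False  # no rule matched: the packet is not admitted

    print(ingress_allowed("192.0.2.10", "TCP", 22))    # True: SSH allowed from this range
    print(ingress_allowed("192.0.2.10", "TCP", 3389))  # False: RDP not allowed from this range

A stateful variant would additionally record the allowed connection so that the corresponding response traffic is admitted without needing its own explicit rule.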
Accesses from a customer's VCN (i.e., by a resource or computing instance deployed on the VCN 104) may be categorized as public access, private access, or dedicated access. Public access refers to an access model that uses a public IP address or a NAT to access a public endpoint. Private access enables customer workloads in the VCN 104 with private IP addresses (e.g., resources in a private subnet) to access services without traversing a public network such as the internet. In some embodiments, CSPI 101 enables customer VCN workloads with private IP addresses to access (the public service endpoints of) a service using a service gateway. The service gateway thus provides a private access model by establishing a virtual link between the customer's VCN and the public endpoint of a service residing outside the customer's private network.
In addition, the CSPI may provide dedicated public access using technologies such as FastConnect public peering, where a customer on-premise instance can access one or more services in the customer's VCN using a FastConnect connection without traversing a public network such as the internet. The CSPI may also provide dedicated private access using FastConnect private peering, where a customer on-premise instance with a private IP address can access the customer's VCN workloads using a FastConnect connection. FastConnect is a network connectivity alternative to using the public internet to connect a customer's on-premise network to the CSPI and its services. FastConnect provides a simple, flexible, and economical way to create dedicated and private connections with higher bandwidth options and a more reliable and consistent networking experience than internet-based connections.
FIG. 1 and the accompanying description above describe various virtualized components in an example virtual network. As described above, the virtual network is built on the underlying physical or substrate network. Fig. 2 depicts a simplified architectural diagram of physical components in a physical network within CSPI 200 that provides an underlying layer for a virtual network, in accordance with some embodiments. As shown, CSPI 200 provides a distributed environment including components and resources (e.g., computing, memory, and network resources) provided by a Cloud Service Provider (CSP). These components and resources are used to provide cloud services (e.g., iaaS services) to subscribing clients (i.e., clients that have subscribed to one or more services provided by CSPs). Clients are provisioned with a subset of the resources (e.g., computing, memory, and network resources) of CSPI 200 based on the services subscribed to by the clients. The customer may then build its own cloud-based (i.e., CSPI-hosted) customizable and private virtual network using the physical computing, memory, and networking resources provided by CSPI 200. As indicated previously, these customer networks are referred to as Virtual Cloud Networks (VCNs). Clients may deploy one or more client resources, such as computing instances, on these client VCNs. The computing instance may be in the form of a virtual machine, a bare metal instance, or the like. CSPI 200 provides a collection of infrastructure and complementary cloud services that enable customers to build and run a wide range of applications and services in a highly available hosted environment.
In the example embodiment depicted in fig. 2, the physical components of CSPI 200 include one or more physical host machines or physical servers (e.g., 202, 206, 208), network Virtualization Devices (NVDs) (e.g., 210, 212), top of rack (TOR) switches (e.g., 214, 216), and physical networks (e.g., 218), as well as switches in physical network 218. The physical host machine or server may host and execute various computing instances that participate in one or more subnets of the VCN. The computing instances may include virtual machine instances and bare machine instances. For example, the various computing instances depicted in fig. 1 may be hosted by the physical host machine depicted in fig. 2. The virtual machine computing instances in the VCN may be executed by one host machine or a plurality of different host machines. The physical host machine may also host a virtual host machine, a container-based host or function, or the like. The VNICs and VCN VRs depicted in fig. 1 may be performed by the NVD depicted in fig. 2. The gateway depicted in fig. 1 may be performed by the host machine and/or NVD depicted in fig. 2.
The host machine or server may execute a hypervisor (also referred to as a virtual machine monitor or VMM) that creates and enables virtualized environments on the host machine. Virtualized or virtualized environments facilitate cloud-based computing. One or more computing instances may be created, executed, and managed on a host machine by a hypervisor on the host machine. The hypervisor on the host machine enables the physical computing resources (e.g., computing, memory, and network resources) of the host machine to be shared among the various computing instances executed by the host machine.
For example, as depicted in FIG. 2, host machines 202 and 208 execute hypervisors 260 and 266, respectively. These hypervisors may be implemented using software, firmware, or hardware, or a combination thereof. Typically, a hypervisor is a process or software layer that sits on top of the Operating System (OS) of the host machine, which in turn executes on the hardware processor of the host machine. The hypervisor provides a virtualized environment by enabling the physical computing resources of the host machine (e.g., processing resources such as processors/cores, memory resources, network resources) to be shared among the various virtual machine computing instances executed by the host machine. For example, in fig. 2, hypervisor 260 may be located above the OS of host machine 202 and enable computing resources (e.g., processing, memory, and network resources) of host machine 202 to be shared among computing instances (e.g., virtual machines) executed by host machine 202. The virtual machine may have its own operating system (referred to as a guest operating system), which may be the same as or different from the OS of the host machine. The operating system of a virtual machine executed by a host machine may be the same as or different from the operating system of another virtual machine executed by the same host machine. Thus, the hypervisor enables multiple operating systems to be executed simultaneously while sharing the same computing resources of the host machine. The host machines depicted in fig. 2 may have the same or different types of hypervisors.
The computing instance may be a virtual machine instance or a bare machine instance. In FIG. 2, computing instance 268 on host machine 202 and computing instance 274 on host machine 208 are examples of virtual machine instances. The host machine 206 is an example of a bare metal instance provided to a customer.
In some cases, an entire host machine may be provisioned to a single customer, and one or more computing instances (or virtual or bare machine instances) hosted by the host machine all belong to the same customer. In other cases, the host machine may be shared among multiple guests (i.e., multiple tenants). In such a multi-tenancy scenario, the host machine may host virtual machine computing instances belonging to different guests. These computing instances may be members of different VCNs for different customers. In some embodiments, bare metal computing instances are hosted by bare metal servers without hypervisors. When supplying a bare metal computing instance, a single customer or tenant maintains control of the physical CPU, memory, and network interfaces of the host machine hosting the bare metal instance, and the host machine is not shared with other customers or tenants.
As previously described, each computing instance that is part of a VCN is associated with a VNIC that enables the computing instance to be a member of a subnet of the VCN. The VNICs associated with the computing instances facilitate communication of packets or frames to and from the computing instances. The VNIC is associated with a computing instance when the computing instance is created. In some embodiments, for a computing instance executed by a host machine, a VNIC associated with the computing instance is executed by an NVD connected to the host machine. For example, in fig. 2, host machine 202 executes virtual machine computing instance 268 associated with VNIC 276, and VNIC 276 is executed by NVD 210 connected to host machine 202. As another example, bare metal instances 272 hosted by host machine 206 are associated with VNICs 280 that are executed by NVDs 212 connected to host machine 206. As yet another example, the VNICs 284 are associated with computing instances 274 that are executed by the host machine 208, and the VNICs 284 are executed by NVDs 212 connected to the host machine 208.
For a computing instance hosted by a host machine, an NVD connected to the host machine also executes a VCN VR corresponding to the VCN of which the computing instance is a member. For example, in the embodiment depicted in fig. 2, NVD 210 executes VCN VR 277 corresponding to the VCN of which computing instance 268 is a member. NVD 212 may also execute one or more VCN VRs 283 corresponding to VCNs corresponding to computing instances hosted by host machines 206 and 208.
The host machine may include one or more Network Interface Cards (NICs) that enable the host machine to connect to other devices. A NIC on a host machine may provide one or more ports (or interfaces) that enable the host machine to communicatively connect to another device. For example, the host machine may connect to the NVD using one or more ports (or interfaces) provided on the host machine and on the NVD. The host machine may also be connected to other devices (such as another host machine).
For example, in fig. 2, host machine 202 is connected to NVD 210 using link 220, link 220 extending between port 234 provided by NIC 232 of host machine 202 and port 236 of NVD 210. The host machine 206 is connected to the NVD 212 using a link 224, the link 224 extending between a port 246 provided by the NIC 244 of the host machine 206 and a port 248 of the NVD 212. Host machine 208 is connected to NVD 212 using link 226, link 226 extending between port 252 provided by NIC 250 of host machine 208 and port 254 of NVD 212.
The NVD in turn is connected via communication links to top of rack (TOR) switches that are connected to a physical network 218 (also referred to as a switch fabric). In certain embodiments, the links between the host machine and the NVD and between the NVD and the TOR switch are Ethernet links. For example, in fig. 2, NVDs 210 and 212 are connected to TOR switches 214 and 216 using links 228 and 230, respectively. In some embodiments, links 220, 224, 226, 228, and 230 are ethernet links. The collection of host machines and NVDs connected to TOR is sometimes referred to as a rack (rack).
The physical network 218 provides a communication architecture that enables TOR switches to communicate with each other. The physical network 218 may be a multi-tiered network. In some embodiments, the physical network 218 is a multi-tiered Clos network of switches, where TOR switches 214 and 216 represent the leaf-level nodes of the multi-tiered, multi-node physical switching network 218. Different Clos network configurations are possible, including but not limited to 2-tier networks, 3-tier networks, 4-tier networks, 5-tier networks, and in general "n"-tiered networks. An example of a Clos network is depicted in fig. 5 and described below.
There may be a variety of different connection configurations between the host machine and the NVD, such as a one-to-one configuration, a many-to-one configuration, a one-to-many configuration, and the like. In one-to-one configuration implementations, each host machine is connected to its own separate NVD. For example, in fig. 2, host machine 202 is connected to NVD 210 via NIC 232 of host machine 202. In a many-to-one configuration, multiple host machines are connected to one NVD. For example, in fig. 2, host machines 206 and 208 are connected to the same NVD 212 via NICs 244 and 250, respectively.
In a one-to-many configuration, one host machine is connected to multiple NVDs. FIG. 3 shows an example within CSPI 300 where a host machine is connected to multiple NVDs. As shown in fig. 3, host machine 302 includes a Network Interface Card (NIC) 304 that includes a plurality of ports 306 and 308. Host machine 302 is connected to a first NVD 310 via port 306 and link 320, and to a second NVD 312 via port 308 and link 322. Ports 306 and 308 may be ethernet ports, and links 320 and 322 between host machine 302 and NVDs 310 and 312 may be ethernet links. NVD 310 is in turn connected to a first TOR switch 314, and NVD 312 is connected to a second TOR switch 316. The links between NVDs 310 and 312 and TOR switches 314 and 316 may be ethernet links. TOR switches 314 and 316 represent the Tier-0 switching devices in the multi-tiered physical network 318.
The arrangement depicted in fig. 3 provides two separate physical network paths from the physical switch network 318 to the host machine 302: a first path traversing TOR switch 314 to NVD 310 to host machine 302, and a second path traversing TOR switch 316 to NVD 312 to host machine 302. The separate paths provide enhanced availability (referred to as high availability) for the host machine 302. If there is a problem in one of the paths (e.g., a link in one of the paths goes down) or in one of the devices (e.g., a particular NVD is not functioning), then the other path may be used for communication with host machine 302.
In the configuration depicted in fig. 3, the host machine connects to two different NVDs using two different ports provided by the NIC of the host machine. In other embodiments, the host machine may include multiple NICs that enable the host machine to connect to multiple NVDs.
Referring back to fig. 2, an NVD is a physical device or component that performs one or more network and/or storage virtualization functions. An NVD may be any device having one or more processing units (e.g., a CPU, a Network Processing Unit (NPU), an FPGA, a packet processing pipeline, etc.), memory (including cache), and ports. The various virtualization functions may be performed by software/firmware executed by the one or more processing units of the NVD.
An NVD may be implemented in a variety of different forms. For example, in certain embodiments, the NVD is implemented as an interface card referred to as a smart NIC or intelligent NIC with an on-board embedded processor. A smart NIC is a device separate from the NIC on the host machine. In fig. 2, NVDs 210 and 212 may be implemented as smart NICs connected to host machine 202 and to host machines 206 and 208, respectively.
However, the smart nic is only one example of an NVD implementation. Various other implementations are possible. For example, in some other implementations, the NVD or one or more functions performed by the NVD may be incorporated into or performed by one or more host machines, one or more TOR switches, and other components of CSPI 200. For example, the NVD may be implemented in a host machine, where the functions performed by the NVD are performed by the host machine. As another example, the NVD may be part of a TOR switch, or the TOR switch may be configured to perform functions performed by the NVD, which enables the TOR switch to perform various complex packet conversions for the public cloud. TOR performing the function of NVD is sometimes referred to as intelligent TOR. In other embodiments where a Virtual Machine (VM) instance is provided to the client instead of a Bare Metal (BM) instance, the functions performed by the NVD may be implemented within the hypervisor of the host machine. In some other implementations, some of the functionality of the NVD may be offloaded to a centralized service running on a set of host machines.
In some embodiments, such as when implemented as a smart nic as shown in fig. 2, the NVD may include a plurality of physical ports that enable it to connect to one or more host machines and one or more TOR switches. Ports on NVD may be classified as host-oriented ports (also referred to as "south ports") or network-oriented or TOR-oriented ports (also referred to as "north ports"). The host-facing port of the NVD is a port for connecting the NVD to a host machine. Examples of host-facing ports in fig. 2 include port 236 on NVD 210 and ports 248 and 254 on NVD 212. The network-facing port of the NVD is a port for connecting the NVD to the TOR switch. Examples of network-facing ports in fig. 2 include port 256 on NVD 210 and port 258 on NVD 212. As shown in fig. 2, NVD 210 connects to TOR switch 214 using link 228 extending from port 256 of NVD 210 to TOR switch 214. Similarly, NVD 212 connects to TOR switch 216 using link 230 extending from port 258 of NVD 212 to TOR switch 216.
The NVD receives packets and frames (e.g., packets and frames generated by computing instances hosted by the host machine) from the host machine via the host-oriented ports, and after performing the necessary packet processing, the packets and frames may be forwarded to the TOR switch via the network-oriented ports of the NVD. The NVD may receive packets and frames from the TOR switch via the network-oriented ports of the NVD, and after performing the necessary packet processing, may forward the packets and frames to the host machine via the host-oriented ports of the NVD.
In some embodiments, there may be multiple ports and associated links between the NVD and the TOR switch. These ports and links may be aggregated to form a link aggregation group (referred to as LAG) of multiple ports or links. Link aggregation allows multiple physical links between two endpoints (e.g., between NVD and TOR switches) to be considered a single logical link. All physical links in a given LAG may operate in full duplex mode at the same speed. LAG helps to increase the bandwidth and reliability of the connection between two endpoints. If one of the physical links in the LAG fails, traffic will be dynamically and transparently reassigned to one of the other physical links in the LAG. The aggregated physical link delivers a higher bandwidth than each individual link. The multiple ports associated with the LAG are considered to be a single logical port. Traffic may be load balanced among the multiple physical links of the LAG. One or more LAGs may be configured between the two endpoints. The two endpoints may be located between the NVD and TOR switches, between the host machine and the NVD, and so on.
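As an illustrative sketch of the load balancing mentioned above (hypothetical link names; real LAG hashing is implementation specific), a flow hash can be used to keep packets of one flow on a single member link while spreading different flows across the group.

    import hashlib

    LAG_LINKS = ["nvd-tor-link-0", "nvd-tor-link-1", "nvd-tor-link-2"]

    def pick_lag_link(src_ip, dst_ip, src_port, dst_port, proto):
        """Hash the 5-tuple so packets of one flow always use the same member
        link, while different flows spread across the aggregated links."""
        key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
        digest = int.from_bytes(hashlib.sha256(key).digest()[:4], "big")
        return LAG_LINKS[digest % len(LAG_LINKS)]

    print(pick_lag_link("10.0.1.5", "10.0.2.9", 44321, 443, "TCP"))

If one member link fails, the same selection can simply be recomputed over the surviving links, which is the transparent reassignment described above.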
The NVD implements or performs network virtualization functions. These functions are performed by software/firmware executed by the NVD. Examples of network virtualization functions include, but are not limited to: packet encapsulation and decapsulation functions; functions for creating a VCN network; functions for implementing network policies, such as VCN security list (firewall) functionality; functions that facilitate the routing and forwarding of packets to and from compute instances in the VCN; and the like. In some embodiments, upon receiving a packet, the NVD is configured to execute a packet processing pipeline to process the packet and determine how to forward or route the packet. As part of this packet processing pipeline, the NVD may perform one or more virtual functions associated with the overlay network, such as executing VNICs associated with compute instances in the VCN, executing a Virtual Router (VR) associated with the VCN, encapsulation and decapsulation of packets to facilitate forwarding or routing in the virtual network, execution of certain gateways (e.g., Local Peering Gateways), implementation of security lists, network security groups, Network Address Translation (NAT) functionality (e.g., translating a public IP to a private IP on a host-by-host basis), throttling functions, and other functions.
In some embodiments, the packet processing data path in the NVD may include multiple packet pipelines, each composed of a series of packet transformation stages. In some implementations, upon receiving a packet, the packet is parsed and classified to a single pipeline. The packet is then processed in a linear fashion, one stage after another, until the packet is either dropped or sent out over an interface of the NVD. These stages provide basic functional packet processing building blocks (e.g., validating headers, enforcing throttling, inserting new layer 2 headers, enforcing an L4 firewall, VCN encapsulation/decapsulation, etc.), so that new pipelines can be constructed by composing existing stages, and new functionality can be added by creating new stages and inserting them into existing pipelines.
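A schematic way to picture this composition of stages is shown below; the stage names and packet representation are assumptions for illustration, not the NVD's actual data path.

    # Schematic only: each stage returns the (possibly modified) packet, or None to drop it.
    def validate_headers(pkt):
        return pkt if "dst_mac" in pkt and "dst_ip" in pkt else None

    def enforce_l4_firewall(pkt):
        return pkt if pkt.get("dst_port") != 23 else None  # e.g. drop telnet

    def vcn_encapsulate(pkt):
        pkt["outer_dst"] = "substrate-ip-of-destination-nvd"  # placeholder value
        return pkt

    def run_pipeline(pkt, stages):
        for stage in stages:
            pkt = stage(pkt)
            if pkt is None:        # a stage decided to drop the packet
                return None
        return pkt                 # ready to be sent out an NVD interface

    egress_pipeline = [validate_headers, enforce_l4_firewall, vcn_encapsulate]
    print(run_pipeline({"dst_mac": "aa:bb:cc:dd:ee:ff", "dst_ip": "10.0.2.9",
                        "dst_port": 443}, egress_pipeline))

New behavior is then a matter of writing another stage function and placing it in the desired position in the list, which mirrors the composability described above.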
The NVD may perform control plane and data plane functions corresponding to the control plane and data plane of the VCN. Examples of VCN control planes are also depicted in fig. 17, 18, 19, and 20 (see reference numerals 1716, 1816, 1916, and 2016) and described below. Examples of VCN data planes are depicted in fig. 17, 18, 19, and 20 (see reference numerals 1718, 1818, 1918, and 2018) and described below. The control plane functions include functions for configuring the network (e.g., setting up routes and routing tables, configuring VNICs, etc.) that control how data is forwarded. In some embodiments, a VCN control plane is provided that centrally computes all overlay-to-substrate mappings and publishes them to the NVDs and to virtual network edge devices (such as various gateways, e.g., DRG, SGW, IGW, etc.). Firewall rules may also be published using the same mechanism. In certain embodiments, an NVD only obtains the mappings relevant to that NVD. The data plane functions include functions to actually route/forward packets based on the configuration set up using the control plane. The VCN data plane is implemented by encapsulating the customer's network packets before they traverse the substrate network. The encapsulation/decapsulation functionality is implemented on the NVDs. In certain embodiments, the NVD is configured to intercept all network packets in and out of the host machine and perform network virtualization functions.
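The following fragment sketches, under simplifying assumptions, the data-plane side of this arrangement: an NVD consults its cached overlay-to-substrate mappings (published by the control plane) and encapsulates the customer packet toward the substrate IP that currently hosts the destination overlay IP. The table contents and field names are illustrative only.

    # Hypothetical overlay-to-substrate mapping table, as an NVD might cache it
    # after the VCN control plane publishes the entries relevant to this NVD.
    OVERLAY_TO_SUBSTRATE = {
        ("vcn-104", "10.0.2.9"): "192.0.2.41",   # overlay IP -> physical host IP
        ("vcn-104", "10.0.1.5"): "192.0.2.17",
    }

    def encapsulate(vcn_id, inner_packet):
        """Data-plane step: wrap the customer's packet with an outer header
        addressed to the substrate IP that currently hosts the overlay IP."""
        substrate_ip = OVERLAY_TO_SUBSTRATE.get((vcn_id, inner_packet["dst_ip"]))
        if substrate_ip is None:
            return None  # unknown mapping: drop or defer to the control plane
        return {"outer_dst": substrate_ip, "payload": inner_packet}

    print(encapsulate("vcn-104", {"src_ip": "10.0.1.5", "dst_ip": "10.0.2.9"}))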
As indicated above, the NVD performs various virtualization functions, including VNICs and VCN VRs. The NVD may execute VNICs associated with the computing instances hosted by one or more host machines connected to the NVD. For example, as depicted in fig. 2, NVD 210 performs the functionality of VNIC 276 associated with computing instance 268 hosted by host machine 202 connected to NVD 210. As another example, NVD 212 executes VNIC 280 associated with bare metal computing instance 272 hosted by host machine 206, and executes VNIC 284 associated with computing instance 274 hosted by host machine 208. A host machine may host computing instances belonging to different VCNs (belonging to different customers), and the NVD connected to the host machine may execute the VNICs corresponding to those computing instances (i.e., perform the VNIC-related functionality).
The NVD also executes a VCN virtual router corresponding to the VCN of the computing instance. For example, in the embodiment depicted in fig. 2, NVD 210 executes VCN VR 277 corresponding to the VCN to which computing instance 268 belongs. NVD 212 executes one or more VCN VRs 283 corresponding to one or more VCNs to which computing instances hosted by host machines 206 and 208 belong. In some embodiments, the VCN VR corresponding to the VCN is executed by all NVDs connected to a host machine hosting at least one computing instance belonging to the VCN. If a host machine hosts computing instances belonging to different VCNs, then an NVD connected to the host machine may execute VCN VR corresponding to those different VCNs.
In addition to the VNICs and VCN VRs, the NVD may execute various software (e.g., daemons) and include one or more hardware components that facilitate various network virtualization functions performed by the NVD. For simplicity, these various components are grouped together as a "packet processing component" shown in fig. 2. For example, NVD 210 includes a packet processing component 286 and NVD 212 includes a packet processing component 288. For example, a packet processing component for an NVD may include a packet processor configured to interact with ports and hardware interfaces of the NVD to monitor all packets received by and transmitted using the NVD and store network information. The network information may include, for example, network flow information and per-flow information (e.g., per-flow statistics) identifying different network flows handled by the NVD. In some embodiments, network flow information may be stored on a per VNIC basis. The packet processor may perform packet-by-packet manipulation and implement stateful NAT and L4 Firewalls (FWs). As another example, the packet processing component may include a replication agent configured to replicate information stored by the NVD to one or more different replication target repositories. As yet another example, the packet processing component may include a logging agent configured to perform a logging function of the NVD. The packet processing component may also include software for monitoring the performance and health of the NVD and possibly also the status and health of other components connected to the NVD.
FIG. 1 illustrates components of an example virtual or overlay network, including a VCN, a subnet within the VCN, a computing instance deployed on the subnet, a VNIC associated with the computing instance, a VR for the VCN, and a set of gateways configured for the VCN. The overlay component depicted in fig. 1 may be executed or hosted by one or more of the physical components depicted in fig. 2. For example, computing instances in a VCN may be executed or hosted by one or more host machines depicted in fig. 2. For a computing instance hosted by a host machine, a VNIC associated with the computing instance is typically executed by an NVD connected to the host machine (i.e., VNIC functionality is provided by an NVD connected to the host machine). The VCN VR functions for a VCN are performed by all NVDs connected to a host machine that hosts or executes computing instances that are part of the VCN. The gateway associated with the VCN may be implemented by one or more different types of NVDs. For example, some gateways may be implemented by a smart nic, while other gateways may be implemented by one or more host machines or other implementations of NVDs.
As described above, the computing instances in the client VCN may communicate with various different endpoints, where the endpoints may be within the same subnet as the source computing instance, in different subnets but within the same VCN as the source computing instance, or with endpoints external to the VCN of the source computing instance. These communications are facilitated using a VNIC associated with the computing instance, a VCN VR, and a gateway associated with the VCN.
For communication between two computing instances on the same subnet in a VCN, the VNICs associated with the source and destination computing instances are used to facilitate the communication. The source and destination computing instances may be hosted by the same host machine or by different host machines. Packets originating from a source computing instance may be forwarded from a host machine hosting the source computing instance to an NVD connected to the host machine. On the NVD, packets are processed using a packet processing pipeline, which may include execution of VNICs associated with the source computing instance. Because the destination endpoints for the packets are located within the same subnet, execution of the VNICs associated with the source computing instance causes the packets to be forwarded to the NVD executing the VNICs associated with the destination computing instance, which then processes the packets and forwards them to the destination computing instance. VNICs associated with source and destination computing instances may execute on the same NVD (e.g., when the source and destination computing instances are hosted by the same host machine) or on different NVDs (e.g., when the source and destination computing instances are hosted by different host machines connected to the different NVDs). The VNIC may use the routing/forwarding table stored by the NVD to determine the next hop for the packet.
For packets to be transferred from a computing instance in a subnet to an endpoint in a different subnet in the same VCN, packets originating from a source computing instance are transferred from a host machine hosting the source computing instance to an NVD connected to the host machine. On the NVD, packets are processed using a packet processing pipeline, which may include execution of one or more VNICs and VR associated with the VCN. For example, as part of a packet processing pipeline, the NVD executes or invokes functionality (also referred to as executing VNICs) of a VNIC associated with the source computing instance. The functionality performed by the VNIC may include looking at the VLAN tag on the packet. The VCN VR functionality is next invoked and executed by the NVD because the destination of the packet is outside the subnet. The VCN VR then routes the packet to an NVD that executes the VNIC associated with the destination computing instance. The VNIC associated with the destination computing instance then processes the packet and forwards the packet to the destination computing instance. VNICs associated with source and destination computing instances may execute on the same NVD (e.g., when the source and destination computing instances are hosted by the same host machine) or on different NVDs (e.g., when the source and destination computing instances are hosted by different host machines connected to the different NVDs).
If the destination for the packet is outside of the VCN of the source computing instance, the packet originating from the source computing instance is transmitted from the host machine hosting the source computing instance to an NVD connected to the host machine. The NVD executes the VNIC associated with the source computing instance. Since the destination endpoint of the packet is outside the VCN, the packet is then processed by the VCN VR for that VCN. The NVD invokes VCN VR functionality, which causes the packet to be forwarded to the NVD executing the appropriate gateway associated with the VCN. For example, if the destination is an endpoint within a customer's in-premise network, the packet may be forwarded by the VCN VR to the NVD executing a DRG gateway configured for the VCN. The VCN VR may be executed on the same NVD as the NVD executing the VNIC associated with the source computing instance, or by a different NVD. The gateway may be implemented by an NVD, which may be a smart NIC, a host machine, or other NVD implementation. The packet is then processed by the gateway and forwarded to the next hop, which facilitates delivery of the packet to its intended destination endpoint. For example, in the embodiment depicted in fig. 2, packets originating from computing instance 268 may be transmitted from host machine 202 to NVD 210 over link 220 (using NIC 232). On NVD 210, VNIC 276 is invoked because it is the VNIC associated with source computing instance 268. VNIC 276 is configured to examine the information encapsulated in the packet and determine the next hop for forwarding the packet in order to facilitate delivery of the packet to its intended destination endpoint, and then forward the packet to the determined next hop.
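The three forwarding cases described above can be summarized in a small sketch (illustrative CIDRs and return values; not the actual forwarding code).

    import ipaddress

    def classify_destination(dst_ip, src_subnet_cidr, vcn_cidr):
        """Reproduce, in schematic form, the three forwarding cases described
        above: same subnet, different subnet in the same VCN, or outside the VCN."""
        addr = ipaddress.ip_address(dst_ip)
        if addr in ipaddress.ip_network(src_subnet_cidr):
            return "forward to the NVD executing the destination VNIC"
        if addr in ipaddress.ip_network(vcn_cidr):
            return "invoke the VCN VR, then forward to the destination VNIC's NVD"
        return "invoke the VCN VR, then forward to the NVD executing the appropriate gateway"

    print(classify_destination("10.0.1.20", "10.0.1.0/24", "10.0.0.0/16"))
    print(classify_destination("10.0.7.3",  "10.0.1.0/24", "10.0.0.0/16"))
    print(classify_destination("8.8.8.8",   "10.0.1.0/24", "10.0.0.0/16"))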
Computing instances deployed on a VCN may communicate with a variety of different endpoints. These endpoints may include endpoints hosted by CSPI 200 and endpoints external to CSPI 200. Endpoints hosted by CSPI 200 may include instances in the same VCN or other VCNs, which may be customer VCNs or VCNs that do not belong to customers. Communication between endpoints hosted by CSPI 200 may be performed through physical network 218. The computing instance may also communicate with endpoints that are not hosted by CSPI 200 or external to CSPI 200. Examples of such endpoints include endpoints within a customer's in-house network or data centers, or public endpoints accessible through a public network such as the internet. Communication with endpoints external to CSPI 200 may be performed over a public network (e.g., the internet) (not shown in fig. 2) or a private network (not shown in fig. 2) using various communication protocols.
The architecture of CSPI 200 depicted in fig. 2 is merely an example and is not intended to be limiting. In alternative embodiments, variations, alternatives, and modifications are possible. For example, in some embodiments, CSPI 200 may have more or fewer systems or components than those shown in fig. 2, may combine two or more systems, or may have different system configurations or arrangements. The systems, subsystems, and other components depicted in fig. 2 may be implemented in software (e.g., code, instructions, programs) executed by one or more processing units (e.g., processors, cores) of the respective system, using hardware, or a combination thereof. The software may be stored on a non-transitory storage medium (e.g., on a memory device).
FIG. 4 depicts a connection between a host machine and an NVD for providing I/O virtualization to support multi-tenancy in accordance with certain embodiments. As depicted in fig. 4, host machine 402 executes hypervisor 404 that provides a virtualized environment. The host machine 402 executes two virtual machine instances, VM1 406 belonging to guest/tenant #1 and VM2 408 belonging to guest/tenant #2. Host machine 402 includes a physical NIC 410 connected to NVD 412 via link 414. Each computing instance is attached to a VNIC executed by NVD 412. In the embodiment in FIG. 4, VM1 406 is attached to VNIC-VM1 420 and VM2 408 is attached to VNIC-VM2 422.
As shown in fig. 4, NIC 410 includes two logical NICs, logical NIC A 416 and logical NIC B 418. Each virtual machine is attached to, and configured to work with, its own logical NIC. For example, VM1 406 is attached to logical NIC A 416 and VM2 408 is attached to logical NIC B 418. Although the host machine 402 includes only one physical NIC 410 shared by multiple tenants, because of the logical NICs each tenant's virtual machine believes that it has its own host machine and network card.
In some embodiments, each logical NIC is assigned its own VLAN ID. Thus, a specific VLAN ID is assigned to logical NIC A 416 for tenant #1, and a separate VLAN ID is assigned to logical NIC B 418 for tenant #2. When a packet is communicated from VM1 406, a tag assigned to tenant #1 is attached to the packet by the hypervisor, and the packet is then communicated from host machine 402 to NVD 412 over link 414. In a similar manner, when a packet is communicated from VM2 408, a tag assigned to tenant #2 is attached to the packet by the hypervisor, and the packet is then communicated from host machine 402 to NVD 412 over link 414. Thus, a packet 424 communicated from host machine 402 to NVD 412 has an associated tag 426 identifying the particular tenant and associated VM. On the NVD, for a packet 424 received from host machine 402, the tag 426 associated with the packet is used to determine whether the packet is to be processed by VNIC-VM1 420 or by VNIC-VM2 422. The packet is then processed by the corresponding VNIC. The configuration depicted in fig. 4 enables each tenant's computing instance to believe that it owns its own host machine and NIC. The setup depicted in FIG. 4 provides I/O virtualization to support multi-tenancy.
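A minimal sketch of the tag-based demultiplexing described above is given below; the VLAN ID values and table are hypothetical.

    # Hypothetical per-NVD mapping from the VLAN tag attached by the hypervisor
    # to the VNIC that should process packets for that tenant's VM.
    TAG_TO_VNIC = {
        100: "VNIC-VM1",   # tag assigned to tenant #1
        200: "VNIC-VM2",   # tag assigned to tenant #2
    }

    def demultiplex(packet):
        """On the NVD, use the tag carried with the packet to select the VNIC."""
        vnic = TAG_TO_VNIC.get(packet["vlan_tag"])
        if vnic is None:
            return None  # unknown tenant tag: drop
        return f"process packet via {vnic}"

    print(demultiplex({"vlan_tag": 100, "payload": b"..."}))   # VNIC-VM1
    print(demultiplex({"vlan_tag": 200, "payload": b"..."}))   # VNIC-VM2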
Fig. 5 depicts a simplified block diagram of a physical network 500 according to some embodiments. The embodiment depicted in fig. 5 is structured as a Clos network. A Clos network is a particular type of network topology designed to provide connection redundancy while maintaining high bisection bandwidth and maximum resource utilization. A Clos network is a type of non-blocking, multistage or multi-tiered switching network, where the number of stages or tiers may be two, three, four, five, etc. The embodiment depicted in fig. 5 is a 3-tiered network comprising tiers 1, 2, and 3. The TOR switches 504 represent Tier-0 switches in the Clos network. One or more NVDs are connected to the TOR switches. Tier-0 switches are also referred to as edge devices of the physical network. The Tier-0 switches are connected to Tier-1 switches, which are also referred to as leaf switches. In the embodiment depicted in fig. 5, a set of "n" Tier-0 TOR switches is connected to a set of "n" Tier-1 switches, and together they form a pod. Each Tier-0 switch in a pod is interconnected to all the Tier-1 switches in the pod, but there is no connectivity of switches between pods. In certain implementations, two pods are referred to as a block. Each block is served by or connected to a set of "n" Tier-2 switches (sometimes referred to as spine switches). There may be several blocks in the physical network topology. The Tier-2 switches are in turn connected to "n" Tier-3 switches (sometimes referred to as super-spine switches). Communication of packets over the physical network 500 is typically performed using one or more layer 3 communication protocols. Typically, all the tiers of the physical network, except for the TOR tier, are n-way redundant, thus allowing for high availability. Policies may be specified for pods and blocks to control the visibility of switches to each other in the physical network, enabling scaling of the physical network.
A characteristic of a Clos network is that the maximum hop count to reach from one Tier-0 switch to another Tier-0 switch (or from an NVD connected to a Tier-0 switch to another NVD connected to a Tier-0 switch) is fixed. For example, in a 3-tiered Clos network, a maximum of seven hops are needed for a packet to reach from one NVD to another NVD, where the source and target NVDs are connected to the leaf tier of the Clos network. Likewise, in a 4-tiered Clos network, a maximum of nine hops are needed for a packet to reach from one NVD to another NVD, where the source and target NVDs are connected to the leaf tier of the Clos network. Thus, the Clos network architecture maintains consistent latency throughout the network, which is important for communication within and between data centers. A Clos topology scales horizontally and is cost effective. The bandwidth/throughput capacity of the network can easily be increased by adding more switches at the various tiers (e.g., more leaf and spine switches) and by increasing the number of links between switches at adjacent tiers.
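Purely as a reading aid, the two figures quoted above (seven hops for a 3-tiered network, nine hops for a 4-tiered network, with source and target NVDs hanging off the leaf tier) follow the pattern 2n+1; the following Python one-liner simply restates that pattern and is not a derivation for any particular topology.

    def max_clos_hops(tiers: int) -> int:
        # Pattern implied by the two examples above (3 tiers -> 7 hops, 4 tiers -> 9 hops).
        return 2 * tiers + 1

    print(max_clos_hops(3))  # 7
    print(max_clos_hops(4))  # 9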
In some embodiments, each resource within the CSPI is assigned a unique identifier called a Cloud Identifier (CID). This identifier is included as part of the information of the resource and may be used to manage the resource, e.g., via a console or through an API. An example syntax for CID is:
ocid1.<RESOURCE TYPE>.<REALM>.[REGION][.FUTURE USE].<UNIQUE ID>
wherein:
ocid1: a text string indicating a version of the CID;
resource type: types of resources (e.g., instance, volume, VCN, subnet, user, group, etc.);
realm: the domain in which the resource is located. Exemplary values are "c1" for the business domain, "c2" for the government cloud domain, or "c3" for the federal government cloud domain, etc. Each domain may have its own domain name;
region: the region where the resource is located. If the region is not suitable for the resource, then this portion may be empty;
future use: reserved for future use.
unique ID: a unique portion of the ID. The format may vary depending on the type of resource or service.
B-Example layer 2 VLAN architecture
This section describes techniques for providing layer 2 networking functionality in virtualized cloud environments. Layer 2 functionality is provided in addition to and along with layer 3 networking functionality provided by the virtualized cloud environment. In certain embodiments, virtual layer 2 and layer 3 functionality is provided by Oracle Cloud Infrastructure (OCI) offered by Oracle corporation.
After introducing layer 2 network functions, this section describes layer 2 implementations of VLANs. Thereafter, a description of the layer 2 VLAN service is provided, including ACLs.
Introduction to the invention
The number of enterprise customers transitioning their on-premise applications to a cloud environment provided by a Cloud Service Provider (CSP) continues to increase rapidly. However, many of these customers soon realize that the road to transitioning to a cloud environment can be very rough, requiring the customers to reconfigure and redesign their existing applications to make them operational in the cloud environment. This is because applications written for an on-premise environment often depend on capabilities of the physical network for monitoring, availability, and scaling. Thus, these on-premise applications need to be rearchitected and redesigned before they can operate in a cloud environment.
There are several reasons why an on-premise application cannot be easily transitioned to a cloud environment. One of the main reasons is that current cloud virtual networks operate at layer 3 of the OSI model, e.g., at the IP layer, and do not provide the layer 2 capabilities required by the applications. Layer 3 based routing or forwarding includes determining where a packet is to be sent (e.g., to which customer instance) based on information contained in the layer 3 header of the packet, e.g., based on the destination IP address contained in the layer 3 header of the packet. To facilitate this, the locations of IP addresses in the virtualized cloud network are determined by a centralized control and orchestration system or controller. These may include, for example, IP addresses associated with customer entities or resources in the virtualized cloud environment.
Many customers run applications in their on-premise environments that have stringent requirements on layer 2 networking functionality, and current cloud offerings and IaaS service providers do not address these requirements. For example, traffic in current cloud offerings is routed using layer 3 protocols and layer 3 headers, and the layer 2 features required by the applications are not supported. These layer 2 features may include, for example, Address Resolution Protocol (ARP) processing, Media Access Control (MAC) address learning and layer 2 broadcast capabilities, layer 2 (MAC-based) forwarding, layer 2 networking fabric, and others. By providing virtualized layer 2 networking functionality in a virtualized cloud network, as described in this disclosure, customers can now seamlessly migrate their legacy applications to the cloud environment without any substantial reconfiguration or redesign. For example, the virtualized layer 2 networking capabilities described herein enable such applications (e.g., VMware vSphere, vCenter, vSAN, and NSX-T components) to communicate at layer 2 as they do in an on-premise environment. These applications can run the same versions and configurations in the public cloud, enabling customers to use their legacy on-premise applications, including the existing knowledge, tools, and procedures associated with those applications. Customers can also access native cloud services from their applications (e.g., from a VMware Software-Defined Data Center (SDDC)).
As another example, there are several legacy on-premise applications (e.g., enterprise clustering software applications, network virtual appliances) that require layer 2 broadcast support for failover. Example applications include Fortinet FortiGate, IBM QRadar, Palo Alto firewalls, Cisco ASA, Juniper SRX, and Oracle RAC (Real Application Clusters). By providing a virtualized layer 2 network in the virtualized public cloud as described in this disclosure, these applications can now run unchanged in the virtualized public cloud environment. As described herein, virtualized layer 2 networking functionality is provided that is commensurate with an on-premise deployment. The virtualized layer 2 networking functionality described in this disclosure supports traditional layer 2 networks. This includes support for customer-defined VLANs and for unicast, broadcast, and multicast layer 2 traffic functions. Layer 2 based packet routing and forwarding includes using layer 2 protocols and using the information contained in the layer 2 header of a packet, e.g., routing or forwarding a packet based on the destination MAC address contained in the layer 2 header. Protocols used by enterprise applications (e.g., clustering software applications), such as ARP, Gratuitous Address Resolution Protocol (GARP), and Reverse Address Resolution Protocol (RARP), can now also work in the cloud environment.
There are several reasons why traditional virtualized cloud infrastructures support virtualized layer 3 networking and not layer 2 networking. Layer 2 networks generally do not scale as well as layer 3 networks. Layer 2 network control protocols do not have the level of sophistication needed for scaling. For example, a layer 3 network does not have to worry about the packet looping problems that a layer 2 network has to address. IP packets (i.e., layer 3 packets) have the notion of a time-to-live (TTL), whereas layer 2 packets do not. The IP addresses contained in layer 3 packets have a notion of topology, such as subnets, CIDR ranges, etc., whereas layer 2 addresses (e.g., MAC addresses) do not. Layer 3 IP networks have built-in tools that facilitate troubleshooting, such as ping, traceroute for locating path information, and so on. Such tools are not available for layer 2. Layer 3 networks support multipathing, which is not available in layer 2 networks. Lacking sophisticated control protocols dedicated to exchanging information between entities in a network (e.g., Border Gateway Protocol (BGP) and Open Shortest Path First (OSPF)), layer 2 networks must rely on broadcast and multicast to learn about the network, which can have an adverse impact on network performance. As the network changes, the layer 2 learning process has to be repeated, which is not required at layer 3. For these and other reasons, cloud IaaS service providers have preferred to provide infrastructure that operates at layer 3 rather than at layer 2.
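As one schematic illustration of the first difference noted above (purely hypothetical code, not part of any embodiment), an IP forwarder can use the TTL field to make a looping packet die out, whereas an Ethernet frame header carries no comparable hop limit.

    def l3_forward(packet):
        """IP forwarding: decrement TTL and drop the packet when it reaches zero,
        so a routing loop cannot circulate a packet forever."""
        packet["ttl"] -= 1
        if packet["ttl"] <= 0:
            return None  # dropped; a looping packet dies out
        return packet

    frame = {"dst_mac": "aa:bb:cc:dd:ee:ff", "payload": b"..."}  # no TTL-like field
    packet = {"dst_ip": "10.0.2.9", "ttl": 2}

    print(l3_forward(l3_forward(packet)))  # None after two hops of a loop
    print("ttl" in frame)                  # False: layer 2 has no hop limit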
However, despite these several drawbacks, many on-premise applications still require layer 2 functionality. For example, assume a virtualized cloud configuration in which a customer (customer 1) has two instances in a virtual network "V": instance A with IP1 and instance B with IP2, where the instances may be compute instances (e.g., bare metal, virtual machine, or container) or service instances (such as a load balancer, NFS mount points, or other service instances). Virtual network V is a unique address space that is isolated from other virtual networks and from the underlying physical network. For example, such isolation may be accomplished using various techniques including packet encapsulation or NAT. For this reason, the IP address of an instance in the customer's virtual network is different from the address in the physical network hosting it. A centralized SDN (Software Defined Network) control plane is provided that knows the physical IP and the virtual interface for every virtual IP address. When a packet is sent from instance A to the destination IP2 in virtual network V, the virtual network SDN stack needs to know where IP2 is located. It must know this in advance so that it can send the packet to the physical IP address in the physical network that hosts virtual IP address IP2 of V. The location of a virtual IP address may be modified in the cloud, thereby changing the relationship between physical IPs and virtual IP addresses. Whenever a virtual IP address is to be moved (e.g., an IP address associated with a virtual machine is moved to another virtual machine, or a virtual machine is migrated to a new physical host), an API call must be made to the SDN control plane to let the controller know that the IP is moving, so that it can update all participants in the SDN stack, including the packet processors (data plane). However, some classes of applications do not make such API calls. Examples include various on-premise applications and applications provided by various virtualization software vendors (such as VMware, etc.). The value of providing a virtual layer 2 network in a virtualized cloud environment is that it enables support for applications that are not programmed to make such API calls, or that rely on other layer 2 networking features, such as support for non-IP layer 3 protocols and MAC learning.
The virtual layer 2 network creates a broadcast domain in which learning is performed by the members of the broadcast domain. In the virtual layer 2 domain, there can be any IP on any MAC on any host within this layer 2 domain, the system will learn using standard layer 2 networking protocols, and the system will virtualize these networking primitives without requiring explicit notification to a central controller of the MAC and IP locations in the virtual layer 2 network. This enables applications that require low-latency failover, applications that require broadcast or multicast protocols to communicate with multiple nodes, and legacy applications that do not know how to make API calls to an SDN control plane or API endpoint to determine the locations of IP addresses and MAC addresses, to run. Thus, there is a need to provide layer 2 networking capabilities in virtualized cloud environments in order to be able to support functionality that is not available at the IP layer 3 level.
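The kind of learning referred to here can be pictured with a classic source-MAC learning sketch (illustrative Python, hypothetical port names): frames teach the broadcast domain where each MAC lives, and frames to unknown destinations are flooded, all without a central controller being notified.

    class L2LearningSwitch:
        """Schematic MAC learning for a broadcast domain: learn the source MAC's
        location from each frame, forward to a known destination, flood otherwise."""

        def __init__(self, ports):
            self.ports = ports
            self.mac_table = {}            # MAC address -> port last seen on

        def handle_frame(self, in_port, src_mac, dst_mac):
            self.mac_table[src_mac] = in_port          # learn without any controller
            if dst_mac in self.mac_table:
                return [self.mac_table[dst_mac]]       # unicast to the learned port
            return [p for p in self.ports if p != in_port]  # flood unknown destination

    switch = L2LearningSwitch(ports=["p1", "p2", "p3"])
    print(switch.handle_frame("p1", "aa:aa:aa:aa:aa:aa", "bb:bb:bb:bb:bb:bb"))  # flood
    print(switch.handle_frame("p2", "bb:bb:bb:bb:bb:bb", "aa:aa:aa:aa:aa:aa"))  # ['p1']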
Another technical advantage of providing virtual layer 2 in a virtualized cloud environment is that it enables support of a variety of different layer 3 protocols (such as IPv4 and IPv6), including non-IP protocols. For example, various non-IP protocols may be supported, such as IPX, AppleTalk, etc. Because existing cloud IaaS providers do not provide layer 2 functionality in their virtualized cloud networks, they cannot support these non-IP protocols. By providing the layer 2 networking functionality described in this disclosure, support may be provided for protocols at layer 3 as well as for applications that require and rely on the availability of layer 2 level functionality.
Using the techniques described in this disclosure, both layer 3 and layer 2 functionality are provided in the virtualized cloud infrastructure. As previously mentioned, layer 3 based networking provides certain efficiencies, particularly suited to scaling, which layer 2 networking does not provide. Providing layer 2 functionality in addition to layer 3 functionality allows such layer 3 efficiencies (e.g., a more scalable solution) to be leveraged while still providing layer 2 functionality in a scalable manner. For example, virtualized layer 3 avoids using broadcast for learning purposes. By providing layer 3 for efficiency while providing virtualized layer 2 to enable applications that require it and cannot run without layer 2 functionality, and by supporting non-IP protocols and the like, customers are given full flexibility in the virtualized cloud environment.
Customers themselves may have hybrid environments in which a layer 2 environment coexists with a layer 3 environment, and the virtualized cloud environment can now support both. A customer may have a layer 3 network, such as a subnet, and/or a layer 2 network, such as a VLAN, and the two environments may communicate with each other in the virtualized cloud environment.
Virtualized cloud environments also need to support multi-tenancy. Multi-tenancy makes provisioning both layer 3 and layer 2 functionality in the same virtualized cloud environment technically difficult and complex. For example, a layer 2 broadcast domain must be managed across many different clients in the cloud provider's infrastructure. Embodiments described in the present disclosure overcome these technical problems.
For virtualization providers (e.g., VMware), a virtualized layer 2 network that emulates a physical layer 2 network allows workloads to run unchanged. Applications provided by such virtualization providers may then run on the virtualized layer 2 network provided by the cloud infrastructure. For example, such an application may include a collection of instances that need to run on a layer 2 network. When customers want to lift and shift such applications from their on-premise environments to the virtualized cloud environment, they cannot simply take the application and run it in the cloud, because those applications rely on the underlying layer 2 network (e.g., layer 2 network features are used to perform migration of virtual machines, or to move the locations where MAC and IP addresses reside), which current virtualized cloud providers do not provide. For these reasons, such applications cannot run natively in virtualized cloud environments. Using the techniques described herein, a cloud provider provides a virtualized layer 2 network in addition to a virtualized layer 3 network. Such application stacks can now run unchanged in the cloud environment, and nested virtualization can run in the cloud environment. Customers can now run and manage their own layer 2 applications in the cloud. The application provider does not have to make any changes to its software to facilitate this. Such legacy applications or workloads (e.g., legacy load balancers, legacy applications, KVM, OpenStack, clustering software) can now run unchanged in the virtualized cloud environment.
By providing the virtualized layer 2 functionality described herein, the virtualized cloud environment can now support a variety of layer 3 protocols, including non-IP protocols. Ethernet, for example, may support a variety of different EtherTypes (a field in the layer 2 header that indicates the type of layer 3 packet being sent, i.e., which protocol is expected at layer 3), including various non-IP protocols. EtherType is a two-octet field in an Ethernet frame. It is used to indicate which protocol is encapsulated in the payload of the frame and is used at the receiving end by the data link layer to determine how to process the payload. EtherType also serves as the basis for 802.1Q VLAN tagging, encapsulating packets from VLANs for traffic multiplexing with other VLANs over an Ethernet trunk. Examples of EtherTypes include IPv4, IPv6, Address Resolution Protocol (ARP), AppleTalk, IPX, and the like. A cloud network supporting layer 2 protocols may support any protocol at layer 3. In a similar manner, when the cloud infrastructure provides support for layer 3 protocols, it may support various protocols at layer 4, such as TCP, UDP, ICMP, etc. When virtualization is provided at layer 3, the network may be unaffected by the layer 4 protocol. Similarly, when virtualization is provided at layer 2, the network may be unaffected by the layer 3 protocol. This technique can be extended to support any layer 2 network type, including FDDI, InfiniBand, etc.
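The role of the two-octet EtherType field can be illustrated with a short sketch. The Python snippet below is a minimal illustration, not part of the disclosure; the ETHERTYPES constant and the example frame bytes are assumptions chosen for demonstration.

```python
import struct

# A few well-known EtherType values (IEEE-assigned), listed here for illustration.
ETHERTYPES = {
    0x0800: "IPv4",
    0x0806: "ARP",
    0x8100: "802.1Q VLAN tag",
    0x86DD: "IPv6",
    0x8137: "IPX",
    0x809B: "AppleTalk",
}

def ethertype_of(frame: bytes) -> int:
    """Return the EtherType of an Ethernet frame.

    Bytes 0-5 are the destination MAC, bytes 6-11 the source MAC, and
    bytes 12-13 the two-octet EtherType (big-endian).
    """
    (ethertype,) = struct.unpack_from("!H", frame, 12)
    if ethertype == 0x8100:
        # 802.1Q tagged frame: the 4-byte VLAN tag follows, and the EtherType
        # of the encapsulated payload sits after it.
        (ethertype,) = struct.unpack_from("!H", frame, 16)
    return ethertype

# Example: a frame carrying an ARP payload.
frame = bytes.fromhex("ffffffffffff" "020000000001" "0806") + bytes(28)
etype = ethertype_of(frame)
print(hex(etype), ETHERTYPES.get(etype, "unknown"))   # -> 0x806 ARP
```

Because the layer 2 header only names the layer 3 protocol rather than interpreting it, any EtherType value, IP or non-IP, can be carried once layer 2 itself is virtualized.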
Thus, many applications written for physical networks (especially applications working with clusters of compute nodes sharing a broadcast domain) use layer 2 features that are not supported by L3 virtual networks. The following six examples highlight the complications that may result from not providing layer 2 networking capability:
(1) MACs and IPs assigned without a prior API call. Network appliances and hypervisors (such as VMware) are not built for cloud virtual networks. They assume they can use any MAC as long as it is unique, and either obtain dynamic addresses from DHCP servers or use any IP assigned to the cluster. There is often no mechanism by which they can be configured to inform the control plane about the assignment of these layer 2 and layer 3 addresses. If the locations of the MAC and IP are not known, the layer 3 virtual network does not know where to send the traffic.
(2) Low-latency reassignment of MACs and IPs for high availability and live migration. Many in-house deployment applications use ARP to reassign IPs and MACs to achieve high availability: when an instance in a cluster or HA pair stops responding, the newly active instance sends a Gratuitous ARP (GARP) to reassign the service IP to its MAC, or sends a Reverse ARP (RARP) to reassign the service MAC to its interface (see the GARP sketch after this list). This is also important when live-migrating instances on a hypervisor: the new host must send a RARP after the guest migration in order to direct guest traffic to the new host. Reassignment is accomplished not only without API calls, but also requires very low latency (sub-millisecond); this cannot be done by HTTPS calls to REST endpoints.
(3) Interface multiplexing by MAC address. When a hypervisor hosts multiple virtual machines on a single host, all of these virtual machines are on the same network, and the guest interfaces are distinguished by their MACs. This requires multiple MACs to be supported on the same virtual interface.
(4) VLAN support. A single physical virtual machine host may need to be located on multiple broadcast domains, as indicated using VLAN tags. For example, VMware ESX uses VLANs for traffic splitting (e.g., guest virtual machine traffic may be carried on one VLAN, storage traffic on another VLAN, and host traffic on yet another VLAN).
(5) Use of broadcast and multicast traffic. ARP requires L2 broadcast, and there are cases where in-house deployment applications use broadcast and multicast traffic for both trunking and HA applications.
(6) Support for non-IP traffic. Since an L3 network requires IPv4 or IPv6 headers for communication, it does not work with any L3 protocol other than IP. L2 virtualization means that the network within a VLAN can be independent of the L3 protocol: the L3 header may be IPv4, IPv6, IPX, or anything else, or may even be completely absent.
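As a concrete illustration of example (2), the following Python sketch builds and broadcasts a gratuitous ARP frame with a raw socket. This is a minimal, hedged example rather than any vendor's implementation; it assumes a Linux host with CAP_NET_RAW, and the interface name, MAC, and IP shown are hypothetical.

```python
import socket
import struct

def send_gratuitous_arp(ifname: str, mac: bytes, ip: str) -> None:
    """Broadcast a gratuitous ARP announcing that `ip` now lives at `mac`.

    This is the API-free, sub-millisecond reassignment described in example
    (2): the newly active instance claims the service IP by flooding the
    broadcast domain, and peers update their ARP caches on receipt.
    """
    broadcast = b"\xff" * 6
    ethertype_arp = 0x0806
    eth_header = broadcast + mac + struct.pack("!H", ethertype_arp)

    # ARP payload: htype=1 (Ethernet), ptype=0x0800 (IPv4), hlen=6, plen=4,
    # op=2 (reply). Sender and target protocol addresses are both the service
    # IP, which is what makes the ARP "gratuitous".
    arp_payload = struct.pack(
        "!HHBBH6s4s6s4s",
        1, 0x0800, 6, 4, 2,
        mac, socket.inet_aton(ip),
        broadcast, socket.inet_aton(ip),
    )

    with socket.socket(socket.AF_PACKET, socket.SOCK_RAW) as s:
        s.bind((ifname, 0))
        s.send(eth_header + arp_payload)

# Hypothetical usage after a failover event:
# send_gratuitous_arp("eth0", bytes.fromhex("020000000002"), "10.0.0.10")
```

Note that nothing in this flow consults a control plane; the reassignment works only if the surrounding network actually behaves like an L2 broadcast domain, which is precisely what the virtual layer 2 network described below provides.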
Layer 2 VLAN implementation
As disclosed herein, a layer 2 (L2) network may be created within a cloud network. This virtual L2 network comprises one or several layer 2 virtual networks, such as virtualized L2 VLANs, referred to herein as VLANs. Each VLAN may include multiple compute instances, each of which may be associated with at least one L2 virtual network interface (e.g., an L2 VNIC) and an L2 virtual switch. In some embodiments, each pair of an L2 virtual network interface and an L2 virtual switch is hosted on an NVD. An NVD may host multiple such pairs, with each pair associated with a different compute instance. The collection of L2 virtual switches represents a single emulated L2 switch of the VLAN. The L2 virtual network interfaces represent the L2 ports of that single emulated switch. A VLAN may connect to other VLANs, layer 3 (L3) networks, in-house deployment networks, and/or other networks via a VLAN Switching and Routing Service (VSRS), also referred to herein as a Real Virtual Router (RVR) or L2 VSRS. Examples of this architecture are described below.
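The relationships just described, each compute instance paired with an L2 VNIC and a local L2 virtual switch hosted on an NVD, with the set of local switches emulating one switch for the VLAN, can be sketched as a simple data model. The Python below is an illustrative sketch only; every class and field name is an assumption for demonstration, not a name used by the disclosure.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class L2Vnic:
    interface_id: str      # unique identifier of this L2 VNIC (emulated port)
    mac: str               # overlay MAC of the attached compute instance
    nvd_ip: str            # physical IP of the NVD hosting this VNIC

@dataclass
class L2VirtualSwitch:
    vnic: L2Vnic           # each local switch is paired with exactly one L2 VNIC
    forwarding_table: Dict[str, str] = field(default_factory=dict)  # MAC -> interface_id

@dataclass
class Nvd:
    physical_ip: str
    # An NVD may host many (VNIC, switch) pairs, one per compute instance it serves.
    pairs: List[L2VirtualSwitch] = field(default_factory=list)

@dataclass
class Vlan:
    vlan_id: int
    cidr: str
    # The union of all local switches emulates a single L2 switch for the VLAN;
    # the VSRS can be modelled as just another (VNIC, switch) pair in this list.
    members: List[L2VirtualSwitch] = field(default_factory=list)

    def emulated_ports(self) -> List[str]:
        """The emulated single switch exposes one port per L2 VNIC."""
        return [m.vnic.interface_id for m in self.members]
```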
Referring now to FIG. 6, a schematic diagram of one embodiment of a computing network is shown. VCN 602 resides in CSPI 601. The VCN 602 includes a plurality of gateways that connect the VCN 602 to other networks. These gateways include DRG 604, which can connect the VCN 602 to, for example, an on-premise network (such as an on-premise data center 606). The gateways may also include gateway 608, which may include, for example, an LPG for connecting VCN 602 to another VCN, and/or an IGW and/or NAT gateway for connecting VCN 602 to the internet. The gateways of VCN 602 may also include a service gateway 610 that may connect VCN 602 with a service network 612. The service network 612 may include one or several databases and/or stores, including, for example, an autonomous database 614 and/or an object store 616. The service network may comprise a conceptual network comprising an aggregation of IP ranges, which may be, for example, public IP ranges. In some embodiments, these IP ranges may cover some or all of the public services provided by the CSPI 601 provider. These services may be accessed, for example, through an internet gateway or a NAT gateway. In some embodiments, the service network provides a way for services in the service network to be accessed locally through a dedicated gateway (the service gateway) provided for that purpose. In some embodiments, the backends of these services may be implemented in, for example, their own private networks. In some embodiments, the service network 612 may include further additional databases.
The VCN 602 may include multiple virtual networks. Each of these networks may include one or several compute instances that may communicate within their respective networks, between networks, or outside of VCN 602. One of the virtual networks of VCN 602 is L3 subnet 620. L3 subnet 620 is a unit of configuration, or a subdivision, created within VCN 602. Subnet 620 may comprise a virtual layer 3 network in the virtualized cloud environment of VCN 602, which VCN 602 is hosted on the underlying physical network of CSPI 601. Although FIG. 6 depicts a single subnet 620, the VCN 602 may have one or more subnets. Each subnet within the VCN 602 can be associated with a contiguous range of overlay IP addresses (e.g., 10.0.0.0/24 and 10.0.1.0/24) that do not overlap with other subnets in the VCN and that represent a subset of the address space within the VCN's address space. In some embodiments, this IP address space may be isolated from the address space associated with CSPI 601.
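The constraint that the overlay CIDRs of a VCN's subnets must not overlap can be checked mechanically. The snippet below is a minimal sketch using Python's standard ipaddress module; the function name is an assumption, and the example CIDRs simply mirror the ranges mentioned in the text.

```python
import ipaddress
from itertools import combinations

def validate_vcn_subnets(cidrs):
    """Raise if any two subnet CIDRs of a VCN overlap."""
    nets = [ipaddress.ip_network(c) for c in cidrs]
    for a, b in combinations(nets, 2):
        if a.overlaps(b):
            raise ValueError(f"subnet {a} overlaps subnet {b}")
    return nets

# The two example ranges from the text do not overlap, so this passes.
validate_vcn_subnets(["10.0.0.0/24", "10.0.1.0/24"])

# This would raise, because 10.0.0.0/16 contains 10.0.1.0/24:
# validate_vcn_subnets(["10.0.0.0/16", "10.0.1.0/24"])
```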
The subnet 620 includes one or more compute instances, and specifically includes a first compute instance 622-A and a second compute instance 622-B. Compute instances 622-A, 622-B may communicate with each other within subnet 620, or they may communicate with other instances, devices, and/or networks outside of subnet 620. Communication external to subnet 620 is enabled by Virtual Router (VR) 624. VR 624 enables communication between subnet 620 and other networks of VCN 602. For subnet 620, VR 624 represents a logical gateway that enables subnet 620 (e.g., compute instances 622-A, 622-B) to communicate with endpoints on other networks within VCN 602 as well as with other endpoints outside of VCN 602.
VCN 602 may also include additional networks, and in particular may include one or several L2 VLANs (referred to herein as VLANs), which are examples of virtual L2 networks. The one or several VLANs may each comprise a virtual layer 2 network located in the cloud environment of VCN 602 and/or hosted by the underlying physical network of CSPI 601. In the embodiment of FIG. 6, VCN 602 includes VLAN A 630 and VLAN B 640. Each VLAN 630, 640 within the VCN 602 may be associated with a contiguous range of overlay IP addresses (e.g., 10.0.0.0/24 and 10.0.1.0/24) that do not overlap with other networks in the VCN (such as other subnets or VLANs in the VCN) and that represent a subset of the address space within the address space of the VCN. In some embodiments, this IP address space of the VLAN may be isolated from the address space associated with CSPI 601. Each of VLANs 630, 640 may include one or several compute instances; in particular, VLAN A 630 may include, for example, a first compute instance 632-A and a second compute instance 632-B. In some embodiments, VLAN A 630 may include additional compute instances. VLAN B 640 may include, for example, a first compute instance 642-A and a second compute instance 642-B. Each of the compute instances 632-A, 632-B, 642-A, 642-B may have an IP address and a MAC address. These addresses may be assigned or generated in any desired manner. In some embodiments, these addresses may be within the CIDR of the compute instance's VLAN, and in some embodiments, these addresses may be any addresses. In embodiments where a compute instance of the VLAN communicates with endpoints outside of the VLAN, one or both of these addresses are from the VLAN CIDR, whereas when all communication is within the VLAN, these addresses are not limited to addresses within the VLAN CIDR. Unlike networks whose addresses are assigned by the control plane, the IP and/or MAC addresses of the compute instances in a VLAN may be assigned by the user/customer of that VLAN, and these IP and/or MAC addresses may then be discovered and/or learned by the compute instances in the VLAN according to the learning process discussed below.
Each VLAN may include a VLAN Switching and Routing Service (VSRS), and in particular, VLAN a 630 includes VSRS a 634 and VLAN B640 includes VSRS B644. Each VSRS 634, 644 participates in layer 2 switching and local learning within the VLAN and also performs all necessary layer 3 network functions including ARP, NDP and routing. The VSRS performs ARP (which is a layer 2 protocol) because the VSRS must map IP to MAC.
In these cloud-based VLANs, each virtual interface or virtual gateway may be associated with one or more Media Access Control (MAC) addresses, which may be virtual MAC addresses. Within a VLAN, one or several computing instances 632-A, 632-B, 642-A, 642-B (which may be bare metal, VM, or container, for example, and/or one or several service instances) may communicate directly with each other via a virtual switch. Communication outside of the VLAN is enabled via VSRSs 634, 644, such as communication with other VLANs or with an L3 network. VSRSs 634, 644 are distributed services that provide layer 3 functionality, such as IP routing, for VLAN networks. In some embodiments, VSRSs 634, 644 are horizontally scalable, highly available routing services that may be located at the intersection of an IP network and an L2 network and participate in IP routing and L2 learning within the cloud-based L2 domain.
VSRSs 634, 644 may be distributed across multiple nodes within the infrastructure, and the VSRS 634, 644 functionality may be scalable, particularly horizontally scalable. In some embodiments, each node implementing the functionality of VSRSs 634, 644 shares and replicates the functionality of the routers and/or switches with the other nodes. Further, these nodes may present themselves to all instances in VLANs 630, 640 as a single VSRS 634, 644. VSRSs 634, 644 may be implemented on any virtualized device within CSPI 601, and in particular within the virtual network. Thus, in some embodiments, VSRSs 634, 644 may be implemented on any virtual network virtualization device, including NICs, SmartNICs, switches, intelligent switches, or general purpose compute hosts.
VSRSs 634, 644 may be a service residing on one or several hardware nodes supporting the cloud network, such as, for example, one or several servers (for example, one or several x86 servers) or one or several networking devices supporting the cloud network (for example, one or several NICs, and in particular one or several SmartNICs). In some embodiments, the VSRS 634, 644 can be implemented on a server farm. Thus, VSRSs 634, 644 may be a service distributed across a cluster of nodes, which may be a centrally managed cluster or may be distributed to the edges of the virtual networking executors that participate in and share L2 and L3 learning and evaluate routing and security policies. In some embodiments, each VSRS instance may update other VSRS instances with new mapping information as this new mapping information is learned by that VSRS instance. For example, when a VSRS instance learns the IP, interface, and/or MAC mappings of one or more CIs in its VLAN, the VSRS instance may provide this updated information to the other VSRS instances within the VCN. Via such cross-updating, a VSRS instance associated with a first VLAN may become aware of the mappings of CIs in other VLANs (in some embodiments, CIs in other VLANs within VCN 602), including IP, interface, and/or MAC mappings. These updates can be greatly accelerated when the VSRS resides on a server farm and/or is distributed across a cluster of nodes.
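The cross-update behavior described above, in which each VSRS node shares newly learned IP/interface/MAC mappings with its peers, can be sketched as follows. This is an illustrative sketch only; the class names and the push-based propagation shown here are assumptions, not the disclosure's actual implementation.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

# A learned mapping: overlay IP -> (MAC address, interface identifier)
Mapping = Tuple[str, str]

@dataclass
class VsrsNode:
    name: str
    peers: List["VsrsNode"] = field(default_factory=list)
    mappings: Dict[str, Mapping] = field(default_factory=dict)

    def learn(self, ip: str, mac: str, interface_id: str) -> None:
        """Record a mapping learned locally and replicate it to peer nodes."""
        if self.mappings.get(ip) == (mac, interface_id):
            return                      # nothing new; avoid re-flooding peers
        self.mappings[ip] = (mac, interface_id)
        for peer in self.peers:
            peer.receive_update(ip, mac, interface_id)

    def receive_update(self, ip: str, mac: str, interface_id: str) -> None:
        """Apply a mapping pushed by a peer without re-propagating it."""
        self.mappings[ip] = (mac, interface_id)

# Two nodes of the same distributed VSRS sharing state:
a, b = VsrsNode("vsrs-node-a"), VsrsNode("vsrs-node-b")
a.peers.append(b)
a.learn("10.0.0.5", "02:00:00:00:00:05", "vnic-5")
assert b.mappings["10.0.0.5"] == ("02:00:00:00:00:05", "vnic-5")
```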
In some embodiments, VSRSs 634, 644 may also host one or several higher-level services necessary for networking, including but not limited to: DHCP relay; hosted DHCP; DHCPv6; neighbor discovery protocols (such as the IPv6 Neighbor Discovery Protocol); DNS; hosted DNSv6; SLAAC for IPv6; NTP; a metadata service; and block store (blockstore) mount points. In some embodiments, the VSRS may support one or several Network Address Translation (NAT) functions to translate between network address spaces. In some embodiments, the VSRS may incorporate anti-spoofing, anti-MAC-spoofing, ARP cache poisoning protection for IPv4, IPv6 Route Advertisement (RA) protection, DHCP protection, packet filtering using Access Control Lists (ACLs), and/or reverse path forwarding checks. Functions that the VSRS may implement include, for example, ARP, GARP, packet filtering (ACLs), DHCP relay, and/or IP routing protocols. For example, VSRSs 634, 644 may learn MAC addresses, invalidate expired MAC addresses, handle movement of MAC addresses, audit (vet) MAC address information, handle flooding of MAC information, handle storm control and loop prevention, perform layer 2 multicasting via protocols in the cloud such as, for example, IGMP, and perform statistics gathering, including logs, statistics using SNMP, monitoring, and/or gathering and using statistics for broadcast, total traffic, bits, spanning tree packets, etc.
Within the virtual network, VSRSs 634, 644 may appear as different instantiations. In some embodiments, each of these instantiations of a VSRS may be associated with a VLAN 630, 640, and in some embodiments, each VLAN 630, 640 may have an instantiation of a VSRS 634, 644. In some embodiments, each instantiation of a VSRS 634, 644 may have one or several corresponding unique tables for the VLAN 630, 640 associated with that instantiation of the VSRS 634, 644. Each instantiation of a VSRS 634, 644 may generate and/or curate the unique tables associated with that instantiation of the VSRS 634, 644. Thus, while a single service may provide the VSRS 634, 644 functionality for one or several cloud networks, each instantiation of a VSRS 634, 644 within a cloud network may have unique layer 2 and layer 3 forwarding tables, while multiple such customer networks may have overlapping layer 2 and layer 3 forwarding tables.
In some embodiments, VSRSs 634, 644 may support conflicting VLAN and IP space across multiple tenants. This may include having multiple tenants on the same VSRS 634, 644. In some embodiments, some or all of these tenants may choose to use some or all of the following: the same IP address space, the same MAC space, and the same VLAN space. This may provide great flexibility for the user to select an address. In some embodiments, such multi-tenancy is supported via providing a different virtual network for each tenant, which is a private network within the cloud network. Each virtual network is assigned a unique identifier. Similarly, in some embodiments, each host may have a unique identifier, and/or each virtual interface or virtual gateway may have a unique identifier. In some embodiments, these unique identifiers, and in particular, the unique identifiers of the virtual network of the tenant, may be encoded in each communication. By providing each virtual network with a unique identifier and including it in communication, a single instantiation of VSRS 634, 644 can serve multiple tenants with overlapping addresses and/or namespaces.
VSRSs 634, 644 may perform these switching and/or routing functions to facilitate and/or enable creation of and/or communication with L2 networks within VLANs 630, 640. This VLAN 630, 640 may be found within the cloud computing environment, and more particularly within a virtual network in the cloud computing environment.
For example, each of VLANs 630, 640 includes multiple compute instances 632-A, 632-B, 642-A, 642-B. VSRSs 634, 644 enable communication between a compute instance in one VLAN 630, 640 and compute instances in the other VLAN 630, 640 or in subnet 620. In some embodiments, VSRSs 634, 644 enable communication between a compute instance in one VLAN 630, 640 and another VCN, or another network external to the VCN (including the internet, an in-house deployment data center, etc.). In such an embodiment, for example, a compute instance (such as compute instance 632-A) may send a communication to an endpoint outside of the VLAN (in this instance, an endpoint outside of VLAN A 630). The compute instance (632-A) may send the communication to VSRS A 634, which may direct the communication to a router 624, 644 or gateway 604, 608, 610 communicatively coupled with the desired endpoint. The router 624, 644 or gateway 604, 608, 610 communicatively coupled with the desired endpoint may receive the communication from the compute instance (632-A) and may direct the communication to the desired endpoint.
Referring now to FIG. 7, a logical and hardware schematic of a VLAN 700 is shown. As seen, VLAN 700 includes a plurality of endpoints, specifically a plurality of compute instances and a VSRS. The multiple compute instances (CIs) are instantiated on one or several host machines. In some embodiments, this may be a one-to-one relationship such that each CI is instantiated on a unique host machine, and/or in some embodiments, this may be a many-to-one relationship such that multiple CIs are instantiated on a single common host machine. In various embodiments, the CIs may be layer 2 CIs by being configured to communicate with each other using an L2 protocol. FIG. 7 depicts a scenario in which some CIs are instantiated on unique host machines and some CIs share a common host machine. As seen in FIG. 7, instance 1 (CI1) 704-A is instantiated on host machine 1 702-A, instance 2 (CI2) 704-B is instantiated on host machine 2 702-B, and instance 3 (CI3) 704-C and instance 4 (CI4) 704-D are instantiated on a common host machine 702-C.
Each of the CIs 704-A, 704-B, 704-C, 704-D is communicatively coupled with the other CIs 704-A, 704-B, 704-C, 704-D in the VLAN 700 and with the VSRS 714. Specifically, each of the CIs 704-A, 704-B, 704-C, 704-D is connected to the other CIs 704-A, 704-B, 704-C, 704-D in VLAN 700 and to VSRS 714 via L2 VNICs and switches. Each CI 704-A, 704-B, 704-C, 704-D is associated with a unique L2 VNIC and switch. The switch may be a local L2 virtual switch and is uniquely associated with and deployed for the L2 VNIC. Specifically, CI1 704-A is associated with L2 VNIC 1 708-A and switch 1 710-A, CI2 704-B is associated with L2 VNIC 2 708-B and switch 2 710-B, CI3 704-C is associated with L2 VNIC 3 708-C and switch 3 710-C, and CI4 704-D is associated with L2 VNIC 4 708-D and switch 4 710-D.
In some embodiments, each L2 VNIC 708 and its associated switch 710 may be instantiated on an NVD 706. Such instantiation may be a one-to-one relationship such that a single L2 VNIC 708 and its associated switch 710 are instantiated on a unique NVD 706, or a many-to-one relationship such that multiple L2 VNICs 708 and their associated switches 710 are instantiated on a single common NVD 706. Specifically, L2 VNIC 1 708-A and switch 1 710-A are instantiated on NVD 1 706-A, L2 VNIC 2 708-B and switch 2 710-B are instantiated on NVD 2 706-B, and L2 VNIC 3 708-C and switch 3 710-C, as well as L2 VNIC 4 708-D and switch 4 710-D, are instantiated on a common NVD (i.e., NVD 706-C).
In some embodiments, VSRS 714 may support conflicting VLAN and IP spaces across multiple tenants. This may include having multiple tenants on the same VSRS 714. In some embodiments, some or all of these tenants may choose to use some or all of the following: the same IP address space, the same MAC space, and the same VLAN space. This may provide great flexibility for users to select addresses. In some embodiments, such multi-tenancy is supported by providing a different virtual network for each tenant, which is a private network within the cloud network. Each virtual network (e.g., each VLAN or VCN) is assigned a unique identifier, such as a VCN identifier, which may be a VLAN identifier. This unique identifier may be selected by, for example, the control plane, and in particular by the control plane of the CSPI. In some embodiments, this unique VLAN identifier may comprise one or several bits, which may be included and/or used in the packet encapsulation. Similarly, in some embodiments, each host may have a unique identifier, and/or each virtual interface or virtual gateway may have a unique identifier. In some embodiments, these unique identifiers, and in particular the unique identifier of the tenant's virtual network, may be encoded in each communication. By providing each virtual network with a unique identifier and including it in communications, a single instantiation of a VSRS can serve multiple tenants with overlapping addresses and/or namespaces. In some embodiments, VSRS 714 may determine which tenant a packet belongs to based on the VCN identifier and/or VLAN identifier associated with the communication, and specifically within the VCN header of the communication. In embodiments disclosed herein, communications leaving or entering a VLAN may have a VCN header that includes a VLAN identifier. Based on the VCN header containing the VLAN identifier, VSRS 714 may determine the tenancy; in other words, the recipient VSRS may determine to which VLAN and/or which tenant the communication should be sent. Further, each compute instance (e.g., L2 compute instance) belonging to a VLAN is assigned a unique interface identifier that identifies the L2 VNIC associated with the compute instance. The interface identifier may be included in traffic from and/or to the compute instance (e.g., by being included in a header of the frame) and may be used by the NVD to identify the L2 VNIC associated with the compute instance. In other words, the interface identifier may uniquely indicate the compute instance and/or its associated L2 VNIC.
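How a header carrying the VLAN identifier and the interface identifiers disambiguates tenancy can be illustrated with a toy layout. The field widths and layout below are assumptions chosen for demonstration; they are not the actual GENEVE/VCN header format used by the disclosure.

```python
import struct

# Toy overlay header: a 4-byte VLAN/VCN identifier followed by two 8-byte
# interface identifiers (source, destination). Widths are illustrative only.
HEADER_FMT = "!I8s8s"
HEADER_LEN = struct.calcsize(HEADER_FMT)

def encapsulate(vlan_id: int, src_if: bytes, dst_if: bytes, frame: bytes) -> bytes:
    """Prefix an L2 frame with the toy overlay header before it crosses the
    physical substrate."""
    return struct.pack(HEADER_FMT, vlan_id, src_if, dst_if) + frame

def decapsulate(packet: bytes):
    """Recover (vlan_id, src_if, dst_if, frame); a receiving VSRS/NVD would use
    vlan_id to select the tenant's VLAN and dst_if to select the L2 VNIC."""
    vlan_id, src_if, dst_if = struct.unpack_from(HEADER_FMT, packet, 0)
    return vlan_id, src_if, dst_if, packet[HEADER_LEN:]

pkt = encapsulate(630, b"if-632-A", b"if-632-B", b"\xff" * 14)
print(decapsulate(pkt)[0])   # -> 630, identifying the VLAN even if overlay
                             #    addresses overlap with another tenant's
```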
As indicated in FIG. 7, switches 710-A, 710-B, 710-C, 710-D may together form an L2 distributed switch 712, also referred to herein as distributed switch 712. From the customer's perspective, the switches 710-A, 710-B, 710-C, 710-D of the L2 distributed switch 712 appear as a single switch that connects to all CIs in the VLAN. However, the L2 distributed switch 712, which emulates the user experience of a single switch, is infinitely scalable and includes a collection of local switches (e.g., switches 710-A, 710-B, 710-C, 710-D in the illustrative example of FIG. 7). As shown in FIG. 7, each CI executes on a host machine connected to an NVD. For each CI on a host connected to an NVD, the NVD hosts a layer 2 VNIC and a local switch associated with the compute instance (e.g., an L2 virtual switch, local to the NVD, associated with the layer 2 VNIC, and a member or component of the L2 distributed switch 712). The layer 2 VNIC represents a port of the compute instance on the layer 2 VLAN. The local switch connects the VNIC to other VNICs (e.g., other ports) associated with other compute instances of the layer 2 VLAN.
Each of the CIs 704-A, 704-B, 704-C, 704-D may communicate with the other CIs 704-A, 704-B, 704-C, 704-D in the VLAN 700 or with the VSRS 714. One of the CIs 704-A, 704-B, 704-C, 704-D sends a frame to another of the CIs 704-A, 704-B, 704-C, 704-D or to the VSRS 714 by addressing the frame to the MAC address and interface identifier of the recipient CI 704-A, 704-B, 704-C, 704-D or of the VSRS 714. The MAC address and interface identifier may be included in the header of the frame. As explained above, the interface identifier may indicate the L2 VNIC of the recipient CI 704-A, 704-B, 704-C, 704-D or of the VSRS 714.
In one embodiment, CI1 704-A may be a source CI, L2 VNIC 708-A may be the source L2 VNIC, and switch 710-A may be the source L2 virtual switch. In this embodiment, CI3 704-C may be the destination CI and L2 VNIC 3 708-C may be the destination L2 VNIC. The source CI may send a frame with a source MAC address and a destination MAC address. This frame may be intercepted by NVD 706-A, which instantiates the source VNIC and the source switch.
For VLAN 700, the L2 VNICs 708-A, 708-B, 708-C, 708-D may each learn the mapping of the MAC address to the interface identifier of the L2 VNIC. This mapping may be learned based on frames and/or communications received from within VLAN 700. Based on this previously determined mapping, the source VNIC may determine an interface identifier of a destination interface associated with a destination CI within the VLAN and may encapsulate the frame. In some embodiments, this encapsulation may comprise a GENEVE encapsulation, and in particular an L2 GENEVE encapsulation, that includes an L2 (Ethernet) header of the encapsulated frame. The encapsulated frame may identify a destination MAC, a destination interface identifier, a source MAC, and a source interface identifier.
The source VNIC may pass the encapsulated frame to the source switch, which may direct the frame to the destination VNIC. Upon receiving the frame, the destination VNIC may decapsulate the frame and may then provide the frame to the destination CI.
Referring now to FIG. 8, a logical schematic of a plurality of connected L2 VLANs 800 is shown. In the particular embodiment depicted in FIG. 8, both VLANs are located in the same VCN. As seen, the plurality of connected L2 VLANs 800 may include a first VLAN (VLAN A 802-A) and a second VLAN (VLAN B 802-B). Each of these VLANs 802-A, 802-B may include one or several CIs, each of which may have an associated L2 VNIC and an associated L2 virtual switch. In addition, each of these VLANs 802-A, 802-B may include a VSRS.
Specifically, VLAN A 802-A may include instance 1 804-A connected to L2 VNIC 1 806-A and switch 1 808-A, instance 2 804-B connected to L2 VNIC 2 806-B and switch 2 808-B, and instance 3 804-C connected to L2 VNIC 3 806-C and switch 3 808-C. VLAN B 802-B may include instance 4 804-D connected to L2 VNIC 4 806-D and switch 4 808-D, instance 5 804-E connected to L2 VNIC 5 806-E and switch 5 808-E, and instance 6 804-F connected to L2 VNIC 6 806-F and switch 6 808-F. VLAN A 802-A may also include VSRS A 810-A, and VLAN B 802-B may include VSRS B 810-B. Each of the CIs 804-A, 804-B, 804-C of VLAN A 802-A may be communicatively coupled to VSRS A 810-A, and each of the CIs 804-D, 804-E, 804-F of VLAN B 802-B may be communicatively coupled to VSRS B 810-B.
VLAN A 802-A may be communicatively coupled to VLAN B 802-B via their respective VSRSs 810-A, 810-B. Each VSRS may also be coupled to a gateway 812, and gateway 812 may provide the CIs 804-A, 804-B, 804-C, 804-D, 804-E, 804-F in each VLAN 802-A, 802-B with access to other networks outside of the VCN in which VLANs 802-A, 802-B are located. In some embodiments, these networks may include, for example, one or several in-house deployment networks, another VCN, a service network, a public network such as the Internet, and so forth.
Each of the CIs 804-A, 804-B, 804-C in VLAN A 802-A may communicate with the CIs 804-D, 804-E, 804-F in VLAN B 802-B via the VSRSs 810-A, 810-B of each VLAN 802-A, 802-B. For example, one of the CIs 804-A, 804-B, 804-C, 804-D, 804-E, 804-F in one of the VLANs 802-A, 802-B may send a frame to a CI 804-A, 804-B, 804-C, 804-D, 804-E, 804-F in the other of the VLANs 802-A, 802-B. This frame may leave the source VLAN via the source VLAN's VSRS, and may enter the destination VLAN and be routed to the destination CI via the destination VSRS.
In one embodiment, CI 1 804-A may be a source CI, VNIC 806-A may be the source VNIC, and switch 808-A may be the source switch. In this embodiment, CI 5 804-E may be the destination CI and L2 VNIC 5 806-E may be the destination VNIC. VSRS A 810-A may be the source VSRS, identified as SVSRS, and VSRS B 810-B may be the destination VSRS, identified as DVSRS.
The source CI may send a frame with a MAC address. This frame may be intercepted by the NVD instantiating the source VNIC and the source switch. The source VNIC encapsulates the frame. In some embodiments, this encapsulation may comprise a GENEVE encapsulation, and in particular an L2 GENEVE encapsulation. The encapsulated frame may identify the destination address of the destination CI. In some embodiments, this destination address may also include the destination address of the destination VSRS. The destination address of the destination CI may include a destination IP address, the destination MAC of the destination CI, and/or the destination interface identifier of the destination VNIC associated with the destination CI. The destination address of the destination VSRS may include the IP address of the destination VSRS, the interface identifier of the destination VNIC associated with the destination VSRS, and/or the MAC address of the destination VSRS.
The source VSRS may receive the frame from the source switch, may look up the VNIC mapping from a destination address of the frame, which may be a destination IP address, and may forward the packet to the destination VSRS. The destination VSRS may receive the frame. Based on the destination address contained in the frame, the destination VSRS may forward the frame to the destination VNIC. The destination VNIC may receive and decapsulate the frame and may then provide the frame to the destination CI.
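The destination VSRS's role in this flow, resolving the frame's destination address to the L2 VNIC registered for it and forwarding accordingly, can be sketched as follows. This is an illustrative sketch only; the table layout and the send_to_nvd helper are assumptions standing in for the actual mapping tables and encapsulation path.

```python
from typing import Dict, NamedTuple, Optional

class VnicRecord(NamedTuple):
    interface_id: str   # identifier of the destination L2 VNIC
    nvd_ip: str         # physical IP of the NVD hosting that VNIC
    mac: str            # overlay MAC of the destination compute instance

def send_to_nvd(nvd_ip: str, interface_id: str, frame: bytes) -> None:
    # Stand-in for re-encapsulating the frame toward the destination NVD.
    print(f"forwarding {len(frame)}-byte frame to {interface_id} via {nvd_ip}")

class DestinationVsrs:
    def __init__(self, vnic_by_ip: Dict[str, VnicRecord]):
        # Mapping maintained/learned by the VSRS: overlay IP -> VNIC record.
        self.vnic_by_ip = vnic_by_ip

    def ingress(self, dst_ip: str, frame: bytes) -> Optional[str]:
        """Forward an inbound frame to the VNIC registered for dst_ip.

        Returns the physical IP the encapsulated frame is sent to, or None
        if the mapping is unknown (in which case an ARP/flood step would be
        needed before delivery).
        """
        record = self.vnic_by_ip.get(dst_ip)
        if record is None:
            return None               # unknown mapping: resolve before forwarding
        send_to_nvd(record.nvd_ip, record.interface_id, frame)
        return record.nvd_ip

vsrs_b = DestinationVsrs({"10.0.1.7": VnicRecord("vnic-5", "192.168.20.7", "02:00:00:00:00:05")})
vsrs_b.ingress("10.0.1.7", b"\x00" * 64)
```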
Referring now to FIG. 9, a logical schematic of a plurality of connected L2 VLANs and subnets 900 is shown. In the particular embodiment shown in FIG. 9, both the VLANs and the subnet are located in the same VCN. This is indicated by the virtual router of the subnet and the VSRSs of the VLANs being connected directly, rather than through a gateway.
As seen, this may include a first VLAN (VLAN A 902-A), a second VLAN (VLAN B 902-B), and a subnet 930. Each of these VLANs 902-A, 902-B may include one or several CIs, each of which may have an associated L2 VNIC and an associated L2 switch. In addition, each of these VLANs 902-A, 902-B may include a VSRS. Likewise, the subnet 930, which may be an L3 subnet, may include one or several CIs, each of which may have an associated L3 VNIC, and the L3 subnet 930 may include the virtual router 916.
Specifically, VLAN A 902-A may include instance 1 904-A connected to L2 VNIC 1 906-A and switch 1 908-A, instance 2 904-B connected to L2 VNIC 2 906-B and switch 2 908-B, and instance 3 904-C connected to L2 VNIC 3 906-C and switch 3 908-C. VLAN B 902-B may include instance 4 904-D connected to L2 VNIC 4 906-D and switch 4 908-D, instance 5 904-E connected to L2 VNIC 5 906-E and switch 5 908-E, and instance 6 904-F connected to L2 VNIC 6 906-F and switch 6 908-F. VLAN A 902-A may also include VSRS A 910-A, and VLAN B 902-B may include VSRS B 910-B. Each of the CIs 904-A, 904-B, 904-C of VLAN A 902-A may be communicatively coupled to VSRS A 910-A, and each of the CIs 904-D, 904-E, 904-F of VLAN B 902-B may be communicatively coupled to VSRS B 910-B. The L3 subnet 930 may include one or several CIs, and in particular may include instance 7 904-G communicatively coupled to L3 VNIC 7 906-G. The L3 subnet 930 may include the virtual router 916.
VLAN A 902-A may be communicatively coupled to VLAN B 902-B via their respective VSRSs 910-A, 910-B. The L3 subnet 930 may be communicatively coupled with VLAN A 902-A and VLAN B 902-B via the virtual router 916. The virtual router 916 and each VSRS instance 910-A, 910-B may likewise be coupled to a gateway 912, and gateway 912 may provide the CIs 904-A, 904-B, 904-C, 904-D, 904-E, 904-F, 904-G in each VLAN 902-A, 902-B and in the subnet 930 with access to other networks outside of the VCN in which the VLANs 902-A, 902-B and the subnet 930 are located. In some embodiments, these networks may include, for example, one or several in-house deployment networks, another VCN, a service network, a public network such as the Internet, and so forth.
Each VSRS instance 910-A, 910-B may provide an egress path for frames leaving the associated VLAN 902-A, 902-B and an ingress path for frames entering the associated VLAN 902-A, 902-B. From the VSRS instance 910-A, 910-B of a VLAN 902-A, 902-B, frames may be sent to any desired endpoint, including an L2 endpoint (such as an L2 CI in another VLAN in the same VCN or in a different VCN or network) and/or an L3 endpoint (such as an L3 CI in a subnet in the same VCN or in a different VCN or network).
In one embodiment, CI 1 904-A may be a source CI, VNIC 906-A may be the source VNIC, and switch 908-A may be the source switch. In this embodiment, CI 7 904-G may be the destination CI and VNIC 7 906-G may be the destination VNIC. VSRS A 910-A may be the source VSRS, identified as SVSRS, and Virtual Router (VR) 916 may be the destination VR.
The source CI may send a frame with a MAC address. This frame may be intercepted by the NVD instantiating the source VNIC and the source switch. The source VNIC encapsulates the frame. In some embodiments, this encapsulation may comprise a GENEVE encapsulation, and in particular an L2 GENEVE encapsulation. The encapsulated frame may identify the destination address of the destination CI. In some embodiments, this destination address may also include the destination address of the VSRS of the source CI's VLAN. The destination address of the destination CI may include a destination IP address, the destination MAC of the destination CI, and/or the destination interface identifier of the destination VNIC of the destination CI.
The source VSRS may receive the frame from the source switch, may look up the VNIC mapping from the destination address of the frame, which may be the destination IP address, and may forward the frame to the destination VR. The destination VR may receive the frame. Based on the destination address contained in the frame, the destination VR may forward the frame to the destination VNIC. The destination VNIC may receive and decapsulate the frame and may then provide the frame to the destination CI.
Learning through L2 VNICs and/or L2 virtual switches within a virtual L2 network
Referring now to FIG. 10, a schematic diagram of one embodiment of intra-VLAN communication and learning within a VLAN 1000 is shown. The learning described here is specific to how the L2 VNICs, the VSRS of the source CI's VLAN, and/or the L2 virtual switches learn the association between MAC addresses and L2 VNICs/VSRS VNICs (more specifically, the association between the MAC addresses associated with L2 compute instances or VSRSs and the identifiers associated with the L2 VNICs or VSRS VNICs of those L2 compute instances or VSRSs). Generally, this learning is based on ingress traffic. In one aspect, such interface-to-MAC address learning may be implemented differently from the learning process (e.g., the ARP process) that L2 compute instances use to learn destination MAC addresses. These two learning processes (e.g., the learning process of the L2 VNIC/L2 virtual switch and the learning process of the L2 compute instance) are illustrated jointly in FIG. 12.
As seen, VLAN 1000 includes compute instance 1 1000-A communicatively coupled with NVD 1 1001-A, which instantiates L2 VNIC 1 1002-A and L2 switch 1 1004-A. VLAN 1000 also includes compute instance 2 1000-B communicatively coupled with NVD 2 1001-B, which instantiates L2 VNIC 2 1002-B and L2 switch 2 1004-B. VLAN 1000 also includes VSRS 1015, which runs on a server farm and includes VSRS VNIC 1002-C and VSRS switch 1004-C. All of the switches 1004-A, 1004-B, 1004-C together form an L2 distributed switch 1050. The VSRS 1015 is communicatively coupled with the endpoint 1008, and the endpoint 1008 may include a gateway, and in particular may include an L2/L3 router, for example in the form of another VSRS, or an L3 router, for example in the form of a virtual router.
The control plane 1010 of the VCN hosting VLAN 1000 maintains information identifying each L2 VNIC and the network placement of the L2 VNICs on VLAN 1000. For example, for an L2 VNIC, this information may include an interface identifier associated with the L2 VNIC and/or the physical IP address of the NVD hosting the L2 VNIC. The control plane 1010 updates (e.g., periodically or on demand) the interfaces in VLAN 1000 with this information. Thus, each L2 VNIC 1002-A, 1002-B, 1002-C in VLAN 1000 receives information from the control plane 1010 identifying the interfaces in the VLAN and populates a table with this information. The table populated by an L2 VNIC may be stored locally on the NVD hosting that L2 VNIC. Where a VNIC 1002-A, 1002-B, 1002-C already holds a current table, the VNIC may determine any differences between its current table and the information/tables received from the control plane 1010. In some embodiments, the VNICs 1002-A, 1002-B, 1002-C may update their tables to match the information received from the control plane 1010.
As seen in FIG. 10, frames are sent via the L2 switches 1004-A, 1004-B, 1004-C and received by the recipient VNICs 1002-A, 1002-B, 1002-C. When a frame is received by a VNIC 1002-A, 1002-B, 1002-C, the VNIC learns the mapping of the source interface (the source VNIC) to the source MAC address of the frame. Based on the table of information it received from control plane 1010, the VNIC may map the source MAC address (from the received frame) to the interface identifier of the source VNIC, and to the IP address of that VNIC and/or the IP address of the NVD hosting that VNIC (where the interface identifier and IP address(es) are available from the table). Thus, the L2 VNICs 1002-A, 1002-B, 1002-C learn the mapping of interface identifiers to MAC addresses based on received communications and/or frames. Each VNIC 1002-A, 1002-B, 1002-C may update its L2 forwarding (FWD) table 1006-A, 1006-B, 1006-C with this learned mapping information. In some embodiments, the L2 forwarding table includes and associates a MAC address with at least one of an interface identifier or a physical IP address. In such embodiments, the MAC address is an address assigned to the L2 compute instance and may correspond to the port emulated by the L2 VNIC associated with that L2 compute instance. The interface identifier may uniquely identify the L2 VNIC and/or the L2 compute instance. The virtual IP address may be an address of the L2 VNIC, and the physical IP address may be the IP address of the NVD hosting the L2 VNIC. The L2 forwarding table updated by the L2 VNIC may be stored locally on the NVD hosting the L2 VNIC and used by the L2 virtual switch associated with the L2 VNIC to direct frames. In some embodiments, VNICs within a common VLAN may share all or part of their mapping tables with each other.
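The two ways the table just described gets filled, bulk population from the control plane's interface list and incremental learning from the source addresses of received frames, can be sketched together. This is an illustrative sketch only; the class, method names, and the aging policy are assumptions.

```python
import time
from typing import Dict, NamedTuple, Optional

class Entry(NamedTuple):
    interface_id: str    # L2 VNIC identifier
    nvd_ip: str          # physical IP of the hosting NVD
    learned_at: float    # timestamp used to age the entry out

class L2ForwardingTable:
    def __init__(self, max_age_seconds: float = 300.0):
        self.entries: Dict[str, Entry] = {}          # MAC -> Entry
        self.interfaces: Dict[str, str] = {}         # interface_id -> NVD IP
        self.max_age = max_age_seconds

    def sync_from_control_plane(self, interfaces: Dict[str, str]) -> None:
        """Replace the known set of VLAN interfaces (interface_id -> NVD IP)."""
        self.interfaces = dict(interfaces)

    def learn(self, src_mac: str, src_interface_id: str) -> None:
        """On ingress, bind the frame's source MAC to the source interface."""
        nvd_ip = self.interfaces.get(src_interface_id, "")
        self.entries[src_mac] = Entry(src_interface_id, nvd_ip, time.monotonic())

    def lookup(self, dst_mac: str) -> Optional[Entry]:
        """Return the entry for dst_mac, or None if unknown or expired."""
        entry = self.entries.get(dst_mac)
        if entry and time.monotonic() - entry.learned_at > self.max_age:
            del self.entries[dst_mac]
            entry = None
        return entry

table = L2ForwardingTable()
table.sync_from_control_plane({"vnic-1": "192.168.10.1", "vnic-2": "192.168.10.2"})
table.learn("02:00:00:00:00:01", "vnic-1")      # learned from a received frame
print(table.lookup("02:00:00:00:00:01"))        # hit
print(table.lookup("02:00:00:00:00:09"))        # miss -> would trigger flooding
```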
In view of the above network architecture, traffic flows are described next. For clarity of explanation, the traffic flows are described in connection with compute instance 2 1000-B, L2 VNIC 2 1002-B, L2 switch 2 1004-B, and NVD 2 1001-B. The description applies equally to traffic flowing to and/or from other compute instances.
As explained above, VLANs are implemented within a VCN as an overlay L2 network on top of an L3 physical network. An L2 compute instance of a VLAN may send or receive L2 frames that include overlay MAC addresses (also referred to as virtual MAC addresses) as the source and destination MAC addresses. An L2 frame may also encapsulate a packet that includes overlay IP addresses (also referred to as virtual IP addresses) as the source and destination IP addresses. In some embodiments, the overlay IP address of the compute instance may belong to the CIDR range of the VLAN. The overlay IP address of the other endpoint may be in the CIDR range (in which case the L2 frame is within the VLAN) or outside the CIDR range (in which case the L2 frame is sent to or received from another network). The L2 frame may also include a VLAN tag that uniquely identifies the VLAN and may be used to distinguish multiple L2 VNICs on the same NVD. The L2 frame may be received by the NVD in an encapsulated packet via a tunnel from the host machine of the compute instance, from another NVD, or from the server farm hosting the VSRS. In these different cases, the encapsulated packet may be an L3 packet sent over the physical network, where the source and destination IP addresses are physical IP addresses. Different types of encapsulation are possible, including GENEVE encapsulation. The NVD may decapsulate a received packet to extract the L2 frame. Similarly, to send an L2 frame, the NVD may encapsulate it in an L3 packet and send it over the physical substrate.
For intra-VLAN egress traffic from instance 2 1000-B, NVD 2 1001-B receives a frame from the host machine of instance 2 1000-B over an Ethernet link. The frame includes an interface identifier that identifies L2 VNIC 2 1002-B. The frame includes the overlay MAC address of instance 2 1000-B (e.g., M.2) as the source MAC address and the overlay MAC address of instance 1 1000-A (e.g., M.1) as the destination MAC address. Given the interface identifier, NVD 2 1001-B passes the frame to L2 VNIC 2 1002-B for further processing. L2 VNIC 2 1002-B forwards the frame to L2 switch 2 1004-B. Based on its L2 forwarding table 1006-B, L2 switch 2 1004-B determines whether the destination MAC address is known (e.g., matches an entry in the L2 forwarding table 1006-B).
If the destination is known, L2 switch 2 1004-B determines that L2 VNIC 1 1002-A is the associated tunnel endpoint and forwards the frame to L2 VNIC 1 1002-A. Forwarding may include encapsulation of the frame in a packet and decapsulation of the packet (e.g., GENEVE encapsulation and decapsulation), where the packet includes the frame, the physical IP address of NVD 1 1001-A (e.g., IP.1) as the destination address, and the physical IP address of NVD 2 1001-B (e.g., IP.2) as the source address.
If the destination is not known, L2 switch 2 1004-B broadcasts the frame to the various L2 VNICs of the VLAN (e.g., including L2 VNIC 1 1002-A and any other L2 VNICs of the VLAN), where the broadcast frame is processed (e.g., encapsulated, sent, decapsulated) between the associated NVDs. In some embodiments, this broadcast may be performed, or more precisely emulated, at the physical network by encapsulating the frame separately toward each L2 VNIC of the VLAN, including the VSRS. Thus, the broadcast is emulated via a series of replicated unicast packets on the physical network. In turn, each L2 VNIC receives the frame and learns the association between the interface identifier of L2 VNIC 2 1002-B and the source MAC address (e.g., M.2) and the source physical IP address (e.g., IP.2).
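The fan-out of that emulated broadcast, one unicast copy per interface in the VLAN, can be sketched as follows. This is a minimal illustration; the encapsulate_for and send_over_substrate helpers are assumptions standing in for the real GENEVE path.

```python
from typing import Dict

def encapsulate_for(nvd_ip: str, interface_id: str, frame: bytes) -> bytes:
    # Stand-in for the real encapsulation (e.g., GENEVE) toward one NVD.
    return interface_id.encode() + b"|" + frame

def send_over_substrate(nvd_ip: str, packet: bytes) -> None:
    print(f"unicast copy to {nvd_ip}: {len(packet)} bytes")

def flood_frame(frame: bytes,
                vlan_interfaces: Dict[str, str],
                source_interface_id: str) -> int:
    """Emulate an L2 broadcast by sending one unicast copy per L2 VNIC.

    vlan_interfaces maps interface_id -> physical IP of the hosting NVD
    (including the VSRS). The source interface is skipped. Returns the
    number of copies sent.
    """
    copies = 0
    for interface_id, nvd_ip in vlan_interfaces.items():
        if interface_id == source_interface_id:
            continue
        send_over_substrate(nvd_ip, encapsulate_for(nvd_ip, interface_id, frame))
        copies += 1
    return copies

sent = flood_frame(b"\xff" * 60,
                   {"vnic-1": "192.168.10.1", "vnic-2": "192.168.10.2",
                    "vsrs": "192.168.10.254"},
                   source_interface_id="vnic-2")
print(sent)   # 2 copies: every member except the sender
```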
For intra-VLAN ingress traffic from compute instance 1 1000-A to compute instance 2 1000-B, NVD 2 1001-B receives a packet from NVD 1. The packet has IP.1 as the source address and carries a frame that includes M.2 as the destination MAC address and M.1 as the source MAC address. The frame also includes the interface identifier of L2 VNIC 1 1002-A. After decapsulation, L2 VNIC 2 1002-B receives the frame and learns that this interface identifier is associated with M.1 and/or IP.1 and, if this association was previously unknown, stores the learned information in the L2 forwarding table 1006-B at switch 2 for subsequent egress traffic. Alternatively, after decapsulation, L2 VNIC 2 1002-B receives the frame, learns that this interface identifier is associated with M.1 and/or IP.1, and, if this information is already known, refreshes the expiration time.
For egress traffic sent from instance 2 1000-B in VLAN 1000 to an instance in another VLAN, the flow is similar to that described above, except that the VSRS VNIC and VSRS switch are used. In particular, the destination MAC address is not within the L2 broadcast domain of VLAN 1000 (it is within another L2 VLAN). Thus, the overlay destination IP address (e.g., IP.A) of the destination instance is used for this egress traffic. For example, L2 VNIC 2 1002-B determines that IP.A is outside the CIDR range of VLAN 1000. Accordingly, L2 VNIC 2 1002-B sets the destination MAC address to the default gateway MAC address (e.g., M.DG). Based on M.DG, L2 switch 2 1004-B sends the egress traffic to the VSRS VNIC (e.g., via tunneling, with the appropriate encapsulation). The VSRS VNIC forwards the egress traffic to the VSRS switch. Further, the VSRS switch performs a routing function in which, based on the overlay destination IP address (e.g., IP.A), the VSRS switch of VLAN 1000 sends the egress traffic to the VSRS switch of the other VLAN (e.g., via a virtual router between the two VLANs, also with the appropriate encapsulation). Next, the VSRS switch of the other VLAN performs its switching function by determining that IP.A is within the CIDR of that VLAN and performing a lookup of its ARP cache based on IP.A to determine the destination MAC address associated with IP.A. If no match exists in the ARP cache, an ARP request is sent to the various L2 VNICs of the other VLAN to determine the destination MAC address. Otherwise, the VSRS switch sends the egress traffic to the relevant VNIC (e.g., via tunneling, with the appropriate encapsulation).
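The first decision in this egress path, whether the destination overlay IP falls inside the VLAN's CIDR and therefore whether the frame is addressed directly or to the default gateway (VSRS) MAC, can be sketched with the standard ipaddress module. The constant values are illustrative assumptions echoing the IP.A/M.DG placeholders above.

```python
import ipaddress
from typing import Optional

VLAN_CIDR = ipaddress.ip_network("10.0.0.0/24")   # CIDR range of VLAN 1000
DEFAULT_GATEWAY_MAC = "02:00:5e:00:00:fe"         # M.DG, the VSRS gateway MAC

def choose_destination_mac(dst_ip: str, resolved_mac: Optional[str]) -> str:
    """Pick the destination MAC for an egress frame.

    If dst_ip is outside the VLAN's CIDR the frame must leave through the
    VSRS, so the default gateway MAC is used; otherwise the MAC resolved
    for the in-VLAN destination (e.g., via ARP) is used.
    """
    if ipaddress.ip_address(dst_ip) not in VLAN_CIDR:
        return DEFAULT_GATEWAY_MAC
    if resolved_mac is None:
        raise LookupError("in-VLAN destination MAC not yet resolved; ARP needed")
    return resolved_mac

print(choose_destination_mac("10.0.1.9", None))                   # -> gateway MAC
print(choose_destination_mac("10.0.0.7", "02:00:00:00:00:07"))    # -> direct MAC
```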
For ingress traffic from an instance in another VLAN to an instance in VLAN 1000, the traffic flow is similar to that described above, except in the opposite direction. For egress traffic from an instance in VLAN 1000 to the L3 network, the traffic flow is similar to that described above, except that the VSRS switch of VLAN 1000 routes the packet directly to the destination VNIC in the virtual L3 network via the virtual router (e.g., without routing the packet through another VSRS switch). For ingress traffic from the virtual L3 network to an instance in VLAN 1000, the traffic flow is similar to that described above, except that the packet is received by the VSRS switch of VLAN 1000, which sends it as a frame within the VLAN. For traffic between VLAN 1000 and other networks (egress or ingress), the VSRS switch is similarly used, where its routing function is used on egress to send packets via the appropriate gateway (e.g., IGW, NGW, DRG, SGW, LPG), and where its switching function is used on ingress to send frames within VLAN 1000.
Referring now to fig. 11, a schematic diagram of an example of a VLAN 1100 (e.g., a cloud-based virtual L2 network) is shown, and in particular, an implementation view of the VLAN.
As described above, a VLAN may include "N" computing instances 1102-A, 1102-B, 1102-N, each executing on a host machine. As discussed previously, there may be a one-to-one association between a computing instance and a host machine, or there may be a many-to-one association between multiple computing instances and a single host machine. Each computing instance 1102-A, 1102-B, 1102-N may be an L2 computing instance, in which case it is associated with at least one virtual interface (e.g., L2 VNIC) 1104-A, 1104-B, 1104-N and switches 1106-A, 1106-B, 1106-N. Switches 1106-A, 1106-B, 1106-N are L2 virtual switches and together form an L2 distributed switch 1107.
The pair of L2 VNIC 1104-A, 1104-B, 1104-N and switch 1106-A, 1106-B, 1106-N associated with a compute instance 1102-A, 1102-B, 1102-N on a host machine is a pair of software modules on the NVD 1108-A, 1108-B, 1108-N connected to that host machine. Each L2 VNIC 1104-A, 1104-B, 1104-N represents a customer-perceived L2 port of a single switch (referred to herein as a vswitch). In general, host machine "i" executes compute instance "i" and is connected to NVD "i". Further, NVD "i" executes L2 VNIC "i" and switch "i". L2 VNIC "i" represents L2 port "i" of the vswitch. "i" is a positive integer between 1 and "n". Here again, although a one-to-one association is described, other types of associations are possible. For example, a single NVD may be connected to multiple hosts, each host executing one or more compute instances belonging to the VLAN. If so, the NVD hosts multiple pairs of L2 VNICs and switches, each pair corresponding to one of the compute instances.
The VLAN may include an instance of VSRS 1110. VSRS 1110 performs switching and routing functions and includes instances of VSRS VNIC 1112 and VSRS switch 1114. VSRS VNIC 1112 represents a port on the vswitch that connects the vswitch to other networks via a virtual router. As shown, the VSRS 1110 can be instantiated on the server farm 1116.
The control plane 1118 may keep track of information identifying the L2 VNICs 1104-A, 1104-B, 1104-N and their placement in the VLAN. The control plane 1118 may also provide this information to the L2 VNICs 1104-A, 1104-B, 1104-N in the VLAN.
As shown in fig. 11, the VLAN may be a cloud-based virtual L2 network, which may be built on top of physical network 1120. In some embodiments, this physical network 1120 may include NVDs 1108-A, 1108-B, 1108-N.
In general, a first L2 compute instance of the VLAN (e.g., compute instance 1 1102-A) may communicate with a second compute instance of the VLAN (e.g., compute instance 2 1102-B) using the L2 protocol. For example, frames may be sent between these two L2 compute instances via the VLAN. However, the frames may be encapsulated, tunneled, routed, and/or otherwise processed such that the frames can be transmitted over the underlying physical network 1120.
For example, compute instance 1 1102-A sends a frame destined for compute instance 2 1102-B. Depending on the network connections (e.g., TCP/IP connections, Ethernet connections, tunnel connections, etc.) between host machine 1 and NVD 1, between NVD 1 and physical network 1120, between physical network 1120 and NVD 2, and between NVD 2 and host machine 2, different types of processing may be applied to the frame. For example, the frame is received and encapsulated by NVD 1, and so on, until the frame reaches compute instance 2. This processing is assumed to allow frames to be sent between the underlying physical resources and, for the sake of brevity and clarity, its description is omitted in describing the VLAN and related L2 operations.
Virtual L2 network communication
Various forms of communication may occur within or with the virtual L2 network. These may include intra-VLAN communications, in which a source compute instance sends a communication to a destination compute instance that is in the same VLAN as the source compute instance (CI). Communications may also be sent to endpoints outside the VLAN of the source CI. This may include, for example, communication from a source CI in a first VLAN to a destination CI in a second VLAN, communication from a source CI in a first VLAN to a destination CI in an L3 subnet, and/or communication from a source CI in a first VLAN to a destination CI outside the VCN of the VLAN containing the source CI. Such communication may also include, for example, receiving, at a destination CI, a communication from a source CI external to the VLAN of the destination CI. This source CI may be located in another VLAN, in an L3 subnet, or outside the VCN of the VLAN containing the destination CI.
Each CI within a VLAN may play an active role in traffic flow. This includes learning mappings of interface identifiers to MAC addresses (also referred to herein as interface-to-MAC address mappings), which the instances within the VLAN use to maintain L2 forwarding tables for the VLAN, as well as transmitting and/or receiving communications (e.g., frames in the case of L2 communications). The VSRS may play an active role in communications within the VLAN as well as in communications with source or destination CIs outside of the VLAN. The VSRS may maintain a presence in both the L2 network and the L3 network to enable egress and ingress communications.
Referring now to fig. 12, a flow chart illustrating one embodiment of a process 1200 for intra-VLAN communication is shown. In some embodiments, process 1200 may be performed by a computing instance within a common VLAN. This process may be specifically performed where the source CI sends a communication to a destination CI within the VLAN, but does not know the IP-to-MAC address mapping of that destination CI. This may occur, for example, when the source CI sends a packet to the destination CI that has an IP address in the VLAN, but the source CI does not know the MAC address of that IP address. In this case, an ARP process may be performed to learn the destination MAC address and IP-to-MAC address mapping.
In the case where the source CI knows the IP-to-MAC address mapping, the source CI can send the packet directly to the destination CI and no ARP procedure needs to be performed. In some embodiments, this packet may be intercepted by the source VNIC, which is an L2 VNIC in intra-VLAN communication. If the source VNIC knows the interface-to-MAC address mapping for the destination MAC address, the source VNIC may encapsulate the packet, e.g., in an L2 encapsulation, and may forward the corresponding frame to the destination VNIC (which, in intra-VLAN communication, is also an L2 VNIC) associated with the destination MAC address.
If the source VNIC does not know the interface-to-MAC address mapping for the MAC address, the source VNIC may perform an aspect of the interface-to-MAC address learning process. This may include the source VNIC sending frames to all interfaces in the VLAN. In some embodiments, this frame may be sent via broadcast to all interfaces within the VLAN. In some embodiments, such broadcasting may be implemented at the physical network in the form of serial unicast. This frame may include destination MAC and IP addresses, an interface identifier, and the MAC address and IP address of the source VNIC. Each VNIC in the VLAN may receive this frame and may learn the interface-to-MAC address mapping of the source VNIC.
Each receiving VNIC may also decapsulate the frame and forward the decapsulated frame (e.g., the corresponding packet) to its associated CI. Each CI may include a network interface that may evaluate the forwarded packet. If the network interface determines that the MAC and/or IP address of the CI receiving the forwarded packet does not match the destination MAC and/or IP address, the packet is discarded. If the network interface determines that they match, the packet is accepted by the CI. In some embodiments, a CI having a MAC and/or IP address that matches the destination MAC and/or IP address of the packet may send a response to the source CI, whereby the source VNIC may learn the interface-to-MAC address mapping of the destination CI, and whereby the source CI may learn the IP-to-MAC address mapping of the destination CI.
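The interface-to-MAC learning described above can be pictured with a minimal Python sketch (illustrative only; the class and field names are assumptions, not part of the patent): each L2 VNIC keeps a table mapping MAC addresses to interface identifiers, learns entries from received frames, and consults the table when forwarding.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class InterfaceToMacTable:
    """Illustrative L2 forwarding table kept by an L2 VNIC (interface-to-MAC mappings)."""
    entries: Dict[str, str] = field(default_factory=dict)  # MAC address -> interface identifier

    def learn(self, mac: str, interface_id: str) -> None:
        # Learn (or refresh) the mapping carried in a received frame's encapsulation.
        self.entries[mac] = interface_id

    def lookup(self, mac: str) -> Optional[str]:
        # Return the interface identifier for a destination MAC, or None if unknown,
        # in which case the learning flow described above (broadcast emulated as
        # serial unicast) would be triggered.
        return self.entries.get(mac)

# Example: a receiving VNIC learns the source mapping from an incoming frame,
# so a later egress lookup for that MAC succeeds.
table = InterfaceToMacTable()
table.learn("00:00:00:00:00:01", "vnic-1")
assert table.lookup("00:00:00:00:00:01") == "vnic-1"
assert table.lookup("00:00:00:00:00:02") is None
```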
Process 1200 may be performed when the source CI does not know the IP-to-MAC address mapping of the destination CI, or when that mapping is stale. (When the IP-to-MAC address mapping is known, the source CI may send packets directly.) Likewise, when the interface-to-MAC address mapping is unknown, the interface-to-MAC address learning process outlined above may be performed; when that mapping is known, the source VNIC may send the corresponding frame to the destination VNIC. Process 1200 begins at block 1202, where the source CI determines that the IP-to-MAC address mapping of the destination CI is unknown to the source CI. In some embodiments, this may include the source CI determining a destination IP address for the packet and determining that the destination IP address is not associated with a MAC address stored in a mapping table of the source CI. Alternatively, the source CI may determine that the IP-to-MAC address mapping for the destination CI is stale. In some embodiments, a mapping may be stale if it has not been updated and/or verified within a certain time limit. After determining that the IP-to-MAC address mapping of the destination CI is unknown and/or stale, the source CI initiates an ARP request for the destination IP address and sends the ARP request as an Ethernet broadcast.
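As a rough illustration of the staleness check at block 1202 (not part of the patent; the cache structure and the 300-second threshold are assumptions chosen only for the example), a source CI might decide whether to trigger the ARP flow as follows:

```python
import time

# Hypothetical IP-to-MAC cache kept by a source compute instance; STALE_AFTER_SECONDS
# is an illustrative value, not one specified by the patent.
STALE_AFTER_SECONDS = 300.0

class IpToMacCache:
    def __init__(self):
        self._entries = {}  # IP address -> (MAC address, last-verified timestamp)

    def update(self, ip: str, mac: str) -> None:
        self._entries[ip] = (mac, time.time())

    def needs_arp(self, ip: str) -> bool:
        """True if the mapping is unknown or stale, i.e., the ARP flow of process 1200 is triggered."""
        entry = self._entries.get(ip)
        if entry is None:
            return True
        _, last_verified = entry
        return (time.time() - last_verified) > STALE_AFTER_SECONDS

cache = IpToMacCache()
print(cache.needs_arp("10.0.0.5"))   # True: unknown mapping, so the CI issues an ARP request
cache.update("10.0.0.5", "00:00:00:00:00:02")
print(cache.needs_arp("10.0.0.5"))   # False: the packet can be sent directly
```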
At block 1204, a source VNIC (also referred to herein as a source interface) receives the ARP request from the source CI. The source interface knows all interfaces on the VLAN and sends the ARP request to all interfaces in the VLAN broadcast domain. As mentioned before, since the control plane knows all interfaces on the VLAN and provides this information to the interfaces within the VLAN, the source interface knows all interfaces in the VLAN and can send the ARP request to each of them. To do so, the source interface replicates the ARP request and encapsulates one copy for each interface on the VLAN. Each encapsulated ARP request includes the source CI interface identifier and the source CI MAC and IP addresses, the destination IP address, and a destination interface identifier. The source interface emulates the Ethernet broadcast by sending the replicated and encapsulated ARP requests (e.g., ARP messages) as a serial unicast to each interface in the VLAN.
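A minimal Python sketch of this broadcast emulation (illustrative only; the dataclass fields and interface identifiers are assumptions, and a real encapsulation such as GENEVE would carry additional outer headers): the ARP request is replicated and encapsulated once per interface in the VLAN, then delivered as a serial unicast.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class EncapsulatedArpRequest:
    # Illustrative fields only; the actual encapsulation carries more metadata.
    src_interface_id: str
    src_mac: str
    src_ip: str
    dst_ip: str
    dst_interface_id: str   # the interface this copy is unicast to

def emulate_arp_broadcast(src_interface_id: str, src_mac: str, src_ip: str,
                          dst_ip: str,
                          vlan_interfaces: Dict[str, str]) -> List[EncapsulatedArpRequest]:
    """Replicate one ARP request into one encapsulated copy per interface in the VLAN,
    which the physical network then delivers as a serial unicast."""
    copies = []
    for interface_id in vlan_interfaces:
        if interface_id == src_interface_id:
            continue  # no need to send the request back to the source interface
        copies.append(EncapsulatedArpRequest(src_interface_id, src_mac, src_ip,
                                             dst_ip, interface_id))
    return copies

# Interfaces in the VLAN as learned from the control plane (illustrative identifiers).
vlan = {"vnic-1": "10.0.0.1", "vnic-2": "10.0.0.2", "vsrs": "10.0.0.254"}
requests = emulate_arp_broadcast("vnic-1", "00:00:00:00:00:01", "10.0.0.1", "10.0.0.2", vlan)
print(len(requests))  # 2 unicast copies: one to vnic-2, one to the VSRS interface
```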
At block 1206, each interface in the VLAN broadcast domain receives and decapsulates the ARP message. Each interface in the VLAN broadcast domain that receives the ARP message learns the interface-to-MAC address mapping of the source CI (e.g., the mapping of the source CI's interface identifier to its MAC address), because this message identifies the source CI MAC and IP addresses and the source CI interface identifier. As part of learning the interface-to-MAC address mapping for the source CI, each interface may update its mapping table (e.g., its L2 forwarding table) and may provide the updated mapping to its associated switch and/or CI. Each recipient interface, except the VSRS, may forward the decapsulated packet to its associated CI. The CI receiving the forwarded decapsulated packet, and specifically the CI's network interface, may determine whether the target IP address matches the CI's IP address. If the IP address of the CI associated with the interface does not match the destination CI IP address, then in some embodiments the packet is discarded by the CI and no further action is taken. In the case of the VSRS, the VSRS may determine whether the target IP address matches the IP address of the VSRS. If the IP address of the VSRS does not match the destination IP address specified in the received packet, then in some embodiments the packet is discarded by the VSRS and no further action is taken.
If it is determined that the destination CI IP address specified in the received packet matches the IP address of the CI associated with the recipient interface (the destination CI), then, as indicated in block 1208, the destination CI sends a response, which may be a unicast ARP response directed to the source interface. This response includes the destination CI MAC address and the destination CI IP address, as well as the source CI IP address and MAC address. This response is received by the destination interface, which encapsulates the unicast ARP response, as indicated in block 1210. In some embodiments, this encapsulation may comprise a GENEVE encapsulation. The destination interface may forward the encapsulated ARP response to the source interface via the destination switch. The encapsulated ARP response includes the destination CI MAC and IP addresses and the destination CI interface identifier, as well as the source CI MAC and IP addresses and the source CI interface identifier.
At block 1212, the source interface receives and decapsulates the ARP response. The source interface may further learn the interface-to-MAC address mapping of the destination CI based on the information contained in the encapsulation and/or encapsulated frames. In some embodiments, the source interface may forward the ARP response to the source CI.
At block 1214, the source CI receives the ARP response. In some embodiments, the source CI may update the mapping table based on information contained in the ARP response, and in particular based on the MAC and IP addresses of the destination CI to reflect the IP-to-MAC address mapping. The source CI may then send the packet to the destination CI based on this MAC address. This packet may include the MAC address and interface identifier of the source CI as the source MAC address and source interface and the MAC address and interface identifier of the destination CI as the destination MAC address and destination interface.
At block 1216, the source interface may receive a packet from the source CI. The source interface may encapsulate the packet and in some embodiments, such encapsulation uses a GENEVE encapsulation. The source interface may forward the corresponding frame to the destination CI, in particular to the destination interface. The encapsulated frame may include the MAC address and interface identifier of the source CI as the source MAC address and source interface identifier and the MAC address and interface identifier of the destination CI as the destination MAC address and destination interface.
At block 1218, the destination interface receives the frame from the source interface. The destination interface may decapsulate the frame and may then forward the corresponding packet to the destination CI. At block 1220, the destination CI receives the packet from the destination interface.
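Blocks 1216 through 1220 can be pictured with a minimal Python sketch (illustrative only; the field layout is an assumed stand-in and is not the GENEVE wire format): the source interface wraps the packet with interface identifiers so the physical network can deliver it to the NVD hosting the destination interface, which unwraps it and forwards the packet to the destination CI.

```python
from dataclasses import dataclass

@dataclass
class L2Frame:
    src_mac: str
    dst_mac: str
    payload: bytes

@dataclass
class EncapsulatedFrame:
    # Illustrative overlay header; a real implementation would emit GENEVE options and
    # an outer header addressed to the NVD hosting the destination VNIC.
    src_interface_id: str
    dst_interface_id: str
    inner: L2Frame

def encapsulate(frame: L2Frame, src_interface_id: str, dst_interface_id: str) -> EncapsulatedFrame:
    """Block 1216 in sketch form: the source interface wraps the packet for delivery
    to the NVD hosting the destination interface."""
    return EncapsulatedFrame(src_interface_id, dst_interface_id, frame)

def decapsulate(wrapped: EncapsulatedFrame) -> L2Frame:
    """Block 1218 in sketch form: the destination interface strips the overlay header
    and forwards the inner packet to the destination compute instance."""
    return wrapped.inner

frame = L2Frame("00:00:00:00:00:01", "00:00:00:00:00:02", b"application data")
wrapped = encapsulate(frame, "vnic-1", "vnic-2")
assert decapsulate(wrapped) == frame
```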
ACL
An Access Control List (ACL) includes information regarding permissions and/or restrictions associated with access to and/or use of computing resources. In a physical layer 2 network, customers may define ACLs at the switch level for filtering frames processed by that network switch. In such physical L2 networks, a single hardware switch is typically used, and thus definition, deployment, and enforcement of ACLs occur locally at the switch. In contrast, in a virtual L2 network (e.g., any VLAN described above), a single hardware switch is not used. Instead, while the customer perceives a single switch (referred to above as a vswitch), a distributed L2 virtual switch is used that includes multiple L2 switches local to and hosted at multiple NVDs (the actual hardware that provides the hosting functionality). This distributed architecture affects how ACLs may be deployed in a virtual L2 network, and such effects may span multiple dimensions. In a first dimension, to support the customer's perception of a single switch, a translation may be performed between the customer's configuration of the L2 virtual network and ACLs and the actual distribution of the local L2 switches and associated L2 VNICs on the NVDs. In a second dimension, to support port-specific ACLs (each ACL may be specific to one or more ports emulated by the corresponding L2 VNIC(s)), a dispatch may be performed to identify and send ACLs to the relevant NVDs (e.g., the NVDs hosting the L2 VNIC(s)). In a third dimension, to support enforcement of the permissions and/or restrictions indicated by the ACL information, a local L2 switch (rather than the entire distributed L2 switch) may provide the relevant enforcement. The first dimension may be implemented by operation of a control plane of the virtual network hosting the virtual L2 network. The third dimension may be implemented by the NVDs.
FIG. 13 illustrates an example environment suitable for defining ACLs for L2 virtual networks. In an embodiment, the environment includes a computer system 1310 that communicates with a client device 1320 over one or more networks (not shown). Computer system 1310 may include a collection of hardware computing resources that host VCN 1312. The control plane hosted by one or more of the hardware computing resources may receive and process input from client device 1320 to deploy an L2 virtual network (shown as L2 VLAN1314 in fig. 13) within VCN 1312.
In an example, the input from the client device 1320 may include various types of information. This information may be specified via a console or API call and may include L2 VLAN configuration 1322 and ACL configuration 1324, as well as other customer-specified configurations.
L2 VLAN configuration 1322 may indicate, for example, the number of L2 compute instances, type(s), and configuration(s) to be included in L2 VLAN 1314. Further, the L2 VLAN configuration 1322 may indicate a customer-specific name of a port on the customer-aware vswitch, a MAC address of the L2 compute instance, and an association between the port and the MAC address (or more generally, the L2 compute instance). For example, a customer may specify that L2 VLAN1314 needs to include two L2 compute instances, a first with MAC address m.1 and associated with a first port named P1 and another with MAC address m.2 and associated with a second port named P2.
ACL configuration 1324 may indicate, for example, restrictions and/or permissions to control the flow of traffic (including frames) within L2 VLAN 1314, into L2 VLAN 1314, and/or out of L2 VLAN 1314. For example, and referring back to the example of the two computing instances above, the customer may specify that frames received at port P1 from port P2 are to be dropped. Of course, this is for illustrative purposes only, and other types of flow control may be indicated in ACL configuration 1324, as further described in the following figures. Further, ACL configuration 1324 may indicate whether ACLs are to be enforced for ingress traffic and/or egress traffic. For example, and referring back to the flow restriction on P2-to-P1 frames above, ACL configuration 1324 may indicate whether the frame is to be dropped on egress at port P2 or on ingress at port P1.
The control plane receives the various information and then deploys and manages the different resources of L2 VLAN 1314 and generates and distributes relevant ACL information to these resources. For example, L2 VLAN 1314 is configured according to L2 VLAN configuration 1322 and includes the requested L2 compute instances hosted on the host machines and the L2 VNIC-L2 virtual switch pairs hosted on the NVDs. To generate the ACL information, the control plane translates the customer definitions from the ACL configuration to the actual topology of L2 VLAN 1314. For example, each L2 VNIC emulates a port, and the control plane associates the L2 VNIC (e.g., its interface identifier, its IP address, and/or the IP address of the NVD hosting the L2 VNIC) with the name of the port and the specified MAC address. The ACL information does not use port names, but rather indicates restrictions and/or permissions by identifying the associated L2 VNICs (e.g., their interface identifiers, their IP addresses, and/or the IP addresses of the NVDs hosting the L2 VNICs). The NVD hosting the L2 VNIC to which the ACL information applies receives this ACL information so that the NVD can perform traffic flow enforcement.
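A toy Python sketch of this translation step (illustrative only; the record fields, port names, and rule schema are assumptions rather than the control plane's actual data model): customer-specified port names are rewritten into the interface identifiers of the L2 VNICs that emulate those ports.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class VnicInfo:
    # Illustrative topology record kept by the control plane for one L2 VNIC.
    interface_id: str
    mac: str
    nvd_ip: str

@dataclass
class AclRule:
    src: str        # interface identifier (translated from a customer port name)
    dst: str
    action: str     # e.g., "drop"
    direction: str  # "ingress" or "egress"

def translate_acl(customer_rules: List[dict],
                  port_to_vnic: Dict[str, VnicInfo]) -> List[AclRule]:
    """Rewrite customer-specified port names (e.g., 'P1', 'P2') into the interface
    identifiers of the L2 VNICs that emulate those ports."""
    translated = []
    for rule in customer_rules:
        translated.append(AclRule(
            src=port_to_vnic[rule["src_port"]].interface_id,
            dst=port_to_vnic[rule["dst_port"]].interface_id,
            action=rule["action"],
            direction=rule["direction"],
        ))
    return translated

topology = {
    "P1": VnicInfo("vnic-1", "m.1", "ip.nvd1"),
    "P2": VnicInfo("vnic-2", "m.2", "ip.nvd2"),
}
customer_config = [{"src_port": "P2", "dst_port": "P1", "action": "drop", "direction": "ingress"}]
print(translate_acl(customer_config, topology))
# [AclRule(src='vnic-2', dst='vnic-1', action='drop', direction='ingress')]
```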
Fig. 14 illustrates an example ACL technique in a VLAN. The top of fig. 14 illustrates an implementation view 1410 of the VLAN. The bottom of fig. 14 illustrates a customer view 1420 of the VLAN. The VLAN configuration is the same as or similar to that of fig. 11, and the commonality between these two figures is not repeated herein. In general, the control plane maintains a customer definition of the VLAN (e.g., compute instance 1 connects to port 1, etc.), a mapping of the VLAN to the actual network implementation (e.g., {customer 1, m.1→ip.1, VLAN a}), and an association between the customer definition and the mapping (e.g., port 1 corresponds to VNIC 1). The control plane may also receive input from a customer (e.g., via an API call and/or console) indicating information about the ACLs.
In an example, a client may specify an ACL for a destination port (e.g., filter frames sent to port 1), an ACL for a source port (e.g., filter frames sent from port 2), and/or a combination of both (e.g., filter frames sent from port 2 to port 1). Similarly, the client may specify an ACL for a destination overlay MAC address (e.g., filter frames with m.1 as the destination MAC address), an ACL for a source overlay MAC address (e.g., filter frames with m.2 as the source MAC address), an ACL for a destination overlay IP address (e.g., filter frames with the overlay IP address of compute instance 1 as the destination IP address), an ACL for a source overlay IP address (e.g., filter frames with the overlay IP address of compute instance 2 as the source IP address), and/or a combination of any of the four (e.g., filter frames with m.1 as the destination MAC address and m.2 as the source MAC address). The ACL may additionally or alternatively be defined based on TCP or UDP source port number(s), TCP or UDP destination port number(s), a TCP or UDP source or destination port number range, or a combination of these. ACLs may additionally or alternatively be defined based on Ethernet type (e.g., IPv4, IPv6, etc.). ACLs may additionally or alternatively be defined based on the type of destination MAC address of the traffic, such as unicast traffic for a particular MAC address, a MAC address range, multicast traffic, or broadcast traffic. ACLs may also be specified based on byte offsets within the frame header. The client may also indicate whether an ACL is to be enforced on the ingress and/or the egress.
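The match criteria listed above can be pictured with a minimal Python sketch (illustrative only; the class and field names are assumptions, and only a few of the criteria are shown): a rule matches a frame when every non-wildcard criterion is satisfied.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class FrameMeta:
    # Fields an enforcement point might extract from a frame; names are illustrative.
    src_mac: str
    dst_mac: str
    src_ip: Optional[str] = None
    dst_ip: Optional[str] = None
    ethertype: Optional[str] = None    # e.g., "IPv4", "IPv6"
    l4_dst_port: Optional[int] = None  # TCP/UDP destination port

@dataclass
class AclMatch:
    # Any criterion left as None is a wildcard; the port range is inclusive.
    src_mac: Optional[str] = None
    dst_mac: Optional[str] = None
    src_ip: Optional[str] = None
    dst_ip: Optional[str] = None
    ethertype: Optional[str] = None
    l4_dst_port_range: Optional[Tuple[int, int]] = None

    def matches(self, frame: FrameMeta) -> bool:
        if self.src_mac and frame.src_mac != self.src_mac:
            return False
        if self.dst_mac and frame.dst_mac != self.dst_mac:
            return False
        if self.src_ip and frame.src_ip != self.src_ip:
            return False
        if self.dst_ip and frame.dst_ip != self.dst_ip:
            return False
        if self.ethertype and frame.ethertype != self.ethertype:
            return False
        if self.l4_dst_port_range:
            low, high = self.l4_dst_port_range
            if frame.l4_dst_port is None or not (low <= frame.l4_dst_port <= high):
                return False
        return True

# "Filter frames with m.2 as the source MAC address and m.1 as the destination MAC address."
rule = AclMatch(src_mac="m.2", dst_mac="m.1")
print(rule.matches(FrameMeta(src_mac="m.2", dst_mac="m.1")))  # True -> frame is filtered
print(rule.matches(FrameMeta(src_mac="m.3", dst_mac="m.1")))  # False -> rule does not apply
```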
Based on this input, the mapping, and the customer definition-mapping association, the control plane generates and distributes the ACLs to the NVDs where they are to be enforced. For illustration, two examples are considered. In a first example, the customer's input indicates that frames from port 2 to port 1 are to be dropped on the ingress. In this example, the control plane determines that port 1 and port 2 correspond to VNIC 1 and VNIC 2, respectively, and that VNIC 1 and VNIC 2 are hosted on NVD 1 and NVD 2, respectively, and are associated with m.1 and m.2, respectively. Thus, the control plane generates and sends an ACL to NVD 1 (but not to NVD 2), indicating that frames with m.2 as the source MAC address and m.1 as the destination MAC address are to be discarded. Next, NVD 1 receives an encapsulated frame from NVD 2. At this point, NVD 1 cannot yet discard the encapsulated frame based on the ACL. Instead, NVD 1 processes it (e.g., decapsulates it) to extract the frame. After NVD 1 determines that the frame's destination is m.1, the frame is passed to VNIC 1. VNIC 1 then applies the ACL and discards the frame after determining that the source MAC address is m.2.
In a second example, the customer's input also indicates that frames from port 2 to port 1 are to be dropped, but this filtering is to be performed on the egress. In this example, the control plane determines that port 1 and port 2 correspond to VNIC 1 and VNIC 2, respectively, and that VNIC 1 and VNIC 2 are hosted on NVD 1 and NVD 2, respectively, and are associated with m.1 and m.2, respectively. Thus, the control plane generates an ACL and sends it to NVD 2 (but not to NVD 1), indicating that frames with m.1 as the destination MAC address and m.2 as the source MAC address are to be discarded. Next, NVD 2 receives an encapsulated frame from host 2. After decapsulation, the frame is passed to VNIC 2. VNIC 2 then applies the ACL and discards the frame after determining that the destination MAC address is m.1.
Referring now to FIG. 15, a flow diagram is shown illustrating one embodiment of a process 1500 for distributing ACL information in a layer 2 virtual network. In some embodiments, process 1500 may be performed by a control plane that manages deployment of layer 2 virtual networks on an underlying physical network.
Process 1500 begins at block 1502, where the control plane receives a customer input indicating a layer 2 VLAN configuration and an ACL configuration. In some embodiments, the client input is received from the client device via an API call and/or console.
At block 1504, the control plane generates ACL information based on the client input. In some embodiments, the control plane deploys the L2 VLAN according to the L2 VLAN configuration indicated in the customer input. Based on this deployment, the control plane determines and tracks the topology of the L2 VLAN by determining and tracking the placement of the L2 compute instances on the host machines and the placement of the L2 VNICs and L2 virtual switches on the NVDs, where the L2 VLAN includes the L2 compute instances, the L2 VNICs, and the L2 virtual switches. For each L2 compute instance, the topology information may indicate the associated L2 VNIC (and corresponding interface identifier), the associated MAC address (which may be indicated in the customer input), the associated IP address of the L2 VNIC (which may also be indicated in the customer input), and the associated IP address of the NVD hosting the L2 VNIC, as well as other attributes of the topology of the L2 VLAN. In addition, the control plane may translate the customer-specified names of ports into topology information about the corresponding L2 VNICs emulating those ports, and may then generate the ACL information by updating the customer-specified ACL configuration to use the topology information instead of the customer definitions.
At block 1506, the control plane determines that the ACL information applies to a subset of the L2 VNICs. In some embodiments, the customer may indicate an ACL for some, but not all, ports of the vswitch perceived by the customer. Furthermore, the ACLs for these ports may differ from one another. In this case, the control plane may determine the L2 VNICs corresponding to the indicated ports and may generate the relevant ACL information for each such L2 VNIC. In addition, the control plane may maintain a mapping of the specific ACL information applicable to each L2 VNIC.
At block 1508, the control plane determines that the ACL information is to be sent to a set of NVDs. In some embodiments, the control plane uses the mapping mentioned at block 1506 to determine the subset of L2 VNICs to which ACLs apply. Based on the topology information, the control plane determines the NVDs hosting those L2 VNICs. Thus, the control plane determines the specific ACL information that each NVD should receive. For example, if a first NVD hosts two L2 VNICs, each associated with its own specific ACL information, then this first NVD will receive first ACL information specific to the first L2 VNIC and second ACL information specific to the second L2 VNIC. Similarly, if a second NVD hosts an L2 VNIC associated with its own particular ACL information, the second NVD will receive this ACL information. However, if a third NVD hosts one or more L2 VNICs that are not associated with any ACL information, then the third NVD will not receive any ACL information.
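A short Python sketch of this per-NVD dispatch (illustrative only; the identifiers and data shapes are assumptions): per-VNIC ACL information is grouped by the NVD hosting each L2 VNIC, so an NVD whose hosted VNICs carry no ACLs receives nothing.

```python
from collections import defaultdict
from typing import Dict, List

def dispatch_acls(acl_by_vnic: Dict[str, List[str]],
                  vnic_to_nvd: Dict[str, str]) -> Dict[str, Dict[str, List[str]]]:
    """Group per-VNIC ACL information by the NVD that hosts each VNIC, so that each NVD
    receives only the ACL information relevant to the L2 VNICs it hosts."""
    per_nvd: Dict[str, Dict[str, List[str]]] = defaultdict(dict)
    for vnic, acl_info in acl_by_vnic.items():
        per_nvd[vnic_to_nvd[vnic]][vnic] = acl_info
    return dict(per_nvd)

# Illustrative data mirroring the example in the text: NVD 1 hosts two VNICs with their
# own ACL information, NVD 2 hosts one, and NVD 3 hosts a VNIC with no ACL and gets nothing.
acl_by_vnic = {"vnic-1a": ["rule-a"], "vnic-1b": ["rule-b"], "vnic-2": ["rule-c"]}
vnic_to_nvd = {"vnic-1a": "nvd-1", "vnic-1b": "nvd-1", "vnic-2": "nvd-2", "vnic-3": "nvd-3"}
print(dispatch_acls(acl_by_vnic, vnic_to_nvd))
# {'nvd-1': {'vnic-1a': ['rule-a'], 'vnic-1b': ['rule-b']}, 'nvd-2': {'vnic-2': ['rule-c']}}
```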
At block 1510, the control plane sends ACL information to the set of NVDs. In an example, each ACL information is sent to the associated NVD in a push mechanism or a pull mechanism during and/or after deployment of the L2 VLAN.
Referring now to FIG. 16, a flow diagram of one embodiment of a process 1600 for determining applicability of ACL information to an L2 VNIC is illustrated. In some embodiments, process 1600 may be performed by a control plane that manages deployment of layer 2 virtual networks on an underlying physical network.
The process 1600 begins at block 1602, where the control plane stores an interface identifier that associates an L2 computing instance with an L2 VNIC. In some embodiments, the interface identifier may uniquely identify the L2 VNIC and may be included in header information of a frame sent from or to the L2 computing instance.
At block 1604, the control plane receives customer input indicating a name of an L2 port. In some embodiments, customer input may be received from a customer device via an API call and/or console. The L2 VNIC may emulate an L2 port, but the customer may not be aware of this L2 VNIC.
At block 1606, the control plane determines that the L2 port corresponds to the L2 VNIC. In some embodiments, the control plane determines this correspondence from topology information maintained for the L2 VLAN including the L2 compute instance and the L2 VNIC.
At block 1608, the control plane stores an association between the L2 VNIC and ACL information applicable to the L2 port. In some embodiments, the customer input may indicate an ACL configuration applicable to the L2 port. The control plane remaps the ACL configuration to apply to the L2 VNIC based on the correspondence between the L2 ports and the L2 VNIC to generate ACL information. In addition, the control plane may store an indication that ACL information applies to the L2 VNIC. This indication may include an interface identifier, a MAC address of the L2 VNIC, and/or an IP address of the L2 VNIC. Furthermore, this indication may be stored in the ACL information itself or in a data structure that also references the ACL information.
At block 1610, the control plane generates an indication that ACL information is applicable to the L2 VNIC for the NVD hosting the L2 VNIC. In some embodiments, the control plane also stores this indication in an ACL information or data structure, for example, by including the IP address of the NVD.
Referring now to FIG. 17, a flowchart of one embodiment of a process 1700 for enforcing an ACL is illustrated. In some embodiments, process 1700 may be performed by an NVD hosting an L2 VNIC to which an ACL applies.
Process 1700 begins at block 1702, where an NVD hosts an L2 VNIC and an L2 virtual switch belonging to an L2 virtual network (e.g., VLAN) of a customer. In some embodiments, the L2 VNICs and the L2 virtual switches are associated with L2 computing instances of the L2 virtual network. This L2 compute instance may be hosted on a host machine communicatively coupled to the NVD.
At block 1704, the NVD receives and stores ACL information associated with the L2 VNIC. In some embodiments, the NVD receives the ACL information from the control plane. The ACL information may control the flow of frames from the L2 VNIC (corresponding to egress enforcement at the NVD) and/or to the L2 VNIC (corresponding to ingress enforcement at the NVD).
At block 1706, the NVD receives the frame with header information. In some embodiments, a frame may be received from an L2 compute instance via a host machine (in which case the frame is an egress frame sent from the L2 compute instance). In some embodiments, a frame may be received from another L2 compute instance via another host machine (in which case the frame is an ingress frame sent to the L2 compute instance).
At block 1708, the NVD determines that the frame is associated with an L2VNIC based on the header information. In various embodiments, the header information includes an interface identifier and/or a MAC address associated with the L2 VNIC. The NVD may use the interface identifier and/or MAC address to determine that the frame is to be processed by the L2 VNIC.
At block 1710, the NVD determines that the ACL information is applicable. In some embodiments, the ACL information indicates restrictions and/or permissions based on a MAC address. In this case, the NVD determines that the ACL information controls the flow of the frame based on this MAC address. For example, on the egress, if the source MAC address corresponds to the L2 VNIC and another limiting criterion is met (e.g., the frame has a particular destination MAC address), the ACL information may indicate that the frame cannot be sent. On the ingress, if the destination MAC address corresponds to the L2 VNIC and another limiting criterion is met (e.g., the frame has a particular source MAC address), the ACL information may indicate that the frame cannot be sent to the L2 computing instance. In some embodiments, the ACL information may not be bound to a MAC address. Instead, the ACL information may relate to IP addresses, communication protocols, etc. (e.g., the ACL information may limit the use of specifically encapsulated ingress frames). In this case, rather than using the interface identifier of the L2 VNIC to look up the ACL information, the NVD may determine how the ACL information controls the frame flow based on the relevant portion(s) of the header information (e.g., fields for encapsulation type).
At block 1712, the NVD controls the flow of the frame based on the ACL information. In some embodiments, the ACL information may indicate that the frame is to be discarded; in this case, the frame is not transmitted further. In some embodiments, the ACL information may indicate that the frame is to be sent; in this case, the frame is transmitted further. For example, on the egress, the L2 virtual switch associated with the L2 compute instance sends the frame to the relevant next L2 VNIC. On the ingress, the L2 VNIC associated with the L2 compute instance receives the frame and forwards it to the L2 compute instance. In some embodiments, the ACL information may indicate that the frame is to be processed in a particular manner before being sent (e.g., encapsulated using a particular encapsulation type, or having particular header information or payload information edited). In this case, the frame is first processed (e.g., by the L2 VNIC) before being forwarded.
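Blocks 1708 through 1712 can be summarized with a minimal Python sketch (illustrative only; the rule representation, field names, and actions are assumptions rather than an actual NVD interface): the NVD identifies the L2 VNIC from the header, applies that VNIC's ACL entries, and either drops or forwards the frame.

```python
from typing import Dict, List, Optional

# Each ACL entry pairs a match predicate with an action ("drop" or "allow").
AclEntry = Dict[str, object]

def enforce(frame_header: Dict[str, str],
            acl_by_vnic: Dict[str, List[AclEntry]]) -> Optional[Dict[str, str]]:
    """Identify the L2 VNIC the frame belongs to from its header, apply that VNIC's ACL
    entries in order, and return the header if the frame may be forwarded, or None if it
    is to be discarded."""
    vnic_id = frame_header.get("interface_id", "")
    for entry in acl_by_vnic.get(vnic_id, []):
        if entry["match"](frame_header):
            return None if entry["action"] == "drop" else frame_header
    return frame_header  # no matching restriction: forward as-is

acls = {"vnic-1": [{"match": lambda h: h.get("src_mac") == "m.2", "action": "drop"}]}
print(enforce({"interface_id": "vnic-1", "src_mac": "m.2", "dst_mac": "m.1"}, acls))  # None (dropped)
print(enforce({"interface_id": "vnic-1", "src_mac": "m.3", "dst_mac": "m.1"}, acls))  # header (forwarded)
```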
Referring now to FIG. 18, a flowchart of one embodiment of a process 1800 for enforcing an ACL is shown. In some embodiments, process 1800 may be performed by an NVD hosting an L2 VNIC to which an ACL applies. In contrast to process 1700, process 1800 may include adding the ACL information (or a portion thereof) to a frame such that enforcement of the ACL information may be performed at various nodes between a source computing instance and a destination computing instance.
The process 1800 begins at block 1802 where the NVD receives a frame with header information. In some embodiments, a frame may be received from an L2 compute instance via a host machine (in which case the frame is an egress frame sent from the L2 compute instance). In some embodiments, a frame may be received from another L2 compute instance via another host machine (in which case the frame is an ingress frame sent to the L2 compute instance).
At block 1804, the NVD determines whether the frame may be transmitted. This determination may be similar to the operations described in connection with blocks 1708-1712 above. If the frame cannot be sent (e.g., to the L2 compute instance on the ingress), then block 1806 follows block 1804, and the frame is discarded. Otherwise, block 1808 follows block 1804.
At block 1808, the NVD determines whether the ACL information applies to the next network hop(s) of the frame on the way to its destination. In some embodiments, the frame may be sent from the L2 compute instance to a destination compute instance. This destination compute instance may be located in the same VLAN as the L2 compute instance, in another VLAN, in an L2 network within the same VCN as the VLAN, in another VCN, or in some other network. The ACL information may limit destinations within any of these network boundaries (e.g., it may limit sending frames outside of the VLAN of the L2 compute instance). In this case, the NVD may determine that the ACL information applies to the next network hop(s), and block 1810 follows block 1808. Otherwise, block 1812 follows block 1808.
At block 1810, the NVD adds the relevant portion of the ACL information to the frame. For example, the NVD identifies the network boundary within which the destination may be located. This network boundary may be indicated by a network identifier, such as a VLAN tag for a VLAN, a network name, etc. In this way, when the frame arrives at the next network hop (e.g., another NVD or computing resource), the frame header may be checked to determine whether the frame can still be sent onward, whereby this next network hop may enforce the ACL information as indicated in the header information.
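A rough sketch of this boundary tagging and its enforcement at a later hop (illustrative only; the header field name and the VLAN tags are assumptions):

```python
from typing import Dict

def tag_frame_with_boundary(frame_header: Dict[str, str], allowed_vlan_tag: str) -> Dict[str, str]:
    """Block 1810 in sketch form: annotate the frame with the network boundary (here a
    VLAN tag) inside which it may be delivered, so that a downstream hop can drop it if
    the destination lies outside that boundary."""
    tagged = dict(frame_header)
    tagged["acl_allowed_boundary"] = allowed_vlan_tag  # hypothetical header field
    return tagged

def enforce_at_next_hop(frame_header: Dict[str, str], local_vlan_tag: str) -> bool:
    """A downstream NVD or other hop checks the carried boundary before forwarding."""
    boundary = frame_header.get("acl_allowed_boundary")
    return boundary is None or boundary == local_vlan_tag

header = tag_frame_with_boundary({"src_mac": "m.1", "dst_mac": "m.9"}, allowed_vlan_tag="vlan-a")
print(enforce_at_next_hop(header, "vlan-a"))  # True: still inside the allowed VLAN
print(enforce_at_next_hop(header, "vlan-b"))  # False: destination is outside the boundary, drop
```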
At block 1812, the NVD sends the frame. In some embodiments, after adding the relevant portion of ACL information, the L2 virtual switch associated with the L2 compute instance sends the frame to the relevant next L2 VNIC, as applicable.
C. Example infrastructure as a service architecture
As noted above, infrastructure as a service (IaaS) is a particular type of cloud computing. IaaS may be configured to provide virtualized computing resources over a public network (e.g., the internet). In the IaaS model, cloud computing providers may host infrastructure components (e.g., servers, storage devices, network nodes (e.g., hardware), deployment software, platform virtualization (e.g., a hypervisor layer), etc.). In some cases, the IaaS provider may also provide various services to accompany these infrastructure components (e.g., billing, monitoring, documentation, security, load balancing, clustering, etc.). Thus, as these services may be policy-driven, IaaS users may be able to implement policies to drive load balancing to maintain application availability and performance.
In some cases, the IaaS client may access resources and services through a Wide Area Network (WAN), such as the internet, and may use the cloud provider's services to install the remaining elements of the application stack. For example, a user may log onto the IaaS platform to create Virtual Machines (VMs), install an Operating System (OS) on each VM, deploy middleware such as databases, create buckets for workloads and backups, and even install enterprise software into that VM. The customer may then use the provider's services to perform various functions including balancing network traffic, solving application problems, monitoring performance, managing disaster recovery, and the like.
In most cases, the cloud computing model will require participation of the cloud provider. The cloud provider may, but need not, be a third party service that specifically provides (e.g., provisions, rents, sells) IaaS. An entity may also choose to deploy a private cloud, thereby becoming its own infrastructure service provider.
In some examples, IaaS deployment is the process of placing a new application or a new version of an application onto a prepared application server or the like. It may also include the process of preparing the server (e.g., installing libraries, daemons, etc.). This is typically managed by the cloud provider, below the hypervisor layer (e.g., servers, storage, network hardware, and virtualization). Thus, the customer may be responsible for handling the operating system (OS), middleware, and/or application deployment (e.g., on self-service virtual machines (e.g., that may be started on demand)), etc.
In some examples, IaaS provisioning may refer to obtaining computers or virtual hosts for use, and even installing the required libraries or services on them. In most cases, deployment does not include provisioning, and provisioning may need to be performed first.
In some cases, IaaS provisioning presents two different challenges. First, there is the initial challenge of provisioning the initial set of infrastructure before anything is running. Second, once everything has been provisioned, there is the challenge of evolving the existing infrastructure (e.g., adding new services, changing services, removing services, etc.). In some cases, both of these challenges may be addressed by enabling the configuration of the infrastructure to be defined declaratively. In other words, the infrastructure (e.g., which components are needed and how they interact) may be defined by one or more configuration files. Thus, the overall topology of the infrastructure (e.g., which resources depend on which, and how they work in concert) can be described declaratively. In some cases, once the topology is defined, workflows may be generated that create and/or manage the different components described in the configuration files.
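As a rough illustration of the declarative approach (not from the patent; the resource names and the use of Python's graphlib are assumptions chosen for the sketch), a configuration that lists each resource's dependencies can be turned into a provisioning workflow simply by topologically ordering the resources:

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# A toy declarative description of an infrastructure topology: each resource lists the
# resources it depends on. Resource names are illustrative, not a real configuration schema.
topology = {
    "vcn":           [],
    "subnet":        ["vcn"],
    "load_balancer": ["subnet"],
    "database":      ["subnet"],
    "app_server":    ["subnet", "database"],
}

# Generating a provisioning "workflow" from the declarative definition reduces, in this
# sketch, to ordering resources so that every dependency is created first.
workflow = list(TopologicalSorter(topology).static_order())
print(workflow)
# e.g., ['vcn', 'subnet', 'database', 'load_balancer', 'app_server']
```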
In some examples, an infrastructure may have many interconnected elements. For example, there may be one or more Virtual Private Clouds (VPCs) (e.g., potential on-demand pools of configurable and/or shared computing resources), also referred to as core networks. In some examples, one or more security group rules may also be supplied to define how to set security of the network and one or more Virtual Machines (VMs). Other infrastructure elements, such as load balancers, databases, etc., may also be supplied. As more and more infrastructure elements are desired and/or added, the infrastructure may evolve gradually.
In some cases, continuous deployment techniques may be employed to enable deployment of infrastructure code across various virtual computing environments. Furthermore, the described techniques may enable infrastructure management within these environments. In some examples, a service team may write code that is desired to be deployed to one or more, but typically many, different production environments (e.g., across various different geographic locations, sometimes across the entire world). However, in some examples, the infrastructure on which the code is to be deployed must first be set up. In some cases, provisioning may be done manually, resources may be provisioned with a provisioning tool, and/or code may be deployed with a deployment tool once the infrastructure is provisioned.
Fig. 19 is a block diagram 1900 illustrating an example schema of an IaaS architecture in accordance with at least one embodiment. The service operator 1902 may be communicatively coupled to a secure host lease 1904 that may include a Virtual Cloud Network (VCN) 1906 and a secure host subnet 1908. In some examples, the service operator 1902 may use one or more client computing devices, which may be portable handheld devices (e.g., a cellular phone, a computing tablet, a personal digital assistant (PDA)) or wearable devices (e.g., a head-mounted display), running software such as Microsoft Windows and/or a variety of mobile operating systems (such as iOS, Windows Phone, Android, BlackBerry, Palm OS, etc.) and supporting internet, email, short message service (SMS), or other communication protocols. Alternatively, the client computing devices may be general-purpose personal computers, including, for example, personal computers and/or laptop computers running various versions of Microsoft Windows, Apple Macintosh, and/or Linux operating systems. The client computing devices may be workstation computers running any of a variety of commercially available UNIX or UNIX-like operating systems, including but not limited to the various GNU/Linux operating systems such as, for example, Google Chrome OS. Alternatively or additionally, the client computing devices may be any other electronic device, such as a thin-client computer, an internet-enabled gaming system (e.g., a Microsoft Xbox game console with or without a gesture input device), and/or a personal messaging device capable of communicating over a network having access to the VCN 1906 and/or the internet.
The VCN 1906 may include a local peer-to-peer gateway (LPG) 1910 that may be communicatively coupled to a Secure Shell (SSH) VCN 1912 via an LPG 1910 contained in the SSH VCN 1912. The SSH VCN 1912 may include an SSH subnetwork 1914, and the SSH VCN 1912 may be communicatively coupled to the control plane VCN 1916 via an LPG 1910 contained in the control plane VCN 1916. Further, the SSH VCN 1912 may be communicatively coupled to the data plane VCN 1918 via the LPG 1910. The control plane VCN 1916 and the data plane VCN 1918 may be contained in a service lease 1919 that may be owned and/or operated by the IaaS provider.
The control plane VCN 1916 may include a control plane demilitarized zone (DMZ) layer 1920 that functions as a peripheral network (e.g., a portion of a corporate network between a corporate intranet and an external network). DMZ-based servers can assume limited responsibility and help control security vulnerabilities. Further, the DMZ layer 1920 may include one or more Load Balancer (LB) subnets 1922, a control plane application layer 1924 that may include application subnet(s) 1926, and a control plane data layer 1928 that may include Database (DB) subnets 1930 (e.g., front-end DB subnets and/or back-end DB subnets). The LB subnet(s) 1922 included in the control plane DMZ layer 1920 may be communicatively coupled to the application subnet(s) 1926 included in the control plane application layer 1924 and to the internet gateway 1934 that may be included in the control plane VCN 1916, and the application subnet(s) 1926 may be communicatively coupled to the DB subnet(s) 1930 included in the control plane data layer 1928 and to the serving gateway 1936 and Network Address Translation (NAT) gateway 1938. The control plane VCN 1916 may include the serving gateway 1936 and the NAT gateway 1938.
The control plane VCN 1916 may include a data plane mirror application layer 1940, which may include application subnet(s) 1926. The application subnet(s) 1926 included in the data plane mirror application layer 1940 may include Virtual Network Interface Controllers (VNICs) 1942 that may execute computing instances 1944. The computing instance 1944 may communicatively couple the application subnet(s) 1926 of the data plane mirror application layer 1940 to the application subnet(s) 1926 that may be included in the data plane application layer 1946.
The data plane VCN 1918 may include a data plane application layer 1946, a data plane DMZ layer 1948, and a data plane data layer 1950. The data plane DMZ layer 1948 may include LB subnet(s) 1922, which may be communicatively coupled to the application subnet(s) 1926 of the data plane application layer 1946 and the internet gateway 1934 of the data plane VCN 1918. Application subnet(s) 1926 can be communicatively coupled to a serving gateway 1936 of data plane VCN 1918 and NAT gateway 1938 of data plane VCN 1918. Data plane data layer 1950 may also include DB subnet(s) 1930 that may be communicatively coupled to application subnet(s) 1926 of data plane application layer 1946.
The internet gateway 1934 of the control plane VCN 1916 and the data plane VCN 1918 may be communicatively coupled to the metadata management service 1952, and the metadata management service 1952 may be communicatively coupled to the public internet 1954. Public internet 1954 may be communicatively coupled to NAT gateway 1938 of control plane VCN 1916 and data plane VCN 1918. The service gateway 1936 of the control plane VCN 1916 and the data plane VCN 1918 may be communicatively coupled to the cloud service 1956.
In some examples, the service gateway 1936 of the control plane VCN 1916 or the data plane VCN 1918 may make Application Programming Interface (API) calls to the cloud services 1956 without going through the public internet 1954. API calls from service gateway 1936 to cloud service 1956 can be unidirectional: service gateway 1936 may make API calls to cloud service 1956, and cloud service 1956 may send the requested data to service gateway 1936. However, cloud service 1956 may not initiate an API call to service gateway 1936.
In some examples, secure host lease 1904 may be directly connected to service lease 1919, and service lease 1919 may be otherwise quarantined. The secure host subnetwork 1908 can communicate with the SSH subnetwork 1914 through the LPG 1910, and the LPG 1910 can enable bi-directional communication over otherwise isolated systems. Connecting secure host subnet 1908 to SSH subnet 1914 may allow secure host subnet 1908 to access other entities within service lease 1919.
The control plane VCN 1916 may allow a user of the service lease 1919 to set or otherwise provision desired resources. The desired resources provisioned in the control plane VCN 1916 may be deployed or otherwise used in the data plane VCN 1918. In some examples, the control plane VCN 1916 may be isolated from the data plane VCN 1918, and the data plane mirror application layer 1940 of the control plane VCN 1916 may communicate with the data plane application layer 1946 of the data plane VCN 1918 via VNICs 1942, which VNICs 1942 may be contained in the data plane mirror application layer 1940 and the data plane application layer 1946.
In some examples, a user or customer of the system may make a request, such as a create, read, update, or delete (CRUD) operation, through the public internet 1954, which may communicate the request to the metadata management service 1952. The metadata management service 1952 may communicate the request to the control plane VCN 1916 through the internet gateway 1934. The request may be received by the LB subnet(s) 1922 contained in the control plane DMZ layer 1920. The LB subnet(s) 1922 may determine that the request is valid and, in response to the determination, the LB subnet(s) 1922 may transmit the request to the application subnet(s) 1926 contained in the control plane application layer 1924. If the request is validated and a call to the public internet 1954 is required, the call to the public internet 1954 may be transmitted to the NAT gateway 1938, which may make the call to the public internet 1954. Any data that the request desires to be stored may be stored in the DB subnet(s) 1930.
In some examples, data plane mirror application layer 1940 may facilitate direct communication between control plane VCN 1916 and data plane VCN 1918. For example, it may be desirable to apply changes, updates, or other suitable modifications to the configuration to the resources contained in the data plane VCN 1918. Via the VNIC 1942, the control plane VCN 1916 may communicate directly with resources contained in the data plane VCN 1918 and thus may perform changes, updates, or other suitable modifications to the configuration.
In some embodiments, control plane VCN 1916 and data plane VCN 1918 may be included in service lease 1919. In this case, a user or customer of the system may not own or operate the control plane VCN 1916 or the data plane VCN 1918. Alternatively, the IaaS provider may own or operate the control plane VCN 1916 and the data plane VCN 1918, both of which may be contained in the service lease 1919. This embodiment may enable isolation of networks that may prevent a user or customer from interacting with other users or other customers' resources. Furthermore, this embodiment may allow users or clients of the system to store databases privately without relying on the public internet 1954 for storage that may not have the desired level of security.
In other embodiments, LB subnet(s) 1922 contained in the control plane VCN 1916 may be configured to receive signals from the service gateway 1936. In this embodiment, the control plane VCN 1916 and the data plane VCN 1918 may be configured to be invoked by customers of the IaaS provider without invoking the public internet 1954. This embodiment may be desirable to customers of the IaaS provider because the database(s) used by the customers may be controlled by the IaaS provider and may be stored on the service lease 1919, which may be isolated from the public internet 1954.
Fig. 20 is a block diagram 2000 illustrating another example mode of the IaaS architecture in accordance with at least one embodiment. The service operator 2002 (e.g., the service operator 1902 of fig. 19) can be communicatively coupled to a secure host lease 2004 (e.g., the secure host lease 1904 of fig. 19), which secure host lease 2004 can include a Virtual Cloud Network (VCN) 2006 (e.g., the VCN 1906 of fig. 19) and a secure host subnet 2008 (e.g., the secure host subnet 1908 of fig. 19). The VCN 2006 may include a Local Peer Gateway (LPG) 2010 (e.g., LPG 1910 of fig. 19) that may be communicatively coupled to a Secure Shell (SSH) VCN 2012 (e.g., SSH VCN 1912 of fig. 19) via an LPG 1910 contained in the SSH VCN 2012. The SSH VCN 2012 may include an SSH subnetwork 2014 (e.g., SSH subnetwork 1914 of fig. 19), and the SSH VCN 2012 may be communicatively coupled to the control plane VCN 2016 (e.g., control plane VCN 1916 of fig. 19) via an LPG 2010 contained in the control plane VCN 2016. The control plane VCN 2016 may be included in a service lease 2019 (e.g., service lease 1919 of fig. 19), and the data plane VCN 2018 (e.g., data plane VCN 1918 of fig. 19) may be included in a customer lease 2021 that may be owned or operated by a user or customer of the system.
The control plane VCN 2016 may include a control plane DMZ layer 2020 (e.g., control plane DMZ layer 1920 of fig. 19) that may include LB subnet(s) 2022 (e.g., LB subnet(s) 1922 of fig. 19), a control plane application layer 2024 (e.g., control plane application layer 1924 of fig. 19) that may include application subnet(s) 2026 (e.g., similar to application subnet(s) 1926 of fig. 19), and a control plane data layer 2028 (e.g., control plane data layer 1928 of fig. 19) that may include Database (DB) subnet(s) 2030 (e.g., similar to DB subnet(s) 1930 of fig. 19). The LB subnet(s) 2022 contained in the control plane DMZ layer 2020 may be communicatively coupled to the application subnet(s) 2026 contained in the control plane application layer 2024 and to the internet gateway 2034 (e.g., the internet gateway 1934 of fig. 19) that may be contained in the control plane VCN 2016, and the application subnet(s) 2026 may be communicatively coupled to the DB subnet(s) 2030 contained in the control plane data layer 2028 and to the service gateway 2036 (e.g., the service gateway of fig. 19) and the Network Address Translation (NAT) gateway 2038 (e.g., the NAT gateway 1938 of fig. 19). The control plane VCN 2016 may include the service gateway 2036 and the NAT gateway 2038.
The control plane VCN 2016 may include a data plane mirror application layer 2040 (e.g., data plane mirror application layer 1940 of fig. 19), and the data plane mirror application layer 2040 may include application subnet(s) 2026. The application subnet(s) 2026 contained in the data plane mirror application layer 2040 can include a Virtual Network Interface Controller (VNIC) 2042 (e.g., VNIC 1942 of fig. 19) that can execute a computing instance 2044 (e.g., similar to computing instance 1944 of fig. 19). The compute instance 2044 may facilitate communication between the application subnet(s) 2026 of the data plane mirror application layer 2040 and the application subnet(s) 2026 that may be included in the data plane application layer 2046 (e.g., the data plane application layer 1946 of fig. 19) via the VNIC 2042 included in the data plane mirror application layer 2040 and the VNIC 2042 included in the data plane application layer 2046.
The internet gateway 2034 included in the control plane VCN 2016 may be communicatively coupled to a metadata management service 2052 (e.g., the metadata management service 1952 of fig. 19), and the metadata management service 2052 may be communicatively coupled to a public internet 2054 (e.g., the public internet 1954 of fig. 19). Public internet 2054 may be communicatively coupled to NAT gateway 2038 included in control plane VCN 2016. The service gateway 2036 included in the control plane VCN 2016 may be communicatively coupled to a cloud service 2056 (e.g., cloud service 1956 of fig. 19).
In some examples, the data plane VCN 2018 may be included in the customer lease 2021. In this case, the IaaS provider may provide a control plane VCN 2016 for each customer, and the IaaS provider may set a unique computing instance 2044 for each customer that is contained in the service lease 2019. Each computing instance 2044 may allow communication between a control plane VCN 2016 contained in service lease 2019 and a data plane VCN 2018 contained in customer lease 2021. Computing instance 2044 may allow resources provisioned in control plane VCN 2016 contained in service lease 2019 to be deployed or otherwise used in data plane VCN 2018 contained in customer lease 2021.
In other examples, a customer of the IaaS provider may have a database residing in the customer lease 2021. In this example, the control plane VCN 2016 may include a data plane mirror application layer 2040, which may include application subnet(s) 2026. The data plane mirror application layer 2040 may reside in the control plane VCN 2016, but the data plane mirror application layer 2040 may not reside in the data plane VCN 2018. That is, the data plane mirror application layer 2040 may access the customer lease 2021, but the data plane mirror application layer 2040 may not exist in the data plane VCN 2018 or be owned or operated by the customer of the IaaS provider. The data plane mirror application layer 2040 may be configured to make calls to the data plane VCN 2018, but may not be configured to make calls to any entity contained in the control plane VCN 2016. The customer may desire to deploy or otherwise use, in the data plane VCN 2018, resources provisioned in the control plane VCN 2016, and the data plane mirror application layer 2040 may facilitate the customer's desired deployment or other use of the resources.
In some embodiments, the customer of the IaaS provider may apply the filter to the data plane VCN 2018. In this embodiment, the customer may determine what the data plane VCN 2018 may access, and the customer may restrict access to the public internet 2054 from the data plane VCN 2018. The IaaS provider may not be able to apply filters or otherwise control access of the data plane VCN 2018 to any external networks or databases. Application of filters and controls by customers to the data plane VCN 2018 contained in the customer lease 2021 may help isolate the data plane VCN 2018 from other customers and the public internet 2054.
In some embodiments, cloud service 2056 may be invoked by service gateway 2036 to access services that may not exist on public internet 2054, control plane VCN 2016, or data plane VCN 2018. The connection between the cloud service 2056 and the control plane VCN 2016 or the data plane VCN 2018 may not be real-time or continuous. Cloud service 2056 may reside on a different network owned or operated by the IaaS provider. The cloud service 2056 may be configured to receive calls from the service gateway 2036 and may be configured to not receive calls from the public internet 2054. Some cloud services 2056 may be isolated from other cloud services 2056, and the control plane VCN 2016 may be isolated from cloud services 2056 that may not be in the same area as the control plane VCN 2016. For example, the control plane VCN 2016 may be located in "zone 1", and the cloud service "deployment 19" may be located in both zone 1 and "zone 2". If a service gateway 2036 contained in control plane VCN 2016 located in zone 1 makes a call to deployment 19, the call may be transmitted to deployment 19 in zone 1. In this example, the control plane VCN 2016 or deployment 19 in zone 1 may not be communicatively coupled or otherwise in communication with deployment 19 in zone 2.
Fig. 21 is a block diagram 2100 illustrating another example mode of the IaaS architecture in accordance with at least one embodiment. Service operator 2102 (e.g., service operator 1902 of fig. 19) can be communicatively coupled to a secure host lease 2104 (e.g., secure host lease 1904 of fig. 19), which secure host lease 2104 can include a Virtual Cloud Network (VCN) 2106 (e.g., VCN 1906 of fig. 19) and a secure host subnet 2108 (e.g., secure host subnet 1908 of fig. 19). The VCN 2106 may include an LPG 2110 (e.g., the LPG 1910 of fig. 19) that may be communicatively coupled to the SSH VCN 2112 (e.g., the SSH VCN 1912 of fig. 19) via the LPG 2110 contained in the SSH VCN 2112. The SSH VCN 2112 may include an SSH subnetwork 2114 (e.g., SSH subnetwork 1914 of fig. 19), and the SSH VCN 2112 may be communicatively coupled to the control plane VCN 2116 (e.g., control plane VCN 1916 of fig. 19) via an LPG 2110 contained in the control plane VCN 2116 and to the data plane VCN 2118 (e.g., data plane 1918 of fig. 19) via an LPG 2110 contained in the data plane VCN 2118. The control plane VCN 2116 and the data plane VCN 2118 may be included in a service lease 2119 (e.g., service lease 1919 of fig. 19).
The control plane VCN 2116 may include a control plane DMZ layer 2120 (e.g., control plane DMZ layer 1920 of fig. 19) that may include Load Balancer (LB) subnet(s) 2122 (e.g., LB subnet(s) 1922 of fig. 19), a control plane application layer 2124 (e.g., control plane application layer 1924) that may include application subnet(s) 2126 (e.g., similar to application subnet(s) 1926 of fig. 19), and a control plane data layer 2128 (e.g., control plane data layer 1928) that may include DB subnet(s) 2130. The LB subnet(s) 2122 contained in the control plane DMZ layer 2120 may be communicatively coupled to the application subnet(s) 2126 contained in the control plane application layer 2124 and the internet gateway 2134 (e.g., internet gateway 1934 of fig. 19) that may be contained in the control plane VCN 2116, and the application subnet(s) 2126 may be communicatively coupled to the DB subnet(s) 2130 and the serving gateway 2136 (e.g., serving gateway of fig. 19) and Network Address Translation (NAT) gateway 2138 (e.g., gateway 1938 of fig. 19) contained in the control plane data layer 2128. The control plane VCN 2116 may include a serving gateway 2136 and a NAT gateway 2138.
The data plane VCN 2118 may include a data plane application layer 2146 (e.g., data plane application layer 1946 of fig. 19), a data plane DMZ layer 2148 (e.g., data plane DMZ layer 1948 of fig. 19), and a data plane data layer 2150 (e.g., data plane data layer 1950 of fig. 19). The data plane DMZ layer 2148 may include LB subnet(s) 2122 that may be communicatively coupled to the trusted application subnet(s) 2160 and untrusted application subnet(s) 2162 of the data plane application layer 2146 and to the internet gateway 2134 contained in the data plane VCN 2118. The trusted application subnet(s) 2160 may be communicatively coupled to the service gateway 2136 contained in the data plane VCN 2118, the NAT gateway 2138 contained in the data plane VCN 2118, and the DB subnet(s) 2130 contained in the data plane data layer 2150. The untrusted application subnet(s) 2162 may be communicatively coupled to the service gateway 2136 contained in the data plane VCN 2118 and the DB subnet(s) 2130 contained in the data plane data layer 2150. The data plane data layer 2150 may include DB subnet(s) 2130 that may be communicatively coupled to the service gateway 2136 contained in the data plane VCN 2118.
The untrusted application subnet(s) 2162 may include one or more primary VNICs 2164 (1) - (N) that may be communicatively coupled to tenant Virtual Machines (VMs) 2166 (1) - (N). Each tenant VM 2166 (1) - (N) may be communicatively coupled to a respective application subnet 2167 (1) - (N) that may be included in a respective container egress VCN 2168 (1) - (N), which may be included in a respective customer lease 2170 (1) - (N). The respective auxiliary VNICs 2172 (1) - (N) may facilitate communications between the untrusted application subnet(s) 2162 contained in the data plane VCN 2118 and the application subnets contained in the container egress VCNs 2168 (1) - (N). Each container egress VCN 2168 (1) - (N) may include a NAT gateway 2138, which NAT gateway 2138 may be communicatively coupled to the public internet 2154 (e.g., public internet 1954 of fig. 19).
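Purely as an illustrative and hypothetical sketch (the data model and identifier names below are assumptions, not part of the disclosure), the wiring described above can be summarized as a per-VM record tying the primary VNIC in the untrusted subnet to the auxiliary VNIC, container egress VCN, and customer lease that carry its egress traffic.

```python
# Hypothetical data model for the wiring described above: each tenant VM in the
# untrusted subnet has a primary VNIC, and a secondary (auxiliary) VNIC connects
# it to an app subnet in a per-customer container egress VCN.
from dataclasses import dataclass

@dataclass
class TenantVMAttachment:
    vm_id: str
    primary_vnic: str          # attached to the untrusted app subnet
    auxiliary_vnic: str        # attached to the container egress VCN's app subnet
    container_egress_vcn: str  # lives in the customer's own lease
    customer_lease: str

attachments = [
    TenantVMAttachment(f"vm-2166-{i}", f"vnic-2164-{i}", f"vnic-2172-{i}",
                       f"egress-vcn-2168-{i}", f"lease-2170-{i}")
    for i in range(1, 4)
]

def egress_path(att: TenantVMAttachment) -> str:
    """Traffic leaving the VM for the public internet exits via the auxiliary
    VNIC and the egress VCN's NAT gateway."""
    return (f"{att.vm_id} -> {att.auxiliary_vnic} -> "
            f"{att.container_egress_vcn} -> NAT -> public internet")

for att in attachments:
    print(egress_path(att))
```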
The internet gateway 2134 included in the control plane VCN 2116 and included in the data plane VCN 2118 may be communicatively coupled to a metadata management service 2152 (e.g., the metadata management system 1952 of fig. 19), which metadata management service 2152 may be communicatively coupled to the public internet 2154. Public internet 2154 may be communicatively coupled to NAT gateway 2138 contained in control plane VCN 2116 and in data plane VCN 2118. The service gateway 2136 included in the control plane VCN 2116 and in the data plane VCN 2118 may be communicatively coupled to the cloud service 2156.
In some embodiments, the data plane VCN 2118 may be integrated with customer lease 2170. Such integration may be useful or desirable to customers of the IaaS provider in some cases, such as when support is desired while code is executing. The customer may provide code to run that may be destructive, may communicate with other customer resources, or may otherwise cause undesirable effects. In response, the IaaS provider may determine whether to run the code given to it by the customer.
In some examples, a customer of the IaaS provider may grant temporary network access to the IaaS provider and request a function to be attached to the data plane application layer 2146. Code to run the function may be executed in VMs 2166 (1) - (N), and the code may not be configured to run anywhere else on the data plane VCN 2118. Each VM 2166 (1) - (N) may be connected to a customer lease 2170. Respective containers 2171 (1) - (N) contained in VMs 2166 (1) - (N) may be configured to run the code. In this case, there may be dual isolation (e.g., containers 2171 (1) - (N) running the code, where the containers 2171 (1) - (N) may be contained at least in the VMs 2166 (1) - (N) contained in untrusted application subnet(s) 2162), which may help prevent incorrect or otherwise undesirable code from damaging the network of the IaaS provider or the network of a different customer. Containers 2171 (1) - (N) may be communicatively coupled to the customer lease 2170 and may be configured to transmit data to or receive data from the customer lease 2170. Containers 2171 (1) - (N) may not be configured to transmit data to or receive data from any other entity in the data plane VCN 2118. After running of the code is complete, the IaaS provider may terminate or otherwise dispose of containers 2171 (1) - (N).
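As a hypothetical illustration of the dual-isolation lifecycle described above (none of the classes or identifiers below come from the disclosure), customer code might be run only inside a container that is itself confined to a VM on the untrusted subnet, restricted to exchanging data with its own customer lease, and disposed of when the run completes.

```python
# Hypothetical sketch of the dual-isolation lifecycle: customer code runs only
# inside a container, which runs only inside a VM on the untrusted subnet, and
# the container is torn down when the run finishes. No real container/VM API is used.
class IsolatedRun:
    def __init__(self, vm_id: str, container_id: str, customer_lease: str):
        self.vm_id = vm_id
        self.container_id = container_id
        self.customer_lease = customer_lease
        self.allowed_peers = {customer_lease}  # may talk to its own lease only

    def can_send_to(self, peer: str) -> bool:
        # The container may exchange data with the customer lease, but not with
        # any other entity in the data plane VCN.
        return peer in self.allowed_peers

    def run(self, customer_code) -> None:
        try:
            customer_code()          # executes inside the container only
        finally:
            self.dispose()           # provider disposes of the container afterwards

    def dispose(self) -> None:
        print(f"tearing down {self.container_id} in {self.vm_id}")

run = IsolatedRun("vm-2166-1", "container-2171-1", "lease-2170-1")
print(run.can_send_to("lease-2170-1"))   # True
print(run.can_send_to("data-plane-db"))  # False
run.run(lambda: print("customer function executing"))
```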
In some embodiments, trusted application subnet(s) 2160 may run code that may be owned or operated by the IaaS provider. In this embodiment, trusted application subnet(s) 2160 may be communicatively coupled to DB subnet(s) 2130 and configured to perform CRUD operations in DB subnet(s) 2130. Untrusted application subnet(s) 2162 may be communicatively coupled to DB subnet(s) 2130, but in this embodiment, the untrusted application subnet(s) may be configured to perform read operations in DB subnet(s) 2130. Containers 2171 (1) - (N), which may be contained in the VMs 2166 (1) - (N) of each customer and which may run code from the customer, may not be communicatively coupled with DB subnet(s) 2130.
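A minimal, assumed policy table can make this contrast concrete; the structure below is illustrative only and does not reflect an actual implementation.

```python
# Hypothetical policy table for database access from the two application
# subnets: trusted subnets get full CRUD, untrusted subnets perform reads, and
# customer-run containers get no DB access at all.
DB_ACCESS_POLICY = {
    "trusted-app-subnet-2160":   {"create", "read", "update", "delete"},
    "untrusted-app-subnet-2162": {"read"},
    "customer-container-2171":   set(),   # not communicatively coupled to the DB
}

def is_allowed(source: str, operation: str) -> bool:
    return operation in DB_ACCESS_POLICY.get(source, set())

assert is_allowed("trusted-app-subnet-2160", "delete")
assert is_allowed("untrusted-app-subnet-2162", "read")
assert not is_allowed("untrusted-app-subnet-2162", "update")
assert not is_allowed("customer-container-2171", "read")
```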
In other embodiments, the control plane VCN 2116 and the data plane VCN 2118 may not be directly communicatively coupled. In this embodiment, there may be no direct communication between the control plane VCN 2116 and the data plane VCN 2118. However, communication may occur indirectly through at least one method. The LPG 2110 may be established by an IaaS provider, which may facilitate communication between the control plane VCN 2116 and the data plane VCN 2118. In another example, the control plane VCN 2116 or the data plane VCN 2118 may invoke the cloud service 2156 via the service gateway 2136. For example, a call from the control plane VCN 2116 to the cloud service 2156 may include a request for a service that may communicate with the data plane VCN 2118.
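Purely as an illustrative sketch (the path names and guard function are assumptions, not part of the disclosure), the "no direct coupling" rule above can be thought of as a routing check that rejects direct calls and permits only the provider-established indirect paths.

```python
# Hypothetical guard for indirect-only communication between the control plane
# VCN and the data plane VCN: either via a provider-established LPG or via a
# cloud service reached through the service gateway.
from typing import Optional

ALLOWED_INDIRECT_PATHS = {
    ("control-plane-vcn-2116", "data-plane-vcn-2118"): [
        "lpg-2110",
        "service-gateway-2136 -> cloud-service-2156",
    ],
}

def route(src: str, dst: str, via: Optional[str] = None) -> str:
    """Reject direct calls; allow only the provider-established indirect paths."""
    if via is None:
        raise PermissionError(f"direct communication {src} -> {dst} is not permitted")
    if via not in ALLOWED_INDIRECT_PATHS.get((src, dst), []):
        raise PermissionError(f"{via} is not an allowed path from {src} to {dst}")
    return f"{src} -> {via} -> {dst}"

print(route("control-plane-vcn-2116", "data-plane-vcn-2118", via="lpg-2110"))
```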
Fig. 22 is a block diagram 2200 illustrating another example mode of an IaaS architecture in accordance with at least one embodiment. The service operator 2202 (e.g., the service operator 1902 of fig. 19) may be communicatively coupled to a secure host lease 2204 (e.g., the secure host lease 1904 of fig. 19), which secure host lease 2204 may include a Virtual Cloud Network (VCN) 2206 (e.g., the VCN 1906 of fig. 19) and a secure host subnet 2208 (e.g., the secure host subnet 1908 of fig. 19). The VCN 2206 may include an LPG 2210 (e.g., the LPG 1910 of fig. 19), which LPG 2210 may be communicatively coupled to the SSH VCN 2212 via the LPG 2210 contained in the SSH VCN 2212 (e.g., the SSH VCN 1912 of fig. 19). The SSH VCN 2212 may include an SSH subnetwork 2214 (e.g., SSH subnetwork 1914 of fig. 19), and the SSH VCN 2212 may be communicatively coupled to the control plane VCN 2216 (e.g., control plane VCN 1916 of fig. 19) via an LPG 2210 contained in the control plane VCN 2216 and to the data plane VCN 2218 (e.g., data plane 1918 of fig. 19) via an LPG 2210 contained in the data plane VCN 2218. Control plane VCN 2216 and data plane VCN 2218 may be included in a service lease 2219 (e.g., service lease 1919 of fig. 19).
The control plane VCN 2216 may include a control plane DMZ layer 2220 (e.g., control plane DMZ layer 1920 of fig. 19) that may include LB subnet(s) 2222 (e.g., LB subnet(s) 1922 of fig. 19), a control plane application layer 2224 (e.g., control plane application layer 1924 of fig. 19) that may include application subnet(s) 2226 (e.g., application subnet(s) 1926), and a control plane data layer 2228 (e.g., control plane data layer 1928 of fig. 19) that may include DB subnet(s) 2230 (e.g., DB subnet(s) 2130 of fig. 21). The LB subnet(s) 2222 included in the control plane DMZ layer 2220 may be communicatively coupled to the application subnet(s) 2226 included in the control plane application layer 2224 and the internet gateway 2234 (e.g., the internet gateway 1934 of fig. 19) that may be included in the control plane VCN 2216, and the application subnet(s) 2226 may be communicatively coupled to the DB subnet(s) 2230 and the service gateway 2236 (e.g., the service gateway of fig. 19) and the Network Address Translation (NAT) gateway 2238 (e.g., the gateway 1938 of fig. 19) included in the control plane data layer 2228. The control plane VCN 2216 may include a service gateway 2236 and a NAT gateway 2238.
The data plane VCN 2218 may include a data plane application layer 2246 (e.g., data plane application layer 1946 of fig. 19), a data plane DMZ layer 2248 (e.g., data plane DMZ layer 1948 of fig. 19), and a data plane data layer 2250 (e.g., data plane data layer 1950 of fig. 19). The data plane DMZ layer 2248 may include LB subnet(s) 2222 that may be communicatively coupled to the trusted application subnet(s) 2260 (e.g., trusted application subnet(s) 2160 of fig. 21) and untrusted application subnet(s) 2262 (e.g., untrusted application subnet(s) 2162 of fig. 21) of the data plane application layer 2246 and to the internet gateway 2234 contained in the data plane VCN 2218. The trusted application subnet(s) 2260 may be communicatively coupled to the service gateway 2236 contained in the data plane VCN 2218, the NAT gateway 2238 contained in the data plane VCN 2218, and the DB subnet(s) 2230 contained in the data plane data layer 2250. The untrusted application subnet(s) 2262 may be communicatively coupled to the service gateway 2236 contained in the data plane VCN 2218 and the DB subnet(s) 2230 contained in the data plane data layer 2250. The data plane data layer 2250 may include DB subnet(s) 2230 that may be communicatively coupled to the service gateway 2236 contained in the data plane VCN 2218.
The untrusted application subnet(s) 2262 may include primary VNICs 2264 (1) - (N) that may be communicatively coupled to tenant Virtual Machines (VMs) 2266 (1) - (N) residing within the untrusted application subnet(s) 2262. Each tenant VM 2266 (1) - (N) may run code in a respective container 2267 (1) - (N) and be communicatively coupled to an application subnet 2226 that may be included in a data plane application layer 2246 included in a container egress VCN 2268. The respective auxiliary VNICs 2272 (1) - (N) may facilitate communications between the untrusted application subnet(s) 2262 contained in the data plane VCN 2218 and the application subnets contained in the container egress VCN 2268. The container egress VCN 2268 may include a NAT gateway 2238 that may be communicatively coupled to the public internet 2254 (e.g., public internet 1954 of fig. 19).
The internet gateway 2234 contained in the control plane VCN 2216 and in the data plane VCN 2218 may be communicatively coupled to a metadata management service 2252 (e.g., the metadata management system 1952 of fig. 19), which metadata management service 2252 may be communicatively coupled to the public internet 2254. Public internet 2254 may be communicatively coupled to NAT gateway 2238 contained in control plane VCN 2216 and contained in data plane VCN 2218. The service gateway 2236 contained in the control plane VCN 2216 and contained in the data plane VCN 2218 may be communicatively coupled to the cloud service 2256.
In some examples, the pattern illustrated by the architecture of block diagram 2200 of fig. 22 may be considered an exception to the pattern illustrated by the architecture of block diagram 2100 of fig. 21, and such a pattern may be desirable to a customer of the IaaS provider if the IaaS provider cannot directly communicate with the customer (e.g., in a disconnected region). The customers may access the respective containers 2267 (1) - (N) contained in each customer's VM 2266 (1) - (N) in real-time. The containers 2267 (1) - (N) may be configured to invoke respective auxiliary VNICs 2272 (1) - (N) contained in the application subnet(s) 2226 of the data plane application layer 2246, which data plane application layer 2246 may be contained in the container egress VCN 2268. The auxiliary VNICs 2272 (1) - (N) may transmit the call to the NAT gateway 2238, and the NAT gateway 2238 may transmit the call to the public internet 2254. In this example, the containers 2267 (1) - (N), which may be accessed by the customers in real-time, may be isolated from the control plane VCN 2216 and may be isolated from other entities contained in the data plane VCN 2218. The containers 2267 (1) - (N) may also be isolated from the resources of other customers.
In other examples, a customer may use containers 2267 (1) - (N) to invoke cloud service 2256. In this example, a customer may run code in containers 2267 (1) - (N) that requests services from cloud service 2256. The containers 2267 (1) - (N) may transmit the request to the auxiliary VNICs 2272 (1) - (N), and the auxiliary VNICs 2272 (1) - (N) may transmit the request to the NAT gateway, which may transmit the request to the public internet 2254. The public internet 2254 may transmit the request to the LB subnet(s) 2222 contained in the control plane VCN 2216 via the internet gateway 2234. In response to determining that the request is valid, the LB subnet(s) may transmit the request to the application subnet(s) 2226, which application subnet(s) 2226 may transmit the request to the cloud service 2256 via the service gateway 2236.
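The hop-by-hop path described above can be illustrated with the following hypothetical trace; the hop names mirror the reference numerals of fig. 22, but the code itself is only an assumed sketch and not part of the disclosure.

```python
# Hypothetical trace of the call path from a customer-run container out to a
# cloud service, as described above for FIG. 22.
REQUEST_PATH = [
    "container-2267",          # customer code issues the request
    "auxiliary-vnic-2272",     # in the app subnet of the container egress VCN
    "nat-gateway-2238",        # egress to the public internet
    "public-internet-2254",
    "internet-gateway-2234",   # entry into the control plane VCN
    "lb-subnet-2222",          # validates the request
    "app-subnet-2226",
    "service-gateway-2236",
    "cloud-service-2256",
]

def trace(path, validate=lambda hop: True):
    """Forward the request hop by hop, stopping if any hop rejects it."""
    for hop in path:
        if not validate(hop):
            raise PermissionError(f"request rejected at {hop}")
        print(f"forwarding via {hop}")

trace(REQUEST_PATH)
```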
It should be appreciated that the IaaS architecture 1900, 2000, 2100, 2200 depicted in the figures may have other components than those depicted. Additionally, the embodiments shown in the figures are merely some examples of cloud infrastructure systems that may incorporate embodiments of the present disclosure. In some other embodiments, the IaaS system may have more or fewer components than shown in the figures, may combine two or more components, or may have a different configuration or arrangement of components.
In certain embodiments, the IaaS system described herein may include application suites, middleware, and database service products that are delivered to customers in a self-service, subscription-based, elastically extensible, reliable, highly available, and secure manner. An example of such an IaaS system is the Oracle Cloud Infrastructure (OCI) offered by the present assignee.
FIG. 23 illustrates an example computer system 2300 in which various embodiments can be implemented. The system 2300 may be used to implement any of the computer systems described above. As shown, computer system 2300 includes a processing unit 2304 that communicates with a plurality of peripheral subsystems via a bus subsystem 2302. These peripheral subsystems may include a processing acceleration unit 2306, an I/O subsystem 2308, a storage subsystem 2318, and a communication subsystem 2324. Storage subsystem 2318 includes tangible computer-readable storage media 2322 and system memory 2310.
Bus subsystem 2302 provides a mechanism for letting the various components and subsystems of computer system 2300 communicate with each other as intended. Although bus subsystem 2302 is shown schematically as a single bus, alternative embodiments of the bus subsystem may utilize multiple buses. The bus subsystem 2302 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. Such architectures can include Industry Standard Architecture (ISA) bus, micro Channel Architecture (MCA) bus, enhanced ISA (EISA) bus, video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus, which can be implemented as Mezzanine bus manufactured by the IEEE P1386.1 standard, for example.
The processing unit 2304, which may be implemented as one or more integrated circuits (e.g., a conventional microprocessor or microcontroller), controls the operation of the computer system 2300. One or more processors may be included in the processing unit 2304. These processors may include single-core or multi-core processors. In some embodiments, processing unit 2304 may be implemented as one or more separate processing units 2332 and/or 2334, where a single-core or multi-core processor is included in each processing unit. In other embodiments, processing unit 2304 may also be implemented as a four-core processing unit formed by integrating two dual-core processors into a single chip.
In various embodiments, the processing unit 2304 may execute various programs in response to program code and may maintain multiple concurrently executing programs or processes. At any given time, some or all of the program code to be executed may reside within the processor(s) 2304 and/or within the storage subsystem 2318. The processor(s) 2304 may provide the various functions described above by being suitably programmed. The computer system 2300 may additionally include a processing acceleration unit 2306, which may include a Digital Signal Processor (DSP), a special-purpose processor, and so forth.
The I/O subsystem 2308 may include user interface input devices and user interface output devices. The user interface input devices may include a keyboard, a pointing device such as a mouse or trackball, a touch pad or touch screen incorporated into a display, a scroll wheel, a click wheel, dials, buttons, switches, a keypad, an audio input device with a voice command recognition system, a microphone, and other types of input devices. The user interface input devices may also include, for example, motion sensing and/or gesture recognition devices, such as the Microsoft Kinect motion sensor, which enables a user to control and interact with an input device, such as the Microsoft Xbox 360 game controller, through a natural user interface using gestures and spoken commands. The user interface input devices may also include eye gesture recognition devices, such as the Google Glass blink detector, which detects eye activity from a user (e.g., "blinking" while taking pictures and/or making a menu selection) and converts the eye gestures into input to an input device (e.g., Google Glass). Furthermore, the user interface input devices may include voice recognition sensing devices that enable a user to interact with a voice recognition system (e.g., the Siri navigator) through voice commands.
User interface input devices may also include, but are not limited to, three-dimensional (3D) mice, joysticks or sticks, game pads and drawing tablets, as well as audio/video devices such as speakers, digital cameras, digital video cameras, portable media players, webcams, image scanners, fingerprint scanners, bar code reader 3D scanners, 3D printers, laser rangefinders and gaze tracking devices. Further, the user interface input device may comprise, for example, a medical imaging input device, such as a computed tomography, magnetic resonance imaging, positron emission tomography, or medical ultrasound device. The user interface input device may also include, for example, an audio input device such as a MIDI keyboard, digital musical instrument, or the like.
The user interface output device may include a display subsystem, an indicator light, or a non-visual display such as an audio output device, or the like. The display subsystem may be a Cathode Ray Tube (CRT), a flat panel device such as one using a Liquid Crystal Display (LCD) or a plasma display, a projection device, a touch screen, or the like. In general, use of the term "output device" is intended to include all possible types of devices and mechanisms for outputting information from the computer system 2300 to a user or other computer. For example, user interface output devices may include, but are not limited to, various display devices that visually convey text, graphics, and audio/video information, such as monitors, printers, speakers, headphones, car navigation systems, plotters, voice output devices, and modems.
The computer system 2300 may include a storage subsystem 2318, shown as being currently located in system memory 2310, that contains software elements. The system memory 2310 may store program instructions that are loadable and executable on the processing unit 2304, as well as data generated during the execution of these programs.
Depending on the configuration and type of computer system 2300, system memory 2310 may be volatile (such as Random Access Memory (RAM)) and/or non-volatile (such as read-only memory (ROM), flash memory, etc.). RAM typically contains data and/or program modules that are immediately accessible to and/or presently being operated on and executed by processing unit 2304. In some implementations, the system memory 2310 may include a variety of different types of memory, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM). In some implementations, a basic input/output system (BIOS), containing the basic routines that help to transfer information between elements within the computer system 2300, such as during start-up, may be stored in ROM. By way of example, and not limitation, system memory 2310 also illustrates application programs 2312, which may include a client application, a web browser, a middle tier application, a relational database management system (RDBMS), etc., program data 2314, and an operating system 2316. By way of example, the operating system 2316 may include various versions of the Microsoft Windows, Apple Macintosh, and/or Linux operating systems, a variety of commercially available UNIX or UNIX-like operating systems (including, but not limited to, the various GNU/Linux operating systems, the Google Chrome OS, and the like), and/or mobile operating systems such as the iOS, Windows Phone, Android OS, BlackBerry OS, and Palm OS operating systems.
The storage subsystem 2318 may also provide a tangible computer-readable storage medium for storing basic programming and data structures that provide the functionality of some embodiments. Software (programs, code modules, instructions) that provide the functionality described above when executed by the processor may be stored in the storage subsystem 2318. These software modules or instructions may be executed by the processing unit 2304. The storage subsystem 2318 may also provide a repository for storing data used in accordance with the present disclosure.
Storage subsystem 2318 may also include a computer-readable storage media reader 2320 that may be further connected to computer-readable storage media 2322. Together with, and optionally in combination with, system memory 2310, computer-readable storage media 2322 may comprehensively represent remote, local, fixed, and/or removable storage devices plus storage media for temporarily and/or more permanently containing, storing, transmitting, and retrieving computer-readable information.
Computer-readable storage media 2322 containing code or portions of code may also include any suitable media known or used in the art, including storage media and communication media such as, but not limited to, volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage and/or transmission of information. This may include tangible computer-readable storage media such as RAM, ROM, electrically Erasable Programmable ROM (EEPROM), flash memory or other memory technology, CD-ROM, digital Versatile Disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or other tangible computer-readable media. This may also include non-tangible computer-readable media, such as data signals, data transmissions, or any other medium that may be used to transmit desired information and that may be accessed by computing system 2300.
For example, computer-readable storage media 2322 may include a hard disk drive that reads from or writes to non-removable, nonvolatile magnetic media, a magnetic disk drive that reads from or writes to a removable, nonvolatile magnetic disk, and an optical disk drive that reads from or writes to a removable, nonvolatile optical disk (such as a CD ROM, DVD, Blu-ray disk, or other optical media). Computer-readable storage media 2322 may include, but is not limited to, Zip drives, flash memory cards, Universal Serial Bus (USB) flash drives, Secure Digital (SD) cards, DVD discs, digital audio tape, and the like. The computer-readable storage media 2322 may also include non-volatile memory based Solid State Drives (SSDs) (such as flash memory based SSDs, enterprise flash drives, solid state ROM, and the like), volatile memory based SSDs (such as solid state RAM, dynamic RAM, static RAM), DRAM based SSDs, magnetoresistive RAM (MRAM) SSDs, and hybrid SSDs that use a combination of DRAM and flash memory based SSDs. The disk drives and their associated computer-readable media may provide non-volatile storage of computer-readable instructions, data structures, program modules, and other data for the computer system 2300.
Communication subsystem 2324 provides an interface to other computer systems and networks. Communication subsystem 2324 serves as an interface for receiving data from and sending data to other systems from computer system 2300. For example, communication subsystem 2324 may enable computer system 2300 to connect to one or more devices via the internet. In some embodiments, communication subsystem 2324 may include Radio Frequency (RF) transceiver components for accessing wireless voice and/or data networks (e.g., advanced data network technology using cellular telephone technology such as 3G, 4G, or EDGE (enhanced data rates for global evolution), wiFi (IEEE 802.11 family standards), or other mobile communication technologies, or any combination thereof), global Positioning System (GPS) receiver components, and/or other components. In some embodiments, communication subsystem 2324 may provide a wired network connection (e.g., ethernet) in addition to or in lieu of a wireless interface.
In some embodiments, communication subsystem 2324 may also receive input communications in the form of structured and/or unstructured data feeds 2326, event streams 2328, event updates 2330, and the like on behalf of one or more users who may use computer system 2300.
For example, the communication subsystem 2324 may be configured to receive data feeds 2326 in real-time from users of social networks and/or other communication services, such as Twitter feeds, Facebook updates, web feeds such as Rich Site Summary (RSS) feeds, and/or real-time updates from one or more third-party information sources.
Furthermore, communication subsystem 2324 may also be configured to receive data in the form of continuous data streams, which may include event streams 2328 of real-time events and/or event updates 2330, and which may be continuous or unbounded in nature with no explicit end. Examples of applications that generate continuous data may include, for example, sensor data applications, financial tickers, network performance measurement tools (e.g., network monitoring and traffic management applications), clickstream analysis tools, automobile traffic monitoring, and so forth.
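As a small, assumed illustration (not drawn from the disclosure), an unbounded event stream of this kind may be consumed incrementally rather than waiting for it to terminate.

```python
# Minimal sketch of consuming an unbounded event stream of the kind the
# communication subsystem might receive (e.g., sensor readings); illustrative only.
import itertools
import random
import time

def event_stream():
    """Yield events indefinitely; the stream has no explicit termination."""
    for seq in itertools.count():
        yield {"seq": seq, "value": random.random(), "ts": time.time()}

# Process the continuous stream incrementally; here we only look at five events.
for event in itertools.islice(event_stream(), 5):
    print(event["seq"], round(event["value"], 3))
```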
The communication subsystem 2324 may also be configured to output structured and/or unstructured data feeds 2326, event streams 2328, event updates 2330, and the like, to one or more databases, which may be in communication with one or more streaming data source computers coupled to the computer system 2300.
The computer system 2300 may be one of various types, including a handheld portable device (e.g., an iPhone cellular phone, an iPad computing tablet, a PDA), a wearable device (e.g., a Google Glass head mounted display), a PC, a workstation, a mainframe, a kiosk, a server rack, or any other data processing system.
In the previous descriptions, for purposes of explanation, specific details are set forth in order to provide a thorough understanding of the examples of the present disclosure. It may be evident, however, that the various examples may be practiced without these specific details. The following description is merely provided as an example and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the examples will provide those skilled in the art with an enabling description for implementing the examples. It being understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth in the appended claims. The drawings and description are not intended to be limiting. Circuits, systems, networks, processes, and other components may be shown in block diagram form in order not to obscure the examples in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the examples. The teachings disclosed herein may also be applied to various types of applications, such as mobile applications, non-mobile applications, desktop applications, web applications, enterprise applications, and the like. In addition, the teachings of the present disclosure are not limited to a particular operating environment (e.g., operating system, device, platform, etc.), but instead may be applied to a number of different operating environments.
Moreover, it is noted that the individual examples may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. Further, the order of the operations may be rearranged. The process terminates when its operations are completed, but the process may have additional steps not included in the figure. A process may correspond to a method, a function, a procedure, a subroutine, etc. When a procedure corresponds to a function, its termination may correspond to the return of the function to the calling function or the main function.
The words "example" and "exemplary" are used herein to mean "serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "example" is not necessarily to be construed as preferred or advantageous over other embodiments or designs.
The term "machine-readable storage medium" or "computer-readable storage medium" includes, but is not limited to, portable or non-portable storage devices, optical storage devices, and various other media capable of storing, containing, or carrying instruction(s) and/or data. A machine-readable storage medium or computer-readable storage medium may include a non-transitory medium in which data may be stored and which does not include a carrier wave and/or transitory electronic signals propagating wirelessly or through a wired connection. Examples of non-transitory media may include, but are not limited to, magnetic disks or tapes, optical storage media such as Compact Discs (CDs) or Digital Versatile Discs (DVDs), flash memory, or memory or storage devices thereof. A computer program product may include code and/or machine-executable instructions that may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or any combination of instructions, data structures, or program statements. A code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents. Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted via any suitable means including memory sharing, message passing, token passing, network transmission, etc.
Furthermore, examples may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware or microcode, the program code or code segments (e.g., a computer program product) to perform the necessary tasks may be stored in a machine readable medium. The processor(s) may perform the necessary tasks. The systems depicted in some of the figures may be provided in various configurations. In some examples, the system may be configured as a distributed system, where one or more components of the system are distributed over one or more networks in the cloud computing system. Where a component is described as "configured to" perform a certain operation, such configuration may be accomplished, for example, by designing electronic circuitry or other hardware to perform the operation, by programming or controlling electronic circuitry (e.g., a microprocessor or other suitable electronic circuitry) to perform the operation, or any combination thereof.
While specific embodiments of the present disclosure have been described, various modifications, alterations, alternative constructions, and equivalents are also included within the scope of the disclosure. Embodiments of the present disclosure are not limited to operation within certain particular data processing environments, but may be free to operate within multiple data processing environments. Furthermore, while embodiments of the present disclosure have been described using a particular series of transactions and steps, it should be apparent to those skilled in the art that the scope of the present disclosure is not limited to the described series of transactions and steps. The various features and aspects of the embodiments described above may be used alone or in combination.
In addition, while embodiments of the present disclosure have been described using a particular combination of hardware and software, it should be recognized that other combinations of hardware and software are also within the scope of the present disclosure. Embodiments of the present disclosure may be implemented in hardware alone, or in software alone, or in a combination thereof. The various processes described herein may be implemented in any combination on the same processor or on different processors. Thus, where a component or module is described as being configured to perform certain operations, such configuration may be accomplished by, for example, designing the electronic circuitry to perform the operations, performing the operations by programming programmable electronic circuitry (such as a microprocessor), or any combination thereof. The processes may communicate using a variety of techniques, including but not limited to conventional techniques for inter-process communication, and different pairs of processes may use different techniques, or the same pair of processes may use different techniques at different times.
The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. It will, however, be evident that various additions, subtractions, deletions and other modifications and changes may be made thereto without departing from the broader spirit and scope as set forth in the claims. Thus, while specific disclosed embodiments have been described, these are not intended to be limiting. Various modifications and equivalents are intended to be within the scope of the following claims.
Examples of embodiments of the present disclosure may be described in consideration of the following clauses:
clause 1. A method comprising: generating Access Control List (ACL) information for traffic flows in a layer 2 virtual network of the customer based on the customer's input, wherein the layer 2 virtual network is hosted on a physical network and includes a plurality of layer 2 computing instances, a plurality of layer 2 virtual network interfaces, and a plurality of layer 2 virtual switches; determining that the ACL information applies to a subset of the plurality of layer 2 virtual network interfaces, the subset comprising layer 2 virtual network interfaces; determining that ACL information is to be sent to a Network Virtualization Device (NVD) of a physical network, wherein: the NVD hosts a layer 2 virtual network interface and a layer 2 virtual switch of the plurality of layer 2 virtual switches, the layer 2 virtual network interface and the layer 2 virtual switch being associated with a layer 2 compute instance of the plurality of layer 2 compute instances, and the layer 2 compute instance being hosted on a host machine of the physical network, the host machine and the NVD being communicatively coupled; and sending the ACL information to the NVD.
Clause 2. The method of clause 1, wherein the ACL information, the NVD, the layer 2 virtual network interface, and the layer 2 virtual switch are first ACL information, a first NVD, a first layer 2 virtual network interface, and a first layer 2 virtual switch, and wherein the method further comprises: generating second ACL information applicable to a second layer 2 virtual network interface based on the input of the customer; determining that the second ACL information is to be sent to a second NVD of the physical network, wherein the second NVD hosts the second layer 2 virtual network interface and a second layer 2 virtual switch of the plurality of layer 2 virtual switches; and transmitting the second ACL information to the second NVD.
Clause 3 the method of any of clauses 1-2, wherein the flow of frames from and/or to the layer 2 computing instance via the layer 2 virtual network interface is controlled by the NVD according to the ACL information based on the stored association between the ACL information and the layer 2 virtual network interface.
Clause 4. The method of any of clauses 1-3, further comprising: storing configuration information of the layer 2 virtual network, wherein the configuration information indicates a customer-specified name of the layer 2 port; determining that the layer 2 port is emulated by the layer 2 virtual network interface; and storing an association between the customer-specified name and the layer 2 virtual network interface.
Clause 5 the method of clause 4, wherein the input by the client indicates ACL information and a client-specified name for the layer 2 port, and wherein the method further comprises: identifying a layer 2 virtual network interface based on an association between the customer-specified name and the layer 2 virtual network interface; and determining that the ACL information applies to the layer 2 virtual network interface based on the identified layer 2 virtual network interface.
Clause 6 the method of clause 4, wherein the input by the client indicates ACL information and a client-specified name for the layer 2 port, and wherein the method further comprises: identifying a layer 2 virtual network interface based on an association between the customer-specified name and the layer 2 virtual network interface; determining a Medium Access Control (MAC) address associated with the layer 2 virtual network interface; and indicating to the NVD that the ACL information is applicable to the MAC address.
Clause 7 the method of clause 6, wherein the input by the client indicates ACL information and a client-specified name for the layer 2 port, and wherein the method further comprises: determining an Internet Protocol (IP) address associated with the NVD; and associating the ACL information with the MAC address and the IP address.
Clause 8 the method of any of clauses 1-7, wherein the input by the customer indicates that the ACL information is applicable to at least one of: destination port, source port, destination Medium Access Control (MAC) address, source MAC address, destination Internet Protocol (IP) address, source IP address, communication protocol type, traffic broadcast or traffic unicast.
Clause 9. A network virtualization device comprising: one or more processors; and one or more computer-readable storage media storing instructions that, when executed by the one or more processors, configure the network virtualization device to: hosting a layer 2 virtual network interface and a layer 2 virtual switch of a layer 2 virtual network belonging to a customer, wherein: the layer 2 virtual network interface and the layer 2 virtual switch are associated with a layer 2 computing instance belonging to the layer 2 virtual network, the layer 2 computing instance hosted on a host machine of a physical network comprising the network virtualization device, the host machine and the network virtualization device communicatively coupled, and the layer 2 virtual network hosted on the physical network and comprising a plurality of layer 2 computing instances, a plurality of layer 2 virtual network interfaces, and a plurality of layer 2 virtual switches; storing Access Control List (ACL) information associated with the layer 2 virtual network interface; receiving a frame having header information; determining that the frame is associated with the layer 2 virtual network interface based on the header information; and controlling the flow of frames based on the ACL information.
Clause 10. The network virtualization device of clause 9, wherein the ACL information controls ingress traffic to the layer 2 computing instance, wherein the header information includes a destination Media Access Control (MAC) address associated with the layer 2 virtual network interface, and wherein the flow of frames to the layer 2 computing instance via the layer 2 virtual network interface is controlled based on the stored association between the destination MAC address and the ACL information.

Clause 11. The network virtualization device of clause 9, wherein the ACL information controls ingress traffic to the layer 2 computing instance, wherein the header information includes a source Media Access Control (MAC) address associated with another layer 2 virtual network interface of the plurality of layer 2 virtual network interfaces, and wherein the flow of frames to the layer 2 computing instance via the layer 2 virtual network interface is controlled based on the stored association between the source MAC address and the ACL information.

Clause 12. The network virtualization device of any of clauses 9-11, wherein the ACL information controls egress traffic from the layer 2 computing instance, wherein the header information includes a source Media Access Control (MAC) address associated with the layer 2 virtual network interface, and wherein the flow of frames from the layer 2 computing instance via the layer 2 virtual network interface is controlled based on the stored association between the source MAC address and the ACL information.

Clause 13. The network virtualization device of any of clauses 9-11, wherein the ACL information controls egress traffic from the layer 2 computing instance, wherein the header information includes a destination Media Access Control (MAC) address associated with another layer 2 virtual network interface of the plurality of layer 2 virtual network interfaces, and wherein the flow of frames from the layer 2 computing instance via the layer 2 virtual network interface is controlled based on the stored association between the destination MAC address and the ACL information.
Clause 14 the network virtualization device of any of clauses 9-13, wherein the input by the customer indicates that the ACL information is applicable to at least one of: destination port, source port, destination Medium Access Control (MAC) address, source MAC address, destination Internet Protocol (IP) address, source IP address, communication protocol type, traffic broadcast or traffic unicast.
Clause 15. The network virtualization device of any one of clauses 9-14, wherein the layer 2 computing instance, the layer 2 virtual network interface, the layer 2 virtual switch, and the ACL information are a first layer 2 computing instance, a first layer 2 virtual network interface, a first layer 2 virtual switch, and first ACL information, respectively, and wherein the one or more computer-readable storage media store additional instructions that, when executed by the one or more processors, configure the network virtualization device to: hosting a second layer 2 virtual network interface and a second layer 2 virtual switch belonging to the layer 2 virtual network, wherein the second layer 2 virtual network interface and the second layer 2 virtual switch are associated with a second layer 2 computing instance belonging to the layer 2 virtual network; and storing second ACL information associated with the second layer 2 virtual network interface.
Clause 16, the network virtualization device of clause 15, wherein the frame and header information are the first frame and the first header information, respectively, and wherein the one or more computer-readable storage media store further instructions that, when executed by the one or more processors, configure the network virtualization device to: receiving a second frame having second header information; determining that the second frame is associated with a second layer 2 virtual network interface based on the second header information; and controlling a flow of the second frame based on the second ACL information.
Clause 17 the network virtualization device of clause 16, wherein the first ACL information controls ingress traffic to the first layer 2 computing instance, wherein the second ACL information controls egress traffic from the layer 2 computing instance, wherein the flow of the first frame to the first layer 2 computing instance via the first layer 2 virtual network interface is controlled based on a first stored association between the first header information and the first ACL information, and wherein the flow of the second frame from the first layer 2 computing instance via the second layer 2 virtual network interface is controlled based on a second stored association between the second header information and the second ACL information.
Clause 18, a system comprising: one or more processors; and one or more computer-readable storage media storing instructions that, when executed by the one or more processors, configure the system to: generating Access Control List (ACL) information for traffic flows in a layer 2 virtual network of the customer based on the customer's input, wherein the layer 2 virtual network is hosted on a physical network and includes a plurality of layer 2 computing instances, a plurality of layer 2 virtual network interfaces, and a plurality of layer 2 virtual switches; determining that the ACL information applies to a subset of the plurality of layer 2 virtual network interfaces, the subset comprising layer 2 virtual network interfaces; determining that ACL information is to be sent to a Network Virtualization Device (NVD) of a physical network, wherein: the NVD hosts a layer 2 virtual network interface and a layer 2 virtual switch of the plurality of layer 2 virtual switches, the layer 2 virtual network interface and the layer 2 virtual switch being associated with a layer 2 compute instance of the plurality of layer 2 compute instances, and the layer 2 compute instance being hosted on a host machine of the physical network, the host machine and the NVD being communicatively coupled; and sending the ACL information to the NVD.
Clause 19, the system of clause 18, wherein the one or more computer-readable storage media store additional instructions that, when executed by the one or more processors, configure the system to: storing ACL information in association with information having a layer 2 virtual network interface; receiving a frame having header information; determining that the frame is associated with a layer 2 virtual network interface based on the header information; and controlling the flow of frames based on the ACL information.
Clause 20. The system of any of clauses 18-19, wherein the flow of frames from and/or to the layer 2 computing instance via the layer 2 virtual network interface is controlled by the NVD according to ACL information based on stored associations between ACL information and the layer 2 virtual network interface.
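Purely by way of a non-limiting, hypothetical sketch (the classes, rule format, and MAC addresses below are assumptions and not part of the claimed subject matter), the method recited in the clauses above can be pictured as a control plane that derives ACL rules from customer input, pushes them to the NVD hosting each layer 2 virtual network interface, and an NVD that then admits or drops frames by matching MAC addresses in the frame header against the stored rules.

```python
# Hypothetical end-to-end sketch: ACL distribution by the control plane and
# frame filtering by the NVD, keyed on the MAC address of each layer 2 VNIC.
from dataclasses import dataclass, field

@dataclass
class AclRule:
    direction: str            # "ingress" or "egress"
    match_mac: str            # destination MAC for ingress, source MAC for egress
    action: str               # "allow" or "deny"

@dataclass
class Nvd:
    name: str
    acl_by_vnic_mac: dict = field(default_factory=dict)

    def install_acl(self, vnic_mac: str, rules: list) -> None:
        self.acl_by_vnic_mac[vnic_mac] = rules

    def admit_frame(self, dst_mac: str, src_mac: str, direction: str) -> bool:
        key = dst_mac if direction == "ingress" else src_mac
        for rule in self.acl_by_vnic_mac.get(key, []):
            if rule.direction == direction and rule.match_mac == key:
                return rule.action == "allow"
        return False  # default-deny when no rule matches

def distribute(acl_rules, vnic_to_nvd, vnic_to_mac) -> None:
    """Control-plane side: send each VNIC's rules to the NVD hosting that VNIC."""
    for vnic, nvd in vnic_to_nvd.items():
        nvd.install_acl(vnic_to_mac[vnic], acl_rules[vnic])

nvd1 = Nvd("nvd-1")
distribute(
    acl_rules={"l2-vnic-a": [AclRule("ingress", "02:00:00:00:00:0a", "allow")]},
    vnic_to_nvd={"l2-vnic-a": nvd1},
    vnic_to_mac={"l2-vnic-a": "02:00:00:00:00:0a"},
)
print(nvd1.admit_frame("02:00:00:00:00:0a", "02:00:00:00:00:0b", "ingress"))  # True
print(nvd1.admit_frame("02:00:00:00:00:0c", "02:00:00:00:00:0b", "ingress"))  # False
```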
The use of the terms "a" and "an" and "the" and similar referents in the context of describing the disclosed embodiments (especially in the context of the following claims) are to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. Unless otherwise indicated, the terms "comprising," "having," "including," and "containing" are to be construed as open-ended terms (i.e., meaning "including, but not limited to"). The term "connected" should be interpreted as including in part or in whole, attached to, or connected together, even though something is intermediate. Recitation of ranges of values herein are merely intended to serve as a shorthand method of referring individually to each separate value falling within the range, unless otherwise indicated herein, and each separate value is incorporated into the specification as if it were individually recited herein. All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and all examples, or exemplary language (e.g., "such as") provided herein, is intended merely to better illuminate embodiments of the disclosure and does not pose a limitation on the scope of the disclosure unless otherwise claimed. No language in the specification should be construed as indicating any non-claimed element as essential to the practice of the disclosure.
Disjunctive language, such as the phrase "at least one of X, Y, or Z," unless expressly stated otherwise, is intended to be understood in the context in which it is generally used to present that an item, term, etc., may be X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is generally not intended to, nor should it, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present.
Preferred embodiments of this disclosure are described herein, including the best mode known for carrying out the disclosure. Variations of those preferred embodiments may become apparent to those of ordinary skill in the art upon reading the foregoing description. One of ordinary skill in the art should be able to employ such variations as appropriate and may practice the disclosure in a manner other than that specifically described herein. Accordingly, this disclosure includes all modifications and equivalents of the subject matter recited in the claims appended hereto as permitted by applicable law. Moreover, unless otherwise indicated herein, the present disclosure includes any combination of the above elements in all possible variations thereof.
All references, including publications, patent applications, and patents, cited herein are hereby incorporated by reference to the same extent as if each reference were individually and specifically indicated to be incorporated by reference and were set forth in its entirety herein.
In the foregoing specification, aspects of the present disclosure have been described with reference to specific embodiments thereof, but those skilled in the art will recognize that the present disclosure is not limited thereto. The various features and aspects of the disclosure described above may be used alone or in combination. Moreover, embodiments may be utilized in any number of environments and applications other than those described herein without departing from the broader spirit and scope of the specification. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
Claims (20)
1. A method, comprising:
generating Access Control List (ACL) information for traffic flows in a layer 2 virtual network of the customer based on the customer's input, wherein the layer 2 virtual network is hosted on a physical network and includes a plurality of layer 2 computing instances, a plurality of layer 2 virtual network interfaces, and a plurality of layer 2 virtual switches;
determining that ACL information applies to a subset of the plurality of layer 2 virtual network interfaces, the subset comprising layer 2 virtual network interfaces;
determining that ACL information is to be sent to a Network Virtualization Device (NVD) of a physical network, wherein:
the NVD hosts a layer 2 virtual network interface and a layer 2 virtual switch of the plurality of layer 2 virtual switches,
The layer 2 virtual network interface and the layer 2 virtual switch are associated with a layer 2 computing instance of the plurality of layer 2 computing instances, and
the layer 2 computing instance is hosted on a host machine of the physical network, the host machine communicatively coupled with the NVD; and
ACL information is sent to the NVD.
2. The method of claim 1, wherein the ACL information, NVD, layer 2 virtual network interface, and layer 2 virtual switch are first ACL information, first NVD, first layer 2 virtual network interface, and first layer 2 virtual switch, and wherein the method further comprises:
generating second ACL information applicable to a second layer 2 virtual network interface based on the input of the client;
determining that the second ACL information is to be sent to a second NVD of a physical network, wherein the second NVD hosts a second layer 2 virtual network interface and a second layer 2 virtual switch of the plurality of layer 2 virtual switches; and
and sending the second ACL information to a second NVD.
3. The method of any of claims 1-2, wherein flow of frames from and/or to the layer 2 computing instance via the layer 2 virtual network interface is controlled by the NVD according to ACL information based on stored associations between ACL information and the layer 2 virtual network interface.
4. A method as in any of claims 1-3, further comprising:
storing configuration information of the layer 2 virtual network, wherein the configuration information indicates a client-specified name of a layer 2 port;
determining that the layer 2 port is emulated by the layer 2 virtual network interface; and
the association between the customer specified name and the layer 2 virtual network interface is stored.
5. The method of claim 4, wherein the input by the client indicates ACL information and a client-specified name for a layer 2 port, and wherein the method further comprises:
identifying a layer 2 virtual network interface based on an association between the customer-specified name and the layer 2 virtual network interface; and
ACL information is determined to apply to the layer 2 virtual network interface based on the identified layer 2 virtual network interface.
6. The method of claim 4, wherein the input by the client indicates ACL information and a client-specified name for a layer 2 port, and wherein the method further comprises:
identifying a layer 2 virtual network interface based on an association between the customer-specified name and the layer 2 virtual network interface;
determining a Medium Access Control (MAC) address associated with the layer 2 virtual network interface; and
indicating to the NVD that the ACL information is applicable to the MAC address.
7. The method of claim 6, wherein the input by the client indicates ACL information and a client-specified name for a layer 2 port, and wherein the method further comprises:
determining an Internet Protocol (IP) address associated with the NVD; and
ACL information is associated with the MAC address and the IP address.
8. The method of any of claims 1-7, wherein the input by the client indicates ACL information is applicable to at least one of: destination port, source port, destination Medium Access Control (MAC) address, source MAC address, destination Internet Protocol (IP) address, source IP address, communication protocol type, traffic broadcast or traffic unicast.
9. A network virtualization device, comprising:
one or more processors; and
one or more computer-readable storage media storing instructions that, when executed by the one or more processors, configure a network virtualization device to:
hosting a layer 2 virtual network interface and a layer 2 virtual switch of a layer 2 virtual network belonging to a customer, wherein:
the layer 2 virtual network interface and layer 2 virtual switch are associated with layer 2 computing instances belonging to the layer 2 virtual network,
the layer 2 computing instance is hosted on a host machine of a physical network including a network virtualization device, the host machine and the network virtualization device communicatively coupled, and
The layer 2 virtual network is hosted on the physical network and includes a plurality of layer 2 computing instances, a plurality of layer 2 virtual network interfaces, and a plurality of layer 2 virtual switches;
storing Access Control List (ACL) information associated with the layer 2 virtual network interface;
receiving a frame having header information;
determining that the frame is associated with a layer 2 virtual network interface based on the header information; and
the flow of frames is controlled based on ACL information.
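For illustration only and not part of the claims: a sketch of the per-frame path recited in claim 9 as it might look on an NVD, which parses the header, maps the frame to a hosted layer 2 virtual network interface, and allows or drops it according to that interface's ACL information. The first-match, default-deny policy and all identifiers are assumptions.

```python
# Illustrative sketch only: receive a frame, associate it with a hosted VNIC from
# its header information, and control its flow per that VNIC's ACL information.
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional


@dataclass
class FrameHeader:
    src_mac: str
    dst_mac: str


# Hypothetical tables maintained by the NVD.
VNIC_BY_MAC: Dict[str, str] = {"aa:bb:cc:00:00:01": "vnic-1"}
# VNIC id -> rule predicates; each returns "allow", "deny", or None (no match).
ACL_BY_VNIC: Dict[str, List[Callable[[FrameHeader], Optional[str]]]] = {
    "vnic-1": [lambda h: "allow" if h.src_mac.startswith("aa:bb:cc") else None],
}


def handle_frame(header: FrameHeader) -> str:
    vnic_id = VNIC_BY_MAC.get(header.dst_mac)  # associate the frame with a hosted VNIC
    if vnic_id is None:
        return "drop: no hosted VNIC for destination MAC"
    for rule in ACL_BY_VNIC.get(vnic_id, []):
        verdict = rule(header)
        if verdict is not None:
            return f"{verdict}: matched ACL rule for {vnic_id}"
    return f"deny: default for {vnic_id}"       # assumed default-deny


if __name__ == "__main__":
    print(handle_frame(FrameHeader(src_mac="aa:bb:cc:00:00:09", dst_mac="aa:bb:cc:00:00:01")))
    print(handle_frame(FrameHeader(src_mac="11:22:33:44:55:66", dst_mac="ff:ff:ff:ff:ff:ff")))
```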
10. The network virtualization device of claim 9, wherein the ACL information controls ingress traffic to the layer 2 computing instance, wherein the header information includes a destination Media Access Control (MAC) address associated with the layer 2 virtual network interface, and wherein the flow of frames to the layer 2 computing instance via the layer 2 virtual network interface is controlled based on the stored association between the destination MAC address and the ACL information.
11. The network virtualization device of claim 9, wherein the ACL information controls ingress traffic to the layer 2 computing instance, wherein the header information includes a source Media Access Control (MAC) address associated with another layer 2 virtual network interface of the plurality of layer 2 virtual network interfaces, and wherein the flow of frames to the layer 2 computing instance via the layer 2 virtual network interface is controlled based on the stored association between the source MAC address and the ACL information.
12. The network virtualization device of any one of claims 9-11, wherein the ACL information controls egress traffic from the layer 2 computing instance, wherein the header information includes a source Media Access Control (MAC) address associated with the layer 2 virtual network interface, and wherein the flow of frames from the layer 2 computing instance via the layer 2 virtual network interface is controlled based on the stored association between the source MAC address and the ACL information.
13. The network virtualization device of any one of claims 9-11, wherein the ACL information controls egress traffic from the layer 2 computing instance, wherein the header information includes a destination Media Access Control (MAC) address associated with another layer 2 virtual network interface of the plurality of layer 2 virtual network interfaces, and wherein the flow of frames from the layer 2 computing instance via the layer 2 virtual network interface is controlled based on the stored association between the destination MAC address and the ACL information.
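For illustration only and not part of the claims: a sketch of the direction-dependent lookup described in claims 10-13, where ingress ACL information is keyed by the frame's destination MAC address and egress ACL information by its source MAC address; the table layout is an assumption.

```python
# Illustrative sketch only: pick which stored ACL association applies to a frame.
# Ingress ACLs are looked up by destination MAC (the protected VNIC); egress ACLs
# by source MAC. A real ACL would hold richer rules than a single verdict.
from typing import Dict, Tuple

ACL_BY_DIRECTION_AND_MAC: Dict[Tuple[str, str], str] = {
    ("ingress", "aa:bb:cc:00:00:01"): "allow",  # traffic to the VNIC with this MAC
    ("egress", "aa:bb:cc:00:00:01"): "deny",    # traffic from the VNIC with this MAC
}


def verdict(direction: str, src_mac: str, dst_mac: str) -> str:
    key_mac = dst_mac if direction == "ingress" else src_mac
    return ACL_BY_DIRECTION_AND_MAC.get((direction, key_mac), "deny")


if __name__ == "__main__":
    print(verdict("ingress", "11:22:33:44:55:66", "aa:bb:cc:00:00:01"))  # allow
    print(verdict("egress", "aa:bb:cc:00:00:01", "11:22:33:44:55:66"))   # deny
```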
14. The network virtualization device of any one of claims 9-13, wherein the input by the customer indicates that the ACL information is applicable to at least one of: a destination port, a source port, a destination Media Access Control (MAC) address, a source MAC address, a destination Internet Protocol (IP) address, a source IP address, a communication protocol type, broadcast traffic, or unicast traffic.
15. The network virtualization device of any one of claims 9-14, wherein the layer 2 computing instance, the layer 2 virtual network interface, the layer 2 virtual switch, and the ACL information are a first layer 2 computing instance, a first layer 2 virtual network interface, a first layer 2 virtual switch, and first ACL information, respectively, and wherein the one or more computer-readable storage media store additional instructions that, when executed by the one or more processors, configure the network virtualization device to:
host a second layer 2 virtual network interface and a second layer 2 virtual switch belonging to the layer 2 virtual network, wherein the second layer 2 virtual network interface and the second layer 2 virtual switch are associated with a second layer 2 computing instance belonging to the layer 2 virtual network; and
store second ACL information associated with the second layer 2 virtual network interface.
16. The network virtualization device of claim 15, wherein the frame and the header information are a first frame and first header information, respectively, and wherein the one or more computer-readable storage media store further instructions that, when executed by the one or more processors, configure the network virtualization device to:
receive a second frame having second header information;
determine that the second frame is associated with the second layer 2 virtual network interface based on the second header information; and
control a flow of the second frame based on the second ACL information.
17. The network virtualization device of claim 16, wherein the first ACL information controls ingress traffic to the first layer 2 computing instance, wherein the second ACL information controls egress traffic from the layer 2 computing instance, wherein the flow of the first frame to the first layer 2 computing instance via the first layer 2 virtual network interface is controlled based on a first stored association between the first header information and the first ACL information, and wherein the flow of the second frame from the first layer 2 computing instance via the second layer 2 virtual network interface is controlled based on a second stored association between the second header information and the second ACL information.
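For illustration only and not part of the claims: a sketch of one NVD holding separate ACL information for two hosted layer 2 virtual network interfaces, as in claims 15-17, so a first frame is checked against the first ACL information and a second frame against the second; identifiers and the rule format are hypothetical.

```python
# Illustrative sketch only: per-VNIC ACL information on a single NVD. The frame
# is assumed to have already been mapped to a VNIC from its header information.
from typing import Dict

ACL_BY_VNIC: Dict[str, Dict[str, str]] = {
    "vnic-1": {"direction": "ingress", "default": "allow"},  # first ACL information
    "vnic-2": {"direction": "egress", "default": "deny"},    # second ACL information
}


def acl_for_frame(vnic_id: str) -> Dict[str, str]:
    return ACL_BY_VNIC[vnic_id]


if __name__ == "__main__":
    print(acl_for_frame("vnic-1"))  # first frame -> first ACL information
    print(acl_for_frame("vnic-2"))  # second frame -> second ACL information
```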
18. A system, comprising:
one or more processors; and
one or more computer-readable storage media storing instructions that, when executed by the one or more processors, configure the system to:
generate, based on input by a customer, Access Control List (ACL) information for traffic flows in a layer 2 virtual network of the customer, wherein the layer 2 virtual network is hosted on a physical network and includes a plurality of layer 2 computing instances, a plurality of layer 2 virtual network interfaces, and a plurality of layer 2 virtual switches;
determine that the ACL information applies to a subset of the plurality of layer 2 virtual network interfaces, the subset comprising layer 2 virtual network interfaces;
determine that the ACL information is to be sent to a Network Virtualization Device (NVD) of the physical network, wherein:
the NVD hosts a layer 2 virtual network interface and a layer 2 virtual switch of the plurality of layer 2 virtual switches,
the layer 2 virtual network interface and the layer 2 virtual switch are associated with a layer 2 computing instance of the plurality of layer 2 computing instances, and
the layer 2 computing instance is hosted on a host machine of the physical network, the host machine being communicatively coupled with the NVD; and
send the ACL information to the NVD.
19. The system of claim 18, wherein the one or more computer-readable storage media store additional instructions that, when executed by the one or more processors, configure the system to:
store the ACL information in association with the layer 2 virtual network interface;
receive a frame having header information;
determine that the frame is associated with the layer 2 virtual network interface based on the header information; and
control a flow of the frame based on the ACL information.
20. The system of any of claims 18-19, wherein the flow of frames from and/or to the layer 2 computing instance via the layer 2 virtual network interface is controlled by the NVD according to the ACL information based on a stored association between the ACL information and the layer 2 virtual network interface.
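For illustration only and not part of the claims: a control-plane sketch in the spirit of claims 18-20, which determines the NVDs hosting the layer 2 virtual network interfaces an ACL applies to and sends the ACL information to each of them once; send_to_nvd and the inventory mapping are hypothetical.

```python
# Illustrative sketch only: distribute ACL information to the NVDs that host the
# applicable layer 2 VNICs. One delivery per NVD is an assumption.
from typing import Dict, List, Set


def distribute_acl(acl_info: dict,
                   applicable_vnics: List[str],
                   nvd_by_vnic: Dict[str, str]) -> Dict[str, List[str]]:
    """Return, per NVD, the VNICs for which the ACL information was sent."""
    sent: Dict[str, List[str]] = {}
    notified: Set[str] = set()
    for vnic_id in applicable_vnics:
        nvd_id = nvd_by_vnic[vnic_id]        # NVD hosting this VNIC and its L2 switch
        sent.setdefault(nvd_id, []).append(vnic_id)
        if nvd_id not in notified:
            send_to_nvd(nvd_id, acl_info)    # placeholder delivery, once per NVD
            notified.add(nvd_id)
    return sent


def send_to_nvd(nvd_id: str, acl_info: dict) -> None:
    # Placeholder transport; a real system would push over its management channel.
    print(f"sending ACL {acl_info['name']} to {nvd_id}")


if __name__ == "__main__":
    inventory = {"vnic-1": "nvd-a", "vnic-2": "nvd-a", "vnic-3": "nvd-b"}
    result = distribute_acl({"name": "acl-web"}, ["vnic-1", "vnic-3"], inventory)
    print(result)  # {'nvd-a': ['vnic-1'], 'nvd-b': ['vnic-3']}
```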
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US63/132,377 | 2020-12-30 | ||
US17/494,720 US11909636B2 (en) | 2020-12-30 | 2021-10-05 | Layer-2 networking using access control lists in a virtualized cloud environment |
US17/494,720 | 2021-10-05 | ||
PCT/US2021/060721 WO2022146585A1 (en) | 2020-12-30 | 2021-11-24 | Layer-2 networking using access control lists in a virtualized cloud environment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116648691A (en) | 2023-08-25
Family
ID=87623437
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202180088348.9A Pending CN116648691A (en) | 2021-11-24 | Layer 2 network using access control lists in virtualized cloud environments
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116648691A (en) |
Similar Documents
Publication | Title
---|---
EP4183120B1 (en) | Interface-based ACLs in a layer-2 network
US11909636B2 (en) | Layer-2 networking using access control lists in a virtualized cloud environment
US20240031282A1 (en) | Layer-2 networking span port in a virtualized cloud environment
US20230370371A1 (en) | Layer-2 networking storm control in a virtualized cloud environment
US20240121186A1 (en) | Layer-2 networking using access control lists in a virtualized cloud environment
EP4272383B1 (en) | Layer-2 networking information in a virtualized cloud environment
CN116648691A (en) | Layer 2 network using access control lists in virtualized cloud environments
CN116830547A (en) | Layer 2 networking spanning ports in virtualized cloud environments
CN116648892A (en) | Layer 2 networking storm control in virtualized cloud environments
CN116711270A (en) | Layer 2 networking information in virtualized cloud environments
WO2022146587A1 (en) | Internet group management protocol (IGMP) of a layer 2 network in a virtualized cloud environment
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |