CN116724546A - Cloud-scale multi-tenancy for RDMA over Converged Ethernet (RoCE)

Info

Publication number
CN116724546A
Authority
CN
China
Prior art keywords
layer
packet
vcn
rdma
header
Legal status
Pending
Application number
CN202180088766.8A
Other languages
Chinese (zh)
Inventor
S·N·希林卡尔
D·D·贝克尔
J·S·布拉尔
Current Assignee
Oracle International Corp
Original Assignee
Oracle International Corp
Priority claimed from PCT/US2021/025459 external-priority patent/WO2022146466A1/en
Application filed by Oracle International Corp
Priority claimed from PCT/US2021/027069 external-priority patent/WO2022146470A1/en
Publication of CN116724546A publication Critical patent/CN116724546A/en

Abstract

Techniques and apparatuses for data networking are described. In one example, a method includes receiving a first layer 2 Remote Direct Memory Access (RDMA) packet including a Virtual Local Area Network (VLAN) tag and a quality of service (QoS) data field; converting the first layer 2 RDMA packet into a first layer 3 encapsulated packet; and forwarding the first layer 3 encapsulated packet to the switch fabric. In this method, converting includes adding at least one header to the first layer 2 RDMA packet, wherein the at least one header includes: a virtual network identifier based on information from the VLAN tag, and a QoS value based on information from the QoS data field.

Description

Cloud-scale multi-tenancy for RDMA over Converged Ethernet (RoCE)
Cross Reference to Related Applications
The present application claims priority from U.S. provisional application Ser. No. 63/132,417, entitled "CLOUD SCALE MULTI-TENANCY FOR RDMA OVER CONVERGED ETHERNET (RoCE)," filed in December 2020; U.S. non-provisional application Ser. No. 17/165,877, entitled "CLOUD SCALE MULTI-TENANCY FOR RDMA OVER CONVERGED ETHERNET (RoCE)," filed in February 2021; U.S. non-provisional application Ser. No. 17/166,922, entitled "CLASS-BASED QUEUING FOR SCALABLE MULTI-TENANT RDMA TRAFFIC," filed in February 2021; and PCT application Ser. No. PCT/US2021/025459, entitled "CLASS-BASED QUEUEING FOR SCALABLE MULTI-TENANT RDMA TRAFFIC," filed in April 2021, all of which are incorporated herein by reference in their entirety.
Background
RDMA over Converged Ethernet (RoCE) is a network protocol that allows Remote Direct Memory Access (RDMA) over lossless Ethernet networks. RoCE accomplishes this by encapsulating InfiniBand (IB) transport packets over Ethernet. In general, RoCE involves a layer 2 network with dedicated RDMA queues and dedicated VLANs. However, layer 2 networks do not scale well and perform poorly because they lack key features and characteristics that exist in more scalable, higher-performance layer 3 networks. Thus, existing public cloud implementations cannot provide data transfer using the RoCE protocol.
Disclosure of Invention
The present disclosure relates generally to data networking. More specifically, techniques are described that enable layer 2 traffic to be transported over a layer 3 network using a layer 3 protocol. In certain embodiments, the techniques described herein enable Remote Direct Memory Access (RDMA) traffic (e.g., RDMA over Converged Ethernet (RoCE) traffic) to be transferred from a compute instance on a multi-tenant host machine (i.e., a host machine hosting compute instances belonging to different tenants or customers) to a compute instance on another multi-tenant host machine through a shared layer 3 physical network or switch fabric using a layer 3 routing protocol. Such communication may optionally include other traffic as well (e.g., TCP and/or UDP traffic). Customers or tenants experience the communications as occurring over a dedicated layer 2 network, while the communications actually occur over a shared (i.e., shared among multiple customers or tenants) layer 3 network using a layer 3 routing protocol. Various embodiments are described herein, including methods, systems, non-transitory computer-readable storage media storing programs, code, or instructions executable by one or more processors, and the like.
In some embodiments, a method of data networking includes receiving, at an ingress switch and from a host machine executing a plurality of computing instances for a plurality of tenants, a first layer 2 RDMA packet for a first tenant among the plurality of tenants; converting the first layer 2 RDMA packet into a first layer 3 encapsulated packet having at least one header; and forwarding the first layer 3 encapsulated packet to a switch fabric, wherein the first layer 2 RDMA packet includes a Virtual Local Area Network (VLAN) tag and a quality of service (QoS) data field, and wherein converting includes adding at least one header to the first layer 2 RDMA packet, the at least one header including: a virtual network identifier based on information from the VLAN tag, and a QoS value based on information from the QoS data field. The method may further comprise: at an intermediate switch of the switch fabric and in response to an indication of congestion, modifying a congestion notification data field of the at least one header of the first layer 3 encapsulated packet. Alternatively or additionally, the method may further include receiving a second layer 2 RDMA packet including a VLAN tag and a QoS data field; converting the second layer 2 RDMA packet into a second layer 3 encapsulated packet having at least one header; and forwarding the second layer 3 encapsulated packet to the switch fabric, wherein the VLAN tag of the second layer 2 RDMA packet indicates a different VLAN than the VLAN tag of the first layer 2 RDMA packet. Such a method may further include, at an intermediate switch of the switch fabric: queuing the first layer 3 encapsulated packet to a first queue of the intermediate switch based on the QoS value of the at least one header of the first layer 3 encapsulated packet; and queuing the second layer 3 encapsulated packet to a second queue of the intermediate switch that is different from the first queue based on the QoS value of the at least one header of the second layer 3 encapsulated packet.
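For illustration only, the following Python sketch shows one way the ingress-side conversion described above could work: the virtual network identifier in the added header is derived from the packet's VLAN tag, and the QoS value is carried over from the packet's QoS data field. The data structures, the example VLAN-to-VNI table, and the field names are assumptions made for this sketch and are not taken from the claims.

```python
from dataclasses import dataclass

@dataclass
class L2RdmaPacket:
    vlan_id: int          # from the 802.1Q VLAN tag
    qos: int              # QoS data field (e.g., 802.1p PCP or DSCP)
    payload: bytes        # RDMA payload (opaque here)

@dataclass
class L3EncapsulatedPacket:
    vni: int              # virtual network identifier in the outer header
    outer_qos: int        # QoS value carried in the outer header
    inner: L2RdmaPacket   # original layer 2 RDMA packet, carried intact

# Hypothetical per-tenant mapping of VLAN IDs to virtual network identifiers.
VLAN_TO_VNI = {100: 9100, 200: 9200}

def encapsulate(pkt: L2RdmaPacket) -> L3EncapsulatedPacket:
    """Convert a layer 2 RDMA packet into a layer 3 encapsulated packet."""
    vni = VLAN_TO_VNI[pkt.vlan_id]            # VNI based on the VLAN tag
    return L3EncapsulatedPacket(vni=vni, outer_qos=pkt.qos, inner=pkt)

# Example: a packet on VLAN 100 with QoS value 5 becomes a VNI-9100 packet
# whose outer header preserves the QoS marking.
encapsulated = encapsulate(L2RdmaPacket(vlan_id=100, qos=5, payload=b""))
assert encapsulated.vni == 9100 and encapsulated.outer_qos == 5
```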
In yet other embodiments, a method of data networking includes, at an egress switch, receiving a first layer 3 encapsulated packet; decapsulating the first layer 3 encapsulated packet to obtain a first layer 2 RDMA packet; setting a value of a congestion notification data field of the first layer 2 RDMA packet based on information in a congestion notification data field of at least one header of the first layer 3 encapsulated packet; and forwarding the first layer 2 RDMA packet to a first compute instance after the setting and based on the VLAN tag of the first layer 2 RDMA packet. The method may further include, at the egress switch, receiving a second layer 3 encapsulated packet; decapsulating the second layer 3 encapsulated packet to obtain a second layer 2 RDMA packet; and forwarding the second layer 2 RDMA packet to a second compute instance different from the first compute instance based on the VLAN tag of the second layer 2 RDMA packet. The method may further comprise, at the egress switch: queuing the first layer 3 encapsulated packet to a first queue of the egress switch based on a quality of service (QoS) value of an outer header of the first layer 3 encapsulated packet; and queuing the second layer 3 encapsulated packet to a second queue of the egress switch that is different from the first queue based on the QoS value of the outer header of the second layer 3 encapsulated packet.
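A complementary sketch of the egress-side behavior described above, under similarly assumed data structures: the congestion notification value of the outer header is copied into the decapsulated layer 2 RDMA packet, and the packet is then forwarded to the compute instance selected by its VLAN tag.

```python
def decapsulate_and_forward(l3_pkt: dict, instances_by_vlan: dict) -> dict:
    """Egress-side sketch: strip the outer header, propagate congestion
    marking, and pick the destination compute instance by VLAN tag.

    l3_pkt is assumed to look like:
      {"outer": {"ecn": 0b11, "qos": 5, "vni": 9100},
       "inner": {"vlan_id": 100, "ecn": 0b10, "payload": b""}}
    """
    inner = dict(l3_pkt["inner"])                 # decapsulated L2 RDMA packet
    # Set the inner congestion notification field from the outer header.
    inner["ecn"] = l3_pkt["outer"]["ecn"]
    # Forward based on the VLAN tag of the decapsulated packet.
    destination = instances_by_vlan[inner["vlan_id"]]
    return {"destination": destination, "packet": inner}

result = decapsulate_and_forward(
    {"outer": {"ecn": 0b11, "qos": 5, "vni": 9100},
     "inner": {"vlan_id": 100, "ecn": 0b10, "payload": b""}},
    instances_by_vlan={100: "compute-instance-A", 200: "compute-instance-B"},
)
assert result["destination"] == "compute-instance-A"
assert result["packet"]["ecn"] == 0b11
```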
In still other embodiments, techniques for class-based queuing of RDMA traffic are described (e.g., in a layer 3 network), which may be used to maintain class-based separation in a cloud-scale network architecture such that RDMA traffic in a particular queue does not affect RDMA traffic in other queues. According to some embodiments, a system may be implemented to include a shared architecture for transporting RDMA traffic of different classes and from different tenants, wherein each device in a path across the shared architecture from one RDMA Network Interface Controller (NIC) to another NIC includes multiple queues dedicated to RDMA traffic of different classes.
According to some embodiments, a method of queuing RDMA packets includes receiving, by a networking device, a plurality of RDMA packets. Each RDMA packet of the plurality of RDMA packets includes a quality of service (QoS) data field, and for each RDMA packet of the plurality of RDMA packets, the QoS data field has a QoS value, among a plurality of QoS values, that indicates a class of service of the RDMA packet. The method also includes distributing, by the networking device, the plurality of RDMA packets among a plurality of RDMA queues. The distributing is performed according to a first mapping of the plurality of QoS values to the plurality of RDMA queues. The method also includes retrieving, by the networking device, the plurality of RDMA packets from the plurality of RDMA queues according to a first weighting among the plurality of RDMA queues. The retrieved plurality of RDMA packets may include a plurality of packet flows, in which case the method may further include routing the plurality of packet flows of the retrieved plurality of RDMA packets according to a per-flow equal-cost multi-path scheme. Each RDMA packet of the plurality of RDMA packets may be a RoCEv2 packet, or each RDMA packet of the plurality of RDMA packets may be a layer 3 encapsulated packet formatted according to an overlay encapsulation protocol (e.g., VxLAN, NVGRE, GENEVE, STT, or MPLS).
In a further example, the distributing includes storing a first RDMA packet of the plurality of RDMA packets to a first RDMA queue of the plurality of RDMA queues in response to determining that a QoS data field of the first RDMA packet has a first QoS value; and storing a second RDMA packet of the plurality of RDMA packets to a second RDMA queue of the plurality of RDMA queues in response to determining that the QoS data field of the second RDMA packet has a second QoS value, wherein the second QoS value is different from the first QoS value.
According to some embodiments, another method of queuing RDMA packets further comprises retrieving, by the networking device, the plurality of control packets from the control queue, wherein retrieving the plurality of control packets has a strict priority over retrieving the plurality of RDMA packets. In this case, the control queue may be configured to have a lower bandwidth than any of the plurality of RDMA queues. Alternatively or additionally, the plurality of control packets may include at least one network control protocol packet (e.g., BGP packet) and/or at least one congestion notification packet (CNP packet).
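The following sketch illustrates the queuing behavior described in the preceding paragraphs: packets are distributed among RDMA queues according to a mapping of QoS values to queues, the RDMA queues are drained according to per-queue weights, and a separate control queue (e.g., for BGP or congestion notification packets) is served with strict priority. The queue names, weights, and QoS-to-queue mapping are illustrative assumptions, not values taken from the disclosure.

```python
from collections import deque

# Hypothetical configuration: which QoS value feeds which RDMA queue, and the
# relative weights used when draining the RDMA queues.
QOS_TO_QUEUE = {3: "rdma_a", 4: "rdma_b"}
QUEUE_WEIGHTS = {"rdma_a": 3, "rdma_b": 1}

queues = {name: deque() for name in QUEUE_WEIGHTS}
control_queue = deque()   # e.g., BGP or congestion notification packets

def enqueue(pkt: dict) -> None:
    """Distribute a packet to a queue based on its QoS value."""
    if pkt.get("control"):
        control_queue.append(pkt)
    else:
        queues[QOS_TO_QUEUE[pkt["qos"]]].append(pkt)

def dequeue_batch() -> list:
    """Drain control packets first (strict priority), then take up to
    `weight` packets from each RDMA queue (a simple weighted round robin)."""
    batch = []
    while control_queue:                      # strict priority
        batch.append(control_queue.popleft())
    for name, weight in QUEUE_WEIGHTS.items():
        for _ in range(min(weight, len(queues[name]))):
            batch.append(queues[name].popleft())
    return batch

for p in [{"qos": 3}, {"qos": 4}, {"qos": 3}, {"control": True, "qos": 6}]:
    enqueue(p)
print(dequeue_batch())   # control packet first, then weighted RDMA packets
```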
According to some embodiments, a networking device (e.g., a leaf switch or a spine switch) may be configured to include a plurality of RDMA queues, and processing circuitry coupled to the plurality of RDMA queues and configured to: receive a plurality of RDMA packets, wherein each RDMA packet of the plurality of RDMA packets includes a quality of service (QoS) data field; distribute the plurality of RDMA packets among the plurality of RDMA queues according to a first mapping of a plurality of QoS values to the plurality of RDMA queues; and retrieve the plurality of RDMA packets from the plurality of RDMA queues according to a first weighting among the plurality of RDMA queues. For each RDMA packet of the plurality of RDMA packets, the QoS data field has a QoS value, among the plurality of QoS values, that indicates a class of service of the RDMA packet.
In still other embodiments, techniques for class-based tagging of encapsulated Remote Direct Memory Access (RDMA) traffic are described that may be used to maintain consistent class-based separation across network fabrics at the cloud scale (e.g., during layer 3 transport) so that RDMA traffic in a particular queue does not affect RDMA traffic in other queues. According to some embodiments, a system may be implemented to include a shared architecture for transporting RDMA traffic of different classes and from different tenants, wherein each device in a path across the shared architecture from one RDMA Network Interface Controller (NIC) to another NIC includes multiple queues dedicated to RDMA traffic of different classes. Various inventive embodiments are described herein, including methods, systems, non-transitory computer-readable storage media storing program code, instructions, etc., executable by one or more processors.
According to some embodiments, a data networking method includes receiving, by a networking device, a plurality of RDMA packets. Each RDMA packet of the plurality of RDMA packets includes a quality of service (QoS) data field having a QoS value indicating a class of service of the RDMA packet. The plurality of RDMA packets includes RDMA packets having QoS data fields with a first QoS value and RDMA packets having QoS data fields with a second QoS value different from the first QoS value. The method further includes, for each of the plurality of RDMA packets, encapsulating the RDMA packet to produce a corresponding one of the plurality of layer 3 encapsulated packets, the corresponding layer 3 encapsulated packet having at least one outer header. For each of the plurality of RDMA packets, encapsulating the RDMA packet includes adding at least one outer header of the corresponding layer 3 encapsulated packet to the RDMA packet. For each of the plurality of layer 3 encapsulated packets, the QoS data field of at least one outer header of the layer 3 encapsulated packet adopts a QoS value that is based on the QoS value of the QoS data field of the corresponding RDMA packet. For each layer 3 encapsulated packet of the plurality of layer 3 encapsulated packets, the at least one outer header may include a virtual network identification field based on a VLAN ID of the corresponding RDMA packet. In this case, the plurality of RDMA packets may include RDMA packets each having a first VLAN ID (some packets may have different QoS values than others) and RDMA packets each having a second VLAN ID different from the first VLAN ID. Alternatively or additionally, at least one layer 3 encapsulated packet of the plurality of layer 3 encapsulated packets may include a first VLAN tag and a second VLAN tag different from the first VLAN tag.
For each of the plurality of layer 3 encapsulated packets, at least one outer header of the encapsulated packet may include a User Datagram Protocol (UDP) header having a destination port number 4791 (e.g., a RoCEv2 reserved UDP port). Alternatively or additionally, at least one outer header of the layer 3 encapsulated packet may include an Internet Protocol (IP) header having a destination IP address associated with a destination Media Access Control (MAC) address of the corresponding RDMA packet.
For each RDMA packet of the plurality of RDMA packets, the QoS data field of the RDMA packet may be a DSCP data field of an IP header of the RDMA packet. In this case, for each of the plurality of layer 3 encapsulated packets, the QoS value in the QoS data field of the at least one outer header of the layer 3 encapsulated packet may be equal to the QoS value in the QoS data field of the corresponding RDMA packet. Alternatively, for each RDMA packet of the plurality of RDMA packets, the QoS data field of the RDMA packet may be an IEEE 802.1p data field of a VLAN tag of the RDMA packet. In this case, encapsulating the RDMA packet may include obtaining, from a mapping of QoS values and based on the QoS value of the QoS data field of the RDMA packet, a QoS value for the QoS data field of the at least one outer header of the corresponding layer 3 encapsulated packet, and storing the obtained QoS value in the QoS data field of the at least one outer header of the layer 3 encapsulated packet.
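As an illustrative sketch of the two alternatives just described: when the inner QoS marking is a DSCP value it can be copied into the outer header unchanged, and when it is an 802.1p value from the VLAN tag it is translated through a mapping of QoS values. The particular 802.1p-to-DSCP table below is an assumption for the example, not a mapping specified by the disclosure.

```python
# Hypothetical mapping from 802.1p priority code points (0-7) in the VLAN tag
# to DSCP values carried in the outer IP header of the encapsulated packet.
PCP_TO_DSCP = {0: 0, 1: 8, 2: 16, 3: 24, 4: 32, 5: 40, 6: 48, 7: 56}

def outer_qos_value(rdma_packet: dict) -> int:
    """Pick the QoS value for the outer header of the encapsulated packet.

    If the inner QoS field is already a DSCP value (from the inner IP header),
    it can be copied unchanged; if it is an 802.1p value from the VLAN tag,
    it is translated through the mapping table.
    """
    if rdma_packet["qos_kind"] == "dscp":
        return rdma_packet["qos"]              # copy DSCP as-is
    return PCP_TO_DSCP[rdma_packet["qos"]]     # map 802.1p to a DSCP value

assert outer_qos_value({"qos_kind": "dscp", "qos": 26}) == 26
assert outer_qos_value({"qos_kind": "802.1p", "qos": 5}) == 40
```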
According to some embodiments, a further data networking method includes, for each of at least one layer 3 encapsulated packet of the plurality of layer 3 encapsulated packets, copying congestion indication information from the corresponding RDMA packet to the at least one outer header of the layer 3 encapsulated packet. Alternatively or additionally, the data networking method may further comprise decapsulating each of a second plurality of layer 3 encapsulated packets to obtain a corresponding one of a plurality of decapsulated RDMA packets. For at least one of the plurality of decapsulated RDMA packets, the decapsulating may include copying congestion indication information from at least one outer header of the corresponding layer 3 encapsulated packet to the decapsulated RDMA packet.
According to some embodiments, a non-transitory computer readable memory may store a plurality of instructions executable by one or more processors, the plurality of instructions comprising instructions that when executed by the one or more processors cause the one or more processors to perform any of the methods described above.
According to some embodiments, a system may include one or more processors and a memory coupled to the one or more processors. The memory may store a plurality of instructions executable by the one or more processors, the plurality of instructions comprising instructions that when executed by the one or more processors cause the one or more processors to perform any of the methods described above.
The foregoing, along with other features and embodiments, will become more apparent with reference to the following description, claims and accompanying drawings.
Drawings
FIG. 1 is a high-level diagram of a distributed environment, illustrating a virtual or overlay cloud network hosted by a cloud service provider infrastructure, according to some embodiments.
Fig. 2 depicts a simplified architectural diagram of physical components in a physical network within a CSPI, in accordance with some embodiments.
FIG. 3 illustrates an example arrangement within a CSPI in which a host machine is connected to multiple Network Virtualization Devices (NVDs), in accordance with certain embodiments.
FIG. 4 depicts connectivity between a host machine and an NVD for providing I/O virtualization to support multi-tenancy (NVD) in accordance with certain embodiments.
Fig. 5 depicts a simplified block diagram of a physical network provided by a CSPI, in accordance with some embodiments.
Fig. 6 illustrates an example of a distributed cloud environment for data networking, in accordance with certain embodiments.
FIGS. 7A-7C illustrate simplified flow diagrams depicting a process for performing RDMA data transfer from a source computing instance on a multi-tenant source host machine to a destination computing instance on a multi-tenant destination host machine over a shared layer 3 switch fabric using a layer 3 routing protocol, in accordance with certain embodiments.
FIG. 8A illustrates an RDMA packet format according to version 2 of the RDMA over Converged Ethernet (RoCEv2) protocol.
Fig. 8B illustrates a format of a VLAN tagged RoCEv2 packet in accordance with some embodiments.
Fig. 8C illustrates a format of a Q-in-Q tagged RoCEv2 packet in accordance with some embodiments.
Fig. 9A shows the format of an Internet Protocol (IP) header.
Fig. 9B and 9C illustrate implementation of multiple queues according to some embodiments.
Fig. 10 illustrates a format of a VxLAN packet in accordance with certain embodiments.
FIG. 11 is a block diagram illustrating one mode for implementing cloud infrastructure as a service system in accordance with at least one embodiment.
Fig. 12 is a block diagram illustrating another mode for implementing cloud infrastructure as a service system in accordance with at least one embodiment.
Fig. 13 is a block diagram illustrating another mode for implementing cloud infrastructure as a service system in accordance with at least one embodiment.
Fig. 14 is a block diagram illustrating another mode for implementing cloud infrastructure as a service system in accordance with at least one embodiment.
FIG. 15 is a block diagram illustrating an example computer system in accordance with at least one embodiment.
Detailed Description
In the following description, for purposes of explanation, specific details are set forth in order to provide a thorough understanding of certain embodiments. It may be evident, however, that the various embodiments may be practiced without these specific details. The drawings and description are not intended to be limiting. The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment or design described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments or designs.
The present disclosure relates generally to networking, and more particularly to techniques that enable layer 2 traffic to be transmitted over a layer 3 network using a layer 3 protocol. In certain embodiments, the techniques described herein enable RDMA over Converged Ethernet (RoCE) traffic to be transferred over a shared layer 3 physical network or switch fabric using a layer 3 routing protocol from a compute instance on a multi-tenant host machine (i.e., a host machine hosting compute instances belonging to different tenants or customers) to a compute instance on another multi-tenant host machine. Customers or tenants experience the communications as occurring over a dedicated layer 2 network, while the communications actually occur over a shared (i.e., shared among multiple customers or tenants) layer 3 network using a layer 3 routing protocol.
Techniques are also disclosed that enable VLAN identification information (e.g., a VLAN ID) that can identify a tenant to be specified in a layer 2 header of a RoCE packet (e.g., the VLAN ID is included in an 802.1Q tag added to the RoCE packet), and that enable the VLAN identification information to be mapped to information included in a layer 3 overlay encapsulation protocol wrapper that is added to the 802.1Q-tagged RoCE layer 2 packet as the packet passes through a switch fabric. Mapping the VLAN identification information (or tenant information) to fields of the layer 3 encapsulation wrapper makes the distinction between traffic from different tenants visible to the networking devices in the layer 3 switch fabric. The networking devices may use this information to isolate traffic belonging to different customers or tenants.
Techniques are disclosed that enable QoS information associated with layer 2 RDMA packets (e.g., RoCE packets) to be preserved end-to-end, from the source host machine transmitting the data, all the way along the switch fabric, to the destination host machine to which the data is to be transmitted. The QoS information encoded in the layer 2 RoCE packets is made visible to networking devices in the switch fabric by encoding that information into the layer 3 overlay encapsulation protocol wrapper, which is added to the 802.1Q-tagged RoCE packets by the first switch that handles traffic sent by a host (e.g., an ingress top-of-rack (TOR) switch) as the packets enter the switch fabric. Mapping (e.g., copying) the QoS information into the encapsulation wrapper enables networking devices in the switch fabric to route RoCE traffic through the switch fabric using the layer 3 routing protocol and according to the QoS information associated with each packet.
Techniques are also disclosed that enable any networking device in the switch fabric to signal congestion on a per-packet basis. This congestion information is preserved in the packets as they travel through the switch fabric from the TOR switch connected to the source host machine (the "ingress TOR switch") to the TOR switch connected to the destination host machine (the "egress TOR switch"). At the TOR switch connected to the destination host machine, the congestion information from the layer 3 encapsulation wrapper is translated (e.g., copied) to the RoCE packet header (e.g., translated into ECN bits in the IP header of the RoCE packet) and is thus preserved and made available to the destination host machine. The destination host machine may then respond to the congestion information by sending a congestion notification packet (e.g., indicating the congestion to the source host machine so that it may, for example, reduce its transmission rate accordingly).
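A minimal sketch of the congestion-marking behavior described above, assuming the standard placement of the ECN field in the two least significant bits of the IP TOS/traffic class byte: the egress TOR copies the ECN bits of the outer header into the inner RoCE packet's IP header, and the destination host decides whether a congestion notification packet should be sent.

```python
ECN_MASK = 0b11          # the two least significant bits of the IP TOS byte
ECN_CE = 0b11            # "congestion experienced" code point

def copy_ecn(outer_tos: int, inner_tos: int) -> int:
    """Return the inner TOS byte with its ECN bits replaced by the ECN bits
    of the outer header, leaving the DSCP bits (upper six) untouched."""
    return (inner_tos & ~ECN_MASK) | (outer_tos & ECN_MASK)

def should_send_cnp(inner_tos: int) -> bool:
    """Destination-side check: a CE marking triggers a congestion
    notification packet back toward the source."""
    return (inner_tos & ECN_MASK) == ECN_CE

# An intermediate switch marked the outer header CE; the egress TOR copies
# that marking into the RoCE packet's IP header.
inner = copy_ecn(outer_tos=0b101000_11, inner_tos=0b101000_10)
assert should_send_cnp(inner)
```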
In a typical computing environment, data that is transferred is replicated multiple times by computer-executed network protocol stack software as the data is exchanged between two computers. This is known as the multiple copy problem. Further, the OS kernel and the CPU of the computer participate in these communications because the network stack (e.g., TCP stack) is native to the kernel. This introduces significant latency in the data transfer, which is intolerable for some applications.
Remote Direct Memory Access (RDMA) is a direct memory access mechanism that enables data to be moved between the application memories of computers or servers without involving the CPU (CPU bypass) or operating system (OS kernel bypass) of either computer. This mechanism allows high-throughput, high-data-rate, low-latency networking. RDMA supports zero-copy networking by enabling a network adapter or Network Interface Card (NIC) on a computer to transfer data directly from the wire to the computer's application memory, or from the computer's application memory to the wire, thereby eliminating the need to copy data between the application memory and data buffers in the computer's operating system. Such transfers require little work by the CPU or caches, avoid context switches on the computer, and can continue in parallel with other system operations. RDMA is very useful for High Performance Computing (HPC) and applications requiring low latency.
RDMA over Converged Ethernet (RoCE) is a network protocol that allows Remote Direct Memory Access (RDMA) over lossless Ethernet networks. RoCE accomplishes this by encapsulating InfiniBand (IB) transport packets over Ethernet. In general, RoCE involves the use of dedicated RDMA queues and dedicated VLANs, as well as layer 2 networks. However, layer 2 networks do not scale well and perform poorly because they lack key features and characteristics that exist in more scalable and higher-performance layer 3 networks. For example, layer 2 networks: do not support multiple paths between a data producer (e.g., a source) and a data consumer (e.g., a destination) in a network architecture; have problems with layer 2 loops; have problems with flooding of layer 2 frames; lack support for hierarchy in their address scheme (e.g., layer 2 does not have the concepts of CIDR, prefixes, and subnets); have problems with heavy broadcast traffic; lack control protocols that allow advertising of network connectivity (e.g., layer 2 lacks protocols like BGP, RIP, or IS-IS); lack troubleshooting protocols and tools (e.g., layer 2 lacks tools such as ICMP or Traceroute); etc.
There are currently two versions of the RoCE protocol, RoCEv1 and RoCEv2. RoCEv2, also known as "routable RoCE," is documented in "InfiniBand™ Architecture Specification Release 1.2.1, Annex A17: RoCEv2" (InfiniBand Trade Association, Beaverton, OR, September 2, 2014). RoCEv2 uses the User Datagram Protocol (UDP) as its transport protocol. Unfortunately, UDP lacks the congestion control mechanisms provided by TCP. Therefore, RoCEv2 suffers from the following problems: network livelock (e.g., processes keep changing state and frames keep moving, but the frames do not advance); network deadlock (e.g., processes wait forever due to cyclic resource dependencies); head-of-line (HOL) blocking (e.g., a packet at the head of a queue that cannot be forwarded blocks the packets behind it); victim flows (e.g., flows between non-congested nodes that pass through congested switches); unfairness (e.g., high-bandwidth flows add latency to other flows); and adverse effects on lossy traffic (such as TCP) caused by lossless traffic (such as RDMA) consuming buffers.
Successful RoCEv2 implementations also typically require network paths and VLANs dedicated to RDMA traffic. Furthermore, RoCEv2 as a protocol relies on layer 2 Priority Flow Control (PFC), Explicit Congestion Notification (ECN), or a combination of PFC and ECN to achieve a measure of congestion management, but these schemes are often inadequate in practice. PFC supports up to eight independent traffic classes and allows a receiver to request that the sender PAUSE traffic of a specified class by sending a PAUSE frame to the sender. Unfortunately, PFC is prone to PAUSE frame storms (e.g., too many PAUSE frames affecting all traffic of a given class along the entire path back to the traffic source), which can result in complete deadlock of the network. Furthermore, PFC PAUSE frames do not allow multi-tenant operation, because a PAUSE causes the sender to pause the transmission of all traffic for a specified class; while PFC provides at most eight traffic classes, the number of tenants may be many times greater than eight.
Embodiments disclosed herein include systems, methods, and apparatuses for implementing multi-tenant RDMA over Converged Ethernet (RoCE) at public cloud scale. For example, the environment may scale to very large networks that span hundreds, thousands, or more hosts. Such embodiments include techniques for supporting multi-tenant RoCE traffic in the public cloud while avoiding head-of-line blocking and while maintaining high performance, low latency, and lossless operation for RDMA applications. At the same time, the disclosed techniques may also be implemented to support conventional non-RDMA applications that use TCP/IP or UDP as their transport protocols. These techniques may be applied to all standard-speed, RoCE-capable Ethernet network interfaces including, for example, 25G, 40G, 100G, 400G, and 800G.
Techniques for scaling RoCE in the cloud as disclosed herein may include one or more of the following: providing each customer with a VLAN or a set of VLANs for their traffic; allowing hosts to isolate traffic between customers, and across different applications of a given customer, using 802.1Q-in-802.1Q (e.g., a packet may have two 802.1Q tags: a public VLAN tag and a private VLAN tag); mapping each VLAN to a VxLAN VNI on the TOR and assigning a unique VxLAN VNI for each customer and each of its VLANs; carrying customers' layer 2 traffic over a layer 3 network using a VxLAN overlay; and using EVPN (Ethernet VPN) to carry MAC address information across the underlying layer 3 network (the underlay).
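For example, the per-customer VLAN-to-VNI mapping mentioned above might be represented on a TOR as a simple lookup keyed by the two 802.1Q tags, as in the following hypothetical sketch (the VLAN IDs and VNIs shown are invented for illustration):

```python
# Hypothetical TOR-side table: each (customer outer VLAN, customer inner VLAN)
# pair from the 802.1Q-in-802.1Q tags gets its own VxLAN VNI, so traffic from
# different customers (and different applications of one customer) stays
# separated on the shared layer 3 underlay.
QINQ_TO_VNI = {
    (100, 10): 900010,   # customer A, application VLAN 10
    (100, 20): 900020,   # customer A, application VLAN 20
    (200, 10): 910010,   # customer B, application VLAN 10
}

def select_vni(outer_vlan: int, inner_vlan: int) -> int:
    return QINQ_TO_VNI[(outer_vlan, inner_vlan)]

# Two customers using the same inner VLAN ID still map to distinct VNIs.
assert select_vni(100, 10) != select_vni(200, 10)
```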
Embodiments may be implemented as described herein to support multiple RDMA applications (e.g., cloud services, High Performance Computing (HPC), and/or database applications), each having multiple traffic classes. Such support may be provided by isolating their traffic using the concept of network QoS traffic classes, in which a set of dedicated RDMA queues is allocated for the different traffic classes carrying mission-critical traffic. This isolation using RDMA queues may ensure that one queue (e.g., congestion in that queue) does not affect another queue. Such techniques may be used to support multiple RDMA tenants (also referred to as "public cloud customers") such that the queue configuration in the Clos architecture is transparent to the end-customer hosts (cloud customers). The network may be configured to map DSCP markings received from the customer host to the correct settings for the network queues, thereby decoupling the host QoS policies (configurations) from the fabric QoS policies (configurations). Customers may use DSCP traffic classes (also referred to as DSCP code points) and/or 802.1p traffic classes to express their performance expectations. These DSCP and 802.1p classes map to QoS queues in the Clos network in a manner that decouples the host QoS configuration from the Clos architecture QoS configuration.
To communicate QoS queue information and ECN markings over the Clos architecture, it may be necessary to ensure that the QoS queue information is carried across multiple network domains: for example, from a layer 2 port to a host, from a layer 3 port to another switch, or from a VxLAN virtual layer 2 port to another VxLAN interface on another switch. Such cross-domain transmission of QoS queue information may include carrying and enforcing QoS markings and ECN bit markings across these different network domains, as described herein.
Example virtual networking architecture
The term cloud service is generally used to refer to services provided by a Cloud Service Provider (CSP) to users or customers on demand (e.g., via a subscription model) using systems and infrastructure (cloud infrastructure) provided by the CSP. Typically, the servers and systems that make up the CSP infrastructure are separate from the customer's own in-house deployment servers and systems. Thus, customers can utilize cloud services provided by CSPs without purchasing separate hardware and software resources for the services. Cloud services are designed to provide subscribing customers with simple, extensible access to applications and computing resources without requiring the customers to invest in purchasing infrastructure for providing the services.
There are several cloud service providers that offer various types of cloud services. There are various different types or models of cloud services, including software as a service (SaaS), platform as a service (PaaS), infrastructure as a service (IaaS), and the like.
A customer may subscribe to one or more cloud services provided by the CSP. The customer may be any entity, such as an individual, organization, business, etc. When a customer subscribes to or registers for a service provided by the CSP, a lease or account will be created for the customer. The customer may then access one or more cloud resources of the subscription associated with the account via this account.
As described above, infrastructure as a service (IaaS) is a specific type of cloud computing service. In the IaaS model, CSPs provide infrastructure (referred to as cloud service provider infrastructure or CSPI) that can be used by customers to build their own customizable networks and deploy customer resources. Thus, the customer's resources and network are hosted in a distributed environment by the CSP's provided infrastructure. This is in contrast to traditional computing, where the customer's resources and network are hosted by the customer's provided infrastructure.
The CSPI may include high-performance computing resources, including various host machines, memory resources, and network resources, that form a physical network, also referred to as a substrate network or an underlay network. Resources in the CSPI may be spread over one or more data centers, which may be geographically spread over one or more geographic regions. Virtualization software may be executed by these physical resources to provide a virtualized distributed environment. Virtualization creates an overlay network (also referred to as a software-based network, a software-defined network, or a virtual network) on the physical network. The CSPI physical network provides the underlying foundation for creating one or more overlay or virtual networks on top of the physical network. The virtual or overlay network may include one or more Virtual Cloud Networks (VCNs). Virtual networks are implemented using software virtualization techniques (e.g., a hypervisor, functions performed by a Network Virtualization Device (NVD) (e.g., a smartNIC), a top-of-rack (TOR) switch, a smart TOR that implements one or more functions performed by the NVD, and other mechanisms) to create a layer of network abstraction that can run over the physical network. Virtual networks can take many forms, including peer-to-peer networks, IP networks, and the like. A virtual network is typically either a layer 3 IP network or a layer 2 VLAN. This method of virtual or overlay networking is often referred to as a virtual or overlay layer 3 network. Examples of protocols developed for virtual networks include IP-in-IP (or Generic Routing Encapsulation (GRE)), Virtual Extensible LAN (VXLAN, IETF RFC 7348), Virtual Private Networks (VPNs) (e.g., MPLS layer 3 virtual private networks (RFC 4364)), VMware's NSX, GENEVE, and the like.
For IaaS, the infrastructure provided by the CSP (CSPI) may be configured to provide virtualized computing resources over a public network (e.g., the Internet). In the IaaS model, the cloud computing service provider may host the infrastructure components (e.g., servers, storage devices, network nodes (e.g., hardware), deployment software, platform virtualization (e.g., a hypervisor layer), etc.). In some cases, the IaaS provider may also offer various services to accompany those infrastructure components (e.g., billing, monitoring, logging, security, load balancing, clustering, etc.). Thus, as these services may be policy driven, IaaS users may be able to implement policies to drive load balancing to maintain application availability and performance. CSPI provides a collection of infrastructure and complementary cloud services that enable customers to build and run a wide range of applications and services in a highly available hosted distributed environment. CSPI provides high-performance computing resources and capabilities as well as storage capacity in flexible virtual networks that are securely accessible from various networked locations, such as from a customer's on-premise network. When a customer subscribes to or registers for an IaaS service provided by the CSP, the lease created for that customer is a secure and isolated partition within the CSPI in which the customer can create, organize, and manage their cloud resources.
Customers may build their own virtual network using the computing, memory, and networking resources provided by the CSPI. One or more customer resources or workloads, such as computing instances, may be deployed on these virtual networks. For example, a customer may use resources provided by the CSPI to build one or more customizable and private virtual networks, referred to as Virtual Cloud Networks (VCNs). A customer may deploy one or more customer resources, such as computing instances, on a customer VCN. The computing instances may take the form of virtual machines, bare metal instances, and the like. Thus, CSPI provides a collection of infrastructure and complementary cloud services that enable customers to build and run a wide range of applications and services in a highly available virtual hosted environment. Clients do not manage or control the underlying physical resources provided by the CSPI, but may control the operating system, storage devices, and deployed applications; and may have limited control over selected networking components (e.g., firewalls).
CSP may provide a console that enables clients and network administrators to use CSPI resources to configure, access, and manage resources deployed in the cloud. In some embodiments, the console provides a web-based user interface that may be used to access and manage the CSPI. In some implementations, the console is a web-based application provided by the CSP.
The CSPI may support single-lease or multi-lease architectures. In a single tenancy architecture, software (e.g., applications, databases) or hardware components (e.g., host machines or servers) serve a single customer or tenant. In a multi-tenancy architecture, software or hardware components serve multiple customers or tenants. Thus, in a multi-tenancy architecture, the CSPI resources are shared among multiple customers or tenants. In the multi-tenancy case, precautions are taken and safeguards are implemented in the CSPI to ensure that each tenant's data is isolated and remains invisible to other tenants.
In a physical network, a network endpoint (or simply an endpoint) refers to a computing device or system that connects to a physical network and communicates back and forth with the network to which it is connected. A network endpoint in a physical network may be connected to a Local Area Network (LAN), a Wide Area Network (WAN), or another type of physical network. Examples of traditional endpoints in a physical network include modems, hubs, bridges, switches, routers and other network devices, physical computers (or host machines), and the like. Each physical device in the physical network has a fixed network address that can be used to communicate with the device. This fixed network address may be a layer 2 address (e.g., a MAC address), a fixed layer 3 address (e.g., an IP address), etc. In a virtualized environment or virtual network, the endpoints may include various virtual endpoints, such as virtual machines hosted by components of the physical network (e.g., by physical host machines). These endpoints in the virtual network are addressed by overlay addresses, such as overlay layer 2 addresses (e.g., overlay MAC addresses) and overlay layer 3 addresses (e.g., overlay IP addresses). Network overlays enable flexibility by allowing a network administrator to move overlay addresses associated with network endpoints using software management (e.g., via software implementing a control plane for the virtual network). Thus, unlike in a physical network, in a virtual network an overlay address (e.g., an overlay IP address) may be moved from one endpoint to another endpoint using network management software. Because the virtual network is built on top of a physical network, communication between components in the virtual network involves both the virtual network and the underlying physical network. To facilitate such communications, components of the CSPI are configured to learn and store mappings that map overlay addresses in the virtual network to actual physical addresses in the substrate network, and vice versa. These mappings are then used to facilitate the communications. Customer traffic is encapsulated to facilitate routing in the virtual network.
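As a trivial illustration of the overlay-to-substrate mappings mentioned above, such a mapping can be thought of as a lookup from an overlay address to the physical address of the host currently hosting the corresponding endpoint; the addresses below are invented for the example.

```python
# Illustrative mapping learned by CSPI components: overlay (virtual network)
# addresses to the substrate (physical) address of the host currently
# hosting that endpoint. The addresses here are invented for the example.
OVERLAY_TO_SUBSTRATE = {
    "10.0.0.5": "192.168.10.21",
    "10.0.1.7": "192.168.10.35",
}

def substrate_next_hop(overlay_ip: str) -> str:
    """Look up the physical host to which an encapsulated packet destined
    for this overlay address should be sent."""
    return OVERLAY_TO_SUBSTRATE[overlay_ip]

assert substrate_next_hop("10.0.0.5") == "192.168.10.21"
```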
Thus, a physical address (e.g., a physical IP address) is associated with a component in the physical network, and an overlay address (e.g., an overlay IP address) is associated with an entity in the virtual network. Both the physical IP address and the overlay IP address are types of real IP addresses. These are separate from the virtual IP addresses, which map to multiple real IP addresses. The virtual IP address provides a one-to-many mapping between the virtual IP address and a plurality of real IP addresses.
The cloud infrastructure or CSPI is physically hosted in one or more data centers in one or more regions of the world. The CSPI may include components in the physical or substrate network and virtualized components (e.g., virtual networks, computing instances, virtual machines, etc.) located in a virtual network built on top of the physical network components. In certain embodiments, the CSPI is organized and hosted in domains, regions, and availability domains. A region is typically a localized geographic area containing one or more data centers. Regions are generally independent of each other and can be far apart, for example, across countries or even continents. For example, a first region may be in Australia, another in Japan, another in India, and so on. CSPI resources are divided among regions such that each region has its own independent subset of CSPI resources. Each region may provide a set of core infrastructure services and resources, such as computing resources (e.g., bare metal servers, virtual machines, containers and related infrastructure, etc.); storage resources (e.g., block volume storage, file storage, object storage, archive storage); networking resources (e.g., Virtual Cloud Networks (VCNs), load balancing resources, connections to on-premise networks); database resources; edge networking resources (e.g., DNS); and access to management and monitoring resources, etc. Each region typically has multiple paths connecting it to other regions in the domain.
In general, an application is deployed in the region where it is most frequently used (i.e., on the infrastructure associated with that region), because using nearby resources is faster than using distant resources. Applications may also be deployed in different regions for various reasons, such as redundancy to mitigate the risk of region-wide events (such as large weather systems or earthquakes), to meet different requirements of legal jurisdictions, tax domains, and other business or social criteria, and so on.
Data centers within a region may be further organized and subdivided into Availability Domains (ADs). The availability domain may correspond to one or more data centers located within the region. A region may be comprised of one or more availability domains. In such a distributed environment, the CSPI resources are either region-specific, such as a Virtual Cloud Network (VCN), or availability domain-specific, such as computing instances.
ADs within a region are isolated from each other, have fault tolerance capability, and are configured such that they are highly unlikely to fail simultaneously. This is achieved by the ADs not sharing critical infrastructure resources (such as networking, physical cables, cable paths, cable entry points, etc.) so that a failure at one AD within a region is less likely to affect the availability of other ADs within the same region. ADs within the same region may be connected to each other through low latency, high bandwidth networks, which makes it possible to provide high availability connections for other networks (e.g., the internet, customer's on-premise networks, etc.) and build replication systems in multiple ADs to achieve high availability and disaster recovery. Cloud services use multiple ADs to ensure high availability and prevent resource failures. As the infrastructure provided by IaaS providers grows, more regions and ADs and additional capacity can be added. Traffic between availability domains is typically encrypted.
In some embodiments, regions are grouped into domains. A domain is a logical collection of regions. Domains are isolated from each other and do not share any data. Regions in the same domain may communicate with each other, but regions in different domains may not. A customer's lease or account with the CSP exists in a single domain and may be spread across one or more regions belonging to that domain. Typically, when a customer subscribes to an IaaS service, a lease or account is created for the customer in a region within the domain designated by the customer (referred to as the "home" region). The customer may extend the customer's lease to one or more other regions within the domain. The customer cannot access regions that are not in the domain of the customer's lease.
The IaaS provider may provide multiple domains, each domain catering to a particular set of customers or users. For example, a commercial domain may be provided for commercial customers. As another example, a domain may be provided for a particular country, for customers within that country. As yet another example, a government domain may be provided for governments, and the like. For example, a government domain may cater to a particular government and may have a higher level of security than a commercial domain. For example, Oracle Cloud Infrastructure (OCI) currently provides a domain for commercial regions and two domains (e.g., FedRAMP-authorized and IL5-authorized) for government cloud regions.
In some embodiments, an AD may be subdivided into one or more fault domains. A fault domain is a grouping of infrastructure resources within an AD that provides anti-affinity. Fault domains allow computing instances to be distributed so that they are not located on the same physical hardware within a single AD. This is known as anti-affinity. A fault domain refers to a set of hardware components (computers, switches, etc.) that share a single point of failure. The compute pool is logically divided into fault domains. Thus, a hardware failure or compute hardware maintenance event affecting one fault domain does not affect instances in other fault domains. The number of fault domains for each AD may vary depending on the embodiment. For example, in some embodiments, each AD contains three fault domains. A fault domain acts as a logical data center within an AD.
When a customer subscribes to the IaaS service, resources from the CSPI are provisioned to the customer and associated with the customer's lease. Clients can use these provisioned resources to build private networks and deploy resources on these networks. Customer networks hosted in the cloud by CSPI are referred to as Virtual Cloud Networks (VCNs). A customer may set up one or more Virtual Cloud Networks (VCNs) using CSPI resources allocated for the customer. VCNs are virtual or software defined private networks. Customer resources deployed in a customer's VCN may include computing instances (e.g., virtual machines, bare metal instances) and other resources. These computing instances may represent various customer workloads, such as applications, load balancers, databases, and the like. Computing instances deployed on a VCN may communicate with publicly accessible endpoints ("public endpoints"), with other instances in the same VCN or other VCNs (e.g., other VCNs of the customer or VCNs not belonging to the customer), with customer's in-house deployment data centers or networks, and with service endpoints and other types of endpoints through a public network such as the internet.
CSP may use CSPI to provide various services. In some cases, the clients of the CSPI themselves may act like service providers and provide services using CSPI resources. The service provider may expose a service endpoint featuring identifying information (e.g., IP address, DNS name, and port). The customer's resources (e.g., computing instances) may use a particular service by accessing service endpoints exposed by the service for that particular service. These service endpoints are typically endpoints that a user can publicly access via a public communications network, such as the internet, using a public IP address associated with the endpoint. Publicly accessible network endpoints are sometimes referred to as public endpoints.
In some embodiments, a service provider may expose a service via an endpoint for the service (sometimes referred to as a service endpoint). The customer of the service may then use this service endpoint to access the service. In some embodiments, a service endpoint that provides a service may be accessed by multiple clients that intend to consume the service. In other embodiments, a dedicated service endpoint may be provided for a customer such that only the customer may use the dedicated service endpoint to access a service.
In some embodiments, when a VCN is created, it is associated with a private overlay Classless Inter-Domain Routing (CIDR) address space, which is a range of private overlay IP addresses (e.g., 10.0/16) assigned to the VCN. The VCN includes associated subnets, routing tables, and gateways. A VCN resides within a single region but may span one or more or all of the region's availability domains. A gateway is a virtual interface configured for the VCN and enables communication of traffic between the VCN and one or more endpoints external to the VCN. One or more different types of gateways may be configured for the VCN to enable communication to and from different types of endpoints.
The VCN may be subdivided into one or more subnetworks, such as one or more subnets. A subnet is thus a configured unit or subdivision that can be created within a VCN. A VCN can have one or more subnets. Each subnet within a VCN is associated with a contiguous range of overlay IP addresses (e.g., 10.0.0.0/24 and 10.0.1.0/24) that do not overlap with other subnets in the VCN and that represent a subset of the address space of the VCN.
Each computing instance is associated with a Virtual Network Interface Card (VNIC), which enables the computing instance to participate in a subnet of the VCN. VNICs are logical representations of physical Network Interface Cards (NICs). Generally, a VNIC is an interface between an entity (e.g., a computing instance, a service) and a virtual network. The VNICs exist in a subnet with one or more associated IP addresses and associated security rules or policies. The VNICs correspond to layer 2 ports on the switch. The VNICs are attached to the computing instance and to a subnet within the VCN. The VNICs associated with the computing instance enable the computing instance to be part of a subnet of the VCN and to communicate (e.g., send and receive packets) with endpoints that are on the same subnet as the computing instance, with endpoints in a different subnet in the VCN, or with endpoints that are external to the VCN. Thus, the VNICs associated with the computing instance determine how the computing instance connects with endpoints internal and external to the VCN. When a computing instance is created and added to a subnet within the VCN, a VNIC for the computing instance is created and associated with the computing instance. For a subnet that includes a set of computing instances, the subnet contains VNICs corresponding to the set of computing instances, each VNIC attached to a computing instance within the set of computing instances.
Each computing instance is assigned a private overlay IP address via the VNIC associated with the computing instance. This private overlay network IP address is assigned to the VNIC associated with the computing instance when the computing instance is created and is used to route traffic to and from the computing instance. All VNICs in a given subnetwork use the same routing table, security list, and DHCP options. As described above, each subnet within a VCN is associated with a contiguous range of overlay IP addresses (e.g., 10.0.0.0/24 and 10.0.1.0/24) that do not overlap with other subnets in the VCN and represent a subset of the address space within the address space of the VCN. For a VNIC on a particular subnet of a VCN, the private overlay IP address assigned to that VNIC is an address from a contiguous range of overlay IP addresses allocated for the subnet.
In some embodiments, in addition to the private overlay IP address, a computing instance may optionally be assigned additional overlay IP addresses, such as, for example, one or more public IP addresses if the instance is in a public subnet. The multiple addresses are assigned either on the same VNIC or on multiple VNICs associated with the computing instance. However, each instance has a primary VNIC that is created during instance launch and is associated with the overlay private IP address assigned to the instance; this primary VNIC cannot be deleted. Additional VNICs, referred to as secondary VNICs, may be added to an existing instance in the same availability domain as the primary VNIC. All VNICs are in the same availability domain as the instance. A secondary VNIC may be located in a subnet in the same VCN as the primary VNIC, or in a different subnet in the same VCN or in a different VCN.
If the computing instance is in a public subnet, it may optionally be assigned a public IP address. When creating a subnet, the subnet may be designated as either a public subnet or a private subnet. A private subnet means that resources (e.g., compute instances) and associated VNICs in the subnet cannot have a public overlay IP address. A public subnet means that resources in a subnet and associated VNICs may have a public IP address. A customer may specify that a subnet exists in a single availability domain or multiple availability domains in a cross-regional or domain.
As described above, a VCN may be subdivided into one or more subnets. In some embodiments, a Virtual Router (VR) configured for the VCN (referred to as the VCN VR or simply the VR) enables communication between the subnets of the VCN. For a subnet within a VCN, the VR represents a logical gateway for that subnet that enables the subnet (i.e., the computing instances on that subnet) to communicate with endpoints on other subnets within the VCN as well as with other endpoints outside the VCN. The VCN VR is a logical entity configured to route traffic between VNICs in the VCN and virtual gateways ("gateways") associated with the VCN. Gateways are further described below with respect to FIG. 1. The VCN VR is a layer 3/IP layer concept. In one embodiment, there is one VCN VR for a VCN, where the VCN VR has a potentially unlimited number of ports addressed by IP addresses, with one port for each subnet of the VCN. In this way, the VCN VR has a different IP address for each subnet in the VCN to which the VCN VR is attached. The VR is also connected to the various gateways configured for the VCN. In some embodiments, a particular overlay IP address in the overlay IP address range for a subnet is reserved for a port of the VCN VR for that subnet. Consider, for example, a VCN having two subnets with associated address ranges of 10.0/16 and 10.1/16, respectively. For the first subnet in the VCN, with an address range of 10.0/16, an address from this range is reserved for the port of the VCN VR for that subnet. In some cases, the first IP address in the range may be reserved for the VCN VR. For example, for the subnet covering the IP address range 10.0/16, the IP address 10.0.0.1 may be reserved for the port of the VCN VR for that subnet. For the second subnet in the same VCN, with an address range of 10.1/16, the VCN VR may have a port for that second subnet with the IP address 10.1.0.1. The VCN VR has a different IP address for each of the subnets in the VCN.
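The reservation of the first address in each subnet for the VCN VR port, as in the 10.0.0.1 and 10.1.0.1 examples above, can be illustrated with a short Python sketch (the full-length CIDR notation is used here because the ipaddress module requires it):

```python
import ipaddress

def vr_port_address(subnet_cidr: str) -> str:
    """Return the first usable address of a subnet, which in this example is
    reserved for the VCN VR's port on that subnet (e.g., 10.0.0.1 for
    10.0.0.0/16)."""
    subnet = ipaddress.ip_network(subnet_cidr)
    return str(subnet.network_address + 1)

assert vr_port_address("10.0.0.0/16") == "10.0.0.1"
assert vr_port_address("10.1.0.0/16") == "10.1.0.1"
```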
In some other embodiments, each subnet within the VCN may have its own associated VR that is addressable by the subnet using a reserved or default IP address associated with the VR. For example, the reserved or default IP address may be the first IP address in the range of IP addresses associated with the subnet. The VNICs in the subnet may use this default or reserved IP address to communicate (e.g., send and receive packets) with the VR associated with the subnet. In such an embodiment, the VR is the entry/exit point of the subnet. The VR associated with a subnet within the VCN may communicate with other VR associated with other subnets within the VCN. The VR may also communicate with a gateway associated with the VCN. The VR functions of the subnetwork are run on or performed by one or more NVDs that perform VNIC functions for VNICs in the subnetwork.
The VCN may be configured with routing tables, security rules, and DHCP options. The routing table is a virtual routing table for the VCN and includes rules for routing traffic from a subnet within the VCN to a destination outside the VCN through a gateway or specially configured instance. The routing tables of the VCNs may be customized to control how packets are forwarded/routed to and from the VCNs. DHCP options refer to configuration information that is automatically provided to an instance at instance start-up.
The security rules configured for a VCN represent overlay firewall rules for the VCN. Security rules may include ingress and egress rules and specify the types of traffic (e.g., based on protocol and port) that are allowed into and out of instances in the VCN. The customer may choose whether a given rule is stateful or stateless. For example, a customer may allow incoming SSH traffic from anywhere to a set of instances by setting up a stateful ingress rule with source CIDR 0.0.0.0/0 and destination TCP port 22. Security rules may be implemented using network security groups or security lists. A network security group consists of a set of security rules that apply only to the resources in that group. A security list, on the other hand, includes rules that apply to all resources in any subnet that uses the security list. A VCN may be provided with a default security list with default security rules. The DHCP options configured for a VCN provide configuration information that is automatically provided to the instances in the VCN at instance launch.
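The following is a minimal, illustrative sketch of how the SSH ingress rule mentioned above (source CIDR 0.0.0.0/0, destination TCP port 22) might be represented and matched against an inbound packet; the data model and names below are assumptions for this example rather than any product API, and connection tracking for stateful rules is omitted for brevity.

```python
# Illustrative-only model of an ingress security rule and a simple match check.
import ipaddress
from dataclasses import dataclass

@dataclass
class IngressRule:
    source_cidr: str
    protocol: str
    dest_port: int
    stateful: bool = True  # a stateful rule would also track connections (omitted here)

def allows(rule: IngressRule, src_ip: str, protocol: str, dest_port: int) -> bool:
    """Check whether an inbound packet matches this ingress rule."""
    in_cidr = ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule.source_cidr)
    return in_cidr and protocol == rule.protocol and dest_port == rule.dest_port

ssh_rule = IngressRule(source_cidr="0.0.0.0/0", protocol="TCP", dest_port=22)
print(allows(ssh_rule, "203.0.113.7", "TCP", 22))    # True: incoming SSH allowed
print(allows(ssh_rule, "203.0.113.7", "TCP", 3389))  # False: no matching rule
```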
In some embodiments, configuration information for the VCN is determined and stored by the VCN control plane. For example, configuration information for a VCN may include information about: address ranges associated with the VCN, subnets and associated information within the VCN, one or more VRs associated with the VCN, computing instances in the VCN and associated VNICs, NVDs (e.g., VNICs, VRs, gateways) that perform various virtualized network functions associated with the VCN, status information for the VCN, and other VCN related information. In certain embodiments, the VCN distribution service publishes configuration information stored by the VCN control plane or portion thereof to the NVD. The distributed information may be used to update information (e.g., forwarding tables, routing tables, etc.) stored and used by the NVD to forward packets to or from computing instances in the VCN.
In some embodiments, the creation of VCNs and subnets is handled by the VCN Control Plane (CP) and the launching of compute instances is handled by the compute control plane. The compute control plane is responsible for allocating physical resources for the compute instance and then invoking the VCN control plane to create and attach the VNICs to the compute instance. The VCN CP also sends the VCN data map to a VCN data plane configured to perform packet forwarding and routing functions. In some embodiments, the VCN CP provides a distribution service responsible for providing updates to the VCN data plane. Examples of VCN control planes are also depicted in fig. 11, 12, 13, and 14 (see references 1116, 1216, 1316, and 1416) and described below.
A customer may create one or more VCNs using resources hosted by the CSPI. Computing instances deployed on a client VCN may communicate with different endpoints. These endpoints may include endpoints hosted by the CSPI and endpoints external to the CSPI.
Various different architectures for implementing cloud-based services using CSPI are depicted in fig. 1, 2, 3, 4, 5, 11, 12, 13, and 15 and described below. Fig. 1 is a high-level diagram of a distributed environment 100, illustrating an overlay or customer VCN hosted by a CSPI, in accordance with certain embodiments. The distributed environment depicted in fig. 1 includes a plurality of components in an overlay network. The distributed environment 100 depicted in FIG. 1 is only an example and is not intended to unduly limit the scope of the claimed embodiments. Many variations, alternatives, and modifications are possible. For example, in some embodiments, the distributed environment depicted in fig. 1 may have more or fewer systems or components than those shown in fig. 1, may combine two or more systems, or may have different system configurations or arrangements.
As shown in the example depicted in fig. 1, distributed environment 100 includes CSPI 101 that provides services and resources that customers can subscribe to and use to build their Virtual Cloud Networks (VCNs). In some embodiments, CSPI 101 provides IaaS services to subscribing customers. The data centers within CSPI 101 may be organized into one or more regions. An example region "Region US" 102 is shown in fig. 1. A customer has configured a customer VCN 104 for the region 102. The customer may deploy various computing instances on the VCN 104, where the computing instances may include virtual machine or bare metal instances. Examples of instances include applications, databases, load balancers, and the like.
In the embodiment depicted in fig. 1, customer VCN 104 includes two subnets, namely, "subnet-1" and "subnet-2," each having its own CIDR IP address range. In FIG. 1, the overlay IP address range for subnet-1 is 10.0/16 and the address range for subnet-2 is 10.1/16. VCN virtual router 105 represents a logical gateway for the VCN that enables communication between the subnets of VCN 104 and with other endpoints external to the VCN. The VCN VR 105 is configured to route traffic between the VNICs in the VCN 104 and gateways associated with the VCN 104. The VCN VR 105 provides a port for each subnet of the VCN 104. For example, VR 105 may provide a port for subnet-1 with IP address 10.0.0.1 and a port for subnet-2 with IP address 10.1.0.1.
Multiple computing instances may be deployed on each subnet, where the computing instances may be virtual machine instances and/or bare metal instances. The computing instances in a subnet may be hosted by one or more host machines within CSPI 101. A computing instance participates in a subnet via the VNIC associated with the computing instance. For example, as shown in fig. 1, computing instance C1 is part of subnet-1 via the VNIC associated with that computing instance. Likewise, computing instance C2 is part of subnet-1 via the VNIC associated with C2. In a similar manner, multiple computing instances (which may be virtual machine instances or bare metal instances) may be part of subnet-1. Each computing instance is assigned a private overlay IP address and a MAC address via its associated VNIC. For example, in fig. 1, computing instance C1 has overlay IP address 10.0.0.2 and MAC address M1, while computing instance C2 has private overlay IP address 10.0.0.3 and MAC address M2. Each computing instance in subnet-1, including computing instances C1 and C2, has a default route to VCN VR 105 using IP address 10.0.0.1, which is the IP address of the port of VCN VR 105 for subnet-1.
Multiple computing instances may be deployed on subnet-2, including virtual machine instances and/or bare metal instances. For example, as shown in fig. 1, computing instances D1 and D2 are part of subnet-2 via the VNICs associated with the respective computing instances. In the embodiment shown in fig. 1, computing instance D1 has overlay IP address 10.1.0.2 and MAC address MM1, while computing instance D2 has private overlay IP address 10.1.0.3 and MAC address MM2. Each computing instance in subnet-2, including computing instances D1 and D2, has a default route to VCN VR 105 using IP address 10.1.0.1, which is the IP address of the port of VCN VR 105 for subnet-2.
The VCN 104 may also include one or more load balancers. For example, a load balancer may be provided for a subnet and may be configured to load balance traffic across multiple compute instances on the subnet. A load balancer may also be provided to load balance traffic across subnets in the VCN.
A particular computing instance deployed on VCN 104 may communicate with a variety of different endpoints. These endpoints may include endpoints hosted by CSPI 101 and endpoints external to CSPI 101. Endpoints hosted by CSPI 101 may include: endpoints on the same subnet as the particular computing instance (e.g., communications between two computing instances in subnet-1); endpoints on different subnets but within the same VCN (e.g., communications between a compute instance in subnet-1 and a compute instance in subnet-2); endpoints in a different VCN in the same region (e.g., communications between a compute instance in subnet-1 and an endpoint in another VCN 106 in the same region, or communications between a compute instance in subnet-1 and an endpoint in the services network 110 in the same region); or endpoints in a VCN in a different region (e.g., communications between a compute instance in subnet-1 and an endpoint in a VCN 108 in a different region). A computing instance in a subnet hosted by CSPI 101 may also communicate with endpoints that are not hosted by CSPI 101 (i.e., are external to CSPI 101). These external endpoints include endpoints in the customer's on-premise network 116, endpoints in other remote cloud-hosted networks 118, public endpoints 114 accessible via a public network (such as the internet), and other endpoints.
Communication between computing instances on the same subnet is facilitated using VNICs associated with the source computing instance and the destination computing instance. For example, compute instance C1 in subnet-1 may want to send a packet to compute instance C2 in subnet-1. For a packet that originates from a source computing instance and whose destination is another computing instance in the same subnet, the packet is first processed by the VNIC associated with the source computing instance. The processing performed by the VNICs associated with the source computing instance may include determining destination information for the packet from a packet header, identifying any policies (e.g., security lists) configured for the VNICs associated with the source computing instance, determining a next hop for the packet, performing any packet encapsulation/decapsulation functions as needed, and then forwarding/routing the packet to the next hop for the purpose of facilitating communication of the packet to its intended destination. When the destination computing instance and the source computing instance are located in the same subnet, the VNIC associated with the source computing instance is configured to identify the VNIC associated with the destination computing instance and forward the packet to the VNIC for processing. The VNIC associated with the destination computing instance is then executed and the packet is forwarded to the destination computing instance.
For packets to be transmitted from computing instances in a subnet to endpoints in different subnets in the same VCN, communication is facilitated by VNICs associated with source and destination computing instances and VCN VR. For example, if computing instance C1 in subnet-1 in FIG. 1 wants to send a packet to computing instance D1 in subnet-2, then the packet is first processed by the VNIC associated with computing instance C1. The VNIC associated with computing instance C1 is configured to route packets to VCN VR 105 using a default route or port 10.0.0.1 of the VCN VR. The VCN VR 105 is configured to route packets to subnet-2 using port 10.1.0.1. The VNIC associated with D1 then receives and processes the packet and the VNIC forwards the packet to computing instance D1.
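As a simplified, illustrative sketch of the next-hop choice described in the two preceding paragraphs, the fragment below delivers a packet to the destination VNIC when the destination stays within the source subnet, and hands it to the VCN VR when the destination is in another subnet of the same VCN; the subnet layout mirrors the fig. 1 example, but the names and structure are assumptions made for this sketch.

```python
# Illustrative sketch of the intra-VCN next-hop decision made by a source VNIC.
import ipaddress

SUBNETS = {"subnet-1": "10.0.0.0/16", "subnet-2": "10.1.0.0/16"}
VR_PORTS = {"subnet-1": "10.0.0.1", "subnet-2": "10.1.0.1"}

def next_hop(src_subnet: str, dst_ip: str) -> str:
    """Return a label describing the next hop chosen by the source VNIC."""
    dst = ipaddress.ip_address(dst_ip)
    if dst in ipaddress.ip_network(SUBNETS[src_subnet]):
        return f"VNIC of {dst_ip} (same subnet)"
    for name, cidr in SUBNETS.items():
        if dst in ipaddress.ip_network(cidr):
            return f"VCN VR port {VR_PORTS[src_subnet]} -> port {VR_PORTS[name]} for {name}"
    return "VCN VR -> gateway (destination outside the VCN)"

print(next_hop("subnet-1", "10.0.0.3"))  # same-subnet delivery (e.g., C1 -> C2)
print(next_hop("subnet-1", "10.1.0.2"))  # routed via the VCN VR (e.g., C1 -> D1)
```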
For packets to be communicated from a computing instance in VCN 104 to an endpoint external to VCN 104, communication is facilitated by a VNIC associated with the source computing instance, VCN VR 105, and a gateway associated with VCN 104. One or more types of gateways may be associated with VCN 104. A gateway is an interface between a VCN and another endpoint that is external to the VCN. The gateway is a layer 3/IP layer concept and enables the VCN to communicate with endpoints external to the VCN. Thus, the gateway facilitates traffic flow between the VCN and other VCNs or networks. Various different types of gateways may be configured for the VCN to facilitate different types of communications with different types of endpoints. Depending on the gateway, the communication may be through a public network (e.g., the internet) or through a private network. Various communication protocols may be used for these communications.
For example, computing instance C1 may want to communicate with an endpoint external to VCN 104. The packet may be first processed by the VNIC associated with source computing instance C1. The VNIC processing determines that the destination of the packet is outside of C1's subnet-1. The VNIC associated with C1 may forward the packet to the VCN VR 105 for VCN 104. The VCN VR 105 then processes the packet and, as part of the processing, determines a particular gateway associated with VCN 104 as the next hop for the packet based on the packet's destination. The VCN VR 105 may then forward the packet to the particular identified gateway. For example, if the destination is an endpoint within the customer's on-premise network, the packet may be forwarded by the VCN VR 105 to the Dynamic Routing Gateway (DRG) 122 configured for VCN 104. The packet may then be forwarded from the gateway to a next hop to facilitate delivery of the packet to its final intended destination.
Various different types of gateways may be configured for a VCN. Examples of gateways that may be configured for a VCN are depicted in fig. 1 and described below. Examples of gateways associated with VCNs are also depicted in fig. 11, 12, 13, and 14 (e.g., gateways referenced by reference numerals 1134, 1136, 1138, 1234, 1236, 1238, 1334, 1336, 1338, 1434, 1436, and 1438) and described below. As shown in the embodiment depicted in fig. 1, a Dynamic Routing Gateway (DRG) 122 may be added to or associated with customer VCN 104 and provides a path for private network traffic between customer VCN 104 and another endpoint, where the other endpoint may be the customer's on-premise network 116, a VCN 108 in a different region of CSPI 101, or another remote cloud network 118 not hosted by CSPI 101. The customer's on-premise network 116 may be a customer network or customer data center built using the customer's own resources. Access to the customer's on-premise network 116 is typically very restricted. For a customer that has both an on-premise network 116 and one or more VCNs 104 deployed or hosted in the cloud by CSPI 101, the customer may want the on-premise network 116 and the cloud-based VCNs 104 to be able to communicate with each other. This enables the customer to build an extended hybrid environment encompassing the customer's VCN 104 hosted by CSPI 101 and the on-premise network 116. DRG 122 enables such communication. To enable such communications, a communication channel 124 is set up, where one endpoint of the channel is in the customer's on-premise network 116 and the other endpoint is in CSPI 101 and connected to customer VCN 104. The communication channel 124 can be over a public communication network (such as the internet) or a private communication network. Various different communication protocols may be used, such as IPsec VPN technology over a public communication network (such as the internet), Oracle's FastConnect technology that uses a private network rather than a public network, and the like. The device or equipment in the customer's on-premise network 116 that forms one endpoint of the communication channel 124 is referred to as Customer Premise Equipment (CPE), such as CPE 126 depicted in fig. 1. On the CSPI 101 side, the endpoint may be a host machine executing DRG 122.
In some embodiments, a Remote Peering Connection (RPC) may be added to a DRG, which allows a customer to peer one VCN with another VCN in a different region. Using such an RPC, customer VCN 104 may connect with VCN 108 in another region using DRG 122. DRG 122 may also be used to communicate with other remote cloud networks 118 not hosted by CSPI 101, such as a Microsoft Azure cloud, an Amazon AWS cloud, and others.
As shown in fig. 1, the customer VCN 104 may be configured with an Internet Gateway (IGW) 120 that enables computing instances on VCN 104 to communicate with public endpoints 114 accessible over a public network such as the internet. IGW 120 is a gateway that connects a VCN to a public network such as the internet. IGW 120 enables a public subnet within a VCN (such as VCN 104), where the resources in the public subnet have public overlay IP addresses, to directly access public endpoints 112 on a public network 114 such as the internet. Using IGW 120, connections may be initiated from a subnet within VCN 104 or from the internet.
A Network Address Translation (NAT) gateway 128 may be configured for the customer's VCN 104 and enables cloud resources in the customer's VCN that do not have public overlay IP addresses to access the internet, without exposing those resources to direct inbound internet connections (e.g., L4-L7 connections). This enables a private subnet within the VCN, such as private subnet-1 in VCN 104, to privately access public endpoints on the internet. With a NAT gateway, connections to the public internet can be initiated only from the private subnet and not from the internet.
In some embodiments, a Service Gateway (SGW) 126 may be configured for the customer VCN 104 and provides a path for private network traffic between VCN 104 and supported service endpoints in the services network 110. In some embodiments, the services network 110 may be provided by the CSP and may provide various services. An example of such a services network is the Oracle Services Network, which provides various services that customers can use. For example, a computing instance (e.g., a database system) in a private subnet of the customer VCN 104 can back up data to a service endpoint (e.g., object storage) without needing a public IP address or access to the internet. In some embodiments, a VCN can have only one SGW, and connections can be initiated only from a subnet within the VCN and not from the services network 110. If a VCN is peered with another VCN, resources in the other VCN typically cannot access the SGW. Resources in an on-premise network that connect to a VCN using FastConnect or VPN Connect can also use the service gateway configured for that VCN.
In some embodiments, SGW 126 uses the concept of a service Classless Inter-Domain Routing (CIDR) label, which is a string that represents all the regional public IP address ranges for the service or group of services of interest. Customers use the service CIDR label when they configure the SGW and associated route rules to control traffic to the service. Customers can optionally use the service CIDR label when configuring security rules, so that those rules do not need to be adjusted if the service's public IP addresses change in the future.
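The following hypothetical sketch illustrates the service CIDR label idea: a rule references a label string, and the label resolves to whatever public IP ranges the service currently advertises, so the rule itself does not change when those ranges change. The label name and prefixes below are invented solely for this example.

```python
# Hypothetical service CIDR label resolution; the label and prefixes are made up.
import ipaddress

SERVICE_CIDR_LABELS = {
    "all-services-in-region": ["203.0.113.0/24", "198.51.100.0/24"],
}

def traffic_matches_label(dst_ip: str, label: str) -> bool:
    """True if the destination falls in any prefix currently behind the label."""
    dst = ipaddress.ip_address(dst_ip)
    return any(dst in ipaddress.ip_network(cidr) for cidr in SERVICE_CIDR_LABELS[label])

# A route or security rule written against the label keeps working even if the
# service provider later updates the prefixes mapped to the label.
print(traffic_matches_label("203.0.113.10", "all-services-in-region"))  # True
```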
A Local Peering Gateway (LPG) 132 is a gateway that can be added to the customer VCN 104 and enables VCN 104 to peer with another VCN in the same region. Peering means that the VCNs communicate using private IP addresses, without the traffic traversing a public network (such as the internet) or being routed through the customer's on-premise network 116. In a preferred embodiment, a VCN has a separate LPG for each peering it establishes. Local peering or VCN peering is a common practice used to establish network connectivity between different applications or infrastructure management functions.
A service provider, such as the provider of a service in the services network 110, may provide access to the service using different access models. According to the public access model, services may be exposed as public endpoints publicly accessible by computing instances in the client VCN via a public network (such as the internet), and/or may be privately accessible via SGW 126. The service may be accessed as a private IP endpoint in a private subnet in the client's VCN according to a particular private access model. This is known as Private Endpoint (PE) access and enables a service provider to expose its services as instances in a customer's private network. The private endpoint resources represent services within the customer's VCN. Each PE appears as a VNIC (referred to as a PE-VNIC, having one or more private IPs) in a subnet selected by the customer in the customer's VCN. Thus, the PE provides a way to use the VNIC to present services in a private customer VCN subnet. Since the endpoints are exposed as VNICs, all features associated with the VNICs (such as routing rules, security lists, etc.) may now be used for the PE VNICs.
Service providers may register their services to enable access through the PE. The provider may associate policies with the service that limit the visibility of the service to customer leases. A provider may register multiple services under a single virtual IP address (VIP), especially for multi-tenant services. There may be multiple such private endpoints (in multiple VCNs) representing the same service.
The computing instance in the private subnet may then access the service using the private IP address or service DNS name of the PE VNIC. The computing instance in the client VCN may access the service by sending traffic to the private IP address of the PE in the client VCN. The Private Access Gateway (PAGW) 130 is a gateway resource that may be attached to a service provider VCN (e.g., a VCN in the service network 110) that acts as an ingress/egress point for all traffic from/to the customer subnet private endpoint. The PAGW 130 enables the provider to extend the number of PE connections without utilizing its internal IP address resources. The provider need only configure one PAGW for any number of services registered in a single VCN. The provider may represent the service as a private endpoint in multiple VCNs of one or more customers. From the customer's perspective, the PE VNICs are not attached to the customer's instance, but rather appear to be attached to the service with which the customer wishes to interact. Traffic destined for the private endpoint is routed to the service via the PAGW 130. These are called customer-to-service private connections (C2S connections).
The PE concept can also be used to extend private access for services to customer's internal networks and data centers by allowing traffic to flow through the FastConnect/IPsec links and private endpoints in the customer's VCN. Private access to services can also be extended to the customer's peer VCN by allowing traffic to flow between LPG 132 and PEs in the customer's VCN.
The customer may control routing in the VCN at the subnet level, so the customer may specify which subnets in the customer's VCN (such as VCN 104) use each gateway. The VCN's routing tables are used to decide whether traffic is allowed to leave the VCN through a particular gateway. For example, a routing table for a public subnet within customer VCN 104 may send non-local traffic through IGW 120. A routing table for a private subnet within the same customer VCN 104 may send traffic destined for CSP services through SGW 126. All remaining traffic may be sent via NAT gateway 128. Routing tables only control traffic leaving the VCN.
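As a purely illustrative sketch (the gateway names follow fig. 1, while the rule sets and the assumed service prefix are invented for this example), the per-subnet routing behavior described above can be pictured as a longest-prefix-match lookup over a subnet's route rules:

```python
# Illustrative per-subnet route tables: public subnet -> IGW for non-local
# traffic; private subnet -> SGW for service traffic, NAT gateway otherwise.
import ipaddress

ROUTE_TABLES = {
    "public-subnet": [("0.0.0.0/0", "IGW 120")],
    "private-subnet": [("203.0.113.0/24", "SGW 126"),      # assumed service prefix
                       ("0.0.0.0/0", "NAT gateway 128")],
}

def egress_target(subnet: str, dst_ip: str) -> str:
    """Pick the gateway a packet leaves through, using longest-prefix match."""
    dst = ipaddress.ip_address(dst_ip)
    candidates = [(ipaddress.ip_network(cidr), target)
                  for cidr, target in ROUTE_TABLES[subnet]
                  if dst in ipaddress.ip_network(cidr)]
    return max(candidates, key=lambda c: c[0].prefixlen)[1]

print(egress_target("public-subnet", "198.51.100.7"))   # IGW 120
print(egress_target("private-subnet", "203.0.113.20"))  # SGW 126
print(egress_target("private-subnet", "198.51.100.7"))  # NAT gateway 128
```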
Security lists associated with a VCN are used to control traffic entering the VCN via inbound connections through a gateway. All resources in a subnet use the same routing table and security lists. Security lists may be used to control the specific types of traffic that are allowed into and out of instances in a subnet of the VCN. Security list rules may include ingress (inbound) and egress (outbound) rules. For example, an ingress rule may specify an allowed source address range, while an egress rule may specify an allowed destination address range. Security rules may specify a particular protocol (e.g., TCP, ICMP), a particular port (e.g., 22 for SSH, 3389 for Windows RDP), and so on. In some implementations, an instance's operating system may enforce its own firewall rules that are aligned with the security list rules. Rules may be stateful (e.g., a connection is tracked and the response is automatically allowed without an explicit security list rule for the response traffic) or stateless.
Access from a customer's VCN (i.e., by resources or computing instances deployed on VCN 104) may be categorized as public access, private access, or dedicated access. Public access refers to an access model in which a public IP address or a NAT is used to access a public endpoint. Private access enables customer workloads in VCN 104 with private IP addresses (e.g., resources in a private subnet) to access services without traversing a public network such as the internet. In some embodiments, CSPI 101 enables customer VCN workloads with private IP addresses to access the public service endpoints of services using a service gateway. The service gateway thus provides a private access model by establishing a virtual link between the customer's VCN and the public endpoint of a service residing outside the customer's private network.
In addition, the CSPI may provide dedicated public access using techniques such as FastConnect public peering, where an on-premise customer instance may access one or more services in the customer's VCN using a FastConnect connection without traversing a public network such as the internet. The CSPI may also provide dedicated private access using FastConnect private peering, where an on-premise instance with a private IP address may access the customer's VCN workloads using a FastConnect connection. FastConnect is a network connectivity alternative to using the public internet to connect a customer's on-premise network to the CSPI and its services. FastConnect provides a simple, flexible, and economical way to create dedicated and private connections with higher bandwidth options and a more reliable and consistent network experience than internet-based connections.
FIG. 1 and the accompanying description above describe various virtualized components in an example virtual network. As described above, the virtual network is built on the underlying physical or substrate network. Fig. 2 depicts a simplified architectural diagram of physical components in a physical network within CSPI 200 that provides an underlying layer for a virtual network, in accordance with some embodiments. As shown, CSPI 200 provides a distributed environment including components and resources (e.g., computing, memory, and network resources) provided by a Cloud Service Provider (CSP). These components and resources are used to provide cloud services (e.g., iaaS services) to subscribing clients (i.e., clients that have subscribed to one or more services provided by CSPs). Clients are provisioned with a subset of the resources (e.g., computing, memory, and network resources) of CSPI 200 based on the services subscribed to by the clients. The customer may then build its own cloud-based (i.e., CSPI-hosted) customizable and private virtual network using the physical computing, memory, and networking resources provided by CSPI 200. As indicated previously, these customer networks are referred to as Virtual Cloud Networks (VCNs). Clients may deploy one or more client resources, such as computing instances, on these client VCNs. The computing instance may be in the form of a virtual machine, a bare metal instance, or the like. CSPI 200 provides a collection of infrastructure and complementary cloud services that enable customers to build and run a wide range of applications and services in a highly available hosted environment.
In the example embodiment depicted in fig. 2, the physical components of CSPI 200 include one or more physical host machines or physical servers (e.g., 202, 206, 208), network Virtualization Devices (NVDs) (e.g., 210, 212), top of rack (TOR) switches (e.g., 214, 216), and physical networks (e.g., 218), as well as switches in physical network 218. The physical host machine or server may host and execute various computing instances that participate in one or more subnets of the VCN. The computing instances may include virtual machine instances and bare machine instances. For example, the various computing instances depicted in fig. 1 may be hosted by the physical host machine depicted in fig. 2. The virtual machine computing instances in the VCN may be executed by one host machine or a plurality of different host machines. The physical host machine may also host a virtual host machine, a container-based host or function, or the like. The VNICs and VCN VRs depicted in fig. 1 may be performed by the NVD depicted in fig. 2. The gateway depicted in fig. 1 may be performed by the host machine and/or NVD depicted in fig. 2.
The host machine or server may execute a hypervisor (also referred to as a virtual machine monitor or VMM) that creates and enables virtualized environments on the host machine. Virtualized or virtualized environments facilitate cloud-based computing. One or more computing instances may be created, executed, and managed on a host machine by a hypervisor on the host machine. The hypervisor on the host machine enables the physical computing resources (e.g., computing, memory, and network resources) of the host machine to be shared among the various computing instances executed by the host machine.
For example, as depicted in FIG. 2, host machines 202 and 208 execute hypervisors 260 and 266, respectively. These hypervisors may be implemented using software, firmware, or hardware, or a combination thereof. Typically, a hypervisor is a process or software layer that sits on top of the Operating System (OS) of the host machine, which in turn executes on the hardware processor of the host machine. The hypervisor provides a virtualized environment by enabling the physical computing resources of the host machine (e.g., processing resources such as processors/cores, memory resources, network resources) to be shared among the various virtual machine computing instances executed by the host machine. For example, in fig. 2, hypervisor 260 may be located above the OS of host machine 202 and enable computing resources (e.g., processing, memory, and network resources) of host machine 202 to be shared among computing instances (e.g., virtual machines) executed by host machine 202. The virtual machine may have its own operating system (referred to as a guest operating system), which may be the same as or different from the OS of the host machine. The operating system of a virtual machine executed by a host machine may be the same as or different from the operating system of another virtual machine executed by the same host machine. Thus, the hypervisor enables multiple operating systems to be executed simultaneously while sharing the same computing resources of the host machine. The host machines depicted in fig. 2 may have the same or different types of hypervisors.
The computing instance may be a virtual machine instance or a bare machine instance. In FIG. 2, computing instance 268 on host machine 202 and computing instance 274 on host machine 208 are examples of virtual machine instances. The host machine 206 is an example of a bare metal instance provided to a customer.
In some cases, an entire host machine may be provisioned to a single customer, and one or more computing instances (or virtual or bare machine instances) hosted by the host machine all belong to the same customer. In other cases, the host machine may be shared among multiple guests (i.e., multiple tenants). In such a multi-tenancy scenario, the host machine may host virtual machine computing instances belonging to different guests. These computing instances may be members of different VCNs for different customers. In some embodiments, bare metal computing instances are hosted by bare metal servers without hypervisors. When supplying a bare metal computing instance, a single customer or tenant maintains control of the physical CPU, memory, and network interfaces of the host machine hosting the bare metal instance, and the host machine is not shared with other customers or tenants.
As previously described, each computing instance that is part of a VCN is associated with a VNIC that enables the computing instance to be a member of a subnet of the VCN. The VNICs associated with the computing instances facilitate communication of packets or frames to and from the computing instances. The VNIC is associated with a computing instance when the computing instance is created. In some embodiments, for a computing instance executed by a host machine, a VNIC associated with the computing instance is executed by an NVD connected to the host machine. For example, in fig. 2, host machine 202 executes virtual machine computing instance 268 associated with VNIC 276, and VNIC 276 is executed by NVD 210 connected to host machine 202. As another example, bare metal instances 272 hosted by host machine 206 are associated with VNICs 280 that are executed by NVDs 212 connected to host machine 206. As yet another example, the VNICs 284 are associated with computing instances 274 that are executed by the host machine 208, and the VNICs 284 are executed by NVDs 212 connected to the host machine 208.
For a computing instance hosted by a host machine, an NVD connected to the host machine also executes a VCN VR corresponding to the VCN of which the computing instance is a member. For example, in the embodiment depicted in fig. 2, NVD 210 executes VCN VR 277 corresponding to the VCN of which computing instance 268 is a member. NVD 212 may also execute one or more VCN VRs 283 corresponding to VCNs corresponding to computing instances hosted by host machines 206 and 208.
The host machine may include one or more Network Interface Cards (NICs) that enable the host machine to connect to other devices. A NIC on a host machine may provide one or more ports (or interfaces) that enable the host machine to communicatively connect to another device. For example, the host machine may connect to the NVD using one or more ports (or interfaces) provided on the host machine and on the NVD. The host machine may also be connected to other devices (such as another host machine).
For example, in fig. 2, host machine 202 is connected to NVD 210 using link 220, link 220 extending between port 234 provided by NIC 232 of host machine 202 and port 236 of NVD 210. The host machine 206 is connected to the NVD 212 using a link 224, the link 224 extending between a port 246 provided by the NIC 244 of the host machine 206 and a port 248 of the NVD 212. Host machine 208 is connected to NVD 212 using link 226, link 226 extending between port 252 provided by NIC 250 of host machine 208 and port 254 of NVD 212.
The NVDs in turn are connected via communication links to top-of-rack (TOR) switches, which are connected to a physical network 218 (also referred to as a switch fabric). In certain embodiments, the links between a host machine and an NVD and between an NVD and a TOR switch are Ethernet links. For example, in fig. 2, NVDs 210 and 212 are connected to TOR switches 214 and 216 using links 228 and 230, respectively. In some embodiments, links 220, 224, 226, 228, and 230 are Ethernet links. The collection of host machines and NVDs connected to a TOR is sometimes referred to as a rack.
The physical network 218 provides a communication architecture that enables TOR switches to communicate with each other. The physical network 218 may be a multi-layer network. In some embodiments, the physical network 218 is a multi-layer Clos network of switches, where TOR switches 214 and 216 represent leaf level nodes of the multi-layer and multi-node physical switching network 218. Different Clos network configurations are possible, including but not limited to layer 2 networks, layer 3 networks, layer 4 networks, layer 5 networks, and general "n" layer networks. An example of a Clos network is depicted in fig. 5 and described below.
There may be a variety of different connection configurations between the host machine and the NVD, such as a one-to-one configuration, a many-to-one configuration, a one-to-many configuration, and the like. In one-to-one configuration implementations, each host machine is connected to its own separate NVD. For example, in fig. 2, host machine 202 is connected to NVD 210 via NIC 232 of host machine 202. In a many-to-one configuration, multiple host machines are connected to one NVD. For example, in fig. 2, host machines 206 and 208 are connected to the same NVD 212 via NICs 244 and 250, respectively.
In a one-to-many configuration, one host machine is connected to multiple NVDs. FIG. 3 shows an example within CSPI 300 where a host machine is connected to multiple NVDs. As shown in fig. 3, host machine 302 includes a Network Interface Card (NIC) 304 that includes multiple ports 306 and 308. Host machine 302 is connected to a first NVD 310 via port 306 and link 320, and is connected to a second NVD 312 via port 308 and link 322. Ports 306 and 308 may be Ethernet ports, and links 320 and 322 between host machine 302 and NVDs 310 and 312 may be Ethernet links. NVD 310 is in turn connected to a first TOR switch 314, and NVD 312 is connected to a second TOR switch 316. The links between NVDs 310 and 312 and TOR switches 314 and 316 may be Ethernet links. TOR switches 314 and 316 represent layer 0 switching devices in a multi-layer physical network 318.
The arrangement depicted in fig. 3 provides two separate physical network paths from the physical switch network 318 to host machine 302: a first path through TOR switch 314 to NVD 310 to host machine 302, and a second path through TOR switch 316 to NVD 312 to host machine 302. The separate paths provide enhanced availability (referred to as high availability) for host machine 302. If there is a problem with one of the paths (e.g., a link in one of the paths goes down) or with one of the devices (e.g., a particular NVD is not functioning), then the other path may be used for communications to and from host machine 302.
In the configuration depicted in fig. 3, the host machine connects to two different NVDs using two different ports provided by the NIC of the host machine. In other embodiments, the host machine may include multiple NICs that enable the host machine to connect to multiple NVDs.
Referring back to fig. 2, an NVD is a physical device or component that performs one or more network and/or storage virtualization functions. An NVD may be any device having one or more processing units (e.g., a CPU, a Network Processing Unit (NPU), an FPGA, a packet processing pipeline, etc.), memory (including cache), and ports. The various virtualization functions may be performed by software/firmware executed by the one or more processing units of the NVD.
An NVD may be implemented in a variety of different forms. For example, in certain embodiments, an NVD is implemented as an interface card referred to as a smart NIC (a NIC with an on-board embedded processor). A smart NIC is a device that is separate from the NICs on the host machines. In fig. 2, NVDs 210 and 212 may be implemented as smart NICs connected to host machine 202 and to host machines 206 and 208, respectively.
However, the smart nic is only one example of an NVD implementation. Various other implementations are possible. For example, in some other implementations, the NVD or one or more functions performed by the NVD may be incorporated into or performed by one or more host machines, one or more TOR switches, and other components of CSPI 200. For example, the NVD may be implemented in a host machine, where the functions performed by the NVD are performed by the host machine. As another example, the NVD may be part of a TOR switch, or the TOR switch may be configured to perform functions performed by the NVD, which enables the TOR switch to perform various complex packet conversions for the public cloud. TOR performing the function of NVD is sometimes referred to as intelligent TOR. In other embodiments where a Virtual Machine (VM) instance is provided to the client instead of a Bare Metal (BM) instance, the functions performed by the NVD may be implemented within the hypervisor of the host machine. In some other implementations, some of the functionality of the NVD may be offloaded to a centralized service running on a set of host machines.
In some embodiments, such as when implemented as a smart nic as shown in fig. 2, the NVD may include a plurality of physical ports that enable it to connect to one or more host machines and one or more TOR switches. Ports on NVD may be classified as host-oriented ports (also referred to as "south ports") or network-oriented or TOR-oriented ports (also referred to as "north ports"). The host-facing port of the NVD is a port for connecting the NVD to a host machine. Examples of host-facing ports in fig. 2 include port 236 on NVD 210 and ports 248 and 254 on NVD 212. The network-facing port of the NVD is a port for connecting the NVD to the TOR switch. Examples of network-facing ports in fig. 2 include port 256 on NVD 210 and port 258 on NVD 212. As shown in fig. 2, NVD 210 connects to TOR switch 214 using link 228 extending from port 256 of NVD 210 to TOR switch 214. Similarly, NVD 212 connects to TOR switch 216 using link 230 extending from port 258 of NVD 212 to TOR switch 216.
The NVD receives packets and frames (e.g., packets and frames generated by computing instances hosted by the host machine) from the host machine via the host-oriented ports, and after performing the necessary packet processing, the packets and frames may be forwarded to the TOR switch via the network-oriented ports of the NVD. The NVD may receive packets and frames from the TOR switch via the network-oriented ports of the NVD, and after performing the necessary packet processing, may forward the packets and frames to the host machine via the host-oriented ports of the NVD.
In some embodiments, there may be multiple ports and associated links between the NVD and the TOR switch. These ports and links may be aggregated to form a link aggregation group (referred to as LAG) of multiple ports or links. Link aggregation allows multiple physical links between two endpoints (e.g., between NVD and TOR switches) to be considered a single logical link. All physical links in a given LAG may operate in full duplex mode at the same speed. LAG helps to increase the bandwidth and reliability of the connection between two endpoints. If one of the physical links in the LAG fails, traffic will be dynamically and transparently reassigned to one of the other physical links in the LAG. The aggregated physical link delivers a higher bandwidth than each individual link. The multiple ports associated with the LAG are considered to be a single logical port. Traffic may be load balanced among the multiple physical links of the LAG. One or more LAGs may be configured between the two endpoints. The two endpoints may be located between the NVD and TOR switches, between the host machine and the NVD, and so on.
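One common way to spread traffic across the physical links of a LAG, assumed here purely for illustration and not mandated by the embodiments above, is to hash a flow's 5-tuple so that all packets of a flow use the same member link:

```python
# Illustrative flow-to-link hashing for a LAG; the 5-tuple scheme is an
# assumption made for this sketch.
import hashlib

def pick_lag_member(flow_5tuple: tuple, num_links: int) -> int:
    """Map a flow consistently onto one of the LAG's physical links."""
    digest = hashlib.sha256(repr(flow_5tuple).encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_links

# src IP, dst IP, IP protocol (17 = UDP), source port, destination port
flow = ("10.0.0.2", "10.1.0.2", 17, 49152, 4791)
print(pick_lag_member(flow, num_links=2))  # the same flow always picks the same link
```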
An NVD implements or performs network virtualization functions. These functions are performed by software/firmware executed by the NVD. Examples of network virtualization functions include, without limitation: packet encapsulation and decapsulation functions; functions for creating a VCN network; functions for implementing network policies, such as VCN security list (firewall) functionality; functions that facilitate the routing and forwarding of packets to and from compute instances in a VCN; and the like. In some embodiments, upon receiving a packet, the NVD is configured to execute a packet processing pipeline for processing the packet and determining how the packet is to be forwarded or routed. As part of this packet processing pipeline, the NVD may perform one or more virtual functions associated with the overlay network, such as executing VNICs associated with compute instances in the VCN, executing a Virtual Router (VR) associated with the VCN, the encapsulation and decapsulation of packets to facilitate forwarding or routing in the virtual network, execution of certain gateways (e.g., a Local Peering Gateway), the implementation of security lists, network security groups, Network Address Translation (NAT) functionality (e.g., the translation of a public IP to a private IP on a host-by-host basis), throttling functions, and other functions.
In some embodiments, the packet processing data path in the NVD may include a plurality of packet pipelines, each pipeline being comprised of a series of packet transform stages. In some embodiments, after receiving a packet, the packet is parsed and classified into a single pipeline. The packets are then processed in a linear fashion, stage by stage, until the packets are either discarded or sent out over the NVD interface. These stages provide basic functional packet processing building blocks (e.g., validate headers, force throttling, insert new layer 2 headers, force L4 firewalls, VCN encapsulation/decapsulation, etc.) so that new pipelines can be built by combining existing stages and new functionality can be added by creating new stages and inserting them into existing pipelines.
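The stage-based pipeline described above can be sketched as an ordered list of small functions, where each stage either returns the (possibly transformed) packet or drops it; the stage names, packet representation, and logic below are illustrative assumptions only.

```python
# Illustrative packet pipeline built from composable stages, mirroring the
# building-block idea above; packet fields and stage behavior are assumptions.
from typing import Callable, Optional

Packet = dict
Stage = Callable[[Packet], Optional[Packet]]

def validate_headers(pkt: Packet) -> Optional[Packet]:
    return pkt if "dst_ip" in pkt and "src_ip" in pkt else None  # drop malformed packets

def l4_firewall(pkt: Packet) -> Optional[Packet]:
    return None if pkt.get("dst_port") == 23 else pkt  # e.g., block an unwanted port

def vcn_encapsulate(pkt: Packet) -> Optional[Packet]:
    return {"outer": {"vni": pkt.pop("vni", 0)}, "inner": pkt}  # add an overlay header

def run_pipeline(stages: list, pkt: Packet) -> Optional[Packet]:
    for stage in stages:
        pkt = stage(pkt)
        if pkt is None:  # the packet was dropped by this stage
            return None
    return pkt

pipeline = [validate_headers, l4_firewall, vcn_encapsulate]
print(run_pipeline(pipeline, {"src_ip": "10.0.0.2", "dst_ip": "10.1.0.2",
                              "dst_port": 4791, "vni": 1001}))
```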
The NVD may perform control plane and data plane functions corresponding to the control plane and data plane of the VCN. Examples of VCN control planes are also depicted in fig. 11, 12, 13, and 14 (see references 1116, 1216, 1316, and 1416) and described below. Examples of VCN data planes are depicted in fig. 11, 12, 13, and 14 (see references 1118, 1218, 1318, and 1418) and described below. The control plane functions include functions for configuring the network (e.g., setting up routing and routing tables, configuring VNICs, etc.) that control how data is forwarded. In some embodiments, a VCN control plane is provided that centrally computes all overlay-to-baseboard mappings and publishes them to NVDs and virtual network edge devices (such as various gateways, such as DRG, SGW, IGW, etc.). Firewall rules may also be published using the same mechanism. In certain embodiments, the NVD only obtains a mapping related to the NVD. The data plane functions include functions to actually route/forward packets based on a configuration using control plane settings. The VCN data plane is implemented by encapsulating the customer's network packets before they traverse the baseboard network. Encapsulation/decapsulation functionality is implemented on the NVD. In certain embodiments, the NVD is configured to intercept all network packets in and out of the host machine and perform network virtualization functions.
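As a simplified sketch of the publication model described above, a central control plane can hold overlay-to-substrate mappings and push to each NVD only the entries for the VCNs that the NVD actually serves; the filtering criterion, names, and addresses below are assumptions made for this example.

```python
# Illustrative control-plane mapping distribution; "related to an NVD" is
# assumed here to mean: mappings for VCNs that have a VNIC on that NVD.
NVD_HOSTED_VCNS = {
    "NVD-210": {"vcn-104"},
    "NVD-212": {"vcn-104", "vcn-108"},
}

OVERLAY_TO_SUBSTRATE = {
    ("vcn-104", "10.0.0.2"): "substrate:NVD-210",
    ("vcn-104", "10.1.0.2"): "substrate:NVD-212",
    ("vcn-108", "10.2.0.5"): "substrate:NVD-212",
}

def publish_to_nvd(nvd: str) -> dict:
    """Return only the overlay-to-substrate mappings relevant to this NVD."""
    relevant_vcns = NVD_HOSTED_VCNS[nvd]
    return {(vcn, ip): sub for (vcn, ip), sub in OVERLAY_TO_SUBSTRATE.items()
            if vcn in relevant_vcns}

print(publish_to_nvd("NVD-210"))  # only the vcn-104 entries are delivered
```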
As indicated above, the NVD performs various virtualization functions, including VNICs and VCN VR. The NVD may execute a VNIC associated with a computing instance hosted by one or more host machines connected to the VNIC. For example, as depicted in fig. 2, NVD 210 performs the functionality of VNIC 276 associated with computing instance 268 hosted by host machine 202 connected to NVD 210. As another example, NVD 212 executes VNICs 280 associated with bare metal computing instances 272 hosted by host machine 206 and executes VNICs 284 associated with computing instances 274 hosted by host machine 208. The host machine may host computing instances belonging to different VCNs (belonging to different customers), and an NVD connected to the host machine may execute a VNIC corresponding to the computing instance (i.e., perform VNIC-related functionality).
The NVD also executes a VCN virtual router corresponding to the VCN of the computing instance. For example, in the embodiment depicted in fig. 2, NVD 210 executes VCN VR 277 corresponding to the VCN to which computing instance 268 belongs. NVD 212 executes one or more VCN VRs 283 corresponding to one or more VCNs to which computing instances hosted by host machines 206 and 208 belong. In some embodiments, the VCN VR corresponding to the VCN is executed by all NVDs connected to a host machine hosting at least one computing instance belonging to the VCN. If a host machine hosts computing instances belonging to different VCNs, then an NVD connected to the host machine may execute VCN VR corresponding to those different VCNs.
In addition to the VNICs and VCN VRs, the NVD may execute various software (e.g., daemons) and include one or more hardware components that facilitate various network virtualization functions performed by the NVD. For simplicity, these various components are grouped together as a "packet processing component" shown in fig. 2. For example, NVD 210 includes a packet processing component 286 and NVD 212 includes a packet processing component 288. For example, a packet processing component for an NVD may include a packet processor configured to interact with ports and hardware interfaces of the NVD to monitor all packets received by and transmitted using the NVD and store network information. The network information may include, for example, network flow information and per-flow information (e.g., per-flow statistics) identifying different network flows handled by the NVD. In some embodiments, network flow information may be stored on a per VNIC basis. The packet processor may perform packet-by-packet manipulation and implement stateful NAT and L4 Firewalls (FWs). As another example, the packet processing component may include a replication agent configured to replicate information stored by the NVD to one or more different replication target repositories. As yet another example, the packet processing component may include a logging agent configured to perform a logging function of the NVD. The packet processing component may also include software for monitoring the performance and health of the NVD and possibly also the status and health of other components connected to the NVD.
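A minimal sketch of the per-flow, per-VNIC accounting mentioned above might keep packet and byte counters keyed by a (VNIC, flow) pair; the flow key and counter fields below are assumptions made for the example.

```python
# Illustrative per-VNIC, per-flow statistics kept by a packet processor.
from collections import defaultdict

flow_stats = defaultdict(lambda: {"packets": 0, "bytes": 0})

def record(vnic_id: str, flow_5tuple: tuple, pkt_len: int) -> None:
    """Update counters for one packet observed on a given VNIC and flow."""
    entry = flow_stats[(vnic_id, flow_5tuple)]
    entry["packets"] += 1
    entry["bytes"] += pkt_len

flow = ("10.0.0.2", "10.1.0.2", 17, 49152, 4791)  # src, dst, proto, sport, dport
record("VNIC-276", flow, 1500)
record("VNIC-276", flow, 1500)
print(flow_stats[("VNIC-276", flow)])  # {'packets': 2, 'bytes': 3000}
```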
FIG. 1 illustrates components of an example virtual or overlay network, including a VCN, a subnet within the VCN, a computing instance deployed on the subnet, a VNIC associated with the computing instance, a VR for the VCN, and a set of gateways configured for the VCN. The overlay component depicted in fig. 1 may be executed or hosted by one or more of the physical components depicted in fig. 2. For example, computing instances in a VCN may be executed or hosted by one or more host machines depicted in fig. 2. For a computing instance hosted by a host machine, a VNIC associated with the computing instance is typically executed by an NVD connected to the host machine (i.e., VNIC functionality is provided by an NVD connected to the host machine). The VCN VR functions for a VCN are performed by all NVDs connected to a host machine that hosts or executes computing instances that are part of the VCN. The gateway associated with the VCN may be implemented by one or more different types of NVDs. For example, some gateways may be implemented by a smart nic, while other gateways may be implemented by one or more host machines or other implementations of NVDs.
As described above, the computing instances in the client VCN may communicate with various different endpoints, where the endpoints may be within the same subnet as the source computing instance, in different subnets but within the same VCN as the source computing instance, or with endpoints external to the VCN of the source computing instance. These communications are facilitated using a VNIC associated with the computing instance, a VCN VR, and a gateway associated with the VCN.
For communication between two computing instances on the same subnet in a VCN, the VNICs associated with the source and destination computing instances are used to facilitate the communication. The source and destination computing instances may be hosted by the same host machine or by different host machines. Packets originating from a source computing instance may be forwarded from a host machine hosting the source computing instance to an NVD connected to the host machine. On the NVD, packets are processed using a packet processing pipeline, which may include execution of VNICs associated with the source computing instance. Because the destination endpoints for the packets are located within the same subnet, execution of the VNICs associated with the source computing instance causes the packets to be forwarded to the NVD executing the VNICs associated with the destination computing instance, which then processes the packets and forwards them to the destination computing instance. VNICs associated with source and destination computing instances may execute on the same NVD (e.g., when the source and destination computing instances are hosted by the same host machine) or on different NVDs (e.g., when the source and destination computing instances are hosted by different host machines connected to the different NVDs). The VNIC may use the routing/forwarding table stored by the NVD to determine the next hop for the packet.
For packets to be transferred from a computing instance in a subnet to an endpoint in a different subnet in the same VCN, packets originating from a source computing instance are transferred from a host machine hosting the source computing instance to an NVD connected to the host machine. On the NVD, packets are processed using a packet processing pipeline, which may include execution of one or more VNICs and VR associated with the VCN. For example, as part of a packet processing pipeline, the NVD executes or invokes functionality (also referred to as executing VNICs) of a VNIC associated with the source computing instance. The functionality performed by the VNIC may include looking at the VLAN tag on the packet. The VCN VR functionality is next invoked and executed by the NVD because the destination of the packet is outside the subnet. The VCN VR then routes the packet to an NVD that executes the VNIC associated with the destination computing instance. The VNIC associated with the destination computing instance then processes the packet and forwards the packet to the destination computing instance. VNICs associated with source and destination computing instances may execute on the same NVD (e.g., when the source and destination computing instances are hosted by the same host machine) or on different NVDs (e.g., when the source and destination computing instances are hosted by different host machines connected to the different NVDs).
If the destination for the packet is outside of the VCN of the source computing instance, the packet originating from the source computing instance is transmitted from the host machine hosting the source computing instance to an NVD connected to the host machine. The NVD executes the VNIC associated with the source computing instance. Since the destination endpoint of the packet is outside the VCN, the packet is then processed by the VCN VR for that VCN. The NVD invokes VCN VR functionality, which causes the packet to be forwarded to the NVD executing the appropriate gateway associated with the VCN. For example, if the destination is an endpoint within a customer's in-premise network, the packet may be forwarded by the VCN VR to the NVD executing a DRG gateway configured for the VCN. The VCN VR may be executed on the same NVD as the NVD executing the VNIC associated with the source computing instance, or by a different NVD. The gateway may be implemented by an NVD, which may be a smart NIC, a host machine, or other NVD implementation. The packet is then processed by the gateway and forwarded to the next hop, which facilitates delivery of the packet to its intended destination endpoint. For example, in the embodiment depicted in fig. 2, packets originating from computing instance 268 may be transmitted from host machine 202 to NVD 210 over link 220 (using NIC 232). On NVD 210, VNIC 276 is invoked because it is the VNIC associated with source computing instance 268. VNIC 276 is configured to examine the information encapsulated in the packet and determine the next hop for forwarding the packet in order to facilitate delivery of the packet to its intended destination endpoint, and then forward the packet to the determined next hop.
Computing instances deployed on a VCN may communicate with a variety of different endpoints. These endpoints may include endpoints hosted by CSPI 200 and endpoints external to CSPI 200. Endpoints hosted by CSPI 200 may include instances in the same VCN or other VCNs, which may be customer VCNs or VCNs that do not belong to customers. Communication between endpoints hosted by CSPI 200 may be performed through physical network 218. The computing instance may also communicate with endpoints that are not hosted by CSPI 200 or external to CSPI 200. Examples of such endpoints include endpoints within a customer's in-house network or data centers, or public endpoints accessible through a public network such as the internet. Communication with endpoints external to CSPI 200 may be performed over a public network (e.g., the internet) (not shown in fig. 2) or a private network (not shown in fig. 2) using various communication protocols.
The architecture of CSPI 200 depicted in fig. 2 is merely an example and is not intended to be limiting. In alternative embodiments, variations, alternatives, and modifications are possible. For example, in some embodiments, CSPI 200 may have more or fewer systems or components than those shown in fig. 2, may combine two or more systems, or may have different system configurations or arrangements. The systems, subsystems, and other components depicted in fig. 2 may be implemented in software (e.g., code, instructions, programs) executed by one or more processing units (e.g., processors, cores) of the respective system, using hardware, or a combination thereof. The software may be stored on a non-transitory storage medium (e.g., on a memory device).
FIG. 4 depicts a connection between a host machine and an NVD for providing I/O virtualization to support multi-tenancy in accordance with certain embodiments. As depicted in fig. 4, host machine 402 executes hypervisor 404 that provides a virtualized environment. The host machine 402 executes two virtual machine instances, VM1 406 belonging to guest/tenant #1 and VM2 408 belonging to guest/tenant #2. Host machine 402 includes a physical NIC 410 connected to NVD 412 via link 414. Each computing instance is attached to a VNIC executed by NVD 412. In the embodiment in FIG. 4, VM1 406 is attached to VNIC-VM1 420 and VM2 408 is attached to VNIC-VM2 422.
As shown in fig. 4, NIC 410 includes two logical NICs, logical NIC A 416 and logical NIC B 418. Each virtual machine is attached to its own logical NIC and is configured to work with its own logical NIC. For example, VM1 406 is attached to logical NIC A 416 and VM2 408 is attached to logical NIC B 418. Although the host machine 402 includes only one physical NIC 410 shared by multiple tenants, because of the logical NICs each tenant's virtual machine believes that it has its own host machine and network card.
In some embodiments, each logical NIC is assigned its own VLAN ID. Thus, a specific VLAN ID is assigned to logical NIC A 416 for tenant #1, and a separate VLAN ID is assigned to logical NIC B 418 for tenant #2. When a packet is transferred from VM1 406, a tag assigned to tenant #1 is appended to the packet by the hypervisor, and the packet is then transferred from host machine 402 to NVD 412 over link 414. In a similar manner, when a packet is transmitted from VM2 408, a tag assigned to tenant #2 is appended to the packet by the hypervisor, and the packet is then transmitted from host machine 402 to NVD 412 over link 414. Thus, packets 424 transmitted from host machine 402 to NVD 412 have associated tags 426 identifying the particular tenant and associated VM. On the NVD, for a packet 424 received from host machine 402, the tag 426 associated with the packet is used to determine whether the packet is processed by VNIC-VM1 420 or VNIC-VM2 422. The packet is then processed by the corresponding VNIC. The configuration depicted in fig. 4 enables each tenant's computing instances to believe that they own their host machine and NIC. The arrangement depicted in FIG. 4 provides I/O virtualization to support multi-tenancy.
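As an illustration of the tag-based demultiplexing described above, the following sketch (Python) shows how an NVD-like component could select the VNIC that processes an incoming packet based on the VLAN ID carried in the packet's tag; the VLAN ID values and VNIC names used here are assumptions for illustration and do not appear in the figures.

# Hypothetical mapping from VLAN ID to the VNIC that handles that tenant's traffic.
VLAN_TO_VNIC = {
    100: "VNIC-VM1",  # tenant #1 (assumed VLAN ID)
    200: "VNIC-VM2",  # tenant #2 (assumed VLAN ID)
}

def dispatch(vlan_id: int, payload: bytes) -> str:
    """Return the name of the VNIC that should process a packet tagged with vlan_id."""
    vnic = VLAN_TO_VNIC.get(vlan_id)
    if vnic is None:
        raise ValueError(f"no VNIC registered for VLAN {vlan_id}")
    # A real NVD would now run the selected VNIC's packet-processing pipeline on payload.
    return vnic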
Fig. 5 depicts a simplified block diagram of a physical network 500 according to some embodiments. The embodiment depicted in fig. 5 is structured as a Clos network. Clos networks are a specific type of network topology designed to provide connection redundancy while maintaining high bisection bandwidth and maximum resource utilization. Clos networks are a type of non-blocking, multi-stage or multi-layer switching network, where the number of stages or layers may be two, three, four, five, etc. The embodiment depicted in fig. 5 is a 3-layer network, including layer 1, layer 2, and layer 3. TOR switches 504 represent layer 0 switches in the Clos network. One or more NVDs are connected to the TOR switches. Layer 0 switches are also known as edge devices of the physical network. The layer 0 switches are connected to layer 1 switches, also known as leaf switches. In the embodiment depicted in fig. 5, a set of "m" layer 0 TOR switches is connected to a set of "r" layer 1 switches and forms a pod (where the integers m and r may have the same value or different values). Each layer 0 switch in a pod is interconnected to all layer 1 switches in the pod, but there is no connectivity between switches of different pods. In some embodiments, two pods are referred to as a block. Each block is served by or connected to a set of "q" layer 2 switches (sometimes referred to as backbone switches). There may be several blocks in the physical network topology. The layer 2 switches are in turn connected to "p" layer 3 switches (sometimes referred to as super backbone switches) (where the integers p and q may have the same value or different values). Communication of packets over the physical network 500 is typically performed using one or more layer 3 communication protocols. Typically, all layers of the physical network (except the TOR layer) are redundant (e.g., p-way, q-way, or r-way redundancy), thus allowing for high availability. Policies may be specified for the pods and blocks to control the visibility of switches to each other in the physical network, enabling the physical network to scale.
The Clos network is characterized by a fixed maximum number of hops from one layer 0 switch to another layer 0 switch (or from an NVD connected to a layer 0 switch to another NVD connected to a layer 0 switch). For example, in a 3-layer Clos network, a maximum of seven hops are required for packets to reach from one NVD to another, with the source and target NVDs connected to the leaf layers of the Clos network. Also, in a 4-layer Clos network, a maximum of nine hops are required for packets to reach from one NVD to another, with the source and target NVDs connected to the leaf layers of the Clos network. Thus, the Clos network architecture maintains consistent latency throughout the network, which is important for communication between and within the data center. The Clos topology is horizontally scalable and cost-effective. The bandwidth/throughput capacity of the network can be easily increased by adding more switches (e.g., more leaf switches and backbone switches) at each layer and by increasing the number of links between switches at adjacent layers.
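The hop counts quoted above follow a simple pattern; the sketch below (Python) is only an assumption-level consistency check derived from the two stated figures, namely seven hops for a 3-layer Clos network and nine hops for a 4-layer Clos network, and is not part of any specification.

def max_nvd_to_nvd_hops(clos_layers: int) -> int:
    # One hop from the source NVD to its TOR, up and back down through the switch
    # layers, and one hop from the destination TOR to the destination NVD.
    return 2 * clos_layers + 1

assert max_nvd_to_nvd_hops(3) == 7
assert max_nvd_to_nvd_hops(4) == 9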
In some embodiments, each resource within the CSPI is assigned a unique identifier called a Cloud Identifier (CID). This identifier is included as part of the information of the resource and may be used to manage the resource, e.g., via a console or through an API. An example syntax for a CID is shown below (a minimal parsing sketch follows the field descriptions):
ocid1.<RESOURCE TYPE>.<REALM>.[REGION][.FUTURE USE].<UNIQUE ID>
where:
ocid1: a text string indicating a version of the CID;
resource type: types of resources (e.g., instance, volume, VCN, subnet, user, group, etc.);
realm: the domain in which the resource is located. Exemplary values are "c1" for the business domain, "c2" for the government cloud domain, or "c3" for the federal government cloud domain, etc. Each domain may have its own domain name;
region: the region where the resource is located. If the region is not suitable for the resource, then this portion may be empty;
future use: reserved for future use.
unique ID: a unique portion of the ID. The format may vary depending on the type of resource or service.
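As referenced above, the following is a minimal parsing sketch (Python) for the CID syntax; it is a simplification that assumes the unique ID contains no periods, and the example value shown in the comment is hypothetical.

def parse_cid(cid: str) -> dict:
    parts = cid.split(".")
    if len(parts) not in (5, 6) or parts[0] != "ocid1":
        raise ValueError("not a recognized CID")
    return {
        "version": parts[0],
        "resource_type": parts[1],
        "realm": parts[2],
        "region": parts[3],                        # may be empty if not applicable
        "future_use": parts[4] if len(parts) == 6 else "",
        "unique_id": parts[-1],
    }

# Hypothetical example value, for illustration only:
# parse_cid("ocid1.instance.c1.region1..exampleuniqueid")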
RDMA/RoCE techniques
Fig. 6 illustrates an example of a distributed multi-tenant cloud environment 600 that may be hosted by a Cloud Service Provider Infrastructure (CSPI). As shown in fig. 6, a plurality of hosts (e.g., 602 and 622) are communicatively coupled via a physical network or switch fabric 640 that includes a plurality of switches or, more broadly, networking devices. In some implementations, the switch fabric 640 can be an n-tier Clos network as depicted in fig. 5 and described above, and the design can be optimized for the performance of the Clos fabric and placement of the physical switches 642, 644, and 646. The value of "n" may be one, two, three, etc., depending on the implementation. However, it is noted that each additional layer is expected to increase the latency of packet transfer in the fabric, which may be undesirable for some applications. Top of rack (TOR) switches 642 and 644 represent leaf or layer 0 devices in the switch fabric 640. Depending on the value of "n," switch fabric 640 may include one or more backbone switches, super backbone switches, and the like. In fig. 6, switches between TOR switch 642 and TOR switch 644 (e.g., layer 1, layer 2, and layer 3 switches in fig. 5) are represented by intermediate switch 646. Intermediate switch 646 may generally include one or more switches or networking devices. Switch fabric 640 may also be implemented to include switch fabric IP addresses that are not reachable by the client computing instances (e.g., for management purposes). It may be desirable to implement a Spanning Tree Protocol (STP) on TOR switches of the switch fabric 640 (e.g., to avoid loops, which may occur due to errors). In some configurations, each TOR switch in the switch fabric 640 is specific to a particular service (e.g., database cloud service, HPC cloud service, GPU cloud service, etc.), and the traffic of the different services is only mixed at a higher layer (e.g., at the intermediate switch 646).
Host machines 602 and 622 may host computing instances of multiple customers or tenants and thus may be referred to as multi-tenant host machines. For example, as depicted in FIG. 6, host machine 602 hosts computing instance A-1 604 for customer A and computing instance B-1 606 for customer B. Host machine 622 hosts computing instance A-2 624 for customer A and computing instance B-2 626 for customer B. In some embodiments, computing instances 604, 606, 624, and 626 are virtual machines. In this way, virtual machines belonging to different customers may be hosted on the same host machine. However, each of these computing instances perceives that it owns the entire host machine. In some embodiments, the computing instance for the customer may also include a bare metal host. The teachings of the present disclosure apply to computing instances in the form of virtual machines or bare metal hosts. For ease of explanation, the example of fig. 6 shows only two multi-tenant host machines 602 and 622, but this is not intended to be limiting; the principles disclosed herein are not limited to any particular number of multi-tenant hosts, and may be implemented to include a greater number of multi-tenant hosts and/or to further include one or more single-tenant host machines hosting computing instances in the form of bare metal hosts.
In a multi-tenant environment, as shown in fig. 6, it is desirable to properly isolate traffic originating from and directed to computing instances of different customers from each other. In some embodiments, this traffic isolation is accomplished by configuring separate network domains for separate customers. For example, a computing instance of client A may be assigned to a particular network domain that is separate and distinct from the network domain to which a computing instance of client B is assigned. In some embodiments, these network domains may be configured in the form of Virtual LANs (VLANs), where each VLAN is identified by a unique VLAN identifier. For example, in FIG. 6, computing instances A-1 604 and A-2 624 of customer A are assigned to VLAN 1001, where "1001" represents a unique VLAN identifier. Computing instances B-1 606 and B-2 626 of customer B are assigned to VLAN 1002, where "1002" represents a unique VLAN identifier. For ease of explanation, the example of fig. 6 shows only two members of each VLAN 1001 and 1002, but this is not intended to be limiting; the principles disclosed herein are not limited to any particular number of VLAN members. Furthermore, there may be multiple VLANs, not just the two depicted in fig. 6. For example, the IEEE 802.1Q standard supports identifying up to 4096 different VLANs.
In some implementations, computing instances belonging to the same customer may have different quality of service expectations. For example, a customer may have computing instances belonging to two or more different services (or applications or departments), such as a first set of one or more computing instances corresponding to service a (e.g., a simulated service) and a second set of computing instances corresponding to service B (e.g., a backup service), where the two services have very different quality of service expectations (e.g., in terms of latency, packet loss, bandwidth requirements, etc.). For example, service a may be more time-sensitive than service B, and thus, a customer may desire that traffic associated with service a be given a different traffic class (e.g., higher priority) than traffic associated with service B. In this case, different computing instances belonging to the same customer may have different quality of service requirements.
Computing instances on the same VLAN or on peer VLANs (e.g., VLANs on different layer 2 domains that belong to the same tenant but may have different VLAN IDs) may desire to communicate with each other. In some implementations, computing instances on VLANs (or peer VLANs) may be able to exchange data using RDMA and RoCE protocols. In such embodiments, the host machine hosting the compute instance is equipped with special hardware and software enabling RDMA and RoCE based communications. For example, as depicted in fig. 6, host machines 602 and 622 include RDMA Network Interface Cards (NICs) (e.g., RoCE NICs) 608 and 628, respectively, that enable computing instances hosted by host machine 602 to exchange data with computing instances hosted by host machine 622 and on the same VLAN (or peer VLAN) using RDMA and RoCE protocols. The RoCE NIC may be implemented, for example, as a hardware component (e.g., an interface card) installed within the host machine (e.g., RoCE NIC 608 installed in multi-tenant host machine 602 and RoCE NIC 628 installed in host machine 622). Computing instances A-1 604 and A-2 624 belonging to the same VLAN 1001 may exchange data using RDMA and RoCE protocols using RoCE NICs 608 and 628 in host machines 602 and 622, respectively. In certain embodiments, the RoCE NIC is separate from the NIC depicted in fig. 2 and 3 and described above. In other embodiments, the NIC depicted in fig. 2 and 3 may be configured to also operate as a RoCE NIC.
As shown in the example depicted in fig. 6, the RoCE NIC includes a RoCE engine and implements virtual functions (e.g., SR-IOV functions), where each of the virtual functions may be configured for a different corresponding one of the virtual machines supported by the host machine. In this example, the RoCE NIC is implemented to support multi-tenancy through a technique called SR-IOV (single root input/output virtualization) that allows physical devices to appear on a Peripheral Component Interconnect Express (PCI Express or PCIe) bus as a plurality of different virtual instances (also called "virtual functions" or VFs), each of which is assigned to a respective VM and has resources separate from those of other VFs. For example, in FIG. 6, the RoCE NIC 608 on the host machine 602 includes a RoCE engine 610, a virtual function VF-A-1 612 for the virtual machine computing instance A-1 604, and a virtual function VF-B-1 614 for the virtual machine computing instance B-1 606. The RoCE NIC 628 on the host machine 622 comprises a RoCE engine 630, a virtual function VF-A-2 632 for the virtual machine computing instance A-2 624, and a virtual function VF-B-2 634 for the virtual machine computing instance B-2 626. For ease of explanation, the example in fig. 6 shows only two virtual functions per host machine, but this is not meant to be limiting in any way; the principles described herein are not limited to any particular number of virtual functions. In one example, an SR-IOV may support up to 16 VFs for one physical NIC port, and one host machine may also have multiple RDMA NICs (e.g., multiple RoCE NICs).
In some embodiments, the virtual functions for the RoCE NIC are programmed by a hypervisor on the host machine for a particular virtual machine compute instance and are configured to force packets that are communicated from the virtual machine over a network, such as switch fabric 640, to be tagged with a VLAN tag (e.g., an 802.1Q VLAN tag) corresponding to the VLAN to which the virtual machine belongs. For the example depicted in fig. 6, virtual function VF-A-1 612 may be configured to add a VLAN tag that indicates VLAN 1001 (e.g., a VLAN tag having a VLAN ID of value 1001) to packets carrying data from virtual machine computing instance A-1 604, and virtual function VF-B-1 614 may be configured to add a VLAN tag that indicates VLAN 1002 (e.g., a VLAN tag having a VLAN ID of value 1002) to packets carrying data from virtual machine computing instance B-1 606. In a similar manner, virtual function VF-A-2 632 may be configured to add a VLAN tag that indicates VLAN 1001 (e.g., a VLAN tag having a VLAN ID of value 1001) to packets carrying data from virtual machine computing instance A-2 624, and virtual function VF-B-2 634 may be configured to add a VLAN tag that indicates VLAN 1002 (e.g., a VLAN tag having a VLAN ID of value 1002) to packets carrying data from virtual machine computing instance B-2 626. The downstream network components may use these VLAN tags to segregate or separate traffic belonging to different VLANs (e.g., in fig. 6, traffic belonging to a computing instance of customer A is segregated from traffic belonging to a computing instance of customer B).
In some implementations, the virtual functions assigned to the respective compute instances are configured (e.g., in cooperation with a RoCE engine on a RoCE NIC) to perform Direct Memory Access (DMA) read operations and to perform DMA write operations for the memory space of the corresponding compute instance for RDMA data transfer. In the example of FIG. 6, the virtual function VF-A-1 612 is configured to perform direct memory access read and write operations for the compute instance A-1 604 in conjunction with the RoCE engine 610 as part of the RDMA process. The virtual function VF-B-1 614 is similarly configured to perform direct memory access read and write operations for the compute instance B-1 606 in conjunction with the RoCE engine 610 as part of the RDMA process.
The RoCE engine in the RoCE NIC is configured to facilitate transfer of RDMA/RoCE traffic from a host machine and to facilitate receipt of RDMA/RoCE traffic transferred by another host machine. In some embodiments, the RoCE engine receives an instruction (e.g., metadata) identifying an address range in the application memory of the compute instance, where the address range represents a block of data to be transferred to the application memory of the target compute instance using RDMA and RoCE. For example, the RoCE engine 610 may receive information identifying RDMA channels that have been set up for the data transfer and an address range representing a block of data to be transferred using RDMA from the compute instance A-1 604 (i.e., from the application memory for A-1 provided by the host machine 602) to the application memory of compute instance A-2 624 on the host machine 622 (i.e., to the application memory of A-2 provided by the host machine 622). The RoCE engine is configured to access data from the application memory of the source computing instance, to enable the data to be communicated to the target or destination computing instance, to package the data in an appropriate packet format (i.e., to generate and assemble layer 2 frames of data), and then to transfer the packets to a TOR switch (e.g., a leaf switch in the switch fabric used to transfer the data to the destination computing instance). Thus, the RoCE engine is an offload engine, and the CPU or operating system of the host machine does not have to participate in the data transfer. Such offloading reduces the latency involved in data transfer.
For example, the RoCE engine 610 may be configured to append headers (e.g., UDP and IP headers) and VLAN tags (e.g., as enforced by the virtual functions 612 and 614) to the data payloads to create VLAN tagged RoCEv2 format packets and send the RoCEv2 packets over lines (e.g., ethernet cables) to leaf switches (e.g., TOR switches 642) of the switch fabric 640. Regarding traffic incoming to the RoCE engine from the switch fabric, the RoCE engine 610 may be configured to receive the RoCEv2 packets from the TOR switch 642, remove UDP and IP headers, strip VLAN tags, and forward each resulting frame (e.g., as an IB payload sent by the source host machine) to an SR-IOV virtual function mapped to a VLAN ID over which the packet is received. The virtual function may be configured to store the data payload of the packet to a memory space of a destination compute instance on a corresponding VLAN.
The layer 2 frames assembled by the RoCE NIC are then transmitted to the RoCE NIC of the host machine hosting the destination or target computing instance via a plurality of networking devices in the layer 3 switch fabric using layer 3 routing protocols. For example, if RDMA and RoCE are used to transfer data from computing instance A-1 604 in FIG. 6 to target computing instance A-2 624 on host machine 622, the path taken by the packet with the data payload is as follows: source computing instance A-1 604 on host machine 602 → RoCE NIC 608 on host machine 602 → TOR switch 642 → one or more intermediate switches 646 → TOR switch 644 → RoCE NIC 628 on host machine 622 → computing instance A-2 624 on host machine 622. As part of this communication, TOR switch 642, representing an ingress edge device for switch fabric 640, is configured to convert layer 2 frames received from the RoCE NIC into layer 3 packets by encapsulating the packets within a wrapper (e.g., including one or more headers) corresponding to the layer 3 tunneling protocol used to transport the packets through switch fabric 640. A variety of different tunneling protocols may be used, such as VxLAN, NVGRE, STT, GENEVE, MPLS, and the like. The layer 3 packets then travel from TOR switch 642 to TOR switch 644 via one or more intermediate switches 646, the TOR switch 644 representing an egress edge device for the switch fabric 640. TOR switch 644 is configured to decapsulate the packets and convert them into layer 2 frames, which are then transmitted to RoCE NIC 628 on host machine 622 hosting destination or target computing instance A-2 624. The RoCE NIC 628 on the host machine 622 then communicates the data to the destination computing instance A-2 624. The packet may be transferred to compute instance A-2 by writing the packet data to the application memory of compute instance A-2 624. Details regarding the processing performed by the various network components to facilitate data transfer from a multi-tenant host machine to another multi-tenant host machine using RDMA and RoCE are described below.
As another example, if compute instance B-1 606 wants to transfer data to compute instance B-2 626, the path taken by the packet is as follows: source computing instance B-1 606 on host machine 602 → RoCE NIC 608 on host machine 602 → TOR switch 642 → one or more intermediate switches 646 → TOR switch 644 → RoCE NIC 628 on host machine 622 → computing instance B-2 626 on host machine 622. As can be seen, the switch fabric 640 is shared by clients or tenants for communication of their layer 2 RoCE traffic. The same switch fabric is used to transmit RoCE packets for different tenants. RoCE packets (and optionally non-RoCE normal IP traffic) from different tenants flow through the same common network architecture. Traffic in this common network architecture is isolated using tags associated with the packets. Each customer (e.g., a computing instance of a customer on a VLAN or peer VLAN) perceives that it has a dedicated layer 2 network to carry its RoCE traffic, which is actually carried over a shared layer 3 cloud-based switch fabric network. The host machine that generates the RoCE traffic from the client's application generates layer 2 ethernet frames (also referred to as layer 2 packets) instead of layer 3 packets.
Fig. 7A, 7B, and 7C present a simplified flowchart 700 depicting a process for performing RDMA data transfer from a source computing instance on a multi-tenant source host machine to a destination computing instance on a multi-tenant destination host machine over a shared layer 3 switch fabric using a layer 3 routing protocol, in accordance with certain embodiments. The processes depicted in fig. 7A-C may be implemented in software (e.g., code, instructions, programs) executed by one or more processing units (e.g., processors, cores) of the respective systems, using hardware, or a combination thereof. The software may be stored on a non-transitory storage medium (e.g., on a memory device). The methods presented in fig. 7A-C and described below are intended to be illustrative and not limiting. Although fig. 7A-C depict various process steps occurring in a particular order or sequence, such depiction is not intended to be limiting. In some alternative embodiments, the processes may be performed in a different order and/or some steps may be performed in parallel. In certain embodiments, such as the embodiment depicted in fig. 6, the processes depicted in fig. 7A-C may be performed cooperatively by the RoCE NICs 608 and 628, TOR switches 642 and 644, and one or more intermediate switches 646 of the switch fabric 640. The methods depicted in fig. 7A-C and described below may be used for different versions of RoCE, such as RoCEv2 and other future versions, as well as layer 2RDMA packets according to other RDMA protocols supporting VLAN tagging.
For purposes of describing the method and as an example using the embodiment depicted in FIG. 6, assume that data will be transferred using RDMA and RoCE from computing instance A-1 604 hosted by host machine 602 to computing instance A-2 624 hosted by host machine 622, where A-1 and A-2 belong to the same customer A and are on the same VLAN 1001. The computing instance that initiates the data transfer (e.g., A-1 604) may be referred to as a source computing instance, and the host machine that hosts the source computing instance (e.g., host machine 602) may be referred to as a source host machine. The computing instance to which the data is to be transferred (e.g., A-2 624) may be referred to as a destination or target computing instance, and the host machine hosting the destination computing instance (e.g., host machine 622) is referred to as the destination or target host machine. The source and destination computing instances may be virtual machines or bare metal instances. The source and destination host machines may be in the same ethernet domain or in different ethernet domains.
At 702, a RoCE NIC on a source host machine hosting a source computing instance receives information (e.g., from a virtual function) identifying data to be transferred from the source computing instance to a destination computing instance using RDMA and RoCE. For example, the RoCE NIC may receive information identifying a source computing instance and a range of memory addresses identifying data blocks to be transferred from the source computing instance to a destination computing instance. For the embodiment depicted in fig. 6, the RoCE NIC 608 may receive information for a block of data to be transferred from computing instance a-1 604 hosted by host machine 602 to computing instance a-2 624 hosted by host machine 622.
At 704, the RoCE NIC accesses data to be transmitted from an application memory of a source computing instance on a source host machine and generates layer 2 802.1Q tagged RoCE packets for the data to be transmitted, wherein VLAN information is encoded in an 802.1Q tag appended to each packet and QoS information is encoded in one or more headers of the packet. Each layer 2 802.1Q tagged RoCE packet may have one 802.1Q tag or more than one 802.1Q tag (e.g., 802.1ad or "Q-in-Q tag" as described herein). The data may be accessed from memory using a Direct Memory Access (DMA) controller on the RoCE NIC and then packed by breaking the accessed data into RDMA payload blocks for packets. For example, 1 Megabyte (MB) of data may be marked for transfer from a source computing instance to a destination computing instance. The RoCE NIC may access this data from the application memory of the source computing instance and divide the data into 2 Kilobyte (KB) blocks, where each block represents the RDMA payload to be transferred to the destination computing instance. Each payload is then packetized by the RoCE NIC to generate a RoCE layer 2 packet (or "frame"). The packets are formatted according to an appropriate version of the RoCE protocol, such as RoCEv2 (or other RoCE protocol, or another RDMA protocol supporting VLAN tagging).
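A minimal sketch of the chunking step described above follows (Python); the 2 KB payload size mirrors the example in the text and is not a protocol requirement.

def chunk_for_rdma(data: bytes, payload_size: int = 2048) -> list:
    """Split a marked memory region into per-packet RDMA payload blocks."""
    return [data[i:i + payload_size] for i in range(0, len(data), payload_size)]

# A 1 MB transfer yields 512 payloads of 2 KB each:
# len(chunk_for_rdma(bytes(1024 * 1024))) == 512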
The RoCE NIC adds an 802.1Q tag to each RoCE packet, the 802.1Q tag encoding information (e.g., a VLAN identifier) identifying the VLAN to which the source computing instance belongs. The 802.1Q protocol covers the use and support of VLANs on ethernet networks. The 802.1Q protocol uses tags (referred to as 802.1Q tags for short) to distinguish traffic that passes through a trunk and belongs to different VLANs.
In some embodiments, the processing in 704 is performed by a RoCE engine on the RoCE NIC in conjunction with the virtual function on the RoCE NIC corresponding to the source computing instance. The RoCE engine is responsible for generating RoCE packets. The RoCE engine may not have information identifying the VLAN identifier or QoS information for the packet that will be encoded in the 802.1Q tag added to the packet. In some embodiments, the virtual function corresponding to the source computing instance provides a particular VLAN identifier that indicates the VLAN of the source computing instance. This VLAN identifier is then encoded in the 802.1Q tag that is added to each packet. In some embodiments, the virtual functions on the host machine are programmed by the hypervisor to force packets outgoing from the RoCE NIC of the host machine to the switch fabric network to have 802.1Q VLAN tags. For example, for the embodiment depicted in FIG. 6, VF-A-1 612 may be implemented to force the tagging of RoCE packets from source computing instance A-1 604 with a VLAN ID indicating VLAN 1001, and VF-B-1 614 may be implemented to force the tagging of RoCE packets from source computing instance B-1 606 with a VLAN ID indicating VLAN 1002.
QoS information may be encoded in one or more different portions of the 802.1Q tagged RoCE packet. In some embodiments, the QoS information is encoded in DSCP bits of the IP header of each packet. In certain other embodiments, the QoS information may be encoded in 802.1p bits in the ethernet header of each packet. QoS information generally includes information indicating priorities (or classes) for different traffic flows. QoS information for a packet may specify a particular priority class for that packet that will be used to forward/route the packet to its destination. For example, the QoS information may contain information identifying the priority (e.g., high, low, etc.) assigned to the RoCE packet. The QoS information may also include other information such as information specifying various parameters for each priority class related to flow control, buffer allocation, queuing, scheduling, etc. In one example, the QoS information is provided by a computing instance that initiates the RDMA transfer.
Fig. 8A shows a RoCE packet format according to version 2 of the RoCE protocol (RoCEv2). As shown in fig. 8A, the RoCEv2 packet 800 includes a 22-byte ethernet header 801, a 20-byte IP header 803, and an 8-byte UDP header 804. The ethernet header 801 includes an eight-byte preamble field 816, a six byte destination MAC address field 809, a six byte source MAC address field 810, and a two byte ethernet type field 811 whose value indicates that the header is appended to an IP packet. The UDP header 804 includes a value indicating that the destination port number is 4791, which specifies RoCEv2. The RoCEv2 packet 800 also includes a twelve byte Infiniband (IB) base transport header 805, an RDMA data payload 806 (which may be up to about 1400 bytes in length), a 32 bit RoCE end-to-end invariant cyclic redundancy check (ICRC) field 807, and a four byte ethernet Frame Check Sequence (FCS) field 808. RoCEv2 is presented by way of example only, and it is contemplated that the systems, methods, and apparatus described herein may likewise be implemented using one or more other RDMA protocols that support VLAN tagging.
As part of the processing in 704 for generating a RoCEv2 layer 2 packet as shown in fig. 8A, the RoCE NIC on the host machine is configured to access data to be transferred and prepare RDMA payload 806. The RoCE NIC is then configured to add headers 805, 804, 803, and 801 (and check fields 807 and 808) to generate a RoCEv2 layer 2 packet.
As described above, the virtual function is programmed (e.g., by the hypervisor of the host machine) to force RDMA packets outgoing to the fabric to be tagged with an IEEE 802.1Q header (sometimes also referred to as a "VLAN tag") that identifies the VLAN on which the source computing instance of the packet was found. In some embodiments, VLANs are associated with individual customers or tenants, so the associated VLAN identifiers are used within the architecture to enforce traffic isolation between different tenants at layer 2. Fig. 8B shows the format of an 802.1Q VLAN tagged RoCEv2 packet 820 that includes a four byte VLAN tag 802 inserted between a source MAC address field 810 and an ethernet type field 811. VLAN tag 802 includes sixteen-bit tag protocol ID data field 812 having a value of 0x8100, three-bit user Priority Code Point (PCP) data field 813, one-bit discard eligibility indicator data field 814, and twelve-bit VLAN identifier data field 815 that identifies the VLAN. In some embodiments, the VLAN identifier for the packet is encoded in this VLAN identifier field 815. In some embodiments, PCP data field 813 (also referred to as an "IEEE 802.1p" or "802.1p" data field) in VLAN tag 802 may be used to encode QoS information (e.g., traffic class priority information) for the packet.
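A minimal sketch (Python) of packing and unpacking the four-byte VLAN tag 802 described above, i.e., a 0x8100 tag protocol ID followed by the PCP, DEI, and VLAN ID bit fields:

import struct

def pack_dot1q_tag(pcp: int, dei: int, vid: int) -> bytes:
    """Build the 4-byte 802.1Q tag: TPID 0x8100, then PCP (3 bits), DEI (1 bit), VID (12 bits)."""
    tci = ((pcp & 0x7) << 13) | ((dei & 0x1) << 12) | (vid & 0xFFF)
    return struct.pack("!HH", 0x8100, tci)

def unpack_dot1q_tag(tag: bytes) -> dict:
    tpid, tci = struct.unpack("!HH", tag[:4])
    return {"tpid": tpid, "pcp": tci >> 13, "dei": (tci >> 12) & 0x1, "vid": tci & 0xFFF}

# pack_dot1q_tag(pcp=3, dei=0, vid=1001) tags a frame as VLAN 1001 with priority 3.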
In some other embodiments, qoS information may be encoded in DSCP bits of IP header 803 of each packet. Fig. 9A shows the format of an IP header 803, which includes an eight bit version and header length data field 901, a six bit Differential Services Code Point (DSCP) data field 902, a two bit Explicit Congestion Notification (ECN) data field 903, a sixteen bit length data field 904, a sixteen bit identification data field 905, a sixteen bit fragment flag and offset data field 906, an eight bit time-to-live (TTL) data field 907, an eight bit protocol data field 908, a sixteen bit checksum data field 909, a four byte source IP address data field 910, and a four byte destination IP address data field 911. In the RoCEv2 packet, the protocol data field 908 has a value indicating that a header is attached to the UDP packet. DSCP data field 902 may be used to carry QoS information for a packet. As described in more detail below, the ECN data field 903 may be used to indicate congestion information that indicates whether a packet is experiencing congestion in its path from a source computing instance to a destination computing instance.
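As a small illustration (Python, with hypothetical input bytes), the DSCP and ECN values described above occupy the six upper and two lower bits of the second byte of the IPv4 header:

def dscp_and_ecn(ip_header: bytes) -> tuple:
    """Return (DSCP, ECN) read from the second byte of an IPv4 header."""
    tos = ip_header[1]
    return tos >> 2, tos & 0x3

# With a hypothetical header whose DSCP/ECN byte is 0xB8:
# dscp_and_ecn(bytes([0x45, 0xB8]) + bytes(18)) returns (46, 0), i.e. DSCP 46 and no congestion marked.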
As previously described, QoS information may be used to indicate the traffic class of the packet. For example, QoS information may be used to assign different levels of traffic class priority to packets, where the priority associated with the packet is used to determine the priority at which the packet will be forwarded from the source host machine to the destination host machine in the network path. The priority assigned to packets originating from one application may be different from the priority assigned to packets originating from a different application. In general, packets associated with applications that are more sensitive to network latency have higher priority than packets associated with applications that are less sensitive to network latency. The QoS information of the packet may be specified by the computing instance that originated the RDMA transfer (e.g., by an application executing on the computing instance). In one such example, the initiating computing instance may instruct the corresponding virtual function to perform RDMA transfer according to a specified quality of service (QoS) (e.g., a specified traffic class), and then the virtual function causes an RDMA engine on the RoCE NIC to generate a packet including a VLAN tag identifying the VLAN of the source computing instance, and also encode the specified QoS information in a data field (e.g., 802.1p field or DSCP field) of the 802.1Q tagged RoCE packet.
The QoS value may be used by the client to indicate performance expectations. For example, a client may assign a low QoS priority to RoCE packets carrying a large amount of data transfer for delay tolerant applications and/or may assign a higher QoS priority value to RoCE packets carrying low volume data transfer for delay sensitive applications.
In another example, the value of the QoS data field is indicated by the type of RoCE transfer being performed (e.g., according to a predetermined mapping of QoS priority values to RDMA transfer types). For example, a RoCE packet carrying a large capacity data transfer may be marked with a different QoS than a RoCE packet carrying a small capacity data transfer that is extremely delay sensitive. Examples of high volume transfers may include backup, reporting, or batching messages, while examples of low latency critical transfers may include congestion information notification, cluster heartbeat, transaction commit, cache fusion operations, and so forth.
For example, in fig. 6, if a packet represents data to be transferred from compute instance A-1 604 to compute instance A-2 624 on VLAN 1001 via RDMA, then the RoCE NIC 608 generates layer 2 802.1Q tagged RoCE packets for the data to be transferred, each with an 802.1Q tag, where the VLAN ID field 815 of each encodes information identifying VLAN 1001. Further, QoS information for each packet may be encoded in the DSCP field and/or PCP field of the packet.
The 12-bit VLAN identifier field defined in the IEEE 802.1Q standard can identify a maximum of 4096 VLANs. The 802.1ad Q-in-Q standard (also referred to as "802.1Q-in-802.1Q" or "Q-in-Q-tag" or "Q-in-Q standard") was developed to scale beyond 4096 VLANs. According to the Q-in-Q standard, two (or more) 802.1Q VLAN tags may be attached to the packet. These two tags, referred to as an internal tag and an external tag, may be used for a variety of different purposes (e.g., an internal tag may represent additional security rules). For example, in some embodiments, internal and external tags may be used to support application-specific network enforcement. It may be desirable to differentiate packets based on tenant and application, where one tag corresponds to the tenant and the other tag corresponds to a particular application of that tenant. In one such example, a host machine configured to execute multiple compute instances for the same customer on the same service VLAN may use the customer VLAN tag to isolate traffic across the multiple compute instances on the service VLAN. Alternatively or additionally, a host machine configured to execute multiple applications for the same customer on the same service VLAN may use the customer VLAN tag to split traffic across the multiple applications on the service VLAN. Thus, in some cases, as part of the processing performed in 704, the RoCE NIC may append two tags to each RoCE packet according to the Q-in-Q standard. For example, in the case of a tenant with different applications, such as tenant A with a simulation application and a backup application, two separate 802.1Q tags may be attached to the RoCE packet, one with a VLAN ID identifying the tenant (e.g., tenant A) and the second with a VLAN ID identifying the application (e.g., simulation, backup).
An example of a Q-in-Q tagged RoCEv2 packet 830 is depicted in fig. 8C. In this case, each packet includes a second VLAN tag 822 (also referred to as an "inside" VLAN tag, a "private" VLAN tag, or a customer VLAN (C-VLAN) tag) in addition to the first VLAN tag 802 (also referred to as an "outside" VLAN tag, a "public" VLAN tag, or a service VLAN (S-VLAN) tag). One or more additional VLAN tags may be added to the Q-in-Q tagged RoCEv2 packet in the same manner.
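A minimal sketch (Python) of stacking the two tags for a Q-in-Q packet as described above, with the outer service tag placed before the inner customer tag; under 802.1ad the outer tag commonly carries TPID 0x88A8, while 0x8100 is shown here for simplicity, and the PCP and VLAN ID values are caller-supplied assumptions.

import struct

def pack_qinq_tags(s_vid: int, c_vid: int, pcp: int = 0) -> bytes:
    def one_tag(vid: int) -> bytes:
        tci = ((pcp & 0x7) << 13) | (vid & 0xFFF)
        return struct.pack("!HH", 0x8100, tci)
    # Outer (service/S-VLAN) tag first, e.g. identifying the tenant,
    # then the inner (customer/C-VLAN) tag, e.g. identifying the application.
    return one_tag(s_vid) + one_tag(c_vid)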
Referring back to fig. 7A, at 706, the 802.1Q tagged RoCE packet is forwarded from the RoCE NIC on the source host machine to the TOR switch connected to the source host machine. The TOR switch that receives the 802.1Q tagged RoCE packet may also be referred to as an ingress TOR switch because it represents an ingress edge device for the switch fabric. For example, in FIG. 6, an 802.1Q-labeled RoCE packet generated by the RoCE NIC 608 is transmitted to the TOR switch 642, where the TOR switch 642 is a layer 0 switch in the switch fabric 640.
Although the embodiment in fig. 6 and the flowcharts in fig. 7A-C describe the processing performed by the TOR switch, this example is not intended to be limiting. In general, the RoCE packets may be transmitted from the source host machine to a networking device (e.g., a switch) that provides layer 2 functionality, a networking device (e.g., a router) that provides layer 3 functionality, or a networking device that provides both layer 2 and layer 3 functionality. For example, TOR switches 642 and 644 depicted in fig. 6 may provide layer 2 and layer 3 functionality. Typically, the networking device that receives the RoCE packets from the source host machine is an edge device of the switch fabric for transferring data from the source host machine to the destination host machine.
The source host machine 602 may be connected to the TOR switch 642 via an ethernet cable (e.g., copper cable, fiber optics, etc.). In some embodiments, the packets arrive at the trunk ports of TOR switch 642. Trunk ports may admit packets belonging to multiple VLANs, while traffic isolation is accomplished through VLAN information encoded in the packets. For example, a RoCE packet representing data to be transmitted from compute instance A-1 604 would be tagged with a tag identifying VLAN 1001, while a RoCE packet representing data to be transmitted from compute instance B-1 606 would be tagged with a tag identifying VLAN 1002.
At 708, the ingress TOR switch (e.g., TOR switch 642) receiving the packet converts each layer 2 802.1Q tagged RoCE packet to be forwarded via intermediate switch 646 into a layer 3 packet, wherein the format of the layer 3 packet is based on a particular overlay encapsulation protocol (OEP) that is used to transmit the packet through the switch fabric. Various different overlay encapsulation protocols may be used to transport RoCE packets through the switch fabric, such as, for example, virtual extensible LAN (VxLAN), network virtualization using generic routing encapsulation (NVGRE), generic network virtualization encapsulation (GENEVE), MPLS, Stateless Transport Tunneling (STT), etc. For example, for the embodiment in fig. 6, ingress TOR switch 642 receives 802.1Q tagged RoCE packets from RoCE NIC 608 and converts each packet as described in 708 by encapsulating the packet. Encapsulation is performed by adding to the packet a wrapper corresponding to the overlay encapsulation protocol that will be used to transport the packet over the layer 3 switch fabric using a layer 3 routing protocol, wherein the wrapper includes one or more headers. (For completeness, it is noted that in some environments implementing the method 700, the ingress TOR switch may receive 802.1Q tagged RoCE packets from a source computing instance that are directed to a destination computing instance that is in the same rack as the source computing instance. In such cases, the ingress TOR switch may forward the packets to the destination computing instance (e.g., via the corresponding RoCE NIC) through layer 2 transport without processing them at 708 and later.)
As part of the processing at 708, ingress TOR switch 642 generates the appropriate header or headers corresponding to the overlay encapsulation protocol being used and adds a wrapper including the header(s) to each received layer 2 802.1q tagged RoCE packet to convert the packet into a layer 3 packet, wherein the overlay encapsulation protocol header added to the packet is visible to networking devices in switch fabric 640. Layer 2 frames are converted into layer 3 packets to enable packets to be routed from TOR switch 642 connected to the source host machine to TOR switch 644 connected to the destination host machine 622, where the routing occurs over layer 3 switch fabric 640 and uses a layer 3 routing protocol that is more robust and scalable than the layer 2 forwarding protocol.
As part of the processing performed at 708, at 708-1, for each received 802.1Q tagged RoCE packet, TOR switch 642 determines VLAN information from the received packet and maps or translates the information into a field (or fields) that is added to the wrapper of the packet at 708. In this way, VLAN identifier information is mapped to a layer 3 header that is added to the packet at 708 and is visible to the various networking devices in the switch fabric 640. The layer 3 packet includes at least one outer header that is added to the layer 2 packet at 708.
For example, if the VxLAN protocol is used as a layer 3 encapsulation protocol for transporting packets through the switch fabric 640, then in 708 the layer 2 802.1q packet is converted to a VxLAN packet by adding a VxLAN header (and other fields of the VxLAN wrapper) to the packet received in 708. As part of this operation, at 708-1, TOR switch 642 determines VLAN identifier information encoded in the packet's 802.1Q tag and maps (or encodes) this information to a field added to the packet's VxLAN header at 708. In some implementations, VLAN information in the 802.1Q tag of the RoCE packet is mapped (e.g., according to a VNI-VLAN mapping) to a corresponding unique VNI that is copied to a VNI field added to the VxLAN header of the packet. In this way, VLAN identifier information in the 802.1Q tag that identifies a particular tenant may also be included or forwarded to a corresponding identifier in the packet's overlay encapsulation protocol header.
For the case of a RoCE packet having more than one VLAN tag (e.g., Q-in-Q tagged RoCE packet 830), TOR switch 642 maps the VLAN ID in the external tag of the RoCE packet to the corresponding VNI according to the VNI-VLAN mapping. The VNI-VLAN mapping (e.g., as a table) that may be stored in the memory of TOR switch 642 is a one-to-one correspondence between VNIs assigned on TOR switch 642 and the VLANs to which they are assigned. VLANs have local significance only on switches, so the same VLAN ID may map to a different VNI elsewhere in the fabric (although for convenience it may be desirable to use the same VNI-VLAN mapping on multiple switches in the fabric) and/or the same VNI may map to a different VLAN ID elsewhere in the fabric (e.g., another VLAN ID assigned to the same tenant). The VNI is copied into a corresponding data field of at least one outer header (e.g., an encapsulation protocol header) of the corresponding layer 3 encapsulated packet, thereby extending multi-tenancy across the L3 network boundary. For the case where the overlay encapsulation protocol is VxLAN (or GENEVE), the VNI is carried in the 24-bit VNI field of the VxLAN (or GENEVE) header of the layer 3 encapsulated packet. For the case where the overlay encapsulation protocol is NVGRE, the VNI is carried in the 24-bit Virtual Subnet ID (VSID) field of the NVGRE header of the layer 3 encapsulated packet. For the case where the overlay encapsulation protocol is STT, the VNI is carried in the 64-bit context ID field of the STT header of the layer 3 encapsulated packet.
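A sketch of the VNI-VLAN mapping lookup described above (Python); the VLAN-to-VNI values are hypothetical and would in practice be provisioned per switch.

VNI_VLAN_MAP = {
    1001: 101001,  # customer A's VLAN -> VNI assigned on this TOR switch (assumed value)
    1002: 101002,  # customer B's VLAN -> VNI assigned on this TOR switch (assumed value)
}

def vlan_to_vni(vlan_id: int) -> int:
    """Look up the VNI to carry in the overlay header for a given outer VLAN ID."""
    try:
        return VNI_VLAN_MAP[vlan_id]
    except KeyError:
        raise ValueError(f"VLAN {vlan_id} has no VNI assigned on this switch")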
Further, as part of the processing at 708, at 708-2, for each received 802.1Q tagged RoCE packet, TOR switch 642 determines QoS information from the received packet and maps or translates the information into a field (or fields) that is added to the header of the packet at 708. In this way, the QoS information is mapped to a portion of the wrapper (e.g., an outer header) added to the packet at 708, which is visible to the various networking devices in the switch fabric 640.
For example, if the VxLAN protocol is used as the layer 3 encapsulation protocol for transporting packets through the switch fabric 640, then in 708 the layer 2 802.1Q packet is converted to a VxLAN packet by adding a VxLAN envelope or wrapper (including a VxLAN header and several outer headers) to the packet received in 708. As part of this operation, at 708-2, TOR switch 642 determines the QoS information encoded in the received packet and maps (or encodes) the information to fields within the VxLAN wrapper added to the packet at 708. As described above, depending on the implementation, QoS information may be encoded in one or more different portions of the received layer 2 packet. For example, QoS information may be encoded in the PCP or 802.1p bits of an 802.1Q tag and/or may be encoded in the DSCP field of the IP header of the received layer 2 packet. As part of 708-2, TOR switch 642 determines this QoS information and maps or translates it into field(s) that are added to the VxLAN wrapper of the packet at 708. In some implementations, the QoS information from the RoCE packet is mapped to the DSCP field in the outer IP header of the VxLAN wrapper. In this manner, the QoS information in the layer 2 802.1Q tagged RoCE packet is included or forwarded to the layer 3 wrapper of the VxLAN packet in a manner that makes it visible to the various networking devices in the switch fabric 640.
Fig. 10 shows the format of a layer 3 encapsulated packet 1000 (also referred to as a VxLAN packet) produced by an ingress TOR switch that applies VxLAN as OEP. As shown in fig. 10, vxLAN packet 1000 includes an outer ethernet header 1010, an outer IP header 1020, an outer UDP header 1040, a VxLAN header 1050, an original packet (e.g., a RoCEv2 packet) 1060, and a Frame Check Sequence (FCS) 1070. In the process performed in 708, when the VxLAN is an overlay encapsulation protocol, as part of encapsulating the 802.1Q-tagged RoCE packet, the TOR switch places VxLAN header 1050 outside of the "original" 802.1Q-tagged RoCE packet, then places outer UDP header 1040 outside of the VxLAN header, then places outer IP header 1020 outside of the outer UDP header, and then adds outer ethernet header 1010 over the outer IP header.
The external ethernet header 1010 includes a destination MAC address field 1011, a source MAC address field 1012, (optionally) a VLAN type field 1013, (optionally) a VLAN ID tag 1014, and an ethernet type field 1015 carrying a value of 0x0800. The outer IP header 1020 includes an eight bit version and header length data field 1021, a six bit DSCP data field 1022, a two bit ECN data field 1023, a sixteen bit length data field 1024, a sixteen bit identification data field 1025, a sixteen bit fragment flag and offset data field 1026, an eight bit Time To Live (TTL) data field 1027, an eight bit protocol data field 1028 carrying a value of 17 (indicating UDP), a sixteen bit header checksum data field 1029, a four byte source IP address data field 1030 indicating the IP address of the ingress TOR switch, and a four byte destination IP address data field 1031 indicating the IP address of the egress TOR switch. The outer UDP header 1040 includes a source port field 1041 that may carry a hash value computed from information in the original RDMA packet, a destination (VxLAN) port field 1042 carrying a value of 4789, a UDP length field 1043, and a checksum field 1044. VxLAN header 1050 includes an eight bit flag field 1051, a 24 bit VNI field 1053 carrying the VNI, and two reserved fields 1052 and 1054.
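A minimal sketch (Python) of constructing the eight-byte VxLAN header 1050 described above, with the VNI-valid flag set and the 24-bit VNI carried in field 1053:

import struct

def pack_vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VxLAN header: flags byte 0x08 (VNI valid), 3 reserved bytes,
    24-bit VNI, 1 reserved byte."""
    flags_and_reserved = 0x08 << 24            # only the I (VNI valid) flag set
    vni_and_reserved = (vni & 0xFFFFFF) << 8   # VNI occupies the upper 24 bits
    return struct.pack("!II", flags_and_reserved, vni_and_reserved)

# This header would be placed in front of the original 802.1Q tagged RoCE frame,
# inside the outer UDP datagram addressed to VxLAN port 4789.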
As part of creating a layer 3 VxLAN packet by encapsulating an "original" 802.1Q-tagged RoCE packet in 708, the TOR switch maps VLAN ID information (e.g., tenancy information), QoS information (e.g., traffic class), and congestion information from various fields of the original packet into one or more headers added to the original 802.1Q-tagged RoCE packet. For example, in some embodiments, the VLAN ID field maps to and is carried in the VNI field 1053, the QoS information from the RoCE packet is copied to (or mapped to and carried in the value of) the DSCP field 1022 in the IP header 1020, and congestion may be signaled by setting a bit (referred to as an ECN bit) in the ECN field 1023. In this way, the QoS information in the DSCP data field of the IP header of the RoCE packet (e.g., from DSCP field 902 in fig. 9A) may be copied or otherwise mapped to the DSCP data field 1022 of the outer IP header 1020 of the encapsulated VxLAN packet. In embodiments where the QoS information in the RoCE packet is encoded in the PCP data field 813 of the 802.1Q tag of the RoCE packet, this information may also be mapped to the DSCP data field 1022 of the outer IP header 1020 of the VxLAN packet.
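The field mapping just described can be summarized with a small sketch (Python); the PCP-to-DSCP table here is an assumption, shown only to illustrate that some mapping is applied when QoS is carried in 802.1p bits rather than in DSCP bits.

PCP_TO_DSCP = {0: 0, 1: 8, 2: 16, 3: 24, 4: 32, 5: 40, 6: 48, 7: 56}  # assumed mapping

def outer_dscp(inner_dscp: int = None, pcp: int = None) -> int:
    """Choose the DSCP value for the outer IP header of the encapsulated packet."""
    if inner_dscp is not None:
        return inner_dscp & 0x3F       # copy the inner DSCP value through
    return PCP_TO_DSCP.get(pcp, 0)     # otherwise derive it from the 802.1p priority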
Thus, at 708, an overlay encapsulation protocol wrapper is added to the RoCE packet and VLAN ID and QoS information from the RoCE packet is mapped to the overlay encapsulation protocol wrapper and encoded therein in a manner that is visible to devices in the switch fabric. Referring to fig. 7B, at 710, the encapsulated layer 3 packet generated at 708 is routed through the switch fabric from the TOR switch that receives the packet from the source host machine to the TOR switch that is connected to the destination host machine. In some embodiments, the packets are forwarded and sent along a tunnel (e.g., a VxLAN tunnel if a VxLAN overlay encapsulation protocol is used) that passes the packets through the switch fabric from a TOR switch connected to the source host machine to a TOR switch connected to the destination host machine. The path taken by the packet in the switch fabric may traverse a plurality of networking devices, each device configured to receive the packet via an ingress port of the networking device and forward the packet to a next-hop networking device via an egress port of the networking device to facilitate transfer of the packet to a TOR switch connected to the destination host machine. For example, in the embodiment depicted in fig. 6, encapsulated layer 3 packets are forwarded from TOR switch 642 to TOR switch 644 via one or more intermediate switches 646 of switch fabric 640.
By translating the VLAN ID and QoS information from the layer 2 802.1Q RoCE packets into information carried in the wrapper of the layer 3 encapsulated packets at 708, the VLAN information (e.g., tenancy information) and QoS information are made visible to the switches and networking devices in switch fabric 640 that route the packets using layer 3 routing protocols. As part of the processing in 710, each networking device in switch fabric 640 that receives and forwards layer 3 encapsulated RoCE packets (including TOR switches 642 and 644) is configured to forward the packets at 710-1 based on layer 3 and according to QoS information specified in the packet's encapsulation wrapper. In some implementations, each networking device receiving a packet may have multiple RDMA data queues corresponding to different QoS priority levels. Upon receiving the packet, the networking device is configured to determine QoS information for the packet from one or more fields of the encapsulation wrapper (or from a layer 2 header, for the first TOR switch receiving the packet) and place the packet into an RDMA data queue corresponding to a priority specified by the QoS information. The packets are then fetched from the networking device and forwarded according to the particular priority of the queue. The plurality of RDMA data queues may be implemented using, for example, one or more buffers, and the networking device may further include queuing logic configured to distribute incoming packets among the plurality of queues according to a specified priority level (e.g., traffic class) and dequeuing logic configured to service the plurality of queues according to a desired scheduling scheme (e.g., weighted round robin scheduling and/or strict priority scheduling). When using a commercially available chip to implement networking devices, it may be desirable to make full use of a relatively small amount of buffer space. For example, it may be desirable to reuse unused buffers (e.g., from queues that are not otherwise used) as at least a portion of the plurality of queues. For example, it may be desirable to implement at least a portion of environment 600 (e.g., TOR switches 642 and 644, switch fabric 640) to exclude support for multicast traffic, and in this case, buffers previously assigned for storing multicast traffic may instead be reprogrammed as storage for the plurality of queues.
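A simplified sketch (Python) of the class-based queuing just described, using strict-priority dequeuing; the number of classes is an assumption, and a real device would combine this with weighted round robin and per-queue buffer accounting.

from collections import deque

NUM_CLASSES = 4                                # hypothetical number of traffic classes
queues = [deque() for _ in range(NUM_CLASSES)]

def enqueue(packet: bytes, traffic_class: int) -> None:
    """Place the packet in the RDMA data queue for its QoS class (0 = highest priority here)."""
    queues[traffic_class].append(packet)

def dequeue_strict_priority():
    """Serve the highest-priority non-empty queue first."""
    for q in queues:
        if q:
            return q.popleft()
    return None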
As part of 710, each networking device in the switch fabric that receives and forwards the layer 3 encapsulated RoCE packets may signal congestion when experiencing congestion by marking, at 710-2, the outer header of the layer 3 encapsulation wrapper of the packet. In some embodiments, the RoCE protocol specifies congestion information using a mechanism called Explicit Congestion Notification (ECN), which is an IP protocol concept. According to this mechanism, ECN bits in the IP header of the packet are used to specify or encode congestion information. Thus, as part of the processing in 710-2, if a networking device receiving and forwarding a packet detects congestion (e.g., detects that the buffer occupancy exceeds a threshold), the networking device may signal congestion by setting an ECN bit in the outer IP header of the packet's overlay encapsulation protocol wrapper. For example, if VxLAN is used as the overlay encapsulation protocol, a bit in ECN field 1023 of outer IP header 1020 may be set by the networking device (e.g., if the bit is not already set) to signal congestion.
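A minimal sketch of the per-device marking decision described above is shown below, assuming a simple queue-occupancy threshold; the function name and parameters are illustrative and do not name any actual device interface.

```python
ECN_NOT_ECT = 0b00   # not ECN-capable transport
ECN_CE      = 0b11   # congestion experienced

def updated_outer_ecn(current_ecn: int, queue_depth_bytes: int, marking_threshold_bytes: int) -> int:
    """Return the ECN code point to write into the outer IP header of the layer 3
    wrapper: mark congestion experienced (CE) when buffer occupancy crosses the threshold."""
    if queue_depth_bytes > marking_threshold_bytes and current_ecn != ECN_NOT_ECT:
        return ECN_CE          # idempotent if the bit was already set upstream
    return current_ecn

# Example: a device whose RDMA queue holds 1.5 MB against a 1 MB threshold marks the packet.
print(updated_outer_ecn(0b10, 1_500_000, 1_000_000))   # -> 3 (binary 11)
```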
In this way, congestion information is included in the packet, updated as the packet travels through the switch fabric, and carried along with the packet to the packet's destination. For example, in the embodiment depicted in fig. 6, as a packet travels from source host machine 602 to destination host machine 622, the packet passes through a series of networking devices in switch fabric 640, and any networking device through which the packet passes may signal congestion by setting a congestion bit (e.g., a bit in the ECN field of the outer IP header) in the layer 3 encapsulation wrapper of the packet. The series of networking devices includes TOR switch 642, TOR switch 644, and any intermediate switches 646 in the path traversed by the packet.
The indication or marking of congestion in the packet may be performed by any networking device in the switch fabric that routes the packet from the source host machine to the destination host machine. The egress TOR copies the ECN bits from the layer 3 wrapper to the IP header of the inner packet and sends the inner packet as a layer 2 packet to the destination host machine. In this way, congestion information is carried in the packet all the way to the destination host machine. The congestion information is carried across the boundary between the layer 2 networks, which include the source and destination compute instances and host machines, and the layer 3 network, which includes the TOR switches and intermediate switches in the switch fabric.
Conventionally, RoCE relies on layer 2 Priority Flow Control (PFC) or ECN, or a combination of PFC and ECN, for congestion control. It may be desirable to implement the TOR switches (e.g., TOR switches 642 and 644) to perform Priority Flow Control (PFC). When the receive buffer of a PFC-enabled switch fills to a threshold level, the switch sends a PAUSE frame for the corresponding priority class back to the sender. PFC provides a maximum of eight priority classes, and it may be desirable to implement the switch to use the PCP value of the packet at the head of the receive buffer as the priority class indicated by the PAUSE frame (or, alternatively, to map the DSCP value of that packet to the priority indicated by the PAUSE frame).
It may be desirable to implement congestion control on a per-application basis so that congestion control that pauses RDMA traffic of one client does not pause RDMA traffic of another client. Because a PAUSE frame causes the sender to pause all traffic of the indicated priority class, and thus potentially affects multiple clients, it may be desirable to configure each TOR switch to prevent any PAUSE frame from being sent into the switch fabric. For example, it may be desirable to limit PFC to the host-facing interfaces of each TOR switch 642 and 644 (e.g., by configuring each port of the TOR switch to pass PAUSE frames only down toward the hosts). Containing PAUSE frames locally in this manner may help avoid congestion from spreading in a large fabric and/or help avoid livelock.
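The following sketch illustrates, purely as an assumption-laden example, the two points made above: a PAUSE frame is generated for the priority class of the packet at the head of the receive buffer, and PAUSE frames are emitted only on host-facing ports so that they never propagate into the switch fabric. The field names are hypothetical.

```python
from typing import Optional

def maybe_send_pause(port_is_host_facing: bool, rx_buffer_fill_bytes: int,
                     xoff_threshold_bytes: int, pcp_of_head_packet: int) -> Optional[dict]:
    """Sketch of per-priority flow control at a TOR switch: emit a PFC PAUSE frame
    for the priority class of the packet at the head of the receive buffer, but only
    on host-facing interfaces, so PAUSE frames stay out of the switch fabric."""
    if not port_is_host_facing:
        return None                                   # contain PFC to the host side of the TOR
    if rx_buffer_fill_bytes < xoff_threshold_bytes:
        return None                                   # no backpressure needed yet
    return {"type": "PFC_PAUSE", "priority_class": pcp_of_head_packet & 0x7}

# Example: a host-facing port at 92% of a 1 MB buffer pauses priority class 3.
print(maybe_send_pause(True, 920_000, 900_000, 3))
```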
At 712, a TOR switch (also referred to as an egress TOR switch because it represents an egress edge device of the switch fabric) connected to the destination host machine receives the layer 3 encapsulated RoCE packet. For example, in the embodiment depicted in fig. 6, the egress TOR switch 644 receives the packet.
At 714, for each received layer 3 encapsulated packet, the TOR switch determines congestion information from the layer 3 overlay encapsulation protocol wrapper of the packet and maps or translates that congestion information to fields within the layer 2 802.1Q-tagged RoCE frame encapsulated by the layer 3 wrapper. In this manner, for each layer 3 encapsulated RoCE packet received by the egress TOR switch, congestion information encoded in a field (e.g., the ECN field) of the layer 3 overlay encapsulation protocol wrapper header, which may have been signaled by one or more networking devices (e.g., TOR switch 642, one or more intermediate switches 646) along the path traversed by the packet in the switch fabric, is mapped to and retained in the header of the 802.1Q-tagged RoCE layer 2 frame. In some implementations, congestion information determined from the layer 3 wrapper (e.g., from ECN field 1023 in IP header 1020 of the VxLAN packet) is copied to the ECN field of the IP header of the layer 2 802.1Q-tagged RoCE frame (e.g., ECN field 903 depicted in fig. 9).
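As an illustrative sketch only (the header model below is a plain dictionary, not any actual switch API), the mapping at 714 can be thought of as copying the outer ECN code point into the ECN field of the inner IP header before the wrapper is removed:

```python
ECN_CE = 0b11   # congestion experienced

def map_congestion_to_inner(outer_ecn: int, inner_ip_header: dict) -> dict:
    """At the egress TOR switch, carry congestion signaled in the layer 3 wrapper into
    the inner 802.1Q-tagged RoCE packet by copying a CE mark from the outer IP header
    into the ECN field of the inner IP header."""
    if outer_ecn == ECN_CE:
        return dict(inner_ip_header, ecn=ECN_CE)   # preserve the congestion mark
    return inner_ip_header

# Example: a CE-marked wrapper propagates the mark to the inner RoCE IP header.
print(map_congestion_to_inner(0b11, {"ecn": 0b10, "dscp": 40}))   # -> {'ecn': 3, 'dscp': 40}
```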
At 716, for each received layer 3 encapsulated RoCE packet, the egress TOR switch decapsulates the packet by removing the encapsulation wrapper added to the packet at 708 (e.g., outer ethernet header 1010, outer IP header 1020, outer UDP header 1040, VxLAN header 1050, and FCS 1070), recovering the inner layer 2 802.1Q-tagged RoCE packet. For example, if a VxLAN wrapper was added to the packet, that wrapper is removed at 716 to leave the 802.1Q-tagged RoCE packet. Because the congestion information is mapped to the header of the 802.1Q-tagged RoCE packet in 714, the congestion information in the layer 3 overlay encapsulation protocol wrapper is not lost when the layer 3 encapsulated packet is decapsulated. The mapping (e.g., copying) of the congestion information in 714 may be performed before, during, or after the decapsulation in 716.
As part of the processing in 716, in addition to translating the congestion information from the layer 3 overlay encapsulation protocol wrapper into the header of the layer 2 802.1Q-tagged RoCE packet, the egress TOR switch 644 may itself set the ECN congestion bits to signal congestion if it is experiencing congestion.
Referring to fig. 7C, at 718, the decapsulated layer 2 802.1Q-tagged RoCE packet is forwarded by the egress TOR switch to the destination host machine. For example, in the embodiment of fig. 6, the RoCE packets decapsulated by TOR switch 644 are forwarded to destination host machine 622. On the destination host machine, the packet is received and processed by the RoCE NIC on the destination host machine.
At 720, for each received 802.1Q tagged RoCE packet, the RoCE NIC on the destination host machine checks whether congestion is signaled in the header of the received packet. For example, if the ECN bit of the IP header of the packet is set, congestion may be signaled. If it is determined in 720 that the packet indicates congestion, then in 722 the RoCE NIC sends a response to the sender of the packet (e.g., to the RoCE NIC on the source host machine) indicating congestion and requests the sender to slow down the data transfer rate. In some implementations using the ECN protocol, the response takes the form of a congestion notification packet (CNP packet) sent from the RoCE NIC on the destination host machine to the RoCE NIC on the source host machine. For example, data Center Quantized Congestion Notification (DCQCN) may be implemented in a RoCE NIC (e.g., RDMA NIC card and corresponding software driver) to use ECN information for flow control by sending CNP packets to let the sender know about congestion. These CNP packets indicate to the source host machine that there is congestion in the network and request the source host machine to slow down the rate at which it sends RoCE packets. The CNP packet is sent to the appropriate sender identified from information (e.g., source MAC address and/or source IP address) in the received layer 2RoCE packet. Upon receiving such notification, in response, the sender (e.g., the RoCE NIC on the source host machine) may reduce transmission of the RoCE packets accordingly. Further details regarding CNP packets and how they are transmitted from the destination host machine to the source host machine are provided below.
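The receiver-side check at 720-722 can be sketched as follows; the packet model and field names are hypothetical, and the CNP contents shown (in particular, targeting the flow's queue pair) are assumptions rather than details taken from the description above.

```python
from typing import Optional

ECN_CE = 0b11   # congestion experienced

def maybe_build_cnp(received_pkt: dict) -> Optional[dict]:
    """If the delivered RoCE packet carries a congestion mark, build a congestion
    notification packet (CNP) addressed to the sender identified from the packet."""
    if received_pkt["ip"]["ecn"] != ECN_CE:
        return None                                   # no congestion signaled
    return {
        "type": "CNP",
        "dst_mac": received_pkt["eth"]["src_mac"],    # reflect back to the sender
        "dst_ip": received_pkt["ip"]["src_ip"],
        "qp": received_pkt["bth"]["dest_qp"],         # assumed: CNP targets the flow's queue pair
    }
```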
In some embodiments, the sender may use an algorithm that calculates a percentage reduction in the data transmission rate. For example, after receiving the first CNP packet, the sender (e.g., the RoCE NIC on the source host machine) may reduce its transmission rate by a percentage. After receiving another CNP packet, it may further reduce its transmission rate by an additional percentage, and so on. In this way, the sender may perform adaptive rate control in response to receiving the CNP packet.
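A hedged sketch of such adaptive rate control is shown below; the reduction factor and rate floor are illustrative parameters, not values specified above.

```python
def reduce_rate_on_cnp(current_rate_gbps: float, reduction_factor: float = 0.5,
                       min_rate_gbps: float = 0.1) -> float:
    """Multiplicative decrease at the sending RoCE NIC: each received CNP cuts the
    transmit rate by a configurable fraction, down to a minimum floor."""
    return max(current_rate_gbps * reduction_factor, min_rate_gbps)

# Example: two CNPs in a row reduce a 100 Gbps sender to 50 Gbps and then 25 Gbps.
rate = 100.0
for _ in range(2):
    rate = reduce_rate_on_cnp(rate)
print(rate)   # 25.0
```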
At 724, for each received 802.1Q-tagged RoCE packet, the RoCE NIC on the destination host machine retrieves the RDMA data payload from the packet and transfers the data, via the corresponding Virtual Function (VF), to the application memory of the destination compute instance on the destination host machine. In some embodiments, the virtual function on the RoCE NIC corresponding to the destination compute instance is configured to control the RoCE engine of the RoCE NIC to transfer the RDMA data payload, in a DMA transfer, into the application memory space on the destination host machine. This operation completes the RDMA data transfer from the source compute instance to the destination compute instance.
As described above, a VLAN identifier that can identify the tenant is included in the 802.1Q tag added to the RoCE packet: for example, in the VLAN ID field of the 802.1Q tag. The VLAN ID, or tenancy information, is also mapped to the VNI included in the layer 3 overlay encapsulation protocol wrapper that is added to the 802.1Q-tagged RoCE packet by the TOR switch connected to the source host machine. Mapping the VLAN identifier (or tenancy information) to an identifier in a field of the layer 3 encapsulation wrapper makes the tenancy information visible to the networking devices in the layer 3 switch fabric. These networking devices use this information to isolate traffic belonging to different customers or tenants.
The QoS information associated with the packet is also preserved from the RoCE NIC on the source host machine all the way to the RoCE NIC on the destination host machine. The QoS information encoded in the layer 2 RoCE packet is made visible to networking devices in the switch fabric by encoding that information in the layer 3 overlay encapsulation protocol wrapper that the ingress TOR switch adds to the 802.1Q-tagged RoCE packet. This enables the networking devices in the switch fabric to route RoCE traffic through the switch fabric using layer 3 routing protocols and according to the QoS information associated with each packet.
Any networking device in the switch fabric may send out congestion signals on a per packet basis. This congestion information is maintained in the packet as the packet travels through the switch fabric from the TOR connected to the source host machine to the TOR connected to the destination host machine. At the TOR switch connected to the destination host machine, the congestion information from the layer 3 encapsulation wrapper is translated (e.g., copied) to the RoCE packet header (e.g., translated into ECN bits in the IP header of the RoCE packet) and thus saved and made available to the destination host machine. The destination host machine may then respond to the congestion information by sending a CNP packet.
Congestion notification information routing
RDMA data transfer is generally very sensitive to network latency that can be caused by congestion in a switch fabric network. RDMA congestion notification packets (CNP packets) are critical because they help signal RDMA congestion management (flow control). They are therefore particularly sensitive to packet losses and network delays, while not requiring large network bandwidths. Accordingly, CNP packets sent by the destination host machine in response to congestion indicated by the received packets are given high priority so that they arrive at the source host machine with minimal delay, and the sender may be notified to slow down the transmission of data so as to minimize or avoid packet loss due to congestion. Furthermore, priority queuing of CNP packets minimizes the likelihood that CNP packets themselves will be dropped due to congestion.
To achieve this, congestion notification packet traffic (CNP traffic) is assigned a high priority such that, at each networking device in the switch fabric, CNP packets are assigned to a queue with very high priority as they travel from the destination host machine to the source host machine. In some implementations, CNP traffic is assigned to the second highest priority queue (e.g., just below the network control queue) on each networking device in the switch fabric.
In addition, strict priority queuing techniques are used for CNP packets. Traffic assigned to a strict priority queue can starve other traffic. For example, if a networking device in the switch fabric has packets from tenant #1, packets from tenant #2, and CNP packets, and if the networking device can only send one packet, then the tenant #1 and tenant #2 packets will be queued and the CNP packet will be sent instead. In some embodiments, CNP packets are configured with QoS information indicating the particular class used for CNP packets, and the networking device uses a strict priority queue to queue such packets for transmission.
However, in a strict priority queuing implementation, care must be taken that strict priority queuing does not starve other traffic indefinitely. Thus, a limit can be imposed on how much CNP traffic is allowed through the switch fabric. In some embodiments, this limit is a fraction of the total bandwidth of the link. For example, a dedicated strict priority queue may be allocated to CNP traffic with a low bandwidth guarantee so that it does not starve the actual RDMA traffic classes. Thus, if a malicious or misconfigured application begins to generate a large number of CNP packets, which could otherwise cause CNP traffic to starve other traffic (e.g., RDMA data traffic), the limiting threshold minimizes the impact of this problem on other traffic.
A feedback delay occurs between the time an ECN-enabled device marks a packet and the time the sender receives the resulting CNP packet. In the worst case, this delay can lead to congestion collapse, which has long been a problem in high performance networks. To avoid a lengthy feedback loop between ECN marking at the switch, CNP reflection at the receiving host, and the subsequent RDMA congestion management actions at the sending host, it may be desirable to configure low and deterministic ECN marking thresholds for the networking devices of the switch fabric 640. For example, it may be desirable to configure each of the TOR switches and the intermediate switches to mark each packet once congestion is detected. This aggressive ECN marking policy ensures that the switch initiates ECN marking at the first hint of network congestion, thereby providing a tight loop for congestion management and helping to protect the network from congestion collapse.
Queue-based routing to avoid head-of-line blocking
As described above, QoS information associated with packets is used by networking devices in the switch fabric to route packets using layer 3 routing protocols. The QoS information may identify a particular priority or class. This priority information may be used by networking devices in the switch fabric (e.g., any one or more (or all) of TOR switch 642, TOR switch 644, and intermediate switches 646) to identify a particular priority queue, from among a plurality of queues maintained by the networking device, to use for forwarding the packet. For example, a networking device may maintain a set of queues, with each queue used for a respective different priority class. Packets corresponding to different customers or tenants may have different assigned priorities or classes, and thus packets of different classifications may be assigned to different queues on the networking devices in the switch fabric. Because packets of different classes (e.g., different tenants, different applications) are allocated to different queues on the networking device in the switch fabric, congestion generated by traffic of one class (e.g., one tenant) does not affect traffic of another class (e.g., another tenant). In some embodiments, it is also possible for RDMA packet streams from different tenants to be assigned to the same queue of a networking device (e.g., according to the same assigned priority class) and/or for RDMA packet streams from the same tenant (e.g., packet streams from different applications of the tenant) to be assigned to different queues of a networking device (e.g., according to their respective different assigned priority classes).
In some embodiments, to avoid the head-of-line (HOL) blocking problem, multiple queues are used on the networking device to handle RDMA/RoCE traffic. Providing multiple queues for RDMA data traffic avoids queuing all RDMA/RoCE traffic into a single queue, which could lead to congestion. Multiple (e.g., four) RDMA queues also allow for multiple different applications that require different levels of performance, all of which require lossless networking. In this way, the environment may provide a dedicated network path for latency-sensitive RDMA applications throughout a cloud-scale fabric, and may do so while avoiding HOL blocking problems. Each core network queue may support a configurable weighted bandwidth distribution.
In some cases, customers or tenants can control which priority queues will be used to route their traffic via the QoS information set for their packets. In some implementations, on networking devices in the switch fabric that have multiple queues for packet transmission, a portion of those queues is set aside for RDMA traffic. For example, in one embodiment, if a switch in the switch fabric has eight queues, six queues may be set aside for RDMA traffic. These RDMA queues may be weighted round robin queues, each of which takes a share of the network bandwidth but cannot starve the others (e.g., to provide fairness across RDMA applications). In one such scheme, each RDMA queue is weighted equally such that each RDMA queue is serviced once per dequeue period. For example, 95% of the link capacity (shared by traffic allocated to the different queues) may be allocated to the six RDMA queues, each of which gets one sixth of that 95% (e.g., via a weighted round robin scheme with equal weights). In addition, it may be desirable to ensure that the switch fabric is not oversubscribed so that there is sufficient bandwidth to handle the traffic communicated via the switch fabric. Traffic from different customers or tenants may be assigned to the same RDMA queue, but will be differentiated based on the VLAN IDs and/or VNIs encoded in the packets.
The switch fabric 640 may use a network control traffic class for underlying IP routing protocol functions (e.g., between TOR switches). In one example, the plurality of queues of the networking device include a network control queue to carry Ethernet VPN (EVPN) traffic, which may be used to propagate MAC address information across the underlying (underlay) network and/or to advertise Virtual Tunnel Endpoint (VTEP) flood lists. Network control protocols such as Border Gateway Protocol (BGP) may be allocated the highest traffic class. For example, a network control queue may be dedicated to the network control traffic class and implemented as a strict priority queue such that it is emptied before any RDMA queues are serviced. Since network control traffic does not consume a large amount of network bandwidth, the network control traffic class may be allocated a small amount of the total bandwidth of the plurality of queues.
Fig. 9B illustrates an example 950 of multiple queues of a networking device (e.g., TOR switch 642 or 644, intermediate switch 646, etc.), which includes four RDMA queues 960-1 through 960-4 and a Network Control (NC) queue 964. As shown in fig. 9B, the plurality of queues 950 may also include a dedicated Congestion Notification (CN) queue 962 to carry CNP packets. Timely delivery of CNP packets to the sending host is critical to successful RDMA congestion management because, if they are lost, the flow control they signal will not occur. While CNP traffic is particularly sensitive to packet loss and network delay, it does not require a significant amount of network bandwidth. These requirements may be balanced by configuring CN queue 962 as a strict priority queue such that it is emptied before any RDMA queues (e.g., queues 960-1 through 960-4) are serviced, but with only a low bandwidth guarantee so that it does not starve the actual RDMA traffic classes. To prevent the congestion notification queue 962 from starving the network control queue 964 (e.g., in the event of a configuration error or other problem that results in excessive CNP traffic), it may be desirable to configure the congestion notification queue 962 to have a lower priority than the network control queue 964.
Additionally or alternatively, the plurality of queues of the networking device may also include a scavenger queue for non-RDMA traffic (e.g., other protocols such as TCP). The "scavenger" traffic class uses unused network bandwidth without adversely affecting the RDMA traffic classes. Dequeuing logic of the networking device may be configured to service the scavenger queue at a lower priority than the RDMA queues, e.g., by assigning it a lower weight (e.g., where the weight corresponds to a guaranteed bandwidth share) in a weighted round robin scheme. Fig. 9C illustrates an example 952 of the plurality of queues 950 of a networking device (e.g., TOR switches 642 or 644, intermediate switches 646, etc.) as described above, which further includes a scavenger queue 966.
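The scheduling behavior described for the plurality of queues can be sketched as follows. This is a simplified, assumption-laden model (class and attribute names are hypothetical): the network control queue is strict priority, the congestion notification queue is strict priority but limited to a small bandwidth share, the RDMA queues share bandwidth via equal-weight round robin, and the scavenger queue absorbs leftover capacity.

```python
from collections import deque

class QueueSetSketch:
    """Illustrative dequeue discipline for one networking device."""

    def __init__(self, cnp_bandwidth_fraction: float = 0.05):
        self.nc = deque()                        # network control traffic (e.g., BGP, EVPN)
        self.cn = deque()                        # congestion notification packets (CNPs)
        self.rdma = [deque() for _ in range(4)]  # RDMA queues 960-1 through 960-4
        self.scavenger = deque()                 # non-RDMA traffic (e.g., TCP)
        self.cnp_budget = cnp_bandwidth_fraction # cap on the share of slots given to CNPs
        self.cnp_credit = 0.0
        self._rr = 0                             # round-robin pointer over the RDMA queues

    def dequeue(self):
        if self.nc:                              # strict priority: network control first
            return self.nc.popleft()
        self.cnp_credit += self.cnp_budget       # leaky-bucket style limit on CNP traffic
        if self.cn and self.cnp_credit >= 1.0:
            self.cnp_credit -= 1.0
            return self.cn.popleft()
        for i in range(4):                       # equal-weight round robin over RDMA queues
            q = self.rdma[(self._rr + i) % 4]
            if q:
                self._rr = (self._rr + i + 1) % 4
                return q.popleft()
        if self.scavenger:                       # scavenger class uses leftover bandwidth
            return self.scavenger.popleft()
        if self.cn:                              # idle slots may also drain pending CNPs
            return self.cn.popleft()
        return None
```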
As described herein, RDMA packets (layer 2 RDMA packets, or layer 3 encapsulated packets carrying layer 2 RDMA packets) carry QoS values (e.g., in PCP data fields and/or in DSCP data fields) that indicate the priority (e.g., traffic class) of the packets, and the enqueuing logic of the networking device may be configured to distribute incoming packets among the multiple RDMA queues of the networking device according to their QoS values. For an example in which PCP data fields are used to carry QoS values and RDMA packets are distributed among RDMA queues 960-1 through 960-4 as shown in fig. 9B, a mapping such as the following may be used: RDMA packets with a PCP value of 6 or 7 are stored to RDMA queue 960-1, RDMA packets with a PCP value of 4 or 5 are stored to RDMA queue 960-2, RDMA packets with a PCP value of 2 or 3 are stored to RDMA queue 960-3, and RDMA packets with a PCP value of 0 or 1 are stored to RDMA queue 960-4. For an example in which DSCP data fields are used to carry QoS values and RDMA packets are distributed among RDMA queues 960-1 through 960-4 as shown in fig. 9B, a mapping such as the following may be used: RDMA packets with DSCP values in the range of 48 to 63 are stored to RDMA queue 960-1, RDMA packets with DSCP values in the range of 32 to 47 are stored to RDMA queue 960-2, RDMA packets with DSCP values in the range of 16 to 31 are stored to RDMA queue 960-3, and RDMA packets with DSCP values in the range of 0 to 15 are stored to RDMA queue 960-4. The skilled artisan will recognize that the two mappings described above are merely non-limiting examples, and that the distribution of RDMA packets among the multiple queues of a networking device may be performed (e.g., by the enqueuing logic) according to such a mapping or according to any other mapping of QoS values to RDMA queues.
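For illustration, the two example mappings above can be expressed as simple functions (queue numbers 1 through 4 correspond to RDMA queues 960-1 through 960-4); any other mapping of QoS values to queues could be substituted.

```python
def rdma_queue_for_pcp(pcp: int) -> int:
    """Example mapping: PCP 6-7 -> queue 1, 4-5 -> queue 2, 2-3 -> queue 3, 0-1 -> queue 4."""
    return 4 - (pcp // 2)

def rdma_queue_for_dscp(dscp: int) -> int:
    """Example mapping: DSCP 48-63 -> queue 1, 32-47 -> queue 2, 16-31 -> queue 3, 0-15 -> queue 4."""
    return 4 - (dscp // 16)

# Example: a packet with PCP 5 lands in queue 2; a packet with DSCP 20 lands in queue 3.
print(rdma_queue_for_pcp(5), rdma_queue_for_dscp(20))   # -> 2 3
```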
Deterministic congestion (e.g., ECN bit) marking
In some embodiments, a deterministic congestion marking scheme is used, where congestion marking is performed on a per packet basis. Thus, for each packet, if a networking device in the switch fabric experiences or detects congestion, the networking device signals congestion by marking the fields of the packet: for example by marking ECN bits in the IP header of the layer 3 encapsulation wrapper of the RoCE packet. Thus, when congestion occurs, multiple packets arriving at the destination host machine will have their congestion bits set. In response to each such packet, the destination host machine may send a CNP packet. The sender may reduce its transmission rate in response to the CNP packet. The goal is to detect congestion early so that the sender can slow down transmission early, thereby reducing the likelihood of packet drops or losses.
The architecture of the switch fabric also plays a role in reducing the latency of RoCE packets and reducing packet loss. As described above, the switch fabric may be structured as a Clos network, such as the network depicted in fig. 5 and described above. For example, in a Clos network with a two-tier topology of only tier 0 switches (TOR switches) and tier 1 switches (spine switches), a RoCE packet may reach any destination host machine from any source host machine over three hops. Minimizing the number of hops translates into very low latency, which is well suited for RoCE traffic.
In some embodiments, RDMA traffic belonging to the same flow follows the same path from the source host machine to the destination host machine, because RDMA traffic is sensitive to packet reordering. This flow-based routing avoids situations in which packets arrive out of order at the destination host machine. For example, the ingress TOR switch may be configured to maintain the order of packets in each flow via a per-flow equal-cost multi-path (ECMP) scheme (e.g., an n-way ECMP scheme, where "n" is not to be confused with the number of tiers "n" in the Clos network). The flow to which a packet belongs is generally defined by the combination of the packet's source IP address, destination IP address, source port, destination port, and protocol identifier (also referred to as a 5-tuple).
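A minimal sketch of per-flow path selection is shown below, assuming a hash over the 5-tuple; the hash function and path count are illustrative choices, not details of any particular switch.

```python
import hashlib

def ecmp_path_index(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
                    protocol: int, n_paths: int) -> int:
    """Hash the 5-tuple so that every packet of a flow selects the same equal-cost
    path, preserving packet order from source to destination."""
    five_tuple = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{protocol}".encode()
    digest = hashlib.sha256(five_tuple).digest()
    return int.from_bytes(digest[:4], "big") % n_paths

# Example: all packets of one RoCEv2 flow (UDP protocol 17, destination port 4791)
# consistently map to the same one of 8 equal-cost paths.
print(ecmp_path_index("10.0.0.5", "10.0.1.9", 49152, 4791, 17, 8))
```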
Example infrastructure as a service architecture
As noted above, infrastructure as a service (IaaS) is a particular type of cloud computing. The IaaS may be configured to provide virtualized computing resources over a public network (e.g., the internet). In the IaaS model, cloud computing providers may host infrastructure components (e.g., servers, storage devices, network nodes (e.g., hardware), deployment software, platform virtualization (e.g., hypervisor layer), etc.). In some cases, the IaaS provider may also provide various services to accompany these infrastructure components (e.g., billing, monitoring, documentation, security, load balancing, clustering, etc.). Thus, as these services may be policy driven, IaaS users may be able to implement policies to drive load balancing to maintain availability and performance of applications.
In some cases, the IaaS client may access resources and services through a Wide Area Network (WAN), such as the internet, and may use the cloud provider's services to install the remaining elements of the application stack. For example, a user may log onto the IaaS platform to create Virtual Machines (VMs), install an Operating System (OS) on each VM, deploy middleware such as databases, create buckets for workloads and backups, and even install enterprise software into that VM. The customer may then use the provider's services to perform various functions including balancing network traffic, solving application problems, monitoring performance, managing disaster recovery, and the like.
In most cases, the cloud computing model will require participation of the cloud provider. The cloud provider may, but need not, be a third party service that specifically provides (e.g., provisions, rents, sells) IaaS. An entity may also choose to deploy a private cloud, thereby becoming its own infrastructure service provider.
In some examples, an IaaS deployment is the process of placing a new application, or a new version of an application, onto a prepared application server or the like. It may also include the process of preparing the server (e.g., installing libraries, daemons, etc.). This is typically managed by the cloud provider, below the hypervisor layer (e.g., servers, storage, network hardware, and virtualization). Thus, the customer may be responsible for handling the operating system (OS), middleware, and/or application deployment (e.g., on a self-service virtual machine (e.g., one that may be spun up on demand) or the like).
In some examples, IaaS provisioning may refer to acquiring computers or virtual hosts for use, and even installing the required libraries or services on them. In most cases, deployment does not include provisioning, and provisioning may need to be performed first.
In some cases, IaaS provisioning presents two different challenges. First, there is the initial challenge of provisioning the initial set of infrastructure before anything is running. Second, once everything has been provisioned, there is the challenge of evolving the existing infrastructure (e.g., adding new services, changing services, removing services, etc.). In some cases, both of these challenges may be addressed by enabling the configuration of the infrastructure to be defined in a declarative manner. In other words, the infrastructure (e.g., which components are needed and how they interact) may be defined by one or more configuration files. Thus, the overall topology of the infrastructure (e.g., which resources depend on which resources, and how they work together) can be described in a declarative manner. In some cases, once the topology is defined, workflows may be generated that create and/or manage the different components described in the configuration files.
In some examples, an infrastructure may have many interconnected elements. For example, there may be one or more Virtual Private Clouds (VPCs) (e.g., potential on-demand pools of configurable and/or shared computing resources), also referred to as core networks. In some examples, one or more security group rules may also be supplied to define how to set security of the network and one or more Virtual Machines (VMs). Other infrastructure elements, such as load balancers, databases, etc., may also be supplied. As more and more infrastructure elements are desired and/or added, the infrastructure may evolve gradually.
In some cases, continuous deployment techniques may be employed to enable deployment of infrastructure code across various virtual computing environments. Furthermore, the described techniques may enable infrastructure management within these environments. In some examples, a service team may write code that is desired to be deployed to one or more, but typically many, different production environments (e.g., across various different geographic locations, sometimes across the entire world). However, in some examples, the infrastructure on which the code is to be deployed must first be set up. In some cases, provisioning may be done manually, resources may be provisioned with a provisioning tool, and/or code may be deployed with a deployment tool once the infrastructure is provisioned.
Fig. 11 is a block diagram 1100 illustrating an example schema of the IaaS architecture in accordance with at least one embodiment. The service operator 1102 may be communicatively coupled to a secure host lease 1104, which may include a Virtual Cloud Network (VCN) 1106 and a secure host subnet 1108. In some examples, the service operator 1102 may use one or more client computing devices, which may be portable handheld devices (e.g., a cellular phone, a computing tablet, a personal digital assistant (PDA)) or wearable devices (e.g., a Google Glass head mounted display), running software such as Microsoft Windows and/or various mobile operating systems (such as iOS, Windows Phone, Android, BlackBerry, Palm OS, and the like), and supporting the internet, email, Short Message Service (SMS), or other communication protocols. Alternatively, the client computing device may be a general purpose personal computer, including, for example, a personal computer and/or laptop computer running various versions of the Microsoft Windows, Apple Macintosh, and/or Linux operating systems. The client computing device may be a workstation computer running any of a variety of commercially available UNIX or UNIX-like operating systems, including but not limited to the variety of GNU/Linux operating systems such as, for example, Google Chrome OS. Alternatively or additionally, the client computing device may be any other electronic device, such as a thin client computer, an internet-enabled gaming system (e.g., a Microsoft Xbox game console with or without a gesture input device), and/or a personal messaging device, capable of communicating over a network that can access the VCN 1106 and/or the internet.
The VCN 1106 may include a local peering gateway (LPG) 1110, which may be communicatively coupled to a Secure Shell (SSH) VCN 1112 via the LPG 1110 contained in the SSH VCN 1112. The SSH VCN 1112 may include an SSH subnetwork 1114, and the SSH VCN 1112 may be communicatively coupled to the control plane VCN 1116 via an LPG 1110 contained in the control plane VCN 1116. Further, the SSH VCN 1112 may be communicatively coupled to the data plane VCN 1118 via the LPG 1110. The control plane VCN 1116 and the data plane VCN 1118 may be included in a service lease 1119 that may be owned and/or operated by the IaaS provider.
The control plane VCN 1116 may include a control plane demilitarized zone (DMZ) layer 1120 that serves as a peripheral network (e.g., a portion of a corporate network between a corporate intranet and an external network). DMZ-based servers can assume limited responsibility and help control vulnerabilities. Further, DMZ layer 1120 may include one or more Load Balancer (LB) subnets 1122, a control plane application (app) layer 1124 that may include application (app) subnet(s) 1126, a control plane data layer 1128 that may include Database (DB) subnet(s) 1130 (e.g., front end DB subnet(s) and/or back end DB subnet (s)). LB subnet(s) 1122 contained in control plane DMZ layer 1120 may be communicatively coupled to application subnet(s) 1126 contained in control plane application layer 1124 and internet gateway 1134 that may be contained in control plane VCN 1116, and application subnet(s) 1126 may be communicatively coupled to DB subnet(s) 1130 and serving gateway 1136 and Network Address Translation (NAT) gateway 1138 contained in control plane data layer 1128. Control plane VCN 1116 may include a serving gateway 1136 and a NAT gateway 1138.
The control plane VCN 1116 may include a data plane mirror application (app) layer 1140, which may include application subnet(s) 1126. The application subnet(s) 1126 contained in the data plane mirror application layer 1140 can include Virtual Network Interface Controllers (VNICs) 1142 that can execute computing instances 1144. The computing instance 1144 may communicatively couple the application subnet(s) 1126 of the data plane mirror application layer 1140 to the application subnet(s) 1126 that may be included in the data plane application (app) layer 1146.
Data plane VCN 1118 may include a data plane application layer 1146, a data plane DMZ layer 1148, and a data plane data layer 1150. Data plane DMZ layer 1148 may include LB subnet(s) 1122, which may be communicatively coupled to application subnet(s) 1126 of data plane application layer 1146 and internet gateway 1134 of data plane VCN 1118. Application subnet(s) 1126 may be communicatively coupled to a serving gateway 1136 of data plane VCN 1118 and NAT gateway 1138 of data plane VCN 1118. Data plane data layer 1150 may also include DB subnetwork(s) 1130 that may be communicatively coupled to application subnetwork(s) 1126 of data plane application layer 1146.
The control plane VCN 1116 and the internet gateway 1134 of the data plane VCN 1118 may be communicatively coupled to a metadata management service 1152, and the metadata management service 1152 may be communicatively coupled to the public internet 1154. Public internet 1154 may be communicatively coupled to NAT gateway 1138 of control plane VCN 1116 and data plane VCN 1118. The control plane VCN 1116 and the service gateway 1136 of the data plane VCN 1118 may be communicatively coupled to a cloud service 1156.
In some examples, the control plane VCN 1116 or the service gateway 1136 of the data plane VCN 1118 may make Application Programming Interface (API) calls to the cloud services 1156 without going through the public internet 1154. The API call from service gateway 1136 to cloud service 1156 may be unidirectional: the service gateway 1136 may make API calls to the cloud service 1156, and the cloud service 1156 may send the requested data to the service gateway 1136. However, cloud service 1156 may not initiate an API call to service gateway 1136.
In some examples, the secure host lease 1104 may be directly connected to the service lease 1119, which may otherwise be isolated. The secure host subnetwork 1108 may communicate with the SSH subnetwork 1114 through the LPG 1110, which may enable bi-directional communication between otherwise isolated systems. Connecting the secure host subnet 1108 to the SSH subnet 1114 may enable the secure host subnet 1108 to access other entities within the service lease 1119.
The control plane VCN 1116 may allow a user of the service lease 1119 to set or otherwise provision desired resources. The desired resources provisioned in the control plane VCN 1116 may be deployed or otherwise used in the data plane VCN 1118. In some examples, the control plane VCN 1116 may be isolated from the data plane VCN 1118, and the data plane mirror application layer 1140 of the control plane VCN 1116 may communicate with the data plane application layer 1146 of the data plane VCN 1118 via VNICs 1142, which VNICs 1142 may be included in the data plane mirror application layer 1140 and the data plane application layer 1146.
In some examples, a user or customer of the system may make a request, such as a create, read, update, or delete (CRUD) operation, through the public internet 1154, which may communicate the request to the metadata management service 1152. The metadata management service 1152 may communicate the request to the control plane VCN 1116 via the internet gateway 1134. The request may be received by the LB subnet(s) 1122 contained in the control plane DMZ layer 1120. The LB subnet(s) 1122 may determine that the request is valid and, in response to the determination, the LB subnet(s) 1122 may transmit the request to the application subnet(s) 1126 contained in the control plane application layer 1124. If the request is authenticated and a call to the public internet 1154 is required, the call to the public internet 1154 may be transmitted to the NAT gateway 1138, which may make the call to the public internet 1154. Data that the request desires to have stored may be stored in the DB subnet(s) 1130.
In some examples, the data plane mirror application layer 1140 may facilitate direct communication between the control plane VCN 1116 and the data plane VCN 1118. For example, it may be desirable to apply changes, updates, or other suitable modifications to the configuration to the resources contained in the data plane VCN 1118. Via the VNIC 1142, the control plane VCN 1116 may communicate directly with resources contained in the data plane VCN 1118 and, thus, may perform changes, updates, or other appropriate modifications to the configuration.
In some embodiments, the control plane VCN 1116 and the data plane VCN 1118 may be included in a service lease 1119. In this case, the user or customer of the system may not own or operate the control plane VCN 1116 or the data plane VCN 1118. Alternatively, the IaaS provider may own or operate the control plane VCN 1116 and the data plane VCN 1118, both of which may be contained in the service lease 1119. This embodiment may enable isolation of networks that may prevent a user or customer from interacting with other users or other customers' resources. Furthermore, this embodiment may allow users or clients of the system to store databases privately without relying on the public internet 1154 for storage that may not have the desired level of security.
In other embodiments, the LB subnet(s) 1122 contained in the control plane VCN 1116 may be configured to receive a signal from the service gateway 1136. In this embodiment, the control plane VCN 1116 and the data plane VCN 1118 may be configured to be invoked by customers of the IaaS provider without invoking the public internet 1154. This embodiment may be desirable to customers of the IaaS provider because the database(s) used by the customers may be controlled by the IaaS provider and may be stored on the service lease 1119, which may be isolated from the public internet 1154.
Fig. 12 is a block diagram 1200 illustrating another example mode of the IaaS architecture in accordance with at least one embodiment. Service operator 1202 (e.g., service operator 1102 of fig. 11) may be communicatively coupled to secure host lease 1204 (e.g., secure host lease 1104 of fig. 11), which secure host lease 1204 may include Virtual Cloud Network (VCN) 1206 (e.g., VCN 1106 of fig. 11) and secure host subnetwork 1208 (e.g., secure host subnetwork 1108 of fig. 11). The VCN 1206 may include a local peering gateway (LPG) 1210 (e.g., LPG 1110 of fig. 11) that may be communicatively coupled to a Secure Shell (SSH) VCN 1212 (e.g., SSH VCN 1112 of fig. 11) via the LPG 1210 contained in the SSH VCN 1212. The SSH VCN 1212 may include an SSH subnetwork 1214 (e.g., SSH subnetwork 1114 of fig. 11), and the SSH VCN 1212 may be communicatively coupled to the control plane VCN 1216 (e.g., control plane VCN 1116 of fig. 11) via an LPG 1210 contained in the control plane VCN 1216. The control plane VCN 1216 may be included in a service lease 1219 (e.g., service lease 1119 of fig. 11), and the data plane VCN 1218 (e.g., data plane VCN 1118 of fig. 11) may be included in a customer lease 1221 that may be owned or operated by a user or customer of the system.
Control plane VCN 1216 may include a control plane DMZ layer 1220 (e.g., control plane DMZ layer 1120 of fig. 11), which may include LB subnet(s) 1222 (e.g., LB subnet(s) 1122 of fig. 11), a control plane application (app) layer 1224 (e.g., control plane application layer 1124 of fig. 11) which may include application (app) subnet(s) 1226 (e.g., application subnet(s) 1126 of fig. 11), and a control plane data layer 1228 (e.g., control plane data layer 1128 of fig. 11) which may include Database (DB) subnet(s) 1230 (e.g., similar to DB subnet(s) 1130 of fig. 11). The LB subnetwork(s) 1222 included in the control plane DMZ layer 1220 may be communicatively coupled to the application subnetwork(s) 1226 included in the control plane application layer 1224 and the internet gateway 1234 (e.g., the internet gateway 1134 of fig. 11) that may be included in the control plane VCN 1216, and the application subnetwork(s) 1226 may be communicatively coupled to the DB subnetwork(s) 1230 included in the control plane data layer 1228 and the serving gateway 1236 (e.g., the serving gateway of fig. 11) and the Network Address Translation (NAT) gateway 1238 (e.g., the NAT gateway 1138 of fig. 11). Control plane VCN 1216 may include a serving gateway 1236 and a NAT gateway 1238.
The control plane VCN 1216 may include a data plane mirror application (app) layer 1240 (e.g., data plane mirror application layer 1140 of fig. 11) that may include application subnet(s) 1226. The application subnet(s) 1226 included in the data plane mirror application layer 1240 can include Virtual Network Interface Controllers (VNICs) 1242 (e.g., VNICs 1142 of fig. 11) that can execute computing instance 1244 (e.g., similar to computing instance 1144 of fig. 11). The compute instance 1244 may facilitate communication between the application subnet(s) 1226 of the data plane mirror application layer 1240 and the application subnet(s) 1226 that may be included in the data plane application (app) layer 1246 (e.g., the data plane application layer 1146 of fig. 11) via the VNICs 1242 included in the data plane mirror application layer 1240 and the VNICs 1242 included in the data plane application layer 1246.
The internet gateway 1234 contained in the control plane VCN 1216 may be communicatively coupled to a metadata management service 1252 (e.g., the metadata management service 1152 of fig. 11), and the metadata management service 1252 may be communicatively coupled to the public internet 1254 (e.g., the public internet 1154 of fig. 11). Public internet 1254 may be communicatively coupled to NAT gateway 1238 contained in control plane VCN 1216. The service gateway 1236 contained in the control plane VCN 1216 may be communicatively coupled to a cloud service 1256 (e.g., cloud service 1156 of fig. 11).
In some examples, the data plane VCN 1218 may be included in a customer lease 1221. In this case, the IaaS provider may provide a control plane VCN 1216 for each customer, and the IaaS provider may set a unique compute instance 1244 contained in the service lease 1219 for each customer. Each computing instance 1244 may allow communication between control plane VCN 1216 contained in service lease 1219 and data plane VCN 1218 contained in customer lease 1221. Computing instance 1244 may allow resources provisioned in control plane VCN 1216 contained in service lease 1219 to be deployed or otherwise used in data plane VCN 1218 contained in customer lease 1221.
In other examples, a customer of the IaaS provider may have databases that reside in the customer lease 1221. In this example, the control plane VCN 1216 may include the data plane mirror application layer 1240, which may include application subnet(s) 1226. The data plane mirror application layer 1240 may reside in the data plane VCN 1218, but the data plane mirror application layer 1240 may not live in the data plane VCN 1218. That is, the data plane mirror application layer 1240 may have access to the customer lease 1221, but the data plane mirror application layer 1240 may not exist in the data plane VCN 1218 or be owned or operated by the customer of the IaaS provider. The data plane mirror application layer 1240 may be configured to make calls to the data plane VCN 1218, but may not be configured to make calls to any entity contained in the control plane VCN 1216. The customer may desire to deploy or otherwise use, in the data plane VCN 1218, resources that are provisioned in the control plane VCN 1216, and the data plane mirror application layer 1240 may facilitate the customer's desired deployment or other use of resources.
In some embodiments, a customer of the IaaS provider may apply a filter to the data plane VCN 1218. In this embodiment, the customer may determine what the data plane VCN 1218 may access, and the customer may restrict access to the public Internet 1254 from the data plane VCN 1218. The IaaS provider may not be able to apply filters or otherwise control access of the data plane VCN 1218 to any external networks or databases. Application of filters and controls by customers to the data plane VCN 1218 contained in customer lease 1221 may help isolate the data plane VCN 1218 from other customers and public internet 1254.
In some embodiments, cloud services 1256 may be invoked by service gateway 1236 to access services that may not exist on public internet 1254, control plane VCN 1216, or data plane VCN 1218. The connection between cloud services 1256 and control plane VCN 1216 or data plane VCN 1218 may not be real-time or continuous. Cloud services 1256 may exist on a different network owned or operated by the IaaS provider. Cloud services 1256 may be configured to receive calls from service gateway 1236 and may be configured not to receive calls from public internet 1254. Some cloud services 1256 may be isolated from other cloud services 1256, and control plane VCN 1216 may be isolated from cloud services 1256 that may not be in the same region as control plane VCN 1216. For example, control plane VCN 1216 may be located in "Region 1", and the cloud service "Deployment 11" may be located in Region 1 and in "Region 2". If a service gateway 1236 contained in control plane VCN 1216 located in Region 1 makes a call to Deployment 11, the call may be transmitted to Deployment 11 in Region 1. In this example, control plane VCN 1216, or Deployment 11 in Region 1, may not be communicatively coupled or otherwise in communication with Deployment 11 in Region 2.
Fig. 13 is a block diagram 1300 illustrating another example mode of the IaaS architecture in accordance with at least one embodiment. Service operator 1302 (e.g., service operator 1102 of fig. 11) may be communicatively coupled to secure host lease 1304 (e.g., secure host lease 1104 of fig. 11), which secure host lease 1304 may include Virtual Cloud Network (VCN) 1306 (e.g., VCN 1106 of fig. 11) and secure host subnetwork 1308 (e.g., secure host subnetwork 1108 of fig. 11). The VCN 1306 may include an LPG 1310 (e.g., LPG 1110 of fig. 11) that may be communicatively coupled to an SSH VCN 1312 (e.g., SSH VCN 1112 of fig. 11) via the LPG 1310 contained in the SSH VCN 1312. The SSH VCN 1312 may include an SSH subnetwork 1314 (e.g., SSH subnetwork 1114 of fig. 11), and the SSH VCN 1312 may be communicatively coupled to the control plane VCN 1316 (e.g., control plane VCN 1116 of fig. 11) via the LPG 1310 contained in the control plane VCN 1316 and to the data plane VCN 1318 (e.g., data plane 1118 of fig. 11) via the LPG 1310 contained in the data plane VCN 1318. The control plane VCN 1316 and the data plane VCN 1318 may be included in a service lease 1319 (e.g., service lease 1119 of fig. 11).
The control plane VCN 1316 may include a control plane DMZ layer 1320 (e.g., control plane DMZ layer 1120 of fig. 11) that may include Load Balancer (LB) subnet(s) 1322 (e.g., LB subnet(s) 1122 of fig. 11), a control plane application (app) layer 1324 (e.g., control plane application layer 1124 of fig. 11) that may include application (app) subnet(s) 1326 (e.g., similar to application subnet(s) 1126 of fig. 11), and a control plane data layer 1328 (e.g., control plane data layer 1128 of fig. 11) that may include DB subnet(s) 1330. The LB subnet(s) 1322 included in the control plane DMZ layer 1320 may be communicatively coupled to the application subnet(s) 1326 included in the control plane application layer 1324 and the internet gateway 1334 (e.g., the internet gateway 1134 of fig. 11) that may be included in the control plane VCN 1316, and the application subnet(s) 1326 may be communicatively coupled to the DB subnet(s) 1330 and the service gateway 1336 (e.g., the service gateway of fig. 11) and the Network Address Translation (NAT) gateway 1338 (e.g., the NAT gateway 1138 of fig. 11) included in the control plane data layer 1328. Control plane VCN 1316 may include a serving gateway 1336 and a NAT gateway 1338.
The data plane VCN 1318 may include a data plane application (app) layer 1346 (e.g., data plane application layer 1146 of fig. 11), a data plane DMZ layer 1348 (e.g., data plane DMZ layer 1148 of fig. 11), and a data plane data layer 1350 (e.g., data plane data layer 1150 of fig. 11). The data plane DMZ layer 1348 may include LB subnet(s) 1322 that may be communicatively coupled to the trusted application (app) subnet(s) 1360 and untrusted application (app) subnet(s) 1362 of the data plane application layer 1346 and to the internet gateway 1334 contained in the data plane VCN 1318. Trusted application subnet(s) 1360 may be communicatively coupled to the service gateway 1336 contained in the data plane VCN 1318, the NAT gateway 1338 contained in the data plane VCN 1318, and the DB subnet(s) 1330 contained in the data plane data layer 1350. Untrusted application subnet(s) 1362 may be communicatively coupled to the service gateway 1336 contained in the data plane VCN 1318 and the DB subnet(s) 1330 contained in the data plane data layer 1350. The data plane data layer 1350 may include DB subnet(s) 1330 that may be communicatively coupled to the service gateway 1336 contained in the data plane VCN 1318.
The untrusted application subnet(s) 1362 may include one or more primary VNICs 1364 (1) - (N) that may be communicatively coupled to tenant Virtual Machines (VMs) 1366 (1) - (N). Each tenant VM 1366 (1) - (N) may be communicatively coupled to a respective application (app) subnet 1367 (1) - (N) that may be contained in a respective container outlet VCN 1368 (1) - (N), which may be contained in a respective customer lease 1370 (1) - (N). The respective auxiliary VNICs 1372 (1) - (N) may facilitate communication between the untrusted application subnet(s) 1362 contained in the data plane VCN 1318 and the application subnets contained in the container egress VCNs 1368 (1) - (N). Each container egress VCN 1368 (1) - (N) may include a NAT gateway 1338, which NAT gateway 1338 may be communicatively coupled to a public internet 1354 (e.g., public internet 1154 of fig. 11).
An internet gateway 1334 included in the control plane VCN 1316 and included in the data plane VCN 1318 may be communicatively coupled to a metadata management service 1352 (e.g., the metadata management system 1152 of fig. 11), which metadata management service 1352 may be communicatively coupled to the public internet 1354. Public internet 1354 may be communicatively coupled to NAT gateway 1338 contained in control plane VCN 1316 and contained in data plane VCN 1318. The service gateway 1336 contained in the control plane VCN 1316 and contained in the data plane VCN 1318 may be communicatively coupled to the cloud service 1356.
In some embodiments, the data plane VCN 1318 may be integrated with the customer lease 1370. In some cases, such as where support may be desired while executing code, such integration may be useful or desirable to customers of the IaaS provider. The customer may provide code to run that may be destructive, that may communicate with other customer resources, or that may otherwise cause undesirable effects. In response, the IaaS provider may determine whether to run the code given to the IaaS provider by the customer.
In some examples, a customer of the IaaS provider may grant temporary network access to the IaaS provider and request a function to be attached to the data plane application layer 1346. The code to run the function may be executed in VMs 1366 (1) - (N), and the code may not be configured to run anywhere else on the data plane VCN 1318. Each VM 1366 (1) - (N) may be connected to one customer lease 1370. Respective containers 1371 (1) - (N) contained in the VMs 1366 (1) - (N) may be configured to run the code. In this case, there may be dual isolation (e.g., the containers 1371 (1) - (N) running the code, where the containers 1371 (1) - (N) may be contained at least in the VMs 1366 (1) - (N) contained in the untrusted application subnet(s) 1362), which may help prevent incorrect or otherwise undesirable code from damaging the IaaS provider's network or damaging the network of a different customer. The containers 1371 (1) - (N) may be communicatively coupled to the customer lease 1370 and may be configured to transmit data to or receive data from the customer lease 1370. The containers 1371 (1) - (N) may not be configured to transmit or receive data from any other entity in the data plane VCN 1318. Upon completion of running the code, the IaaS provider may terminate or otherwise dispose of the containers 1371 (1) - (N).
In some embodiments, trusted application subnet(s) 1360 may run code that may be owned or operated by the IaaS provider. In this embodiment, trusted application subnet(s) 1360 may be communicatively coupled to DB subnet(s) 1330 and configured to perform CRUD operations in DB subnet(s) 1330. The untrusted application subnet(s) 1362 may be communicatively coupled to the DB subnet(s) 1330, but in this embodiment, the untrusted application subnet(s) may be configured to perform read operations in the DB subnet(s) 1330. Containers 1371 (1) - (N), which may be included in VMs 1366 (1) - (N) of each customer and may run code from the customer, may not be communicatively coupled with DB subnet(s) 1330.
In other embodiments, the control plane VCN 1316 and the data plane VCN 1318 may not be directly communicatively coupled. In this embodiment, there may be no direct communication between the control plane VCN 1316 and the data plane VCN 1318. However, communication may occur indirectly through at least one method. LPG 1310 may be established by IaaS providers, which may facilitate communication between control plane VCN 1316 and data plane VCN 1318. In another example, the control plane VCN 1316 or the data plane VCN 1318 may invoke the cloud service 1356 via the service gateway 1336. For example, a call from the control plane VCN 1316 to the cloud service 1356 may include a request for a service that may communicate with the data plane VCN 1318.
Fig. 14 is a block diagram 1400 illustrating another example mode of the IaaS architecture in accordance with at least one embodiment. The service operator 1402 (e.g., the service operator 1102 of fig. 11) may be communicatively coupled to a secure host lease 1404 (e.g., the secure host lease 1104 of fig. 11), which secure host lease 1404 may include a Virtual Cloud Network (VCN) 1406 (e.g., the VCN 1106 of fig. 11) and a secure host subnet 1408 (e.g., the secure host subnet 1108 of fig. 11). The VCN 1406 may include an LPG 1410 (e.g., the LPG 1110 of fig. 11), which LPG 1410 may be communicatively coupled to the SSH VCN 1412 via the LPG 1410 contained in the SSH VCN 1412 (e.g., the SSH VCN 1112 of fig. 11). The SSH VCN 1412 may include an SSH subnetwork 1414 (e.g., the SSH subnetwork 1114 of fig. 11), and the SSH VCN 1412 may be communicatively coupled to the control plane VCN 1416 (e.g., the control plane VCN 1116 of fig. 11) via the LPG 1410 contained in the control plane VCN 1416 and to the data plane VCN 1418 (e.g., the data plane 1118 of fig. 11) via the LPG 1410 contained in the data plane VCN 1418. The control plane VCN 1416 and the data plane VCN 1418 may be included in a service lease 1419 (e.g., service lease 1119 of fig. 11).
The control plane VCN 1416 may include a control plane DMZ layer 1420 (e.g., control plane DMZ layer 1120 of fig. 11) that may include LB subnet(s) 1422 (e.g., LB subnet(s) 1122 of fig. 11), a control plane application (app) layer 1424 (e.g., control plane application layer 1124 of fig. 11) that may include application (app) subnet(s) 1426 (e.g., application subnet(s) 1126 of fig. 11), and a control plane data layer 1428 (e.g., control plane data layer 1128 of fig. 11) that may include DB subnet(s) 1430 (e.g., DB subnet(s) 1330 of fig. 13). The LB subnet(s) 1422 included in the control plane DMZ layer 1420 may be communicatively coupled to the application subnet(s) 1426 included in the control plane application layer 1424 and to the internet gateway 1434 (e.g., the internet gateway 1134 of fig. 11) that may be included in the control plane VCN 1416, and the application subnet(s) 1426 may be communicatively coupled to the DB subnet(s) 1430 included in the control plane data layer 1428, the service gateway 1436 (e.g., the service gateway of fig. 11), and the Network Address Translation (NAT) gateway 1438 (e.g., the NAT gateway 1138 of fig. 11). The control plane VCN 1416 may include the service gateway 1436 and the NAT gateway 1438.
Data plane VCN 1418 may include a data plane application (app) layer 1446 (e.g., data plane application layer 1146 of fig. 11), a data plane DMZ layer 1448 (e.g., data plane DMZ layer 1148 of fig. 11), and a data plane data layer 1450 (e.g., data plane data layer 1150 of fig. 11). The data plane DMZ layer 1448 may include LB subnet(s) 1422 that may be communicatively coupled to trusted application (app) subnet(s) 1460 (e.g., trusted application subnet(s) 1360 of fig. 13) and untrusted application (app) subnet(s) 1462 (e.g., untrusted application subnet(s) 1362 of fig. 13) of the data plane application layer 1446 and to the internet gateway 1434 contained in the data plane VCN 1418. Trusted application subnet(s) 1460 may be communicatively coupled to the service gateway 1436 contained in the data plane VCN 1418, the NAT gateway 1438 contained in the data plane VCN 1418, and the DB subnet(s) 1430 contained in the data plane data layer 1450. The untrusted application subnet(s) 1462 may be communicatively coupled to the service gateway 1436 contained in the data plane VCN 1418 and the DB subnet(s) 1430 contained in the data plane data layer 1450. The data plane data layer 1450 may include DB subnet(s) 1430 that may be communicatively coupled to the service gateway 1436 contained in the data plane VCN 1418.
The untrusted application subnet(s) 1462 may include host VNICs 1464(1)-(N) that may be communicatively coupled to tenant virtual machines (VMs) 1466(1)-(N) residing within the untrusted application subnet(s) 1462. Each tenant VM 1466(1)-(N) may run code in a respective container 1467(1)-(N) and be communicatively coupled to an application subnet 1426 that may be contained in a data plane application layer 1446 that may be contained in a container egress VCN 1468. Respective auxiliary VNICs 1472(1)-(N) may facilitate communication between the untrusted application subnet(s) 1462 contained in the data plane VCN 1418 and the application subnet contained in the container egress VCN 1468. The container egress VCN 1468 may include a NAT gateway 1438 that may be communicatively coupled to the public internet 1454 (e.g., public internet 1154 of fig. 11).
The internet gateway 1434 contained in the control plane VCN 1416 and contained in the data plane VCN 1418 may be communicatively coupled to a metadata management service 1452 (e.g., the metadata management system 1152 of fig. 11), which metadata management service 1452 may be communicatively coupled to the public internet 1454. Public internet 1454 may be communicatively coupled to NAT gateway 1438 contained in control plane VCN 1416 and contained in data plane VCN 1418. The service gateway 1436 contained in the control plane VCN 1416 and contained in the data plane VCN 1418 may be communicatively coupled to the cloud service 1456.
In some examples, the pattern illustrated by the architecture of block diagram 1400 of fig. 14 may be considered an exception to the pattern illustrated by the architecture of block diagram 1300 of fig. 13, and such a pattern may be desirable to a customer of the IaaS provider if the IaaS provider cannot directly communicate with the customer (e.g., a disconnected region). The customers may access the respective containers 1467(1)-(N) that are contained in the VMs 1466(1)-(N) of each customer in real time. The containers 1467(1)-(N) may be configured to invoke respective auxiliary VNICs 1472(1)-(N) contained in the application subnet(s) 1426 of the data plane application layer 1446, which may be contained in the container egress VCN 1468. The auxiliary VNICs 1472(1)-(N) may transmit the calls to the NAT gateway 1438, and the NAT gateway 1438 may transmit the calls to the public internet 1454. In this example, the containers 1467(1)-(N), which may be accessed by the customers in real time, may be isolated from the control plane VCN 1416 and may be isolated from other entities contained in the data plane VCN 1418. The containers 1467(1)-(N) may also be isolated from the resources of other customers.
In other examples, a customer may use the containers 1467(1)-(N) to invoke the cloud service 1456. In this example, the customer may run code in the containers 1467(1)-(N) that requests a service from the cloud service 1456. The containers 1467(1)-(N) may transmit the request to the auxiliary VNICs 1472(1)-(N), and the auxiliary VNICs 1472(1)-(N) may transmit the request to the NAT gateway 1438, which may transmit the request to the public internet 1454. The public internet 1454 may transmit the request to the LB subnet(s) 1422 contained in the control plane VCN 1416 via the internet gateway 1434. In response to determining that the request is valid, the LB subnet(s) may transmit the request to the application subnet(s) 1426, and the application subnet(s) 1426 may transmit the request to the cloud service 1456 via the service gateway 1436.
It should be appreciated that the IaaS architecture 1100, 1200, 1300, 1400 depicted in the figures may have other components than those depicted. Additionally, the embodiments shown in the figures are merely some examples of cloud infrastructure systems that may incorporate embodiments of the present disclosure. In some other embodiments, the IaaS system may have more or fewer components than shown in the figures, may combine two or more components, or may have a different configuration or arrangement of components.
In certain embodiments, the IaaS system described herein may include application suites, middleware, and database service products that are delivered to customers in a self-service, subscription-based, elastically extensible, reliable, highly available, and secure manner. An example of such an IaaS system is the Oracle Cloud Infrastructure (OCI) offered by the present assignee.
FIG. 15 illustrates an example computer system 1500 in which various embodiments may be implemented. The system 1500 may be used to implement any of the computer systems described above. As shown, computer system 1500 includes a processing unit 1504 that communicates with a number of peripheral subsystems via a bus subsystem 1502. These peripheral subsystems may include a processing acceleration unit 1506, an I/O subsystem 1508, a storage subsystem 1518, and a communication subsystem 1524. Storage subsystem 1518 includes tangible computer-readable storage media 1522 and system memory 1510.
Bus subsystem 1502 provides a mechanism for letting the various components and subsystems of computer system 1500 communicate with each other as intended. Although bus subsystem 1502 is shown schematically as a single bus, alternative embodiments of the bus subsystem may utilize multiple buses. Bus subsystem 1502 may be any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. Such architectures can include an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Enhanced ISA (EISA) bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnect (PCI) bus, which can be implemented, for example, as a Mezzanine bus manufactured to the IEEE P1386.1 standard.
The processing unit 1504, which may be implemented as one or more integrated circuits (e.g., a conventional microprocessor or microcontroller), controls the operation of the computer system 1500. One or more processors may be included in the processing unit 1504. These processors may include single-core or multi-core processors. In some embodiments, the processing unit 1504 may be implemented as one or more separate processing units 1532 and/or 1534, each including a single or multi-core processor therein. In other embodiments, the processing unit 1504 may also be implemented as a four-core processing unit formed by integrating two dual-core processors into a single chip.
In various embodiments, the processing unit 1504 may execute various programs in response to program code and may maintain multiple concurrently executing programs or processes. At any given time, some or all of the program code to be executed may reside in the processor(s) 1504 and/or in the storage subsystem 1518. The processor(s) 1504 can provide the various functions described above, through appropriate programming. The computer system 1500 may additionally include a processing acceleration unit 1506, which may include a Digital Signal Processor (DSP), special-purpose processor, and the like.
The I/O subsystem 1508 may include user interface input devices and user interface output devices. The user interface input devices may include a keyboard, a pointing device such as a mouse or trackball, a touch pad or touch screen incorporated into a display, a scroll wheel, a click wheel, dials, buttons, switches, a keypad, an audio input device with a voice command recognition system, a microphone, and other types of input devices. The user interface input devices may also include, for example, motion sensing and/or gesture recognition devices, such as motion sensors that enable a user to control an input device, such as a game controller, and interact with it. The user interface input devices may also include eye gesture recognition devices, such as blink detectors that detect eye activity from a user (e.g., "blinking" while taking pictures and/or making a menu selection) and transform the eye gestures into input to an input device. Furthermore, the user interface input devices may include voice recognition sensing devices that enable a user to interact with a voice recognition system through voice commands.
User interface input devices may also include, but are not limited to, three-dimensional (3D) mice, joysticks or pointing sticks, game pads and drawing tablets, as well as audio/video devices such as speakers, digital cameras, digital video cameras, portable media players, webcams, image scanners, fingerprint scanners, bar code readers, 3D scanners, 3D printers, laser rangefinders, and gaze tracking devices. Further, the user interface input devices may include, for example, medical imaging input devices such as computed tomography, magnetic resonance imaging, positron emission tomography, and medical ultrasonography devices. The user interface input devices may also include, for example, audio input devices such as MIDI keyboards, digital musical instruments, and the like.
The user interface output device may include a display subsystem, an indicator light, or a non-visual display such as an audio output device, or the like. The display subsystem may be a Cathode Ray Tube (CRT), a flat panel device such as one using a Liquid Crystal Display (LCD) or a plasma display, a projection device, a touch screen, or the like. In general, use of the term "output device" is intended to include all possible types of devices and mechanisms for outputting information from computer system 1500 to a user or other computer. For example, user interface output devices may include, but are not limited to, various display devices that visually convey text, graphics, and audio/video information, such as monitors, printers, speakers, headphones, car navigation systems, plotters, voice output devices, and modems.
Computer system 1500 may include a storage subsystem 1518 containing software elements, shown as being currently located in system memory 1510. The system memory 1510 may store program instructions that are loadable and executable on the processing unit 1504, as well as data generated during the execution of these programs.
Depending on the configuration and type of computer system 1500, system memory 1510 may be volatile (such as Random Access Memory (RAM)) and/or non-volatile (such as Read-Only Memory (ROM), flash memory, etc.). RAM typically contains data and/or program modules that are immediately accessible to and/or presently being operated on and executed by processing unit 1504. In some implementations, the system memory 1510 may include a variety of different types of memory, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM). In some implementations, a Basic Input/Output System (BIOS), containing the basic routines that help to transfer information between elements within computer system 1500, such as during start-up, may be stored in ROM. By way of example, and not limitation, system memory 1510 also illustrates application programs 1512, which may include client applications, web browsers, middle-tier applications, relational database management systems (RDBMS), and the like, program data 1514, and operating system 1516. By way of example, operating system 1516 may include various versions of the Microsoft Windows, Apple Macintosh, and/or Linux operating systems, a variety of commercially available UNIX or UNIX-like operating systems (including but not limited to the various GNU/Linux operating systems, Google Chrome OS, and the like), and/or mobile operating systems such as iOS, Windows Phone, Android OS, BlackBerry OS, and Palm OS operating systems.
Storage subsystem 1518 may also provide a tangible computer-readable storage medium for storing basic programming and data structures that provide the functionality of some embodiments. Software (programs, code modules, instructions) that when executed by a processor provide the functionality described above may be stored in storage subsystem 1518. These software modules or instructions may be executed by the processing unit 1504. Storage subsystem 1518 may also provide a repository for storing data used in accordance with the present disclosure.
The storage subsystem 1518 may also include a computer-readable storage media reader 1520 that may be further connected to a computer-readable storage media 1522. Along with and optionally in conjunction with system memory 1510, computer-readable storage media 1522 may comprehensively represent remote, local, fixed, and/or removable storage devices plus storage media for temporarily and/or more permanently containing, storing, transmitting, and retrieving computer-readable information.
The computer-readable storage medium 1522 containing the code or portions of code may also include any suitable medium known or used in the art including storage media and communication media such as, but not limited to, volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage and/or transmission of information. This may include tangible computer-readable storage media such as RAM, ROM, electrically Erasable Programmable ROM (EEPROM), flash memory or other memory technology, CD-ROM, digital Versatile Disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or other tangible computer-readable media. This may also include non-tangible computer-readable media, such as data signals, data transmissions, or any other medium that may be used to transmit desired information and that may be accessed by computing system 1500.
For example, computer-readable storage media 1522 may include a hard disk drive that reads from or writes to non-removable, nonvolatile magnetic media, a magnetic disk drive that reads from or writes to a removable, nonvolatile magnetic disk, and an optical disk drive that reads from or writes to a removable, nonvolatile optical disk (such as a CD-ROM, DVD, Blu-ray disk, or other optical media). The computer-readable storage media 1522 may include, but is not limited to, Zip drives, flash memory cards, Universal Serial Bus (USB) flash drives, Secure Digital (SD) cards, DVD discs, digital audio tape, and the like. The computer-readable storage media 1522 may also include non-volatile memory-based Solid State Drives (SSDs) (such as flash-memory-based SSDs, enterprise flash drives, solid state ROM, etc.), volatile memory-based SSDs (such as solid state RAM, dynamic RAM, static RAM), DRAM-based SSDs, Magnetoresistive RAM (MRAM) SSDs, and hybrid SSDs that use a combination of DRAM- and flash-memory-based SSDs. The disk drives and their associated computer-readable media may provide nonvolatile storage of computer-readable instructions, data structures, program modules, and other data for computer system 1500.
Communication subsystem 1524 provides an interface to other computer systems and networks. The communication subsystem 1524 serves as an interface for receiving data from and transmitting data to other systems from the computer system 1500. For example, communication subsystem 1524 may enable computer system 1500 to connect to one or more devices via the internet. In some embodiments, the communication subsystem 1524 may include Radio Frequency (RF) transceiver components for accessing wireless voice and/or data networks (e.g., advanced data network technology using cellular telephone technology, such as 3G, 4G, or EDGE (enhanced data rates for global evolution), wiFi (IEEE 802.11 family standards), or other mobile communication technologies, or any combination thereof), global Positioning System (GPS) receiver components, and/or other components. In some embodiments, the communication subsystem 1524 may provide wired network connectivity (e.g., ethernet) in addition to or in lieu of a wireless interface.
In some embodiments, the communication subsystem 1524 may also receive input communications in the form of structured and/or unstructured data feeds 1526, event streams 1528, event updates 1530, and the like, on behalf of one or more users who may use the computer system 1500.
For example, the communication subsystem 1524 may be configured to receive data feeds 1526 in real time from users of social media networks and/or other communication services, such as Twitter feeds, Facebook updates, web feeds such as Rich Site Summary (RSS) feeds, and/or real-time updates from one or more third-party information sources.
In addition, the communication subsystem 1524 may also be configured to receive data in the form of continuous data streams, which may include event streams 1528 of real-time events and/or event updates 1530, and which may be continuous or unbounded in nature without explicit termination. Examples of applications that generate continuous data may include, for example, sensor data applications, financial tickers, network performance measurement tools (e.g., network monitoring and traffic management applications), click stream analysis tools, automobile traffic monitoring, and so forth.
The communication subsystem 1524 may also be configured to output structured and/or unstructured data feeds 1526, event streams 1528, event updates 1530, and the like, to one or more databases, which may be in communication with one or more streaming data source computers coupled to the computer system 1500.
The computer system 1500 may be one of various types, including a handheld portable device (e.g., a cellular phone, a computing tablet, a PDA), a wearable device (e.g., a Google Glass head-mounted display), a PC, a workstation, a mainframe, a kiosk, a server rack, or any other data processing system.
Due to the ever-changing nature of computers and networks, the description of computer system 1500 depicted in the drawings is intended only as a specific example. Many other configurations are possible with more or fewer components than the system depicted in the figures. For example, customized hardware might also be used and/or particular elements might be implemented in hardware, firmware, software (including applets), or combinations. In addition, connections to other computing devices, such as network input/output devices, may be employed. Based on the disclosure and teachings provided herein, one of ordinary skill in the art will recognize other ways and/or methods of implementing the various embodiments.
While specific embodiments have been described, various modifications, alterations, alternative constructions, and equivalents are also included within the scope of the disclosure. Embodiments are not limited to operation within certain specific data processing environments, but may be free to operate within multiple data processing environments. Furthermore, while embodiments have been described using a particular series of transactions and steps, it should be apparent to those skilled in the art that the scope of the present disclosure is not limited to the described series of transactions and steps. The various features and aspects of the embodiments described above may be used alone or in combination.
In addition, while embodiments have been described using a particular combination of hardware and software, it should be recognized that other combinations of hardware and software are also within the scope of the present disclosure. Embodiments may be implemented in hardware alone, or in software alone, or in a combination thereof. The various processes described herein may be implemented in any combination on the same processor or on different processors. Thus, where a component or module is described as being configured to perform certain operations, such configuration may be accomplished by, for example, designing the electronic circuitry to perform the operations, performing the operations by programming programmable electronic circuitry (such as a microprocessor), or any combination thereof. The processes may communicate using a variety of techniques, including but not limited to conventional techniques for inter-process communication, and different pairs of processes may use different techniques, or the same pair of processes may use different techniques at different times.
The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. It will, however, be evident that various additions, subtractions, deletions and other modifications and changes may be made thereto without departing from the broader spirit and scope as set forth in the claims. Thus, while specific disclosed embodiments have been described, these are not intended to be limiting. Various modifications and equivalents are intended to be within the scope of the following claims.
The use of the terms "a" and "an" and "the" and similar referents in the context of describing the disclosed embodiments (especially in the context of the following claims) are to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. Unless otherwise indicated, the terms "comprising," "having," "including," and "containing" are to be construed as open-ended terms (i.e., meaning "including, but not limited to"). The term "connected" is to be construed as partly or wholly contained within, attached to, or joined together, even if there is something intervening. Recitation of ranges of values herein are merely intended to serve as a shorthand method of referring individually to each separate value falling within the range, unless otherwise indicated herein, and each separate value is incorporated into the specification as if it were individually recited herein. All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and all examples, or exemplary language (e.g., "such as") provided herein, is intended merely to better illuminate the embodiments and does not pose a limitation on the scope of the disclosure unless otherwise claimed. No language in the specification should be construed as indicating any non-claimed element as essential to the practice of the disclosure.
Disjunctive language, such as the phrase "at least one of X, Y, or Z," unless expressly stated otherwise, is intended to be understood within the context as generally used to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is generally not intended, nor should it suggest, that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present.
Preferred embodiments of this disclosure are described herein, including the best mode known for carrying out the disclosure. Variations of those preferred embodiments may become apparent to those of ordinary skill in the art upon reading the foregoing description. One of ordinary skill in the art should be able to employ such variations as appropriate and may practice the disclosure in a manner other than that specifically described herein. Accordingly, this disclosure includes all modifications and equivalents of the subject matter recited in the claims appended hereto as permitted by applicable law. Moreover, unless otherwise indicated herein, the present disclosure includes any combination of the above elements in all possible variations thereof.
All references, including publications, patent applications, and patents, cited herein are hereby incorporated by reference to the same extent as if each reference were individually and specifically indicated to be incorporated by reference and were set forth in its entirety herein.
In the foregoing specification, aspects of the present disclosure have been described with reference to specific embodiments thereof, but those skilled in the art will recognize that the present disclosure is not limited thereto. The various features and aspects of the disclosure described above may be used alone or in combination. Moreover, embodiments may be utilized in any number of environments and applications other than those described herein without departing from the broader spirit and scope of the specification. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.

Claims (21)

1. A method of data networking, the method comprising:
receiving, at an ingress switch and from a host machine executing a plurality of compute instances for a plurality of tenants, a first layer 2RDMA packet for a first tenant among the plurality of tenants;
converting the first layer 2RDMA packet into a first layer 3 encapsulated packet having at least one header; and
forwarding the first layer 3 encapsulated packet to a switch fabric,
wherein the first layer 2RDMA packet includes a Virtual Local Area Network (VLAN) tag and a quality of service (QoS) data field, and
wherein the converting comprises adding the at least one header to the first layer 2RDMA packet, the at least one header comprising:
a virtual network identifier based on information from the VLAN tag, and
a QoS value based on information from the QoS data field.
2. The method of claim 1, further comprising: at an intermediate switch of the switch fabric and in response to an indication of congestion, modifying a congestion notification data field of the at least one header of the first layer 3 encapsulated packet.
3. The method of claim 1, further comprising:
receiving, at the ingress switch, a second layer 2RDMA packet including a VLAN tag and a QoS data field;
converting the second layer 2RDMA packet into a second layer 3 encapsulated packet having at least one header; and
forwarding the second layer 3 encapsulated packet to the switch fabric,
wherein the VLAN tag of the second layer 2RDMA packet indicates a different VLAN than the VLAN tag of the first layer 2RDMA packet.
4. The method of claim 3, further comprising, at an intermediate switch of the switch fabric:
queuing the first layer 3 encapsulated packet into a first queue of the intermediate switch based on a QoS value of the at least one header of the first layer 3 encapsulated packet; and
queuing the second layer 3 encapsulated packet into a second queue of the intermediate switch, different from the first queue, based on a QoS value of the at least one header of the second layer 3 encapsulated packet.
5. The method of claim 3, further comprising:
receiving, at an egress switch, the first layer 3 encapsulated packet;
decapsulating the first layer 3 encapsulated packet to obtain the first layer 2RDMA packet;
forwarding the first layer 2RDMA packet to a first compute instance based on the VLAN tag of the first layer 2RDMA packet;
receiving, at the egress switch, the second layer 3 encapsulated packet;
decapsulating the second layer 3 encapsulated packet to obtain the second layer 2RDMA packet; and
forwarding the second layer 2RDMA packet to a second compute instance, different from the first compute instance, based on the VLAN tag of the second layer 2RDMA packet.
6. The method of claim 1, further comprising:
receiving, at an egress switch, the first layer 3 encapsulated packet;
decapsulating the first layer 3 encapsulated packet to obtain the first layer 2RDMA packet; and
forwarding the first layer 2RDMA packet to a first compute instance based on the VLAN tag of the first layer 2RDMA packet.
7. The method of claim 6, further comprising:
setting a value of a congestion notification data field of the first layer 2RDMA packet based on information in a congestion notification data field of the at least one header of the first layer 3 encapsulated packet.
8. The method of any one of claims 1 to 7, wherein the QoS value is a Differentiated Services Code Point (DSCP) field of an outer IP header of the first layer 3 encapsulated packet, and
wherein the converting comprises copying a DSCP field of an IP header of the first layer 2RDMA packet to the DSCP field of the outer IP header of the first layer 3 encapsulated packet.
9. The method of any of claims 1 to 7, wherein the first layer 3 encapsulated packet is a virtual extensible local area network (VxLAN) packet, and
wherein the virtual network identifier is a Virtual Network Identifier (VNI) of a VxLAN header of the first layer 3 encapsulated packet.
10. A method of data networking, the method comprising:
receiving, at an egress switch, a first layer 3 encapsulated packet;
decapsulating the first layer 3 encapsulated packet to obtain a first layer 2RDMA packet;
setting a value of a congestion notification data field of the first layer 2RDMA packet based on information in the congestion notification data field of at least one header of the first layer 3 encapsulated packet; and
after the setting, and based on the VLAN tag of the first layer 2RDMA packet, forwarding the first layer 2RDMA packet to a first compute instance executing on the host machine, wherein the first compute instance is among a plurality of compute instances executing on the host machine.
11. The method of claim 10, further comprising:
receiving, at the egress switch, a second layer 3 encapsulated packet;
decapsulating the second layer 3 encapsulated packet to obtain a second layer 2RDMA packet; and
forwarding the second layer 2RDMA packet to a second compute instance, different from the first compute instance, based on a VLAN tag of the second layer 2RDMA packet.
12. The method of claim 11, further comprising, at the egress switch:
queuing the first layer 3 encapsulated packet into a first queue of the egress switch based on a quality of service (QoS) value of an outer header of the first layer 3 encapsulated packet; and
queuing the second layer 3 encapsulated packet into a second queue of the egress switch, different from the first queue, based on a QoS value of an outer header of the second layer 3 encapsulated packet.
13. The method of any of claims 10 to 12, wherein the method further comprises receiving, from the first compute instance, a congestion notification packet directed to a source address of the first layer 2RDMA packet.
14. A system for data networking, the system comprising:
a switch fabric; and
an ingress switch configured to:
receive, from a host machine executing a plurality of compute instances for a plurality of tenants, a first layer 2RDMA packet for a first tenant among the plurality of tenants;
convert the first layer 2RDMA packet into a first layer 3 encapsulated packet having at least one header; and
forward the first layer 3 encapsulated packet to the switch fabric,
wherein the first layer 2RDMA packet includes a Virtual Local Area Network (VLAN) tag and a quality of service (QoS) data field, and
wherein, to convert the first layer 2RDMA packet, the ingress switch is configured to add the at least one header to the first layer 2RDMA packet, the at least one header comprising:
a virtual network identifier based on information from the VLAN tag, and
a QoS value based on information from the QoS data field.
15. The system of claim 14, wherein the switch fabric comprises an intermediate switch configured to modify a congestion notification data field of the at least one header of the first layer 3 encapsulated packet in response to an indication of congestion.
16. The system of claim 14, wherein the ingress switch is configured to:
receive a second layer 2RDMA packet comprising a VLAN tag and a QoS data field;
convert the second layer 2RDMA packet into a second layer 3 encapsulated packet having at least one header; and
forward the second layer 3 encapsulated packet to the switch fabric,
wherein the VLAN tag of the second layer 2RDMA packet indicates a different VLAN than the VLAN tag of the first layer 2RDMA packet.
17. The system of claim 16, wherein the switch fabric comprises an intermediate switch having a first queue and a second queue different from the first queue, the intermediate switch configured to:
queue the first layer 3 encapsulated packet into the first queue based on a QoS value of the at least one header of the first layer 3 encapsulated packet; and
queue the second layer 3 encapsulated packet into the second queue based on a QoS value of the at least one header of the second layer 3 encapsulated packet.
18. The system of any of claims 14 to 17, further comprising an egress switch configured to:
receive the first layer 3 encapsulated packet;
decapsulate the first layer 3 encapsulated packet to obtain the first layer 2RDMA packet;
set a value of a congestion notification data field of the first layer 2RDMA packet based on information in a congestion notification data field of the at least one header of the first layer 3 encapsulated packet; and
forward the first layer 2RDMA packet to a first compute instance based on the VLAN tag of the first layer 2RDMA packet.
19. A non-transitory computer-readable medium storing a plurality of instructions executable by one or more processors, the plurality of instructions comprising instructions that when executed by the one or more processors cause the one or more processors to:
receive a first layer 2RDMA packet from a host machine executing a plurality of compute instances for a plurality of tenants;
convert the first layer 2RDMA packet into a first layer 3 encapsulated packet having at least one header; and
forward the first layer 3 encapsulated packet to a switch fabric,
wherein the first layer 2RDMA packet includes a Virtual Local Area Network (VLAN) tag and a quality of service (QoS) data field, and
wherein the instructions that cause the one or more processors to perform the converting cause the one or more processors to add the at least one header to the first layer 2RDMA packet, the at least one header comprising:
a virtual network identifier based on information from the VLAN tag, and
a QoS value based on information from the QoS data field.
20. The non-transitory computer-readable medium of claim 19, wherein the QoS value is a Differentiated Services Code Point (DSCP) field of an outer IP header of the first layer 3 encapsulated packet, and
wherein the instructions that cause the one or more processors to perform the converting cause the one or more processors to copy a DSCP field of an IP header of the first layer 2RDMA packet to the DSCP field of the outer IP header of the first layer 3 encapsulated packet.
21. The non-transitory computer readable medium of any of claims 19 and 20, wherein the first layer 2RDMA packet is a RoCEv2 packet.
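By way of illustration only, and not as part of the claims or of any particular switch implementation, the header mapping recited in the claims above (deriving a virtual network identifier from the VLAN tag, copying the QoS value into the outer header, and propagating congestion marking back into the inner packet on decapsulation) may be sketched as follows. This is a minimal Python model; the simplified packet classes, the example VLAN-to-VNI mapping table, and all field values are assumptions made for the sketch.

```python
from dataclasses import dataclass
from typing import Dict

CE = 0b11  # ECN "Congestion Experienced" code point

@dataclass
class L2RdmaPacket:
    """Simplified layer 2 RDMA (e.g., RoCEv2) packet as seen at the ingress switch."""
    vlan_id: int     # from the 802.1Q VLAN tag (distinguishes tenants on the host)
    dscp: int        # QoS data field (DSCP of the packet's IP header)
    ecn: int         # congestion notification data field
    frame: bytes     # rest of the Ethernet frame carrying the RDMA transport payload

@dataclass
class L3EncapPacket:
    """Simplified layer 3 (e.g., VxLAN) encapsulated packet forwarded to the switch fabric."""
    vni: int              # virtual network identifier in the added header
    outer_dscp: int       # QoS value carried in the outer IP header
    outer_ecn: int        # congestion notification field of the outer IP header
    inner: L2RdmaPacket   # original layer 2 RDMA packet, carried unchanged

def ingress_convert(pkt: L2RdmaPacket, vlan_to_vni: Dict[int, int]) -> L3EncapPacket:
    """Add the encapsulation header: VNI derived from the VLAN tag, QoS copied outward."""
    return L3EncapPacket(
        vni=vlan_to_vni[pkt.vlan_id],  # virtual network identifier based on the VLAN tag
        outer_dscp=pkt.dscp,           # QoS value based on the QoS data field
        outer_ecn=pkt.ecn,             # carried so an intermediate switch can mark congestion
        inner=pkt,
    )

def egress_convert(encap: L3EncapPacket) -> L2RdmaPacket:
    """Strip the encapsulation and propagate any congestion mark into the inner packet."""
    inner = encap.inner
    if encap.outer_ecn == CE:
        inner.ecn = CE  # set the inner congestion notification field from the outer header
    return inner

# Example: two tenants on one host, distinguished by VLAN and mapped to distinct VNIs.
vlan_to_vni = {100: 0x10000, 200: 0x20000}  # per-tenant mapping (assumed values)
pkt = L2RdmaPacket(vlan_id=100, dscp=26, ecn=0b10, frame=b"")
encapped = ingress_convert(pkt, vlan_to_vni)
assert encapped.vni == 0x10000 and encapped.outer_dscp == 26
```

In this sketch, the VLAN tag that separates tenants on the host is carried into the fabric as a virtual network identifier, while the copied DSCP value lets an intermediate switch place encapsulated traffic into per-class queues, consistent with the class-based queuing recited in claims 4, 12, and 17.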
CN202180088766.8A 2020-12-30 2021-04-13 RDMA (RoCE) cloud-scale multi-tenancy for converged Ethernet Pending CN116724546A (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US63/132,417 2020-12-30
US17/165,877 2021-02-02
US17/166,922 2021-02-03
USPCT/US2021/025459 2021-04-01
PCT/US2021/025459 WO2022146466A1 (en) 2020-12-30 2021-04-01 Class-based queueing for scalable multi-tenant rdma traffic
PCT/US2021/027069 WO2022146470A1 (en) 2020-12-30 2021-04-13 Cloud scale multi-tenancy for rdma over converged ethernet (roce)

Publications (1)

Publication Number Publication Date
CN116724546A true CN116724546A (en) 2023-09-08

Family

ID=87873881

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180088766.8A Pending CN116724546A (en) 2020-12-30 2021-04-13 RDMA (RoCE) cloud-scale multi-tenancy for converged Ethernet

Country Status (1)

Country Link
CN (1) CN116724546A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117692937A (en) * 2024-02-04 2024-03-12 江苏未来网络集团有限公司 5G full-connection factory equipment network topology structure and construction and use methods thereof
CN117692937B (en) * 2024-02-04 2024-05-14 江苏未来网络集团有限公司 5G full-connection factory equipment network topology structure and construction and use methods thereof

Similar Documents

Publication Publication Date Title
US11991246B2 (en) Cloud scale multi-tenancy for RDMA over converged ethernet (RoCE)
EP4183121B1 (en) Systems and methods for a vlan switching and routing service
CN115699698A (en) Loop prevention in virtual L2 networks
US11516126B2 (en) Techniques for high performant virtual routing capabilities
US20220210063A1 (en) Layer-2 networking information in a virtualized cloud environment
US11496599B1 (en) Efficient flow management utilizing control packets
WO2023205003A1 (en) Network device level optimizations for latency sensitive rdma traffic
US20230344777A1 (en) Customized processing for different classes of rdma traffic
CN116724546A (en) RDMA (RoCE) cloud-scale multi-tenancy for converged Ethernet
WO2022146466A1 (en) Class-based queueing for scalable multi-tenant rdma traffic
US20230032441A1 (en) Efficient flow management utilizing unified logging
EP4272399A1 (en) Layer-2 networking storm control in a virtualized cloud environment
CN116686277A (en) Class-based queuing for extensible multi-tenant RDMA traffic
US20230344778A1 (en) Network device level optimizations for bandwidth sensitive rdma traffic
EP4272083A1 (en) Class-based queueing for scalable multi-tenant rdma traffic
US20230013110A1 (en) Techniques for processing network flows
US20220417138A1 (en) Routing policies for graphical processing units
US20230222007A1 (en) Publishing physical topology network locality information for graphical processing unit workloads
WO2023205004A1 (en) Customized processing for different classes of rdma traffic
WO2023205005A1 (en) Network device level optimizations for bandwidth sensitive rdma traffic
CN117597894A (en) Routing policies for graphics processing units
EP4360280A1 (en) Routing policies for graphical processing units
WO2023136964A1 (en) Publishing physical topology network locality information for graphical processing unit workloads
WO2022271990A1 (en) Routing policies for graphical processing units
WO2023244357A1 (en) Implementing communications within a container environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination