CN117118778A - Full mesh proxy-less connection between networks - Google Patents


Info

Publication number: CN117118778A
Application number: CN202211450981.0A
Authority: CN (China)
Prior art keywords: network, peer, information, controller, VPC
Legal status: Pending
Language: Chinese (zh)
Inventors: 安舒曼·古普塔, 帕夫林·拉多斯拉沃夫, 卡纳安·萨塔纳坦, 阿洛克·库马尔, 尤斯·里克特
Current Assignee: Google LLC
Original Assignee: Google LLC
Priority claimed from: US 17/859,558 (published as US 2023/0379191 A1)
Application filed by: Google LLC
Publication of: CN117118778A

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/28Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L12/46Interconnection of networks
    • H04L12/4641Virtual LANs, VLANs, e.g. virtual private networks [VPN]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/28Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L12/46Interconnection of networks
    • H04L12/4604LAN interconnection over a backbone network, e.g. Internet, Frame Relay
    • H04L12/462LAN interconnection over a bridge based backbone
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/14Session management
    • H04L67/141Setup of application sessions

Abstract

The present disclosure provides full mesh connections between all endpoints in a VPC, including virtual machines, load balancers, routers, interconnects, virtual private networks, and the like. The connections may extend to on-premise devices, such as devices connected by a VPN or an interconnect. The connections are high-performance, reliable, and secure.

Description

Full mesh proxy-less connection between networks
Cross Reference to Related Applications
The present application claims the benefit of the filing date of U.S. provisional patent application 63/344,842, filed on May 23, 2022, the disclosure of which is incorporated herein by reference.
Background
Virtual Private Clouds (VPCs) provide both connectivity and management boundaries in public clouds. However, a workload in one VPC sometimes requires access to resources in another VPC in a different administrative domain.
Disclosure of Invention
The present disclosure provides a full mesh connection between all endpoints in a VPC. Such endpoints include, for example, virtual machines, load balancers, routers, interconnects, virtual private networks, and the like. Connectivity may extend to on-premise devices, such as devices connected by a VPN or an interconnect. The connections are high-performance, reliable, and secure.
One aspect of the present disclosure provides a system comprising a Software Defined Network (SDN) controller, a host controller, and a programmable packet switch, wherein the SDN controller, host controller, and programmable packet switch are communicatively coupled within a first network, and wherein the host controller is programmed to receive defined connectivity information from a target network and establish an agentless peer-to-peer connection between the source network and the target network using the defined connectivity information. The system may include a first virtual machine within the first network. The first virtual machine and the host controller may reside on the same host. In some examples, the first network is a Virtual Private Cloud (VPC).
The defined connectivity information may include a destination address in the target network. It may further comprise forwarding information.
According to some embodiments, the programmable packet switch uses the defined connectivity information to transfer packets from a first virtual machine in the source network to a second virtual machine in the target network.
According to some embodiments, the source network comprises a plurality of endpoints, and each endpoint in the source network is adapted to communicate directly with each endpoint in the target network using the agentless peer-to-peer connection.
According to some embodiments, the system may further comprise a plurality of SDN controllers in a fully sharded control plane, each of the plurality of SDN controllers being responsible for a subset of Virtual Private Cloud (VPC) networks in a peer group, each VPC network having an associated host controller, wherein each of the plurality of SDN controllers is coupled to each associated host controller.
According to some embodiments, the system may further comprise an extensible gateway between the programmable packet switch of the first network and a second programmable packet switch of the target network.
Another aspect of the present disclosure provides a method of establishing a peer-to-peer connection between a first network and a second network. The method may include receiving, at a host controller in the first network, programming information from a first controller within the first network; receiving, at the host controller, connectivity information from a second controller within the second network; establishing an agentless peer-to-peer connection between the first network and the second network using at least the received connectivity information; and providing, to a packet switch within the first network, packet information for direct communication between a first endpoint in the first network and a second endpoint in the second network. The first endpoint may be a virtual machine. Receiving the connectivity information from the second controller may include subscribing to the second controller using the programming information from the first controller. The connectivity information may include a destination address and forwarding information in the second network.
According to some embodiments, the method may further include delivering, by the programmable packet switch, the packet from the first virtual machine in the first network to the second virtual machine in the second network using the connectivity information.
Yet another aspect of the present disclosure provides a Software Defined Network (SDN) controller executable on one or more processors for performing a method of establishing a peer-to-peer connection between a first network comprising the SDN controller and a second network. The SDN controller may be configured to provide programming information to a host controller in the first network, the programming information identifying the second network and subscribing the host controller to a second SDN controller in the second network. According to some embodiments, the SDN controller includes a peer multiplexer configured to receive information from a plurality of resources and to publish routing information for communication with the resources. According to some embodiments, the SDN controller comprises a peer manager maintaining one or more export policies of the first network, wherein the peer multiplexer multiplexes over the resources based on the export policies. According to some embodiments, the SDN controller includes a route manager configured to receive routing information from one or more other SDN controllers. According to some embodiments, the SDN controller is configured to identify routing conflicts based on the received routing information and resolve the conflicts using a predefined set of rules.
Drawings
Fig. 1 is a schematic diagram illustrating a connection between two VPCs according to aspects of the present disclosure.
Fig. 2 is a block diagram illustrating cross-VPC connections in a Software Defined Network (SDN) architecture in accordance with various aspects of the disclosure.
Fig. 3 is a block diagram illustrating example details of a cross-VPC connection in accordance with aspects of the disclosure.
Fig. 4 is a block diagram illustrating components of a host controller for establishing peering in accordance with various aspects of the disclosure.
Fig. 5 is a block diagram illustrating an example hardware environment in accordance with aspects of the present disclosure.
Fig. 6 is a block diagram illustrating connections between a peer group and the SDN data plane through a fully sharded SDN control plane in accordance with aspects of the present disclosure.
Fig. 7 is a block diagram illustrating route conflict resolution across peer VPCs in accordance with aspects of the disclosure.
Fig. 8 is a block diagram illustrating an extensible cross-VPC connection through a gateway in accordance with aspects of the present disclosure.
Fig. 9 is a block diagram illustrating cross-network multiplexing in accordance with various aspects of the disclosure.
Fig. 10 is a flow chart illustrating an example method in accordance with aspects of the present disclosure.
Detailed Description
Fig. 1 shows an example of a full mesh, proxyless connection between two VPCs 110, 120. As shown, each VPC 110, 120 includes a plurality of different endpoints, such as a virtual machine 112, a virtual private network (VPN) connection 114, and an internal load balancer (ILB) 116 in the first VPC 110, and a virtual machine 122, an interconnect 124, and an ILB 126 in the second VPC 120. While these are some examples of endpoint types, it should be understood that other types of endpoints may be included in a VPC; one example of another type of endpoint is a router. Although several endpoints are shown in each VPC 110, 120, it should be understood that any number of endpoints may be included in each VPC 110, 120, and that the number of computing devices in the first VPC may differ from the number of computing devices in the second VPC. Moreover, the number of computing devices in each VPC may vary over time, for example as hardware is removed, replaced, upgraded, or expanded. Each endpoint in the first VPC 110 may be communicatively coupled with each endpoint in the second VPC 120.
Virtual machines 112, 122 may each represent one or more virtual machines running on one or more hosts. Each virtual machine 112, 122 may run an operating system and applications. Although only a few virtual machines 112, 122 are shown, it should be understood that any number of virtual machines may be supported by any number of host computing devices.
The ILBs 116, 126 may include one or more of a network load balancer, an application load balancer, or other types of load balancers. According to some examples, the ILBs 116, 126 may allocate jobs or tasks among the virtual machines 112, 122. For example, the computing devices may have different capabilities, such as different processing speeds, workloads, and the like. In some examples, the ILBs 116, 126 may determine which virtual machine is responsible for handling a new task as commands to execute the new task are received. In addition, the ILBs 116, 126 may monitor the computing capacity of each virtual machine 112, 122 and adjust as needed. For example, if one virtual machine fails, the ILB 116, 126 responsible for load balancing that virtual machine may transfer its processes to one or more other virtual machines. In other examples, if one virtual machine is approaching capacity and another virtual machine has availability, the ILB 116, 126 may move jobs, or may simply send new jobs to the virtual machine with availability, as sketched below.
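By way of illustration only, the allocation and failover behavior described above might be sketched as follows; the class and method names are hypothetical and do not appear in the disclosure:

    from dataclasses import dataclass, field

    @dataclass
    class VirtualMachine:
        name: str
        capacity: int                      # maximum concurrent tasks
        tasks: list = field(default_factory=list)
        healthy: bool = True

        def utilization(self) -> float:
            return len(self.tasks) / self.capacity

    class InternalLoadBalancer:
        """Toy ILB: assigns each new task to the least-utilized healthy VM
        and re-homes tasks when a VM fails."""

        def __init__(self, vms):
            self.vms = vms

        def assign(self, task):
            candidates = [vm for vm in self.vms
                          if vm.healthy and vm.utilization() < 1.0]
            if not candidates:
                raise RuntimeError("no virtual machine has availability")
            target = min(candidates, key=VirtualMachine.utilization)
            target.tasks.append(task)
            return target

        def handle_failure(self, failed):
            failed.healthy = False
            orphaned, failed.tasks = failed.tasks, []
            for task in orphaned:          # transfer processes to other VMs
                self.assign(task)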
The VPN connection 114 may establish a connection between VPC 110 and one or more on-premise deployments 119. For example, the on-premise deployment 119 may include routers or other devices within an organization. The VPN connection 114 establishes a protected network connection between the on-premise deployment 119 and VPC 110, for example by encrypting internet traffic.
The interconnect 124 may establish a connection between VPC 120 and an on-premise deployment 129. For example, the interconnect 124 may establish a direct physical connection between the on-premise network and VPC 120, such as through a service provider.
The on-premise deployment 129 may be the same as or different from the on-premise deployment 119. The on-premise equipment 119 may belong to the same organization as the on-premise equipment 129, or to a different one. VPC peering between VPC 110 and VPC 120 may provide for communication of information between the VPCs 110, 120. By way of example only, the first VPC 110 may be part of a first organization, a company providing database services, while the second VPC 120 may be part of a second organization, a separate company that uses a web front end for the database services. As another example, the first VPC 110 may be a first department within an organization, such as a marketing department, and the second VPC 120 may be a second department within the same organization, such as a sales department. While these are only two examples of VPCs for which a full mesh connection may be beneficial, it should be understood that the description herein may be applicable to any of a variety of other situations.
Although only two on-premise networks and VPCs are shown in fig. 1, it should be understood that multiple on-premise networks and VPCs may be virtually connected. Traditionally, a view of resources in another VPC is provided by replicating the resources. This can be expensive in terms of time, bandwidth consumption, memory consumption, and the like. According to the present disclosure, such replication can be avoided by obtaining only the information needed to establish a peer-to-peer connection with the other VPC.
Fig. 2 shows an example of a cross-VPC connection in a Software Defined Network (SDN) architecture. The architecture has several control layers. At the first layer, each SDN controller 232, 252 is responsible for one of two or more networks 230, 250. At the second layer, an on-host controller (OHC) 234, 254 in each network 230, 250 is used to obtain peering information from other networks. When establishing a peering relationship, rather than replicating all of the information, the OHCs 234, 254 of the networks 230, 250 seek only the limited amount of information required to establish the peering. For example, if VM_A 238 in the first network 230 seeks to peer with VM_B 258 in the second network 250, the OHC 234 in the first network 230 obtains only enough information from the SDN controller 252 in the second network 250 to allow VM_A 238 to communicate with VM_B 258. At the third layer, once peering is established, the programmable packet switches 236, 256 of the networks 230, 250 exchange packets between the networks 230, 250.
Each SDN controller 232, 252 may be, for example, an application managing flow control in an SDN architecture. For example, each SDN controller 232, 252 may be a software system or collection of systems that together provide management of network state, a high-level data model capturing the relationships between managed resources, policies and other services provided by the controller, application programming interfaces (APIs) exposing controller services to applications, and the like. In some examples, the SDN controller may further maintain a secure TCP control session between itself and an associated agent in the network. Each SDN controller 232, 252 may run on a server and use a protocol to tell the programmable packet switches 236, 256 where to send packets. The SDN controllers 232, 252 may direct traffic according to one or more forwarding policies. Interfaces or protocols allow the SDN controllers 232, 252 to communicate with other devices in the networks 230, 250. For example, the SDN controller 232 in the first network 230 may communicate with the OHC 234 using such an interface or protocol. The SDN controller 232 programs the OHC 234 to pull programming from the second SDN controller 252 in the second network 250. Similarly, the second SDN controller 252 in the second network 250 programs the OHC 254 to pull programming from the first SDN controller 232 in the first network 230. The programming information pulled by the OHCs 234, 254 is used to establish the peering relationship.
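As a minimal sketch of this pull-based programming flow (all class names, fields, and addresses below are assumptions for illustration; the disclosure does not specify an API):

    from dataclasses import dataclass

    @dataclass
    class PeerProgramming:
        """First information: pushed by the local SDN controller to its OHC."""
        peer_network_id: str           # e.g., VNID of the peer VPC
        peer_controller_addr: str      # SDN controller responsible for the peer VPC

    @dataclass
    class ConnectivityInfo:
        """Second information: the defined, minimal state pulled from the peer."""
        destination_vip: str           # virtual IP of the peer endpoint
        host_ip: str                   # physical IP of the host running that endpoint
        forwarding: dict               # forwarding/encapsulation parameters

    class PacketSwitch:
        def __init__(self):
            self.flows = {}

        def program_flow(self, peer_vnid, info):
            self.flows[(peer_vnid, info.destination_vip)] = info

    class SdnController:
        def __init__(self, network_id, endpoints):
            self.network_id = network_id
            self.endpoints = endpoints     # endpoint name -> ConnectivityInfo

        def export(self, endpoint):        # serves pulls from OHCs in peer networks
            return self.endpoints[endpoint]

    class OnHostController:
        """Pulls only what is needed to peer, then programs the local switch."""
        def __init__(self, switch):
            self.switch = switch

        def establish_peering(self, prog, peer_ctrl, endpoint):
            info = peer_ctrl.export(endpoint)          # pull, do not replicate
            self.switch.program_flow(prog.peer_network_id, info)

    # Example wiring: OHC 234 peering VM_A's network with VM_B in the peer network.
    switch_236 = PacketSwitch()
    ohc_234 = OnHostController(switch_236)
    sdn_252 = SdnController("vnid-2", {"VM_B": ConnectivityInfo(
        "10.8.0.5", "198.51.100.7", {"encap": "vnid-2"})})
    ohc_234.establish_peering(
        PeerProgramming("vnid-2", "sdn-252.example"), sdn_252, "VM_B")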
Each OHC 234, 254 may run on the same host as its associated virtual machine 238, 258. For example, the first OHC 234 may run on the same first host as VM_A 238, while the second OHC 254 may run on the same second host as VM_B 258. Each host may have an instance for controlling its virtual machines 238, 258.
Each OHC 234, 254 may obtain information from the SDN controllers 232, 252. For example, the OHCs 234, 254 obtain first information from the SDN controller 232, 252 in their own network. Such first information may include, for example, information about the virtual machines 238, 258 in the same network. As another example, such first information may include programming information from the SDN controller 232, 252 in the same network that instructs the OHC to pull second information from an SDN controller in another network. Such second information obtained from the SDN controller in the other network may include connectivity information for establishing peering between the networks.
According to some examples, the OHCs 234, 254 in one network may subscribe to the SDN controllers 232, 252 in another network. For example, the first OHC 234 may subscribe to an endpoint meta-service of the local virtual machine controller and receive updates about each VM_A 238 on the same host as the OHC 234. For each endpoint received from the endpoint meta-service, the OHC 234 subscribes, using a corresponding identifier, to a peering service through which updates about the second network 250 are received.
When the first network 230 attempts to establish a peer-to-peer connection with the second network 250, the first OHC 234 pulls from the second SDN controller 252 in the second network 250 only the connectivity information needed to establish the connection. According to some examples, such connectivity information may include location information, load balancer information, transmission rate information, or any other information that may inform VM_A 238 how to reach its peer VM_B 258. The location information may include, for example, an IP address (e.g., a destination address), forwarding information, a node identifier, etc. For example, the OHC 234 may subscribe to the destination and forwarding services of the second network 250 using the corresponding endpoint identifier, peer network identifier, and host IP address. When a corresponding update is received from the destination and forwarding services, the OHC 234 can use the update to create a flow from VM_A 238 to VM_B 258.
The OHCs 234, 254 push programming from the SDN controllers 232, 252 to the programmable packet switches 236, 256. For example, when the first OHC 234 receives first information from the SDN controller 232 in its network 230 and second information from the SDN controller 252 in the second network 250, it may push the combined first and second information to the packet switch 236. Upon receiving a packet from VM_A 238, the packet switch 236 may forward the packet directly to the location of VM_B 258 without any proxy. For example only, the programmable packet switch may modify the header of the packet, or encapsulate the packet with an additional header, where such header information for routing the packet directly to VM_B 258 is received from the SDN controller 252 through the OHC 234.
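The header rewrite and encapsulation step might look roughly like the following; the packet fields and flow-table layout are assumptions, since the disclosure does not fix a packet format:

    from dataclasses import dataclass

    @dataclass
    class Packet:
        src: str
        dst: str
        payload: bytes

    @dataclass
    class EncapsulatedPacket:
        outer_src: str      # physical IP of the sending host
        outer_dst: str      # physical IP of the host running the destination VM
        vnid: str           # peer network identifier carried in the outer header
        inner: Packet

    def forward(packet, flow_table, local_host_ip):
        """Programmable-switch behavior: look up the flow pushed down by the
        OHC and encapsulate the packet toward the peer host, with no proxy
        in the data path."""
        entry = flow_table.get(packet.dst)
        if entry is None:
            raise LookupError(f"no peering flow programmed for {packet.dst}")
        return EncapsulatedPacket(
            outer_src=local_host_ip,
            outer_dst=entry["peer_host_ip"],
            vnid=entry["peer_vnid"],
            inner=packet,
        )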
The programmable packet switches 236, 256 may be, for example, data plane controls, application-specific integrated circuits (ASICs), network interface controllers (NICs), logical switches, or any other type of programmable switch. The programmable packet switches 236, 256 may be adapted to execute any of a variety of programs, such as programming from the SDN controllers 232, 252 for sending and/or receiving packets between the networks 230, 250.
Each of VM_A 238 and VM_B 258 may be a virtual environment that functions as a virtual computer system, with its own central processing unit (CPU), memory, network interface, storage, etc. The first VM_A 238 in the first network 230 may share a host with the OHC 234. Likewise, the second VM_B 258 in the second network 250 may share a host with the OHC 254. In other examples, the virtual machines, OHCs, and/or SDN controllers may reside on any combination of independent hosts coupled to enable communication between the devices in each network.
Although two networks 230, 250 are shown, it should be understood that peering may be established between any number of networks. The peering may be bidirectional, in which each network sees the other network and also shares its own resources with the other network. In other examples, the peering may be unidirectional, e.g., the first network peers into the second network, but the second network is not permitted to peer into the first network. While a defined number of components is shown within each network 230, 250, it should be understood that additional components may be present. For example only, each network 230, 250 may include multiple virtual machines or other resources. As another example, each network may include VPNs, interconnects, or other components for establishing peering between networks.
The architecture described in fig. 2 provides a high-bandwidth peer-to-peer connection, because each virtual machine can use its full egress bandwidth to communicate across networks. Accordingly, the networks may communicate using the full cross-section bandwidth of the fabric. Furthermore, the architecture is highly reliable, since no proxies are required to establish communication between the networks.
Fig. 3 illustrates an embodiment of establishing a peer-to-peer connection, including communication with the OHCs 234, 254 in the first and second VPCs, respectively. The OHC 234 may receive peer information from the first VPC identifying other VPCs (e.g., VPC 2) for peering. Such peer information may include, for example, a virtual network identifier (VNID) of the peer VPC 2.
The OHC 234 uses the received peer information to obtain connectivity information, such as destination and forwarding information, for VM_B 258 in VPC 2. For example, the OHC 234 may use the VNID of peer VPC 2 to obtain such destination and forwarding information. According to some examples, the OHC 234 may subscribe to the destination and/or forwarding services of VPC 2 to obtain updates. The OHC 234 may receive such destination and forwarding information for each virtual machine or other endpoint in VPC 2. Using the obtained information, the OHC 234 can create a flow from VM_A 238 to VM_B 258. For example, the flow may be created using the virtual Internet Protocol (IP) address of VM_A 238, the VNID to which VM_A 238 belongs, the physical IP address of the host containing VM_A 238, the virtual IP address of VM_B 258, the peer VNID to which VM_B 258 belongs, and the physical IP address of the host containing VM_B 258. The reverse flow (from VM_B 258 to VM_A 238) may be programmed similarly when the second OHC 254 obtains peer information from VPC 2 and connectivity information from VPC 1.
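The six values enumerated above amount to a pair of endpoint triples; a minimal sketch, with hypothetical names and placeholder addresses:

    from typing import NamedTuple

    class EndpointTriple(NamedTuple):
        virtual_ip: str    # virtual IP address of the VM
        vnid: str          # network identifier the VM belongs to
        host_ip: str       # physical IP of the host containing the VM

    def make_flow(src, dst):
        """One unidirectional flow entry (e.g., VM_A -> VM_B); the reverse
        flow is programmed symmetrically by the peer OHC."""
        return {
            "src_vip": src.virtual_ip, "src_vnid": src.vnid, "src_host": src.host_ip,
            "dst_vip": dst.virtual_ip, "dst_vnid": dst.vnid, "dst_host": dst.host_ip,
        }

    # Example: VM_A in VPC 1 to VM_B in peer VPC 2.
    flow = make_flow(EndpointTriple("10.0.0.2", "vnid-1", "192.0.2.10"),
                     EndpointTriple("10.8.0.5", "vnid-2", "198.51.100.7"))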
Fig. 4 shows additional details of an OHC 440 in a first network (e.g., a first VPC) for peering with a second network (e.g., a second VPC).
As shown, the source VPC shard 420 includes a peer provider 421. The peer provider 421 may be, for example, a service or module that provides peering information to the peer subscriber 441 of the on-host controller (OHC) 440. As described above, the peer information may include identifiers of other VPCs. In this example, the peer provider 421 may provide the peer subscriber 441 with an identifier of the destination VPC shard 430.
Using the peer information obtained by the peer subscriber 441, the OHC 440 subscribes to the destination VPC shard 430. Thus, the OHC 440 can obtain connectivity information for each virtual machine or other endpoint in the destination VPC shard 430 and use the connectivity information to establish a direct peer-to-peer connection between a virtual machine or other endpoint in the source VPC shard 420 and each virtual machine or other endpoint in the destination VPC shard 430. A peer destination provider 432 in the destination VPC shard 430 provides destination information to the destination subscriber 442 in the OHC 440. This information may be provided for each virtual machine in the destination VPC shard 430. The peer destination provider 432 may be, for example, a routing table or a module in communication with a routing table. The destination information provided to the destination subscriber 442 may be routing table information. A load balancer provider 433 in the destination VPC shard 430 provides load balancer information to the load balancer subscriber 443 in the OHC 440. Such load balancer information may include, for example, an indication of the current workload of a given virtual machine in the destination VPC, an indication of the capacity of the given virtual machine, or other information. A forwarding provider in the destination VPC shard 430 provides forwarding information to the forwarding subscriber 444 in the OHC 440.
The subscribers 441, 442, 443, 444 in the OHC 440 can each communicate with a corresponding set of providers 445, 446, 447, 448. According to some examples, the providers 445, 446, 447, 448 may also communicate with each other. The peer provider 445 receives peering information from the peer subscriber 441. The destination provider 446 in the OHC 440 receives destination information from the destination subscriber 442 of the OHC 440 and provides it to the destination subscriber 462 in the packet switch 460. The load balancer provider 447 in the OHC receives load balancing information from the load balancer subscriber 443 and provides it to the load balancer subscriber 463 of the packet switch 460. The forwarding provider 448 receives forwarding information from the forwarding subscriber 444 in the OHC 440 and provides it to the forwarding subscriber 464 in the packet switch 460. The packet switch 460 uses the information from the OHC providers 446, 447, 448 to establish a peer-to-peer connection with a single virtual machine or other endpoint in the destination VPC shard 430.
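The provider/subscriber plumbing of fig. 4 is essentially a set of typed publish-subscribe channels, with the OHC relaying updates from the destination VPC shard down to the packet switch. A minimal sketch, with hypothetical names:

    from typing import Any, Callable

    class Provider:
        """Publishes updates for one kind of peering state (destinations,
        load balancer state, forwarding entries, ...)."""

        def __init__(self, topic: str):
            self.topic = topic
            self._subscribers: list[Callable[[Any], None]] = []

        def subscribe(self, callback: Callable[[Any], None]) -> None:
            self._subscribers.append(callback)

        def publish(self, update: Any) -> None:
            for callback in self._subscribers:
                callback(update)

    # OHC as a relay: it subscribes to a provider in the destination VPC shard
    # and re-publishes each update to the packet switch's subscriber.
    shard_destinations = Provider("destinations")      # e.g., provider 432
    ohc_destinations = Provider("destinations")        # e.g., provider 446
    shard_destinations.subscribe(ohc_destinations.publish)
    ohc_destinations.subscribe(lambda update: print("switch programmed:", update))
    shard_destinations.publish({"vm": "VM_B", "vip": "10.8.0.5"})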
Fig. 5 is a block diagram of an example environment for implementing the systems described above. The system may be implemented on one or more devices with one or more processors in one or more locations, e.g., on one or more hosts 291, 292, 293 in VPC 330.
One or more hosts 291-293 in VPC 330 may be coupled via a network 515 to one or more computing devices 512, one or more storage devices 513, and/or any of a variety of other types of devices. The storage devices 513 may be a combination of volatile and non-volatile memory, and may be in the same or different physical locations as the computing devices 512. For example, the storage devices 513 may include any type of non-transitory computer-readable medium capable of storing information, such as a hard disk, solid state drive, tape drive, optical storage, memory card, ROM, RAM, DVD, CD-ROM, or write-capable and read-only memories.
The storage devices 513 may, in some examples, be part of VPC 330. VPC 330 may be configured to use the hosts 291-293 and/or the storage devices 513 for a number of cloud computing platform services, such as hosting cloud storage for data backup, or hosting one or more virtual machines accessible by a computing device 512 in communication therewith.
Each host 291-293 may include one or more processors, memory, and/or other components commonly found on hosts. Memory 392 may store information accessible by the processor 398, including instructions 396 executable by the processor 398. Memory 392 may also include data 394 that may be retrieved, manipulated, or stored by the processor 398. Memory 392 may be a type of non-transitory computer-readable medium capable of storing information accessible to the processor 398, such as volatile and non-volatile memory. The processor 398 may include one or more central processing units (CPUs), graphics processing units (GPUs), field-programmable gate arrays (FPGAs), and/or application-specific integrated circuits (ASICs), such as tensor processing units (TPUs).
The instructions 396 may include one or more instructions that, when executed by the processor 398, cause the one or more processors to perform the actions defined by the instructions. The instructions 396 may be stored in an object code format for direct processing by the processor 398, or in other formats, including interpretable scripts or collections of independent source code modules that are interpreted on demand or compiled in advance. The instructions 396 may include instructions for implementing a system consistent with aspects of the present disclosure. The instructions 396 may further be executed to identify a particular process of a virtual machine 238.
The data 394 may be retrieved, stored, or modified by the processor 398 in accordance with the instructions 396. The data 394 may be stored in computer registers, in a relational or non-relational database as a table having a number of different fields and records, or as JSON, YAML, proto, or XML documents. The data 394 may also be formatted in a computer-readable format such as, but not limited to, binary values, ASCII, or Unicode. In addition, the data 394 may include information sufficient to identify relevant information, such as numbers, descriptive text, proprietary codes, pointers, references to data stored in other memories, including other network locations, or information used by a function to calculate relevant data.
Although fig. 5 shows the processor 398 and the memory 392 located within host 292, the processor 398 and the memory 392 may include multiple processors and memories that may operate in different physical housings and/or locations. For example, some of the instructions 396 and data 394 may be stored on a removable SD card, and others may be stored on a read-only computer chip. Some or all of the instructions and data may be stored in a location physically remote from, yet still accessible to, the processor 398. Similarly, the processor 398 may include a collection of processors that may operate concurrently and/or sequentially. The hosts 291-293 may each include one or more internal clocks providing timing information, which may be used for time measurement of the operations and programs run by the hosts.
Hosts 291 and 293 may be configured similarly to host 292, including memory, processors, and other components. The computing device 512 may also include one or more processors, memory, and other components.
Host 292 can be configured to receive requests to process data from computing device 512. For example, VM_A 238 may provide various services to a user through various user interfaces and/or APIs that expose platform services. For example, the one or more services may be a machine learning framework or a set of tools for managing software applications programmed to provide particular services.
The device 512 and the host 292 may be capable of direct and indirect communication over the network 515. A device may set up a listening socket that accepts initiating connections for sending and receiving information. The network 515 itself may include various configurations and protocols, including the Internet, the World Wide Web, intranets, virtual private networks, wide area networks, local area networks, and private networks using communication protocols proprietary to one or more companies. The network 515 may support a variety of short-range and long-range connections. The short-range and long-range connections may be made over different bandwidths, such as 2.402 GHz to 2.480 GHz (commonly associated with the Bluetooth protocol), or 2.4 GHz and 5 GHz (commonly associated with the Wi-Fi communication protocol); or using various communication standards, such as standards for wireless broadband communication. Additionally or alternatively, the network 515 may also support wired connections between the devices, including over various types of Ethernet connections.
Although a single VPC and a single computing device 512 are shown in fig. 5, aspects of the disclosure may be implemented according to a variety of different configurations and numbers of computing devices, including in sequential or parallel processing paradigms, or over a distributed network of multiple devices. In some embodiments, aspects of the disclosure may be performed on a single device, or any combination thereof.
Fig. 6 is a block diagram illustrating a connection between a peer group and the SDN data plane through a fully sharded SDN control plane. In this example, the resource model includes a plurality of networks in a peer group 600, such as VPCs 1-n. Within the peer group 600, each of the VPCs 1-n is interconnected. Although a few VPCs are shown, it should be understood that any number of VPCs may be included in the peer group 600.
Because only the defined amount of information needed for peering between VPCs is pulled, the SDN control plane may be sharded. In this example, the fully sharded SDN control plane includes a plurality of SDN controllers 232, 252, 272. Each SDN controller 232, 252, 272 is responsible for a subset of the VPCs 1-n. For example, the peer group 600 may be sharded or partitioned such that each VPC 1-n is associated with one of the SDN controllers 232, 252, 272. Each SDN controller 232, 252, 272 may control one, two, three, or any other number of the VPCs 1-n. For example, as shown, the first SDN controller 232 controls VPC 2 and another VPC, the second SDN controller 252 controls VPC 1, VPC n, and another VPC, and the SDN controller 272 controls VPC 3, VPC 4, and another VPC.
Each SDN controller 232, 252, 272 may communicate with each of a plurality of OHCs 234, 244, 254, 264, 284, etc. in the SDN data plane. Each OHC may be associated with a VPC in the peer group 600. For example, OHC 234 may be associated with VPC 1, OHC 244 may be associated with VPC 2, OHC 254 may be associated with VPC 3, and so on. The full mesh connection between each SDN controller 232, 252, 272 and each OHC 234-284 provides a connection between each of the VPCs 1-n in the peer group 600. For example, because each OHC 234-284 is coupled to each SDN controller 232, 252, 272, the OHC of any given VPC may pull connectivity information from the particular SDN controller responsible for another VPC that will peer with the given VPC. For example, if VPC 2 is to peer with VPC 3, the OHC 244 associated with VPC 2 may pull connectivity information for VPC 3 from the SDN controller 272 to establish the peering relationship.
The fully sharded SDN control plane provides scalability for connecting a large number of VPCs. For example only, thousands of peers or more may be connected. The SDN data plane, including the OHCs 234-284, pulls minimal programming information from the SDN controllers 232-272 as needed.
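The shard mapping itself can be as simple as a lookup table: each VPC is owned by exactly one controller, and any OHC resolves the controller responsible for a peer VPC before pulling from it. A sketch under those assumptions (all identifiers are illustrative):

    class ShardedControlPlane:
        """Fully sharded control plane: each VPC is assigned to exactly one
        SDN controller; any OHC can look up the controller for a peer VPC."""

        def __init__(self):
            self._shard_map = {}               # VPC id -> controller id

        def assign(self, vpc: str, controller: str) -> None:
            self._shard_map[vpc] = controller

        def controller_for(self, vpc: str) -> str:
            return self._shard_map[vpc]

    plane = ShardedControlPlane()
    plane.assign("vpc-2", "sdn-232")
    plane.assign("vpc-3", "sdn-272")

    # OHC 244 (associated with VPC 2) peering VPC 2 with VPC 3:
    peer_controller = plane.controller_for("vpc-3")    # -> "sdn-272"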
Fig. 7 provides an example of route conflict resolution across peer VPCs. For example, all virtual machines using an on-premise router in a VPC may have IP addresses within a particular range, and BGP advertisements may be issued so that other devices can reach the virtual machines using such IP addresses. Such routes may be received and managed using a route manager in, for example, an SDN controller. All routes learned from the on-premise router of a VPC may be provided to a peer VPC, which may use the same addresses. In this case, a conflict may occur between routes from the on-premise router of the first VPC and routes from the on-premise router of the peer VPC.
If a routing conflict occurs, the SDN controllers 732, 742, 752, 762 may communicate with each other to resolve the conflict. For example, each SDN controller 732, 742, 752, 762 may include a route manager. One or more or all of the SDN controllers 732, 742, 752, 762 may also include a peer route manager. The peer route manager may be, for example, a separate module for resolving conflicts, while the route manager is used to communicate routes to virtual machines or other endpoints in the VPC. One SDN controller may be designated to resolve routing conflicts using its peer route manager. For example, the SDN controller responsible for programming virtual machine egress may perform route conflict resolution. The SDN controllers 732, 742, 752, 762 may exchange routes with each other through other services, such as by using their route managers to communicate routing information to the peer route manager of the SDN controller responsible for resolution, shown in this example as SDN controller 732. Such routing information communicated between the SDN controllers may include routing information from VPC 1, such as destination addresses, to complete a routing table. The SDN controller 732 uses a predefined set of rules to determine the winning route when there is a conflict, and issues the winning route to the OHC 734. In turn, the OHC 734 issues the winning route, along with other relevant information pulled from the SDN controllers 732-762, to the programmable packet switch 736.
According to some examples, the predefined set of rules for resolving route conflicts may relate to the priority of the routes. For example, one rule may be that routes in the local network have a higher priority than routes in the peer network. This may ensure that local traffic is not redirected or carried into the remote network. According to some examples, each peer network may have an associated priority, and routes may be assigned a priority on that basis. In some examples, the value of an identifier may be used to determine the priority, for example by assigning higher priority to a higher or lower numerical identifier. According to further embodiments, priorities may be assigned according to a routing policy associated with the peer network.
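A minimal sketch of such a rule set (local beats peer, then peer priority, then a deterministic tie-break; the exact ordering and the field names are assumptions):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Route:
        prefix: str
        next_hop: str
        local: bool             # learned in the local network vs. from a peer
        peer_priority: int = 0  # higher wins among peer routes (assumed convention)

    def resolve_conflict(a: Route, b: Route) -> Route:
        """Pick the winning route for one prefix under a predefined rule set."""
        assert a.prefix == b.prefix, "only routes for the same prefix conflict"
        # Rule 1: local routes win, so local traffic is never redirected
        # into a remote network.
        if a.local != b.local:
            return a if a.local else b
        # Rule 2: otherwise prefer the peer network with the higher priority.
        if a.peer_priority != b.peer_priority:
            return a if a.peer_priority > b.peer_priority else b
        # Rule 3: deterministic tie-break, e.g., lowest next hop.
        return min(a, b, key=lambda route: route.next_hop)

    winner = resolve_conflict(
        Route("10.1.0.0/16", "192.0.2.1", local=True),
        Route("10.1.0.0/16", "198.51.100.1", local=False, peer_priority=5),
    )  # -> the local route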
Fig. 8 shows an example of an extensible cross-VPC connection through a gateway. For example, one or more extensible gateways 840 may be implemented between the programmable packet switch 236 of the first VPC and the programmable packet switch 256 of the second VPC. When the programmable packet switch 236 receives a packet from VM_A 238, the packet switch 236 may load balance the packet to the one or more extensible gateways 840. The one or more extensible gateways 840 may hold complete network state and forward the packet to a destination, such as the programmable packet switch 256. For example, the extensible gateway 840 may see each virtual machine, load balancer, or other endpoint in each VPC.
According to some examples, one or more of the extensible gateways 840 may be a pool of extensible gateways. Each extensible gateway may be, for example, a dedicated hardware unit, a virtual machine, a virtual router, or other gateway.
According to some examples, the extensible gateway 840 may be used to transfer traffic from a first VPC to one or more second VPCs when the first VPC becomes overloaded. For example, the extensible gateway 840 may monitor traffic demand and capacity within the first VPC and the one or more second VPCs. When the load on VM_A 238 in the first VPC becomes too high relative to its capacity, the extensible gateway can transfer traffic to the one or more second VPCs using a peer-to-peer connection established by the OHC 234 of the first VPC. According to some examples, the extensible gateway 840 may prompt the OHC 234 to initiate peering and/or request connectivity information from the SDN controller 252 of the second VPC. Accordingly, VM_B 258 of the second VPC may share some of the load of VM_A 238.
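A rough sketch of that monitoring and offload decision; the threshold, names, and return values are assumptions, not part of the disclosure:

    class ScalableGateway:
        """Watches utilization in the first VPC and shifts new traffic to a
        peer VPC over the already-established proxyless peering."""

        def __init__(self, overload_threshold: float = 0.8):
            self.threshold = overload_threshold

        def route_new_flow(self, local_load: float, peer_load: float) -> str:
            if local_load > self.threshold and peer_load < self.threshold:
                return "peer-vpc"       # offload across the peering connection
            return "local-vpc"

    gateway = ScalableGateway()
    print(gateway.route_new_flow(local_load=0.92, peer_load=0.40))  # -> peer-vpc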
Fig. 9 shows an example of multiplexing across networks. Each VPC publishes resources 950, e.g., load balancers, virtual machines, BGP routes, and peers, to a peer multiplexer 910 in the SDN controller 902. The peer multiplexer 910 uses such published resources to derive a virtual network state 930. The virtual network state 930 may indicate all resources in the network, such as all virtual machines across all VPCs, all load balancers across all VPCs, all BGP routes across all VPCs, and so on.
The peer multiplexer 910 in the SDN controller 902 may be used to apply peering policies when processing updates of endpoints, ILBs, subnets, and routes. The peer multiplexer 910 may track which peering-related services need to be updated for each resource. Further, for each peering policy, the peer multiplexer 910 may create a peering-related service by extracting the resource state from a resource manager. According to the policy, the peer multiplexer 910 may filter the programming of the peering-related services.
The peer manager 920 maintains a map of all peering policies related to the VPC. For example, for each peering policy, it maintains a list of identifiers of the peer networks that are using the policy. These policies may include, for example, filtering policies or other types of policies. A filtering policy may determine whether to allow a particular protocol, whether to allow a particular route, whether to allow a particular address or subnet, or any combination of these or other parameters. Each pair of peered VPCs may have an independent policy, referred to as an active export policy, which may be managed by the peer manager 920.
The SDN controller 902 may publish information about routes to resources such as virtual machines and load balancers. The SDN controller 902 may also publish information for federated Border Gateway Protocol (BGP) routes. Federation refers to all SDN controllers communicating with each other to exchange routes and resolve route conflicts. For example, the peer multiplexer 910 may provide federated route-exchange information, such as when communicating between the route manager and the peer route manager shown in fig. 7. The peer multiplexer 910 may provide policies to the routing service, such as when an SDN controller provides peering information to the OHCs.
Different services may be published according to each connection policy of each VPC. For example, different services may be published for IPv6-only policies, policies that export specific IP ranges, private-address-only policies, and so on. Resources may be published simultaneously on multiple services. Multiplexing over the resources may be performed according to the export policy of the VPC.
The SDN controller 902 may provide policy-based optimization and filtering. The SDN controller 902 may use the peer manager 920 to determine the unique set of active peer exchange policies for each VPC. The peer multiplexer 910 then publishes only one resource set per unique exchange policy for each VPC. The SDN controllers and programmable hosts responsible for the peer VPCs then subscribe to the appropriate set of resources according to the exchange policy identifier. Such a multiplexer architecture results in a much smaller set of published services than an architecture in which a different set of resources is published between each pair of peered VPCs.
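The optimization, in other words, is to publish one resource set per unique export policy rather than one per VPC pairing. A minimal sketch (the policy representation is an assumption):

    from collections import defaultdict

    def plan_publications(peerings, policy_of):
        """Group VPC pairings by their active export policy so that each
        distinct policy is multiplexed into a single published service."""
        by_policy = defaultdict(list)
        for pair in peerings:
            by_policy[policy_of[pair]].append(pair)
        return dict(by_policy)

    peerings = [("vpc1", "vpc2"), ("vpc1", "vpc3"), ("vpc2", "vpc3")]
    policy_of = {
        ("vpc1", "vpc2"): "ipv6-only",
        ("vpc1", "vpc3"): "private-addresses-only",
        ("vpc2", "vpc3"): "ipv6-only",
    }
    services = plan_publications(peerings, policy_of)
    # Two published services instead of three:
    # {"ipv6-only": [("vpc1", "vpc2"), ("vpc2", "vpc3")],
    #  "private-addresses-only": [("vpc1", "vpc3")]}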
Fig. 10 illustrates an example method 1000 of establishing a peer-to-peer connection between VPCs. Although the operations are described in a particular order, the order may be modified. In some examples, operations may be performed in parallel. In some examples, operations may be added or omitted. In method 1000, operations are described from the perspective of an OHC in a source network establishing peering with a target network. The corresponding operations may be performed by other components, such as SDN controllers, packet switches, etc.
In block 1010, an OHC in a source VPC receives programming information from an SDN controller within the source VPC. The programming information may include, for example, an identification of the peer network, a policy identifier, and information for the source OHC to reach the target SDN controller of the target VPC.
In block 1020, the source OHC receives connectivity information from a target SDN controller in the target VPC. For example, such connectivity information may include a destination address in the target VPC, such as an IP address of a target virtual machine in the target VPC. Such connectivity information may also include forwarding information for sending packets to the target VPC. The source OHC may subscribe to the services of the target VPC, thereby receiving updates of the connectivity information from the target VPC.
In block 1030, the source OHC uses the received connectivity information to establish a peer-to-peer connection from the source VPC to the target VPC. Such a peer-to-peer connection may be established based on the received defined connectivity information, without requiring a replica of the entire target VPC. Furthermore, such a peer-to-peer connection may be established without any proxies.
In block 1040, the source OHC provides packet information to a packet switch in the source VPC. Such packet information may be used for direct communication between a first virtual machine in the source VPC and a second virtual machine in the target VPC. The packet information may include, for example, destination, forwarding, and authentication information received from the target SDN controller. Thus, the source packet switch can exchange packets directly with the target VPC without the need for a proxy.
Unless otherwise specified, the foregoing alternative embodiments are not mutually exclusive, but may be implemented in various combinations to achieve unique advantages. As these and other variations and combinations of the features discussed above can be utilized without departing from the subject matter defined by the claims, the foregoing description of the embodiments should be taken by way of illustration rather than by way of limitation of the subject matter defined by the claims. In addition, the provision of the examples described herein, as well as clauses phrased as "such as," "including," and the like, should not be interpreted as limiting the claimed subject matter to the specific examples; rather, these examples are merely illustrative of one of many possible embodiments. Furthermore, the same reference numbers in different drawings may identify the same or similar elements.

Claims (20)

1. A system, comprising:
a Software Defined Network (SDN) controller;
a host controller; and
a programmable packet switch;
wherein the SDN controller, the host controller, and the programmable packet switch are communicatively coupled within a first network;
wherein the host controller is programmed to receive defined connectivity information from a target network and to establish an agentless peer-to-peer connection between the first network and the target network using the defined connectivity information.
2. The system of claim 1, further comprising a first virtual machine within the first network.
3. The system of claim 2, wherein the first virtual machine and the host controller reside on the same host.
4. The system of claim 1, wherein the first network is a Virtual Private Cloud (VPC).
5. The system of claim 1, wherein the defined connectivity information includes a destination address in the target network.
6. The system of claim 5, wherein the defined connectivity information further comprises forwarding information.
7. The system of claim 1, wherein the programmable packet switch uses the defined connectivity information to transfer packets from a first virtual machine in the first network to a second virtual machine in the target network.
8. The system of claim 1, wherein the first network comprises a plurality of endpoints, and wherein each endpoint in the first network is adapted to communicate directly with each endpoint in the target network using the agentless peer-to-peer connection.
9. The system of claim 1, further comprising a plurality of SDN controllers in a fully sharded control plane, each of the plurality of SDN controllers being responsible for a subset of Virtual Private Cloud (VPC) networks in a peer group, each VPC network having an associated host controller, wherein each of the plurality of SDN controllers is coupled to each associated host controller.
10. The system of claim 1, further comprising an extensible gateway between the programmable packet switch of the first network and a second programmable packet switch of the target network.
11. A method of establishing a peer-to-peer connection between a first network and a second network, the method comprising:
receiving, at a host controller in the first network, programming information from a first controller within the first network;
receiving, at the host controller in the first network, connectivity information from a second controller within the second network;
establishing an agentless peer-to-peer connection between the first network and the second network using at least the connectivity information; and
providing, to a packet switch within the first network, packet information for direct communication between a first endpoint in the first network and a second endpoint in the second network.
12. The method of claim 11, wherein the first endpoint is a virtual machine.
13. The method of claim 11, wherein receiving the connectivity information from the second controller comprises subscribing to the second controller using the programming information from the first controller.
14. The method of claim 11, wherein the connectivity information includes destination addresses and forwarding information in the second network.
15. The method of claim 11, further comprising transferring, by a programmable packet switch, packets from a first virtual machine in the first network to a second virtual machine in the second network using the connectivity information.
16. A Software Defined Network (SDN) controller executable on one or more processors for performing a method of establishing a peer-to-peer connection between a first network comprising the SDN controller and a second network, the SDN controller being configured to:
provide programming information to a host controller in the first network, the programming information identifying the second network and subscribing the host controller to a second SDN controller in the second network.
17. The SDN controller of claim 16, comprising a peer multiplexer configured to receive information from a plurality of resources and to publish routing information for communication with the resources.
18. The SDN controller of claim 17, comprising a peer manager maintaining one or more export policies of the first network, wherein the peer multiplexer multiplexes on the resources based on the export policies.
19. The SDN controller of claim 16, comprising a routing manager configured to receive routing information from one or more other SDN controllers.
20. The SDN controller of claim 19, wherein the SDN controller is configured to identify a routing conflict based on the received routing information and resolve the conflict using a predefined set of rules.

Applications Claiming Priority (3)

US 63/344,842, priority date 2022-05-23
US 17/859,558, filed 2022-07-07
US 17/859,558, published as US 2023/0379191 A1 (priority date 2022-05-23; filed 2022-07-07): Full Mesh Proxyless Connectivity Between Networks

Publications (1)

CN117118778A (en), published 2023-11-24

Family

ID=88797138

Family Applications (1)

CN202211450981.0A (pending), priority date 2022-05-23, filed 2022-11-18: Full mesh proxy-less connection between networks

Country Status (1)

CN: CN117118778A (en)


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination