WO2020264323A1 - Provider network connectivity management for provider network substrate extensions


Info

Publication number
WO2020264323A1
WO2020264323A1 (PCT/US2020/039859)
Authority
WO
WIPO (PCT)
Prior art keywords
pse
provider network
sad
compute
network
Prior art date
Application number
PCT/US2020/039859
Other languages
English (en)
Inventor
Anthony Nicholas Liguori
Eric Samuel Stone
Richard H. Galliher
David James GOODELL
Patrick John LAWRENCE
Yang Lin
William Ashley
Steven Anthony KADY
Original Assignee
Amazon Technologies, Inc.
Priority date
Filing date
Publication date
Priority claimed from US16/457,827 external-priority patent/US11374789B2/en
Priority claimed from US16/457,824 external-priority patent/US11659058B2/en
Application filed by Amazon Technologies, Inc. filed Critical Amazon Technologies, Inc.
Priority to EP20742587.7A priority Critical patent/EP3987397A1/fr
Priority to CN202080047186.XA priority patent/CN114026826B/zh
Publication of WO2020264323A1 publication Critical patent/WO2020264323A1/fr

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 Data switching networks
    • H04L 12/28 Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L 12/46 Interconnection of networks
    • H04L 12/4633 Interconnection of networks using encapsulation techniques, e.g. tunneling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061 Partitioning or combining of resources
    • G06F 9/5077 Logical partitioning of resources; Management or configuration of virtualized resources
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/08 Configuration management of networks or network elements
    • H04L 41/0803 Configuration setting
    • H04L 41/0806 Configuration setting for initial configuration or provisioning, e.g. plug-and-play
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 63/00 Network architectures or network communication protocols for network security
    • H04L 63/04 Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks
    • H04L 63/0428 Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks wherein the data content is protected, e.g. by encrypting or encapsulating the payload
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/1097 Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/12 Discovery or management of network topologies
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/12 Discovery or management of network topologies
    • H04L 41/122 Discovery or management of network topologies of virtualised topologies, e.g. software-defined networks [SDN] or network function virtualisation [NFV]

Definitions

  • virtualization technologies may allow a single physical computing machine to be shared among multiple users by providing each user with one or more virtual machines hosted by the single physical computing machine.
  • Each such virtual machine is a software simulation acting as a distinct logical computing system that provides users with the illusion that they are the sole operators and administrators of a given hardware computing resource, while also providing application isolation and security among the various virtual machines.
  • some virtualization technologies are capable of providing virtual resources that span two or more physical resources, such as a single virtual machine with multiple virtual processors that spans multiple distinct physical computing systems.
  • virtualization technologies may allow data storage hardware to be shared among multiple users by providing each user with a virtualized data store which may be distributed across multiple data storage devices, with each such virtualized data store acting as a distinct logical data store that provides users with the illusion that they are the sole operators and administrators of the data storage resource.
  • a wide variety of virtual machine types, optimized for different types of applications such as compute-intensive applications, memory-intensive applications, and the like may be set up at the data centers of some cloud computing provider networks in response to client requests.
  • higher-level services that rely upon the virtual computing services of such provider networks such as some database services whose database instances are instantiated using virtual machines of the virtual computing services, may also be made available to provider network clients.
  • FIG. 1 is a block diagram illustrating an example provider network extended by a provider substrate extension located within a network external to the provider network according to at least some embodiments.
  • FIG. 2 is a block diagram illustrating an example provider substrate extension according to at least some embodiments.
  • FIG. 3 is a block diagram illustrating an example connectivity between a provider network and a provider substrate extension according to at least some embodiments.
  • FIG. 4 is a block diagram illustrating an example system for configuring a provider network for communication with a provider substrate extension according to at least some embodiments.
  • FIG. 5 is a block diagram illustrating an example system for maintaining communications between a provider network and a provider substrate extension according to at least some embodiments.
  • FIG. 6 is a flow diagram illustrating operations of a method for configuring a provider network for communication with a provider substrate extension according to at least some embodiments.
  • FIG. 7 is a flow diagram illustrating operations of a method for communicating with a provider substrate extension for communication with a network external to a provider network according to at least some embodiments.
  • FIG. 8 illustrates an example provider network environment according to at least some embodiments.
  • FIG. 9 is a block diagram of an example provider network that provides a storage service and a hardware virtualization service to customers according to at least some embodiments.
  • FIG. 10 is a block diagram illustrating an example computer system that may be used in at least some embodiments.

DETAILED DESCRIPTION

  • the present disclosure relates to methods, apparatus, systems, and non-transitory computer-readable storage media for configuring a provider substrate extension for communication with a network external to a provider network.
  • a provider network operator provides its users (or customers) with the ability to utilize one or more of a variety of types of computing-related resources such as compute resources (e.g., executing virtual machines (VMs) and/or containers, executing batch jobs, executing code without provisioning servers), data/storage resources (e.g., object storage, block-level storage, data archival storage, databases and database tables, etc.), network-related resources (e.g., configuring virtual networks including groups of compute resources, content delivery networks (CDNs), Domain Name Service (DNS)), application resources (e.g., databases, application build/deployment services), access policies or roles, identity policies or roles, machine images, routers and other data processing resources, etc.
  • virtualization technologies may be used to provide users the ability to control or utilize compute instances (e.g., a VM using a guest operating system (OS) that operates using a hypervisor that may or may not further operate on top of an underlying host OS, a container that may or may not operate in a VM, an instance that can execute on“bare metal” hardware without an underlying hypervisor), where one or multiple compute instances can be implemented using a single electronic device.
  • a user may directly utilize a compute instance provided by an instance management service (sometimes called a hardware virtualization service) hosted by the provider network to perform a variety of computing tasks.
  • a user may indirectly utilize a compute instance by submitting code to be executed by the provider network (e.g., via an on-demand code execution service), which in turn utilizes a compute instance to execute the code - typically without the user having any control of or knowledge of the underlying compute instance(s) involved.
  • the resources that support both the services offering computing-related resources to users and those computing-related resources provisioned to users may be generally referred to as the provider network substrate.
  • Such resources typically include hardware and software in the form of many networked computer systems.
  • the traffic and operations of the provider network substrate may broadly be subdivided into two categories in various embodiments: control plane traffic carried over a logical control plane and data plane operations carried over a logical data plane. While the data plane represents the movement of user data through the distributed computing system, the control plane represents the movement of control signals through the distributed computing system.
  • the control plane generally includes one or more control plane components distributed across and implemented by one or more control servers.
  • Control plane traffic generally includes administrative operations, such as establishing isolated virtual networks for various customers, monitoring resource usage and health, identifying a particular host or server at which a requested compute instance is to be launched, provisioning additional hardware as needed, and so on.
  • the data plane includes customer resources that are implemented on the provider network (e.g., computing instances, containers, block storage volumes, databases, file storage).
  • Data plane traffic generally includes non-administrative operations such as transferring data to and from the customer resources.
  • the control plane components are typically implemented on a separate set of servers from the data plane servers, and control plane traffic and data plane traffic may be sent over separate/distinct networks. In some embodiments, control plane traffic and data plane traffic can be supported by different protocols.
  • messages sent over the provider network include a flag to indicate whether the traffic is control plane traffic or data plane traffic.
  • the payload of traffic may be inspected to determine its type (e.g., whether control or data plane). Other techniques for distinguishing traffic types are possible.
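  • For illustration only, the following Python sketch shows how a receiver might apply the two techniques just described (an explicit flag, with payload inspection as a fallback); the packet fields and the payload heuristic are hypothetical and not part of the disclosure.

        from enum import Enum

        class Plane(Enum):
            CONTROL = 0
            DATA = 1

        def classify(packet: dict) -> Plane:
            # Prefer an explicit flag when the sender set one.
            if "plane_flag" in packet:
                return Plane(packet["plane_flag"])
            # Otherwise inspect the payload; this stand-in heuristic treats
            # payloads that look like API messages as control plane traffic.
            payload = packet.get("payload", b"")
            return Plane.CONTROL if payload.startswith(b"API:") else Plane.DATA

        print(classify({"plane_flag": 1}))                    # Plane.DATA
        print(classify({"payload": b"API:LaunchInstance"}))   # Plane.CONTROL
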
  • While some customer applications are readily migrated to a provider network environment, some customer workloads need to remain on premises ("on-prem") due to low latency, high data volume, data security, or other customer data processing requirements.
  • exemplary on-prem environments include customer data centers, robotics integrations, field locations, co-location facilities, telecommunications facilities (e.g., near cell towers), and the like.
  • the present disclosure relates to the deployment of substrate-like resources on-prem in the form of a provider substrate extension (PSE), i.e., resources (e.g., hardware, software, firmware, configuration metadata, and the like) that a customer can deploy on-prem (such as in a geographically separate location from the provider network) but that provide the same or similar functionality (e.g., virtualized computing resources) as are provided in the provider network.
  • resources may be physically delivered as one or more computer systems or servers delivered in a rack or cabinet such as those commonly found in on-prem locations.
  • the PSE can provide the customer with a set of features and capabilities that can be deployed on-prem similar to those features of a provider network described above.
  • a PSE represents a local extension of the capabilities of the provider network that can be set up at any desired physical location that can accommodate a PSE (e.g., with respect to physical space, electrical power, internet access, etc.).
  • a PSE may be considered to be virtually located in the same provider network data centers as the core provider network substrate while being physically located in a customer-selected deployment site.
  • the customer that is physically hosting the PSE can grant permissions to its own customers (e.g., other users of the provider network) to allow those users to launch instances to host their respective workloads within the PSE at the customer’s on-prem location and, in some cases, to allow those workloads to access the customer’s network.
  • a PSE may be pre-configured, e.g., by the provider network operator, with the appropriate combination of hardware, software and/or firmware elements to support various types of computing -related resources, and to do so in a manner that meets various local data processing requirements without compromising the security of the provider network itself or of any other customers of the provider network.
  • a PSE generally is managed through the same or a similar set of interfaces that the customer would use to access computing-related resources within the provider network.
  • the customer can provision, manage, and operate computing-related resources within their on-prem PSE or PSEs at various deployment sites through the provider network using the same application programming interfaces (APIs) or console-based interface that they would otherwise use to provision, manage, and operate computing-related resources within the provider network.
  • resources of the provider network instantiate various networking components to ensure secure and reliable communications between the provider network and the PSE.
  • Such components can establish one or more secure tunnels (e.g., VPNs) with the PSE.
  • Such components can further divide control plane traffic and data plane traffic and process each type of traffic differently based on factors including the direction of the traffic (e.g., to or from the PSE).
  • a control plane service dynamically provisions and configures these networking components for deployed PSEs.
  • Such a control plane service can monitor the networking components for each PSE and invoke self-healing or repair mechanisms designed to prevent communications with the PSE from being lost due to faults occurring within the provider network.
  • the PSE offers a variety of connectivity options to allow other resources of the customer (i.e., connected to the customer’s on-prem network) to communicate with computing-related resources hosted by the PSE.
  • a PSE gateway manages communications between the PSE and the other customer resources.
  • the customer can configure the PSE gateway by issuing one or more API calls to an interface of the provider network which results in control plane commands being sent to the PSE.
  • the PSE in turn handles traffic sent or relayed to the PSE by other devices in the customer’s on-prem site and vice versa.
  • PSEs can require secure networking tunnels from the customer site at which they are installed to the provider network substrate (e.g., the physical network of machines) in order to operate.
  • These tunnels can include virtual infrastructure components hosted both in virtualized computing instances (e.g., VMs) and on the substrate. Examples of tunnel components include VPCs and proxy computing instances and/or containers running on computing instances.
  • Each server in a PSE may use at least two tunnels, one for control plane traffic and one for data plane traffic.
  • intermediary resources positioned along the network path between the provider network substrate and the PSE can securely manage traffic flowing between the substrate and the PSE.
  • the provider network is a cloud provider network.
  • a cloud provider network refers to a large pool of accessible virtualized computing resources (such as compute, storage, and networking resources, applications, and services).
  • the cloud can provide convenient, on-demand network access to a shared pool of configurable computing resources that can be programmatically provisioned and released in response to customer commands. These resources can be dynamically provisioned and reconfigured to adjust to variable load.
  • Cloud computing can thus be considered as both the applications delivered as services over a publicly accessible network (e.g., the Internet, a cellular communication network) and the hardware and software in cloud provider data centers that provide those services.
  • a cloud provider network can be formed as a number of regions, where a region is a geographical area in which the cloud provider clusters data centers. Each region can include two or more availability zones connected to one another via a private high-speed network, for example a fiber communication connection.
  • An availability zone refers to an isolated failure domain including one or more data center facilities with separate power, separate networking, and separate cooling from those in another availability zone. Preferably, availability zones within a region are positioned far enough away from one another that the same natural disaster should not take more than one availability zone offline at the same time.
  • Customers can connect to availability zones of the cloud provider network via a publicly accessible network (e.g., the Internet, a cellular communication network).
  • a PSE as described herein can also connect to one or more availability zones via a publicly accessible network.
  • the cloud provider network can include a physical network (e.g., sheet metal boxes, cables) referred to as the substrate.
  • the cloud provider network can also include an overlay network of virtualized computing resources that run on the substrate.
  • network packets can be routed along a substrate network according to constructs in the overlay network (e.g., VPCs, security groups).
  • a mapping service can coordinate the routing of these network packets.
  • the mapping service can be a regional distributed lookup service that maps the combination of overlay IP and network identifier to substrate IP so that the distributed substrate computing devices can look up where to send packets.
  • each physical host can have an IP address in the substrate network.
  • Hardware virtualization technology can enable multiple operating systems to run concurrently on a host computer, for example as virtual machines on the host.
  • a hypervisor, or virtual machine monitor, on a host allocates the host’s hardware resources amongst various virtual machines on the host and monitors the execution of the virtual machines.
  • Each virtual machine may be provided with one or more IP addresses in the overlay network, and the virtual machine monitor on a host may be aware of the IP addresses of the virtual machines on the host.
  • the virtual machine monitors may use encapsulation protocol technology to encapsulate and route network packets (e.g., client IP packets) over the network substrate between virtualized resources on different hosts within the cloud provider network.
  • the encapsulation protocol technology may be used on the network substrate to route encapsulated packets between endpoints on the network substrate via overlay network paths or routes.
  • the encapsulation protocol technology may be viewed as providing a virtual network topology overlaid on the network substrate.
  • the encapsulation protocol technology may include the mapping service that maintains a mapping directory that maps IP overlay addresses (public IP addresses) to substrate IP addresses (private IP addresses), which can be accessed by various processes on the cloud provider network for routing packets between endpoints.
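  • A minimal sketch of such a mapping directory, with made-up overlay addresses and identifiers standing in for real entries, is a lookup keyed by the combination of overlay address and network (IVN) identifier:

        # Hypothetical entries: (overlay IVN address, IVN identifier) -> substrate
        # address of the host currently running that virtualized resource.
        mapping_directory = {
            ("10.0.0.5", "ivn-102"): "192.168.100.1",
            ("10.0.0.7", "ivn-102"): "192.168.100.2",
        }

        def lookup_substrate_ip(overlay_ip: str, ivn_id: str) -> str:
            # A production mapping service would be a regional, distributed
            # lookup service; a plain dictionary stands in for it here.
            return mapping_directory[(overlay_ip, ivn_id)]

        assert lookup_substrate_ip("10.0.0.5", "ivn-102") == "192.168.100.1"
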
  • certain embodiments may be capable of achieving various advantages, including some or all of the following: (a) enabling customers of a provider network operator to deploy a wide variety of applications in a location- independent manner using provider-managed infrastructure (e.g., PSEs) at sites selected by customers while still retaining the scalability, security, availability and other operational advantages made possible by a provider network, (b) reducing the amount of application data and results that have to be transferred over long distances, such as over links between customer data centers and provider network data centers, (c) improving the overall latencies and responsiveness of applications for which potentially large amounts of data may be consumed as input or produced as output, by moving the applications close to the data sources/destinations, and/or (d) improving the security of sensitive application data.
  • FIG. 1 is a block diagram illustrating an example provider network extended by a provider substrate extension located within a network external to the provider network according to at least some embodiments.
  • Within the provider network 100, customers can create one or more isolated virtual networks (IVNs) 102.
  • Customers can launch compute instances 101 within an IVN to execute their applications. These compute instances 101 are hosted by substrate addressable devices (SADs) that are part of the provider network substrate (not shown).
  • SADs that are part of the provider network substrate can host control plane services 104.
  • Exemplary control plane services 104 include an instance management service (sometimes referred to as a hardware virtualization service) that allows a customer or other control plane service to launch and configure instances and/or IVNs, an object storage service that provides object storage, a block storage service that provides the ability to attach block storage devices to instances, database services that provide various database types, etc.
  • the components illustrated within the provider network 100 can be treated as logical components. As mentioned, these components are hosted by the SADs of the provider network substrate (not shown).
  • the provider network substrate can host the instances 101 using containers or virtual machines that operate within isolated virtual networks (IVNs). Such containers or virtual machines are executed by SADs.
  • the provider network substrate can host one or more of the control plane services 104 using SADs in a bare metal configuration (e.g., without virtualization).
  • a SAD refers to the software (e.g., a server) executed by the hardware that is addressable via a network address of the provider network rather than of another network (e.g., a customer network, an IVN, etc.).
  • a SAD may additionally refer to the underlying hardware (e.g., computer system) executing the software.
  • the provider network 100 is in communication with a provider substrate extension (PSE) 188 deployed within customer network 185 and a PSE 198 deployed within customer network 195.
  • Each PSE includes one or more substrate addressable devices (SADs), such as SADs 189A-189N illustrated within PSE 188.
  • SADs 189 facilitate the provisioning of computing-related resources within the PSE.
  • In FIG. 1 and subsequent drawings, the illustration of a solid box-ellipses-dashed box combination for a component, such as SADs 189A-189N, generally indicates that there may be one or more of those components (although references in the corresponding text may refer to the singular or plural form of the component and with or without the letter suffix).
  • a customer gateway/router 186 provides connectivity between the provider network 100 and the PSE 188 as well as between the PSE 188 and other customer resources 187 (e.g., other on-prem servers or services connected to the customer network 185).
  • a customer gateway/router 196 provides connectivity between the provider network 100 and the PSE 198 as well as between the PSE 198 and other customer resources 197.
  • control plane traffic 106 generally (though not always) is directed to SADs, while data plane traffic 104 generally (though not always) is directed to instances.
  • SADs can vend an API that allows for the launch and termination of instances.
  • a control plane service 104 can send a command via the control plane to the API of such a SAD to launch a new instance in IVN 102.
  • An IVN may comprise a set of hosted (e.g., virtualized) resources that is logically isolated or separated from other resources of the provider network (e.g., other IVNs).
  • a control plane service can set up and configure IVNs, including assigning each IVN an identifier to distinguish it from other IVNs.
  • the provider network can offer various ways to permit communications between IVNs, such as by setting up peering relationships between IVNs (e.g., a gateway in one IVN configured to communicate with a gateway in another IVN).
  • IVNs can be established for a variety of purposes. For example, an IVN may be set up for a particular customer by setting aside a set of resources for exclusive use by the customer, with substantial flexibility with respect to networking configuration for that set of resources being provided to the customer. Within their IVN, the customer may set up subnets, assign desired private IP addresses to various resources, set up security rules governing incoming and outgoing traffic, and the like. At least in some embodiments, by default the set of private network addresses set up within one IVN may not be accessible from another IVN (or more generally from outside the IVN).
  • Tunneling techniques facilitate the traversal of IVN traffic between instances hosted by different SADs on the provider network 100.
  • a newly launched instance within IVN 102 might have an IVN address A and be hosted by a SAD with a substrate address X, while the instance 101 might have IVN address B and be hosted by a SAD with a substrate address Y.
  • SAD X encapsulates a packet sent from the newly launched instance to the instance 101 (from IVN address A to IVN address B) within a payload of a packet having addressing information of the SADs that host the respective instances (from substrate address X to substrate address Y).
  • the packet sent between the SADs can further include an identifier of IVN 102 to indicate the data is destined for IVN 102 as opposed to another IVN hosted by the SAD with substrate address Y.
  • the SAD further encrypts the packet sent between instances within the payload of the packet sent between SADs using an encryption key associated with the IVN.
  • the encapsulation and encryption are performed by a software component of the SAD hosting the instance.
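  • A simplified sketch of this encapsulation is shown below; the packet layout, the XOR stand-in for the per-IVN cipher, and the addresses are illustrative assumptions rather than the formats actually used on the substrate.

        import json

        def xor_cipher(key: bytes, data: bytes) -> bytes:
            # Stand-in for encryption under the key associated with the IVN; a
            # real implementation would use an authenticated cipher, not XOR.
            return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

        def encapsulate(inner_src, inner_dst, payload, ivn_id, ivn_key,
                        substrate_src, substrate_dst):
            # Inner packet: addressed between instances (IVN address A -> B).
            inner = json.dumps({"src": inner_src, "dst": inner_dst,
                                "payload": payload.hex()}).encode()
            # Outer packet: addressed between the hosting SADs (substrate X -> Y)
            # and carrying the IVN identifier so the receiving SAD can
            # demultiplex it to the right IVN.
            return {"src": substrate_src, "dst": substrate_dst, "ivn": ivn_id,
                    "payload": xor_cipher(ivn_key, inner)}

        outer = encapsulate("10.0.0.5", "10.0.0.7", b"hello", "ivn-102",
                            b"per-ivn-key", "192.168.100.1", "192.168.100.2")
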
  • the provider network 100 includes one or more networking components to effectively extend the provider network substrate outside of the provider network 100 to the PSE connected to the customer’s on-prem network.
  • Such components can ensure that data plane and control plane operations that target a PSE are securely, reliably, and transparently communicated to the PSE.
  • a PSE interface 108, a PSE SAD proxy 110, and a PSE SAD anchor 112 facilitate data and control plane communications between the provider network 100 and the PSE 188.
  • a PSE interface 118, a PSE SAD proxy 120, and a PSE SAD anchor 122 facilitate data and control plane communications between the provider network 100 and the PSE 198.
  • PSE interfaces receive control and data plane traffic from the provider network, send such control plane traffic to a PSE SAD proxy, and send such data plane traffic to a PSE.
  • PSE interfaces also receive data plane traffic from the PSE and send such data plane traffic to the appropriate destination within the provider network.
  • PSE SAD proxies receive control plane traffic from PSE interfaces and send such control plane traffic to PSE SAD anchors.
  • PSE SAD anchors receive control plane traffic from PSE SAD proxies and send such control plane traffic to a PSE.
  • PSE SAD anchors also receive control plane traffic from a PSE and send such control plane traffic to a PSE SAD proxy.
  • PSE SAD proxies also receive control plane traffic from PSE SAD anchors and send such control plane traffic to the appropriate destination within the provider network.
  • Other embodiments may employ different combinations or configurations of networking components to facilitate communications between the provider network 100 and PSEs (e.g., the functionality of the PSE interface, PSE SAD proxy, and/or PSE SAD anchors may be combined in various ways such as by an application that performs the operations of both a PSE interface and a PSE SAD proxy, of both a PSE SAD proxy and a PSE SAD anchor, of all three components, and so on).
  • each PSE has one or more substrate network addresses for the SADs (e.g., SADs 189A-189N). Since those substrate addresses are not directly reachable via the provider network 100, the PSE interfaces 108, 118 masquerade with attached virtual network addresses (VNAs) matching the substrate addresses of the respective PSE. As illustrated, the PSE interface 108 has attached VNA(s) 150 that match the PSE 188 SAD address(es), and the PSE interface 118 has attached VNA(s) 152 that match the PSE 198 SAD address(es).
  • a VNA is a logical construct enabling various networking-related attributes such as IP addresses to be programmatically transferred between instances. Such transfers can be referred to as“attaching” a VNA to an instance and“detaching” a VNA from an instance.
  • a PSE interface is effectively a packet forwarding component that routes traffic based on whether that traffic is control plane traffic or data plane traffic. Note that both control and data plane traffic are routed to a PSE interface as both are destined for a SAD given the substrate addressing and encapsulation techniques described above. In the case of control plane traffic, a PSE interface routes the traffic to the PSE SAD proxy based on the SAD address. In the case of data plane traffic, a PSE interface establishes and serves as an endpoint to one or more encrypted data plane traffic tunnels between the provider network 100 and PSEs (e.g., tunnel 191 between PSE interface 108 and PSE 188, tunnel 193 between PSE interface 118 and PSE 198).
  • a PSE interface For data plane traffic received from the provider network 100, a PSE interface encrypts the traffic for transmission over the tunnel to the PSE. For data plane traffic received from the PSE, the PSE interface decrypts the traffic, optionally validating the SAD-addressing of the packets, and sends the traffic to the identified SAD destination via the provider network 100. Note that if the PSE interface receives traffic from the PSE that does not conform to the expected format (e.g., protocol) used to transmit data plane traffic, the PSE interface can drop such traffic.
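  • The forwarding decision made by a PSE interface can be sketched roughly as follows; the class, method, and field names are hypothetical, and the Tunnel stub stands in for the actual encrypted tunnel (e.g., a VPN).

        CONTROL, DATA = "control", "data"

        class Tunnel:
            # Placeholder for an encrypted data plane tunnel endpoint; key
            # exchange and real encryption are out of scope for this sketch.
            def encrypt(self, packet):
                return ("ciphertext", packet)

            def decrypt(self, ciphertext):
                return ciphertext[1]

        class PseInterface:
            def __init__(self, proxy_addr, tunnel, pse_sad_addrs):
                self.proxy_addr = proxy_addr        # PSE SAD proxy for this SAD
                self.tunnel = tunnel                # encrypted tunnel to the PSE
                self.pse_sad_addrs = pse_sad_addrs  # substrate addresses of the PSE's SADs

            def from_provider_network(self, packet):
                if packet["plane"] == CONTROL:
                    # Control plane traffic is handed to the PSE SAD proxy.
                    return ("to_proxy", self.proxy_addr, packet)
                # Data plane traffic is encrypted and sent over the tunnel.
                return ("to_tunnel", self.tunnel.encrypt(packet))

            def from_pse(self, ciphertext):
                packet = self.tunnel.decrypt(ciphertext)
                # Traffic that does not conform to the expected data plane
                # format (approximated here by a SAD-addressing check) is dropped.
                if packet.get("plane") != DATA or packet.get("src") not in self.pse_sad_addrs:
                    return ("drop", None)
                return ("to_substrate", packet["dst"], packet)

        iface = PseInterface("10.1.1.1", Tunnel(), {"192.168.100.1"})
        print(iface.from_provider_network({"plane": DATA, "dst": "192.168.100.1"}))
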
  • Each SAD in the PSE has a corresponding group of one or more PSE interfaces and each member of the group establishes one or more tunnels for data plane traffic with the PSE. For example, if there are four PSE interfaces for a PSE having four SADs, the PSE interfaces each establish a secure tunnel with a data plane traffic endpoint for each of the SADs (e.g., sixteen tunnels). Alternatively, a group of PSE interfaces may be shared by multiple SADs by attaching the associated VNAs to each member of the group.
  • Each PSE has one or more PSE SAD proxies and one or more PSE SAD anchors that handle control plane traffic between the provider network 100 and the SADs of a PSE.
  • Control plane traffic typically has a command-response or request-response form.
  • a control plane service of the provider network 100 can issue a command to a PSE SAD to launch an instance. Since management of PSE resources is facilitated from the provider network, control plane commands sent over the secure tunnel typically should not originate from a PSE.
  • a PSE SAD proxy acts as a stateful security boundary between the provider network 100 and a PSE (such a boundary is sometimes referred to as a data diode).
  • a PSE SAD proxy can employ one or more techniques such as applying various security policies or rules to received control plane traffic.
  • control plane services 104 can indirectly or directly offer a public-facing API to allow instances hosted by a PSE to issue commands to the provider network 100 via non-tunneled communications (e.g., over a public network such as the internet).
  • a PSE SAD proxy can provide a control plane endpoint API of its corresponding SAD within the PSE.
  • a PSE SAD proxy for a PSE SAD that can host instances can provide an API consistent with one that can receive control plane operations to launch, configure, and terminate instances.
  • the PSE SAD proxy can perform various operations. For some operations, the PSE SAD proxy can pass the operation and associated parameters without modification through to the destination SAD.
  • a PSE SAD proxy can verify that the parameters of a received API call from within the provider network 100 are well-formed relative to the API before passing through those operations.
  • the PSE SAD proxy can act as an intermediary to prevent sensitive information from being sent outside of the provider network 100.
  • exemplary sensitive information includes cryptographic information such as encryption keys, network certificates, etc.
  • a PSE SAD proxy can decrypt data using a sensitive key and re-encrypt the data using a key that can be exposed to a PSE.
  • a PSE SAD proxy can terminate a first secure session (e.g., a Transport Layer Security (TLS) session) originating from within the provider network 100 and create a new secure session with the corresponding SAD using a different certificate to prevent provider network certificates from leaking to the PSE.
  • a PSE SAD proxy can receive certain API calls from within the provider network 100 that include sensitive information and issue a substitute or replacement API call to the PSE SAD that replaces the sensitive information.
  • a PSE SAD proxy can drop all control plane commands or requests that originate from the PSE or only those commands or requests not directed to a public-facing control plane endpoint within the provider network, for example.
  • a PSE SAD proxy can process responses to control plane operations depending on the nature of an expected response, if any. For example, for some responses, the PSE SAD proxy can simply drop the response without sending any message to the originator of the corresponding command or request. As another example, for some responses the PSE SAD proxy can sanitize the response to ensure it complies with the expected response format for the corresponding command or request before sending the sanitized response to the originator of the corresponding command or request via control plane traffic 107. As yet another example, the PSE SAD proxy can generate a response (whether immediately or upon receipt of an actual response from the SAD) and send the generated response to the originator of the corresponding command or request via control plane traffic 107.
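  • One way to picture this stateful handling is the sketch below; the field names, request identifiers, sensitive-field list, and sanitization policy are assumptions made for illustration, not the actual proxy behavior.

        # Hypothetical sensitive fields that must never leave the provider network.
        SENSITIVE_FIELDS = {"provider_key", "network_certificate"}

        class PseSadProxy:
            def __init__(self):
                self.pending = {}        # request id -> originator, to match responses
                self.substitutions = {}  # request id -> fields replaced before forwarding

            def outbound(self, request):
                # Only well-formed calls from within the provider network are forwarded.
                if "id" not in request or "operation" not in request:
                    return ("drop", None)
                forwarded = dict(request)
                replaced = {}
                for field in SENSITIVE_FIELDS & set(forwarded):
                    replaced[field] = forwarded.pop(field)
                    forwarded[field] = "<substituted>"
                self.pending[request["id"]] = request.get("origin")
                self.substitutions[request["id"]] = replaced
                return ("send_to_anchor", forwarded)

            def inbound(self, message):
                # Control plane commands may not originate from the PSE; only
                # responses to outstanding requests are admitted, after sanitizing.
                origin = self.pending.pop(message.get("in_reply_to"), None)
                if origin is None:
                    return ("drop", None)
                sanitized = {"in_reply_to": message["in_reply_to"],
                             "status": message.get("status", "unknown")}
                return ("send_to_originator", origin, sanitized)
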
  • a PSE SAD proxy can track the state of communications between components of the provider network (e.g., control plane services 104) and each SAD of the PSE.
  • State data can include session keys for the duration of sessions, pending outbound API calls with an associated source and destination to track outstanding responses, the relationship between API calls received from within the provider network 100 and those API calls that were issued to a SAD with replaced or substituted sensitive information, etc.
  • the PSE SAD proxy can provide stateful communications for other PSE-to-provider network communications in addition to control plane traffic.
  • Such communications can include Domain Name System (DNS) traffic, Network Time Protocol (NTP) traffic, and operating system activation traffic (e.g., for Windows activation).
  • a PSE SAD anchor can serve as the provider-network side endpoint for each of the available tunnel endpoints of the PSE.
  • PSE SAD anchor(s) 112 serve to tunnel control plane traffic to the PSE 188 via tunnel 190, and PSE SAD anchor(s) 122 serve to tunnel control plane traffic to the PSE 198 via tunnel 192.
  • Various embodiments can limit the radial impact of any attempted attacks originating from outside of the provider network (e.g., should a PSE become compromised) both by using the techniques to process traffic described above as well as by isolating those networking components exposed to traffic from other portions of the provider network 100.
  • the networking components can operate within one or more IVNs to bound how far an attacker could penetrate thereby protecting both the operations of the provider network and of other customers.
  • various embodiments can instantiate the PSE interfaces, PSE SAD proxies, and the PSE SAD anchors as applications executed by virtual machines or containers that execute within one or more IVNs.
  • groups of PSE interfaces for different PSEs run within a multi-tenant IVN (e.g., the PSE interface IVN 132 for PSEs 188 and 198).
  • each group of PSE interfaces can run in a single-tenant IVN.
  • each group of PSE SAD proxies and each group of PSE SAD anchors for a given PSE run within single-tenant IVNs (e.g., the PSE SAD proxy IVN 134 for PSE 188, the PSE SAD anchor IVN 136 for PSE 188, the PSE SAD proxy IVN 138 for PSE 198, and the PSE SAD anchor IVN 140 for PSE 198).
  • the redundancy provided by operating multiple instances for each of the networking components allows the provider network to periodically recycle the instances hosting those components without interrupting PSE-to-provider network communications. Recycling can involve, for example, restarting instances or launching new instances and reconfiguring the other instances with, for example, the address of the recycled instance. Periodic recycling limits the time window during which an attacker could leverage a compromised network component should one become compromised.
  • a PSE connectivity manager 180 manages the setup and configuration of the networking components providing the connectivity between the provider network 100 and the PSEs.
  • the PSE interfaces 108, 118, the PSE SAD proxies 110, 120, and the PSE SAD anchors 112, 122 can be hosted as instances by the provider network substrate.
  • the PSE connectivity manager 180 can request or initiate the launch of PSE interface(s), PSE SAD proxy/proxies, and PSE SAD anchor(s) for PSEs as PSEs are shipped to customers and/or as those PSEs come online and exchange configuration data with the provider network.
  • the PSE connectivity manager 180 can further configure the PSE interface(s), PSE SAD proxy/proxies, and PSE SAD anchor(s).
  • the PSE connectivity manager 180 can attach the VNA(s) that correspond to the SADs of the PSE to the PSE interface(s), provide the PSE interface(s) with the address of the PSE SAD proxy/proxies for the PSE SADs, and provide the PSE SAD proxy/proxies with the address of the PSE SAD anchor(s) for the PSE.
  • the PSE connectivity manager 180 can configure the IVNs of the various components to allow, for example, communications between the PSE interface IVN 132 and a PSE SAD proxy IVN for the PSE, and between the PSE SAD proxy IVN to the PSE SAD anchor IVN for the PSE.
  • the tunnel endpoints can have one or more attached VNAs or assigned physical network addresses that can receive traffic from outside of their respective network (e.g., from outside of the provider network for PSE interfaces and PSE SAD anchors, from outside of the customer network for the tunnel endpoints of the PSE).
  • the PSE 188 can have a single outward-facing network address and manage communications to multiple SADs using port address translation (PAT) or multiple outward facing network addresses.
  • Each PSE SAD anchor 112, 122 can have or share (e.g., via PAT) an outward-facing network address, and each PSE interface 108, 118 can have or share (e.g., via PAT) an outward-facing accessible network address.
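  • As a hedged illustration of the PAT approach, a translation table for a PSE fronted by a single address might look like the following; the ports and all but the first substrate address are invented, and the public address mirrors the example configuration discussed with FIG. 4.

        # Port address translation: (public address, public port) ->
        # (SAD substrate address, port). Values are illustrative only.
        pat_table = {
            ("1.2.3.4", 4431): ("192.168.100.1", 443),
            ("1.2.3.4", 4432): ("192.168.100.2", 443),
            ("1.2.3.4", 4433): ("192.168.100.3", 443),
            ("1.2.3.4", 4434): ("192.168.100.4", 443),
        }

        def translate_inbound(dst_ip: str, dst_port: int):
            # Map a connection arriving at the PSE's single outward-facing
            # address to the individual SAD behind it.
            return pat_table[(dst_ip, dst_port)]

        assert translate_inbound("1.2.3.4", 4432) == ("192.168.100.2", 443)
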
  • FIG. 2 is a block diagram illustrating an example provider substrate extension according to at least some embodiments.
  • the PSE 188 includes one or more PSE frameworks 202 and one or more hosts 220.
  • each host 220 can be functionally (and, possibly, structurally) similar to at least some of the computer systems that form portions of the provider network substrate (e.g., those substrate resources that host instances within the provider network), while the PSE framework(s) 202 provide supporting infrastructure to emulate the provider network substrate within the PSE as well as to provide connectivity to the provider network via control and data plane traffic tunnels (e.g., tunnels 190-193 of FIG. 1).
  • each PSE framework 202 can send or receive control or data plane traffic from each host 220 and vice versa in a mesh-like architecture, as indicated by PSE control plane traffic 240 and PSE data plane traffic 242. Such redundancy allows for reliability levels that a customer might expect from the provider network.
  • the PSE framework 202 includes one or more control plane tunnel endpoints 204 that terminate encrypted tunnels carrying control plane traffic (e.g., tunnel 190, tunnel 192).
  • the provider network 100 hosts a PSE SAD anchor for each control plane tunnel endpoint 204.
  • the PSE SAD proxy or proxies (e.g., proxies 110) can distribute control plane traffic to the PSE SAD anchors (e.g., anchors 112).
  • the PSE framework 202 further includes one or more data plane tunnel endpoints 206 that terminate encrypted tunnels carrying data plane traffic (e.g., tunnel 191, tunnel 193) from the PSE interfaces of the provider network, which may be connected in a mesh-like architecture (e.g., a given PSE interface 108 establishes a tunnel with the data plane tunnel endpoint 206 of each PSE framework 202).
  • packets of control plane traffic and packets of data plane traffic can include SADs as both source and destinations— the latter being encapsulated in a packet having SAD-based addressing.
  • the PSE framework 202 is SAD 289 and the host 220 is SAD 290.
  • SADs within the PSE 188 (e.g., SADs 289, 290) can also provide secure session termination (e.g., TLS termination) for secure sessions established with the corresponding PSE SAD proxy or proxies within the provider network (e.g., PSE SAD proxies 110).
  • SADs vend one or more control plane APIs to handle control plane operations directed to the SAD that manage the resources of the SAD.
  • a PSE manager 210 of a PSE framework 202 can vend a control plane API for management of the components of the PSE framework 202.
  • One such component is a PSE gateway 208 that routes control and/or data plane traffic into and out of the PSE 188, such as control plane traffic destined for SAD 289 to the PSE manager 210 and control or data plane traffic destined for SAD 290 to the host manager 222.
  • the PSE gateway 208 can further facilitate communications with the customer network, such as to or from the other customer resources 187 accessible via the network of the PSE deployment site (e.g., the customer network 185).
  • the API of the PSE manager 210 can include one or more commands to configure the PSE gateway 208 of the PSE framework 202.
  • Other components 212 of the PSE framework 202 can include various applications or services that take part in the operation of the substrate of the PSE for the hosts 220, such as DNS, Dynamic Host Configuration Protocol (DHCP), and/or NTP services.
  • a host manager 222 can vend a control plane API for management of the components of the host 220.
  • the host manager 222 includes an instance manager 224 and a network manager 226.
  • the instance manager 224 can handle API calls related to management of the host 220, including commands to launch, configure, and/or terminate instances hosted by the host 220.
  • an instance management service in the provider network (not shown) can issue a control plane command to the instance manager 224 to launch an instance on host 220.
  • the host 220 is host to a customer instance 232 running inside of a customer IVN 233, a third-party (3P) instance 234 running inside of a 3P IVN 235, and a service instance 236 running inside of a service IVN 237.
  • each of these IVNs 233, 235, 237 can extend existing IVNs established within the provider network.
  • the customer instance 232 may be executing some customer application or workload
  • the 3P instance 234 may be executing the application or workload of another party that the customer has permitted to launch instances within the PSE 188
  • the service instance 236 may be executing a service of the provider network locally to offer to the PSE 188 (e.g., a block storage service, a database service, etc.).
  • the network manager 226 can handle SAD-addressed data plane traffic received by the host 220. For such traffic, the network manager can perform the requisite decapsulation of an IVN packet before sending it to the addressed, hosted instance. Furthermore, the network manager 226 can handle the routing of traffic sent by hosted instances. When a hosted instance attempts to send traffic to another locally hosted instance (e.g., on the same host), the network manager 226 can forward that traffic to the addressed instance.
  • Otherwise, the network manager 226 can locate the substrate address of the device hosting the non-local instance, encapsulate and optionally encrypt the corresponding packet into a SAD-addressed packet, and send that packet over the data plane (e.g., either to another host within the PSE or back to the provider network via the PSE gateway 208).
  • the network manager 226 can include or have access to various data that facilitates routing of data plane traffic (e.g., to look up the address of the SAD hosting an instance having an IVN network address in the destination of a packet received from a hosted instance).
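  • A rough sketch of that forwarding choice follows; the class and mapping names are hypothetical, and the lookup table plays the role of the routing data mentioned above.

        class NetworkManager:
            def __init__(self, local_instances, ivn_to_sad):
                self.local_instances = local_instances  # IVN addresses hosted on this host
                self.ivn_to_sad = ivn_to_sad            # (IVN id, IVN address) -> substrate address

            def route_from_instance(self, ivn_id, packet):
                dst = packet["dst"]
                if dst in self.local_instances:
                    # Destination is another instance on the same host: deliver locally.
                    return ("local_deliver", dst, packet)
                # Otherwise look up the SAD hosting the destination, encapsulate the
                # IVN packet into a SAD-addressed packet, and send it over the data
                # plane (to another host in the PSE or back to the provider network
                # via the PSE gateway).
                sad = self.ivn_to_sad[(ivn_id, dst)]
                return ("send_data_plane", {"dst_sad": sad, "ivn": ivn_id, "inner": packet})

        nm = NetworkManager({"10.0.0.5"}, {("ivn-233", "10.0.0.9"): "192.168.100.2"})
        print(nm.route_from_instance("ivn-233", {"dst": "10.0.0.9", "payload": b"hi"}))
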
  • FIG. 3 is a block diagram illustrating an example connectivity between a provider network and a provider substrate extension according to at least some embodiments.
  • FIG. 3 illustrates an exemplary connectivity between a provider network and a PSE.
  • the term “inbound” refers to traffic received by the provider network from the PSE, and the term “outbound” refers to traffic sent by the provider network to the PSE.
  • the PSE includes two PSE frameworks 202 and two hosts 220 for a total of four SADs.
  • the PSE frameworks provide tunnel endpoints 204A, 204B for control plane traffic and tunnel endpoints 206A, 206B for data plane traffic.
  • the provider network includes a VNA, one or more PSE interfaces, and one or more PSE SAD proxies.
  • the provider network includes a PSE SAD VNA 304, two PSE interfaces 108A, 108B and two PSE SAD proxies 110A, 110B for a given PSE SAD.
  • the PSE interface(s) and PSE SAD proxy/proxies can be referred to as a slice as indicated, each slice corresponding to a particular SAD within the PSE.
  • the PSE interface(s) may be shared by all of the VNAs for a PSE rather than a single VNA for one of the SADs.
  • the PSE SAD VNA 304 serves as a front for a given PSE through which other components of the provider network can send traffic to and receive traffic from the corresponding SAD of the PSE.
  • a load balancer (not shown) can route outbound traffic sent to the PSE SAD VNA 304 to one of the PSE interfaces 108A, 108B.
  • the illustrated PSE interfaces 108A, 108B for a given slice and those for the other slices (not shown) operate within a PSE interface IVN 132.
  • the PSE interfaces 108A, 108B send data plane traffic to the PSE via data plane traffic tunnels and control plane traffic to the PSE by forwarding the control plane traffic to the PSE SAD proxies 110A, 110B of the slice.
  • the PSE interfaces 108A, 108B store (or have access to) the network addresses of the PSE SAD proxy/proxies of the associated SAD, the network addresses of the data plane tunnel endpoint(s), and one or more keys of or associated with the data plane tunnel endpoint(s) of the PSE for securing communications with those endpoint(s).
  • the PSE interfaces 108A, 108B establish a secure tunnel for data plane traffic with each data plane tunnel endpoint 206A, 206B, resulting in N data plane tunnels, where N is the number of PSE interfaces per-SAD (assuming each SAD has the same number of interfaces) multiplied by the number of data plane tunnel endpoints multiplied by the number of SADs.
  • sixteen data plane tunnels are established between the PSE interfaces and the data plane tunnel endpoints (i.e., 2 PSE interfaces per-SAD x 2 data plane tunnel endpoints x 4 SADs).
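  • The count can be checked directly (the variable names below exist only to restate the formula):

        interfaces_per_sad = 2
        data_plane_tunnel_endpoints = 2
        num_sads = 4
        num_tunnels = interfaces_per_sad * data_plane_tunnel_endpoints * num_sads
        assert num_tunnels == 16
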
  • the PSE SAD proxies 110A, 110B receive control plane traffic from the PSE interfaces 108A, 108B, perform various operations described elsewhere herein, and send the control plane traffic to the PSE via either of the two PSE SAD anchors 112A, 112B. Similarly, the PSE SAD proxies 110A, 110B receive control plane traffic from either of the two PSE SAD anchors 112A, 112B, perform various operations described elsewhere herein, and send control plane traffic 107 to destinations within the provider network.
  • the illustrated PSE SAD proxies 110A, 110B for a given slice and those for the other slices (not shown) operate within a PSE SAD proxy IVN 134.
  • the PSE SAD proxies 110A, 110B store (or have access to) the network addresses of the PSE SAD anchor(s).
  • the PSE SAD proxies have access to a shared data store 306 or otherwise are capable of exchanging information. Such information exchange can be used for a number of reasons. For example, recall that the PSE SAD proxies can vend an API interface to emulate the API interface of the associated SAD within the PSE.
  • one PSE SAD proxy may need to access the state of communications that were previously handled by a different PSE SAD proxy (e.g., the PSE SAD proxy 110A sends a control plane operation to the PSE and the PSE SAD proxy 110B receives a response to the control plane operation from the PSE).
  • the PSE SAD proxies can check whether the inbound message is consistent with the expected state and, if so, send a message via control plane traffic 107 as described elsewhere herein. If not, the PSE SAD proxies 110A, 110B can drop the traffic.
  • the PSE SAD proxies can bridge separate secure sessions (e.g., TLS sessions) to prevent provider network certificates from being sent to the PSE.
  • the PSE SAD proxy handling the responsive message can use the same key that was established between the originator of the outbound message and the PSE SAD proxy that handled the outbound message in order to send a secure responsive message via control plane traffic 107 to the originator.
  • each PSE framework provides for a single control plane tunnel endpoint 204.
  • for each control plane tunnel endpoint, the provider network includes a PSE SAD anchor; as illustrated, the provider network includes two PSE SAD anchors 112A, 112B.
  • the PSE SAD anchors 112A, 112B operate within a PSE SAD anchor IVN 136.
  • the PSE anchors 112 receive control plane traffic from each of the eight PSE SAD proxies (two per slice for each of the four SADs) and send that traffic to the PSE.
  • the PSE anchors also receive control plane traffic from the PSE and send that traffic to one of the two PSE SAD proxies associated with the SAD that sourced the traffic from the PSE.
  • the PSE anchors 112A, 112B store (or have access to) the network addresses of the PSE SAD proxy/proxies for each SAD, the network addresses of the control plane tunnel endpoint(s) of the PSE, and one or more keys of or associated with the control plane tunnel endpoint(s) of the PSE for securing communications with those endpoint(s).
  • the network components or provider network may employ load balancing techniques to distribute the workload of routing control and data plane traffic between the provider network and the PSE. For example, traffic sent to the PSE SAD VNA 304 can be distributed among the PSE interfaces 108A, 108B.
  • each PSE interface 108 can distribute traffic among the data plane tunnel endpoints 206 A, 206B. As yet another example, each PSE interface 108 can distribute traffic among the PSE SAD proxies 110A, 110B. As yet another example, each PSE SAD proxy 110 can distribute outbound traffic among the PSE SAD anchors 112A, 112B. As yet another example, the PSE SAD anchors 112 can distribute inbound traffic among the PSE SAD proxies 110A, 110B. In any case, such load balancing can be performed by the sending entity or by a load balancer (not shown). Exemplary load balancing techniques include employing a load balancer with a single VNA that distributes traffic to multiple components“behind” that address, providing each data sender with the address of multiple recipients and distributing the selected recipient at the application level, etc.
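  • As a minimal sketch of the sender-side option (the alternative being a dedicated load balancer behind a single VNA), a simple round-robin selection over the candidate recipients might look like this; the recipient names are placeholders:

        import itertools

        def round_robin(recipients):
            cycle = itertools.cycle(recipients)
            # Each call returns the next recipient, spreading traffic evenly.
            return lambda _packet=None: next(cycle)

        pick_interface = round_robin(["pse-interface-108A", "pse-interface-108B"])
        print(pick_interface(), pick_interface(), pick_interface())
        # -> pse-interface-108A pse-interface-108B pse-interface-108A
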
• While FIGS. 1-3 show the establishment of separate tunnels for control plane traffic and data plane traffic, other embodiments might employ one or more tunnels for both control and data plane traffic.
• the PSE interfaces can route data plane traffic to PSE SAD anchors for transmission over a shared tunnel to the PSE, bypassing the additional operations carried out by the PSE SAD proxies on the control plane traffic.
  • FIG. 4 is a block diagram illustrating an example system for configuring a provider network for communication with a provider substrate extension according to at least some embodiments.
  • the PSE connectivity manager 180 dynamically manages the provider network- side lifecycle of the networking components that facilitate connections with PSEs. When a new PSE is created or launched, or when the contents of the PSE are modified (e.g., by adding, removing, or replacing hosts), the PSE connectivity manager 180 manages operations such as the provisioning of VNAs for PSE interfaces, creating various IVNs for isolation, launching instances to execute the applications performing the operations of the networking components described above, detecting and replacing faulty components, and so on.
  • the PSE connectivity manager 180 is a control plane service that performs such management operations without directly communicating with the PSE, providing additional security between the provider network and PSE.
  • a PSE configuration interface 450 provides an interface through which PSEs, such as PSE 445, can communicate with the provider network (e.g., via a public facing API) in order to establish tunneled communications.
  • PSE configuration interface 450 issues commands to the PSE connectivity manager 180 with the data provided by the PSE indicating that tunneled communications can be established with the PSE 445.
  • the PSE connectivity manager 180 manages a PSE configuration data store 405.
  • the PSE configuration data store 405 can include, amongst other things, already known details on the hardware and software configuration of a PSE based on its as-built configuration, software updates that have been pushed to the PSE, hardware configuration data that has been received from the PSE, etc.
  • the PSE connectivity manager 180 can update the PSE configuration data store 405 with the data provided by a PSE via PSE configuration interface 450.
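• A rough sketch of how the PSE configuration interface might record data reported by a PSE is shown below; the handler name, field names, and in-memory store are all assumptions made for illustration, not the patent's implementation.

```python
import json
import time

# In-memory stand-in for the PSE configuration data store; a real system
# would use a durable store and an internal call to the connectivity manager.
PSE_CONFIG_STORE = {}

def on_pse_registration(request_body: str) -> dict:
    """Handle a call from a PSE to a hypothetical public-facing
    configuration endpoint and record the reported data."""
    reported = json.loads(request_body)     # e.g. {"pse_id": ..., "ip": ...}
    record = PSE_CONFIG_STORE.setdefault(reported["pse_id"], {})
    record.update(
        public_ip=reported["ip"],
        reported_at=time.time(),
    )
    # Signal that tunneled communications can now be established with this PSE.
    return {"pse_id": reported["pse_id"], "status": "connectivity-setup-queued"}

on_pse_registration('{"pse_id": "PSE-123A", "ip": "1.2.3.4"}')
```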
  • Exemplary PSE configuration data 490 assumes a PSE connected to a customer network via a single IP address and using PAT to address the individual SADs.
• the PSE has an identifier PSE-123A to distinguish it from other PSEs that extend the provider network 100.
• Based on data received via the PSE configuration interface 450, the PSE connectivity manager 180 has indicated that the PSE has an IP address of 1.2.3.4.
  • Existing PSE configuration data indicates that PSE-123A has four SADs with identifiers as shown. Each SAD has an associated substrate address, which may be reserved during the build of the PSE or negotiated with the provider network based on substrate address availability at the time the PSE reaches out to the PSE configuration interface 450. For example, the SAD having the identifier SAD-5bff has a substrate address of 192.168.100.1. Each SAD can have an associated type.
  • some SADs can terminate secure tunnels, some SADs that host instances may have varying compute, memory, and storage resources (e.g., a host with four processors and 128 gigabytes of memory for instances, a host with half that, etc.).
  • SADs of type A can terminate secure tunnels (e.g., like PSE frameworks 202).
• because PAT is used to address the SADs of the PSE, the port associated with each SAD is stored in the PSE configuration data 490 (e.g., SAD-5bff can be addressed at 1.2.3.4:50000, and so on).
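• The PAT-based addressing in the exemplary PSE configuration data 490 can be modeled as a small lookup, as in the following sketch; only SAD-5bff, its substrate address 192.168.100.1, port 50000, and the PSE IP 1.2.3.4 come from the text, and everything else is illustrative.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SadRecord:
    sad_id: str
    sad_type: str           # e.g. "A" for SADs that can terminate tunnels
    substrate_address: str  # address used inside the provider network substrate
    pat_port: int           # port used to reach the SAD behind the PSE's single IP

PSE_PUBLIC_IP = "1.2.3.4"
SADS = {
    "SAD-5bff": SadRecord("SAD-5bff", "A", "192.168.100.1", 50000),
}

def pse_endpoint_for(sad_id: str) -> tuple:
    """Resolve the (IP, port) pair that reaches a SAD through PAT."""
    record = SADS[sad_id]
    return PSE_PUBLIC_IP, record.pat_port

assert pse_endpoint_for("SAD-5bff") == ("1.2.3.4", 50000)
```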
  • the PSE connectivity manager 180 can initiate one or more workflows to stand up the networking components used to tunnel communications between the provider network 100 and the PSE.
  • the PSE connectivity manager 180 can initiate the execution of such workflows via a workflow execution service 410 as indicated at circle B.
• a workflow can be treated as a "serverless" function that includes code that can be executed on demand.
  • Serverless functions can be executed on demand, without requiring the initiator to maintain dedicated infrastructure to execute the serverless function. Instead, the serverless functions can be executed on demand using resources maintained by the workflow execution service 410 (e.g., a compute instance, such as a virtual machine or container, etc.).
• these resources may be maintained in a "ready" state (e.g., having a pre-initialized runtime environment configured to execute the serverless functions), allowing the serverless functions to be executed in near real-time.
  • the resources that execute a workflow are shown as workflow executors 420 as initiated by the workflow execution service 410 as indicated at circle C.
• the workflow execution service 410 may initiate one or more calls to instance management service(s) 425 depending on whether a workflow executor 420 (a container, virtual machine, or other environment) needs to be launched for the workflow.
  • the PSE connectivity manager 180 can send a request to execute a specific workflow to the workflow execution service 410, the request including an identifier that can be used to locate the workflow (e.g., a Uniform Resource Locator (URL), Uniform Resource Identifier (URI), or other reference).
  • the workflow executor 420 assigned the task of executing the workflow can fetch the workflow from a PSE workflows data store 415.
  • the PSE connectivity manager 180 can send the workflow as part of the request to execute it.
  • the PSE connectivity manager 180 can include PSE-specific parameters that are used to configure the networking components for the PSE (e.g., the PSE IP address). Note that in some embodiments, the PSE connectivity manager 180 can execute the workflows directly without the use of the workflow execution service 410.
• Workflows, which may be referred to as scripts or functions, include a series of operations (e.g., API calls to other services, storing and retrieving data, etc.). Operations may reference other workflows that can be considered child workflows of the parent.
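• The sketch below illustrates, under assumed names, a workflow expressed as an ordered series of operations (including a child workflow reference) and the kind of execution request the PSE connectivity manager 180 might send, carrying a workflow locator and PSE-specific parameters; none of these identifiers are defined by the patent.

```python
from dataclasses import dataclass, field

@dataclass
class Workflow:
    """A workflow as an ordered series of operations; an operation may name
    a child workflow instead of a direct action (illustrative structure)."""
    name: str
    operations: list = field(default_factory=list)

SETUP_VNA = Workflow("create-vna", [("action", "call instance service: create VNA")])
NEW_PSE = Workflow("new-pse-setup", [
    ("child", SETUP_VNA),                  # child workflow reference
    ("action", "launch PSE interface instances"),
])

def build_execution_request(workflow_ref: str, pse_params: dict) -> dict:
    """What a connectivity manager might send to a workflow execution
    service: a locator for the workflow plus PSE-specific parameters."""
    return {"workflow": workflow_ref, "parameters": pse_params}

request = build_execution_request(
    f"pse-workflows/{NEW_PSE.name}",        # URI-style reference (placeholder)
    {"pse_id": "PSE-123A", "pse_ip": "1.2.3.4"},
)
```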
  • PSE interface(s), PSE SAD proxy /proxies, and PSE SAD anchor(s) can be software programs executed by instances such as virtual machines or containers. In one embodiment, PSE interface(s) are executed by virtual machines, PSE SAD proxy/proxies are executed by containers, and PSE SAD anchor(s) are executed by containers.
• In another embodiment, PSE interface(s) are executed by virtual machines, PSE SAD proxy/proxies are executed by containers, and PSE SAD anchor(s) are executed by virtual machines.
  • Other instance types and/or configurations can host the networking components in other embodiments.
  • workflows can include calls to instance management service(s) 425 to launch and configure the instances for a given PSE as indicated at circle D.
  • Such instances can include one or more PSE interfaces 430, one or more PSE SAD proxies 435, and one or more PSE SAD anchors 440.
  • a first example workflow includes operations to set up communications with a new PSE.
  • the first example workflow operations include creating a VNA for each SAD of the PSE.
  • the first example workflow operations further include updating the PSE configuration data store 405 to assign each SAD the associated VNA.
  • the first example workflow operations further include, per SAD, launching one or more instances within an IVN to perform the operations of PSE interfaces as described herein.
  • the first example workflow operations further include associating the VNA for a given SAD with the one or more PSE interface instances for the SAD.
• the first example workflow operations further include, per SAD, launching one or more instances within an IVN to perform the operations of PSE SAD proxies as described herein.
  • the first example workflow operations further include updating the one or more PSE SAD proxy instances for a given SAD with an identification of and/or addressing information for a data store (e.g., to facilitate the exchange of state data, keys, etc.).
  • the first example workflow operations further include updating the one or more PSE interface instances for a given SAD with addressing information for the one or more PSE SAD proxy instances for the same SAD so the PSE interface instances can send control plane traffic to the proxy/proxies for the SAD.
  • the first example workflow operations further include, per control plane tunnel endpoint of the PSE, launching an instance within an IVN to perform the operations of a PSE SAD anchor as described herein.
  • the first example workflow operations further include updating the one or more PSE SAD anchor instances with addressing information for the one or more PSE SAD proxy instances so the PSE SAD anchor instances can send control plane traffic to the proxy/proxies for the SAD.
  • the first example workflow operations further include updating the one or more PSE SAD proxy instances with addressing information for the one or more PSE SAD anchor instances so the PSE SAD proxy instances can send control plane traffic to the anchor(s).
  • the first example workflow operations further include, in cases where the various instances are running within different IVNs, updating the IVN network settings to permit IVN-to-IVN traffic (e.g., from a PSE interface IVN to a PSE SAD proxy IVN, from a PSE SAD anchor IVN to a PSE SAD proxy IVN, and so on).
  • the above operations of the first example workflow may be carried out in advance of receiving any communications from the PSE such as via the PSE configuration interface 450.
• Once the PSE has reached out to the provider network (e.g., via the PSE configuration interface 450), several additional workflow operations can be performed.
  • the first example workflow operations further include updating the one or more PSE interface instances and the one or more PSE SAD anchor instances with the PSE addressing information (e.g., of the PSE at the customer network) and PSE public key information.
  • the first example workflow operations further include sending the PSE (e.g., by way of the PSE configuration interface 450) the addressing information of the one or more PSE SAD anchor instances and their associated public keys to facilitate the establishment of the tunnels between the PSE and the provider network.
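• The first example workflow above can be condensed into an illustrative plan-building function; the operation names and the helper itself are assumptions made for the sketch, not the patent's implementation.

```python
def plan_new_pse_setup(sad_ids, tunnel_endpoints):
    """Build the ordered operation list for setting up communications with a
    new PSE (illustrative plan only)."""
    ops = []
    for sad in sad_ids:
        ops += [
            ("create_vna", sad),
            ("record_vna_assignment", sad),
            ("launch_pse_interfaces", sad),
            ("attach_vna_to_interfaces", sad),
            ("launch_pse_sad_proxies", sad),
            ("point_proxies_at_state_store", sad),
            ("point_interfaces_at_proxies", sad),
        ]
    for endpoint in tunnel_endpoints:
        ops += [
            ("launch_pse_sad_anchor", endpoint),
            ("point_anchor_at_proxies", endpoint),
            ("point_proxies_at_anchor", endpoint),
        ]
    ops.append(("permit_ivn_to_ivn_traffic", "interface/proxy/anchor IVNs"))
    # Deferred until the PSE reaches out via the configuration interface:
    ops.append(("exchange_pse_addresses_and_keys", "all components"))
    return ops

plan = plan_new_pse_setup(["SAD-5bff"], ["tunnel-endpoint-1"])
```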
  • a second example workflow includes operations to set up communications with a new SAD added to a PSE (e.g., due to a PSE upgrade, a replacement of an existing SAD within a PSE).
  • the second example workflow operations include creating a VNA for the SAD.
  • the second example workflow operations further include updating the PSE configuration data store 405 to assign the SAD the VNA.
  • the second example workflow operations further include launching one or more instances within an IVN to perform the operations of PSE interfaces as described herein (assuming the PSE interface(s) are SAD-specific and not shared amongst a group of SADs).
  • the second example workflow operations further include associating the VNA for a given SAD with the PSE interface instances(s).
  • the second example workflow operations further include updating any newly launched PSE interface instances with the PSE addressing information (e.g., of the PSE at the customer network) and PSE public key information.
  • the second example workflow operations further include updating the newly launched PSE interface instance(s) (if any) with the PSE addressing information (e.g., of the PSE at the customer network) and additional PSE public key information.
  • the second example workflow operations further include updating the existing and the newly launched (if any) PSE interface instances with the PSE addressing information (e.g., of the PSE at the customer network) and PSE public key information of the new SAD.
• the second example workflow operations further include launching one or more instances within an IVN to perform the operations of PSE SAD proxies as described herein.
  • the second example workflow operations further include updating the one or more PSE SAD proxy instances for a given SAD with an identification of and/or addressing information for a data store (e.g., to facilitate the exchange of state data, keys, etc.).
  • the second example workflow operations further include updating the one or more PSE interface instances associated with the new SAD with addressing information for the one or more PSE SAD proxy instances for the same SAD so the PSE interface instances can send control plane traffic to the proxy/proxies for the SAD.
  • the second example workflow operations further include updating the one or more PSE SAD anchor instances with addressing information for the newly launched one or more PSE SAD proxy instances so the PSE SAD anchor instances can send control plane traffic to the proxy/proxies for the new SAD.
  • the second example workflow operations further include updating the newly launched one or more PSE SAD proxy instances with addressing information for the one or more PSE SAD anchor instances so the PSE SAD proxy instances can send control plane traffic to the anchor(s).
  • the second example workflow operations further include launching an instance within an IVN to perform the operations of a PSE SAD anchor as described herein.
  • the second example workflow operations further include updating the existing and newly launched PSE SAD anchor instances with addressing information for the newly launched PSE SAD proxy instances so the PSE SAD anchor instances can send control plane traffic to the proxy/proxies for the SAD.
  • the second example workflow operations further include updating the existing and the newly launched PSE SAD proxy instances with addressing information for the newly launched PSE SAD anchor instance so the PSE SAD proxy instances can send control plane traffic to the anchor(s).
  • a third example workflow includes operations to tear down communications with a SAD (e.g., due to a removal or failure of a SAD from a PSE).
  • the third example workflow operations include detaching the VNA for the SAD from the PSE interface instance(s).
• the third example workflow operations further include terminating any SAD-specific PSE interface instances.
  • the third example workflow operations further include terminating the PSE SAD proxy instance(s) for the SAD. If the removed SAD supports tunnels, the third example workflow operations further include terminating any tunnels between remaining PSE interface instance(s) and the SAD (e.g., if they have not shut down automatically).
  • the third example workflow operations include removing any associations between PSE SAD proxy instances and the PSE SAD anchor instance associated with the removed SAD.
  • the third example workflow operations further include terminating the PSE SAD anchor instance associated with the removed SAD.
  • a fourth example workflow includes operations to tear down communications with a PSE.
  • the fourth example workflow operations include repeating the operations of the third example workflow for each of the SADs of the PSE as identified in the PSE configuration data 405.
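• The third and fourth example workflows can likewise be sketched as plan builders, with the whole-PSE teardown simply repeating the per-SAD teardown for every SAD recorded in the configuration data; again, all names are illustrative assumptions.

```python
def plan_sad_teardown(sad_id, sad_supports_tunnels):
    """Ordered teardown steps for one SAD (third example workflow),
    expressed as an illustrative plan."""
    ops = [
        ("detach_vna_from_interfaces", sad_id),
        ("terminate_sad_specific_interfaces", sad_id),
        ("terminate_sad_proxies", sad_id),
    ]
    if sad_supports_tunnels:
        ops += [
            ("terminate_remaining_tunnels_to_sad", sad_id),
            ("remove_proxy_to_anchor_associations", sad_id),
            ("terminate_sad_anchor", sad_id),
        ]
    return ops

def plan_pse_teardown(pse_config):
    """Whole-PSE teardown (fourth example workflow) repeats the per-SAD
    teardown for every SAD of the PSE."""
    ops = []
    for sad in pse_config["sads"]:
        ops += plan_sad_teardown(sad["id"], sad.get("type") == "A")
    return ops

plan_pse_teardown({"sads": [{"id": "SAD-5bff", "type": "A"}]})
```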
  • workflows can include calls to the PSE connectivity manager 180 to provide updates on the state of the configuration of network components (e.g., identifiers of instances, etc.) as indicated at circle E. Such state updates can be used to track the process of launching and configuring instances and to track which instances correspond to which network components for a given PSE.
• the workflow calls to the PSE connectivity manager 180 can capture how, why, and/or when workflows (or segments or portions of a workflow) were invoked and completed.
  • FIG. 5 is a block diagram illustrating an example system for maintaining communications between a provider network and a provider substrate extension according to at least some embodiments.
  • the PSE connectivity manager 180 can employ a self-healing reconciliation model to manage the provider-side infrastructure (e.g., VNAs, PSE interfaces, PSE SAD proxies, PSE SAD anchors, etc.).
  • the PSE connectivity manager 180 includes a reconciliation engine 505 that evaluates the actual state of the provider- side infrastructure against the desired or expected state of the provider-side infrastructure based on the configuration of the PSE as indicated in the PSE configuration data 405.
  • the provider-side infrastructure should have at least one PSE SAD anchor for each SAD in a PSE that supports tunnels in some embodiments.
  • the reconciliation engine 505 takes one or more actions to eliminate the delta between the desired and actual states.
  • the PSE connectivity manager 180 can monitor the status of the infrastructure supporting connectivity to a PSE, referred to here as the actual state. Such monitoring may be active or passive. Active monitoring techniques include sending test traffic to the various components (e.g., pings) and verifying the response is as expected. Passive monitoring techniques may inspect traffic patterns into or out of an instance, reported metrics related to network, CPU, and/or memory usage of the instance, or, if the instance is so configured, monitoring the receipt of “heartbeat” traffic sent from the instance to the PSE connectivity manager 180 that indicates the instance is active, etc.
  • the PSE connectivity manager 180 may instantiate one or more watchdog applications or daemons that execute on the same instance as a network component or on a different instance but within the same IVN as the network component, for example. Such watchdog applications can report health status information to the PSE connectivity manager 180.
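• A minimal sketch of the passive "heartbeat" style of monitoring is shown below, assuming a hypothetical monitor that flags any expected instance that has been silent longer than a chosen threshold; the timeout value and identifiers are placeholders.

```python
import time

HEARTBEAT_TIMEOUT_SECONDS = 90   # illustrative threshold

class HeartbeatMonitor:
    """Passive monitoring sketch: components (or their watchdogs) report
    heartbeats, and anything silent for too long is considered unhealthy."""

    def __init__(self):
        self._last_seen = {}

    def record_heartbeat(self, instance_id: str) -> None:
        self._last_seen[instance_id] = time.monotonic()

    def unhealthy(self, expected_instances) -> list:
        now = time.monotonic()
        return [
            instance_id
            for instance_id in expected_instances
            if now - self._last_seen.get(instance_id, float("-inf"))
            > HEARTBEAT_TIMEOUT_SECONDS
        ]

monitor = HeartbeatMonitor()
monitor.record_heartbeat("pse-sad-proxy-535A")
# "pse-sad-proxy-535B" never reports, so it is flagged as unhealthy.
monitor.unhealthy(["pse-sad-proxy-535A", "pse-sad-proxy-535B"])
```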
  • the reconciliation engine 505 can periodically (e.g., once approximately every 60 seconds) compare the actual state to a desired state of the networking components as indicated at circle B.
  • the desired state can refer to the networking components that should be operating for a given PSE (e.g., some specified number of PSE interfaces, some specified number of PSE SAD proxies for each SAD of the PSE, some specified number of PSE SAD anchors for each supported tunnel endpoint by the PSE, etc.).
  • the PSE connectivity manager 180 may determine that a PSE SAD proxy 535B is non-responsive or otherwise unhealthy.
  • the configuration data stored in the PSE configuration data store 405 may indicate that each SAD should have two PSE SAD proxies.
  • the reconciliation engine 505 can determine that the PSE SAD proxy 535B is not working and generate a change schedule.
  • a change schedule includes one or more workflows (or child workflows) including operations such as those described above with reference to FIG. 4.
  • Example change schedule 590 includes three high-level operations, each of which, in practice, can be composed of a number of operations.
  • a first operation indicated by circle 1 includes launching and configuring PSE SAD proxy 535C for the PSE.
  • a second operation indicated by circle 2 includes reconfiguring the PSE interface(s) 430 to send traffic to the PSE SAD proxy 535C instead of PSE SAD proxy 535B as well as reconfiguring the PSE SAD anchor 440 to send traffic to the PSE SAD proxy 535C instead of PSE SAD proxy 535B.
  • a third operation indicated by circle 3 includes terminating the instance hosting the PSE SAD proxy 535B.
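• The reconciliation step can be sketched as a comparison of desired and actual state that emits a change schedule similar to the example above; the structure and names below are assumptions for illustration only.

```python
def generate_change_schedule(desired_counts, actual_healthy, actual_unhealthy):
    """Compare desired vs. actual state and emit a change schedule
    (illustrative; component names echo the FIG. 5 example)."""
    schedule = []
    for component, desired in desired_counts.items():
        healthy = actual_healthy.get(component, [])
        missing = desired - len(healthy)
        for _ in range(missing):
            schedule.append(("launch_and_configure", component))
        if missing > 0:
            schedule.append(("repoint_senders_to_new_instances", component))
        for bad_instance in actual_unhealthy.get(component, []):
            schedule.append(("terminate_instance", bad_instance))
    return schedule

# One of two desired proxies for a SAD is unhealthy, so the schedule launches
# a replacement, repoints interfaces/anchors, and terminates the bad proxy.
generate_change_schedule(
    {"pse-sad-proxy": 2},
    {"pse-sad-proxy": ["proxy-535A"]},
    {"pse-sad-proxy": ["proxy-535B"]},
)
```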
  • the PSE connectivity manager 180 can invoke scheduled workflows as indicated at circle C such as described with reference to circle B of FIG. 4.
  • the workflow execution service 410 can launch workflow executors 420 as indicated at circle D such as described above with reference to circle C of FIG. 4.
  • the workflow executor(s) 420 can execute the workflows as indicated at circle E such as described above with reference to circle D of FIG. 4.
  • the workflow executor(s) 420 can also provide updates on the state of the configuration of network components as indicated at circle F such as described above with reference to circle E of FIG. 4.
• FIG. 6 is a flow diagram illustrating operations of a method for configuring a provider network for communication with a provider substrate extension according to at least some embodiments. Some or all of the operations (or other processes described herein, or variations, and/or combinations thereof) are performed under the control of one or more computer systems configured with executable instructions and are implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications) executing collectively on one or more processors, by hardware or combinations thereof.
• the code is stored on a computer-readable storage medium, for example, in the form of a computer program comprising instructions executable by one or more processors.
• the computer-readable storage medium is non-transitory.
  • one or more (or all) of the operations are carried out by computer programs or applications executed by one or more components of a provider network, such as services executed by computer systems located within a data center of the provider network.
  • the provider network may be a cloud provider network.
  • the one or more components of the provider network establish communications with an extension of the provider network.
  • the extension of the provider network includes one or more physical computing devices or systems and is remotely located from a data center (e.g., outside of the data center network) of the provider network, such as on the premises of a customer of the provider network.
  • one or more (or all) of the operations are performed by components of the provider network (e.g., the PSE connectivity manager 180, the workflow execution service 410, the workflow executors 420) of the other figures.
  • the operations include, at block 605, obtaining, by a first service of a provider network, an identification of one or more substrate addressable devices included in an extension of the provider network.
• An extension of the provider network, such as the PSEs described herein, can include one or more SADs.
  • the identification of those SADs can be based on a known configuration of the PSE or based on data received from the PSE.
  • a service of the provider network can manage the connectivity to the PSE, such as described herein for the PSE connectivity manager 180.
  • PSE SAD anchors are instantiated to serve as control plane traffic tunnel endpoints within the provider network
  • PSE interfaces are instantiated to serve as a local interface for the SADs within the provider network and separate control and data plane traffic
  • PSE SAD proxies are instantiated to, inter alia, enforce restrictions or security policies on control plane traffic leaving and entering the provider network for the PSE.
  • the operations further include, at block 610, based on the identification, initiating a launch of one or more compute instances within the provider network.
  • the PSE connectivity manager 180 can directly or indirectly launch one or more instances, such as virtual machines and/or containers, to support the PSE to provider network connectivity.
  • the PSE connectivity manager 180 can engage a workflow execution service 410 to carry out workflows that include operations to launch instances.
  • the PSE connectivity manager 180 can engage an instance management service to launch instances.
  • the one or more compute instances facilitate the communications between the provider network and the extension of the provider network via at least a third-party network (e.g., a customer network, the internet, etc.) by performing certain operations as outlined in operations 615 through 620.
  • the operations further include, at block 615, receiving a first control plane message directed to a first substrate addressable device of the one or more substrate addressable devices.
  • a provider network typically handles two types of traffic or operations, administrative traffic or operations that may be referred to as part of a control plane of the provider network, and non-administrative traffic or operations that may be referred to as part of a data plane of the provider network.
  • the provider network can employ a virtual network address to serve as an aggregation point for traffic originating within the provider network to be sent to a PSE.
  • the operations further include, at block 620, updating a message state data store based at least in part on the first control plane message.
• as described herein, a PSE SAD proxy can serve as a stateful proxy server for substrate addressable devices of the PSE.
  • Such a proxy server can track the state of traffic sent from the provider network to the PSE and from the PSE to the provider network, performing various operations such as monitoring control plane messages sent to the PSE.
  • the operations further include, at block 625, sending a second control plane message to the first substrate addressable device via a secure tunnel.
  • components of the provider network can establish one or more secure tunnels to a PSE.
  • a PSE SAD anchor can serve as an endpoint to a secure tunnel between the provider network and the PSE.
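• A rough composition of blocks 605 through 625 is sketched below under assumed names; the launch and tunnel-send callables stand in for provider network services and are not part of the patent.

```python
MESSAGE_STATE = {}   # stand-in for the message state data store of block 620

def establish_pse_connectivity(sad_identifiers, launch_instance, send_over_tunnel):
    """Rough composition of blocks 605-625; launch_instance and
    send_over_tunnel are stand-ins for provider network services."""
    # Blocks 605/610: launch a proxy-style compute instance per identified SAD.
    instances = {sad: launch_instance("pse-sad-proxy", sad) for sad in sad_identifiers}

    def on_control_plane_message(sad, message_id, payload):
        # Blocks 615/620: record the message before it leaves the provider network.
        MESSAGE_STATE[message_id] = {"sad": sad, "payload_len": len(payload)}
        # Block 625: forward the (possibly re-encapsulated) message via the tunnel.
        send_over_tunnel(instances[sad], payload)

    return on_control_plane_message

handler = establish_pse_connectivity(
    ["SAD-5bff"],
    launch_instance=lambda kind, sad: f"{kind}-{sad}",
    send_over_tunnel=lambda instance, payload: None,
)
handler("SAD-5bff", "msg-1", b"launch-instance-request")
```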
  • FIG. 7 is a flow diagram illustrating operations of a method for communicating with a provider substrate extension for communication with a network external to a provider network according to at least some embodiments. Some or all of the operations (or other processes described herein, or variations, and/or combinations thereof) are performed under the control of one or more computer systems configured with executable instructions and are implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications) executing collectively on one or more processors, by hardware or combinations thereof.
  • the code is stored on a computer-readable storage medium, for example, in the form of a computer program comprising instructions executable by one or more processors.
  • the computer-readable storage medium is non-transitory.
  • one or more (or all) of the operations are carried out by computer programs or applications executed by one or more components of a provider network, such as services executed by computer systems located within a data center of the provider network.
  • the provider network may be a cloud provider network.
  • the one or more components of the provider network can facilitate communications between other components of the provider network and an extension of the provider network.
  • the extension of the provider network includes one or more physical computing devices or systems and is remotely located from a data center (e.g., outside of the data center network) of the provider network, such as on the premises of a customer of the provider network.
  • one or more (or all) of the operations are performed by components of the provider network (e.g., PSE interfaces, PSE SAD proxies, PSE SAD anchors) of the other figures.
  • the operations include, at block 705, receiving, in a provider network, a first message of a first type and having a first destination address, wherein the first destination address is associated with a virtual network address of the provider network and an address of a first device in an extension of the provider network, wherein the extension of the provider network is in communication with the provider network via at least a third-party network.
  • one connectivity configuration between a provider network and a PSE involves communicating via one or more secure tunnels (e.g., from a tunnel endpoint within the provider network to a tunnel endpoint within the PSE via a customer network, the internet, etc.).
  • One or more compute instances hosted within the provider network can perform various functions to facilitate communications between devices and/or hosted instances of the provider network and devices and/or hosted instances of the PSE.
  • a VNA can be attached to a compute instance hosted within the provider network to allow the compute instance to masquerade as the SAD within the PSE.
  • the operations further include, at block 710, updating a message state data store based on at least a portion of the first message.
• a PSE SAD proxy can serve as a stateful communications boundary for certain traffic between the PSE and the provider network, performing various operations on traffic originating from other components within the provider network and destined to the PSE and on traffic originating from the PSE and destined for other components of the provider network.
  • Such operations can include tracking the state of communications between sources and destinations. For example, a command to launch a compute instance hosted by a device of the PSE can originate within the provider network.
  • the PSE SAD proxy can track the command and associated response in a data store.
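• The command/response tracking described here can be sketched as follows, assuming a shared message state store keyed by request identifier and a simple drop-if-inconsistent policy like the one described earlier for the PSE SAD proxies; all identifiers are placeholders.

```python
MESSAGE_STATE = {}   # stand-in for a shared message state data store

def record_outbound_command(request_id, source, api_call):
    """Track a control plane command (e.g., a launch request) sent to the PSE."""
    MESSAGE_STATE[request_id] = {"source": source, "api_call": api_call}

def handle_inbound_response(request_id, claimed_api_call):
    """Only forward responses that are consistent with recorded state;
    drop anything unexpected (illustrative policy)."""
    expected = MESSAGE_STATE.get(request_id)
    if expected is None or expected["api_call"] != claimed_api_call:
        return None                      # drop inconsistent traffic
    return expected["source"]            # where to deliver the response

record_outbound_command("req-42", source="10.0.0.5", api_call="LaunchInstance")
handle_inbound_response("req-42", "LaunchInstance")     # -> "10.0.0.5"
handle_inbound_response("req-99", "TerminateInstance")  # -> None (dropped)
```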
• the operations further include, at block 715, sending a first payload of the first message to the first device via a first secure tunnel through the third-party network.
  • a PSE SAD proxy can perform various operations depending on the nature of the traffic traversing the secure tunnel between the provider network and the PSE. For example, for some types of traffic, the PSE SAD proxy can relay a received message through to the PSE. For other types of traffic, the PSE SAD proxy can re-encapsulate the payload of a received message and send it in a new message to the PSE (e.g., to terminate and bridge secure sessions).
  • FIG. 8 illustrates an example provider network (or “service provider system”) environment according to at least some embodiments.
  • a provider network 800 may provide resource virtualization to customers via one or more virtualization services 810 that allow customers to purchase, rent, or otherwise obtain instances 812 of virtualized resources, including but not limited to computation and storage resources, implemented on devices within the provider network or networks in one or more data centers.
  • Local Internet Protocol (IP) addresses 816 may be associated with the resource instances 812; the local IP addresses are the internal network addresses of the resource instances 812 on the provider network 800.
  • the provider network 800 may also provide public IP addresses 814 and/or public IP address ranges (e.g., Internet Protocol version 4 (IPv4) or Internet Protocol version 6 (IPv6) addresses) that customers may obtain from the provider 800.
  • the provider network 800 may allow a customer of the service provider (e.g., a customer that operates one or more client networks 850A-850C including one or more customer device(s) 852) to dynamically associate at least some public IP addresses 814 assigned or allocated to the customer with particular resource instances 812 assigned to the customer.
  • the provider network 800 may also allow the customer to remap a public IP address 814, previously mapped to one virtualized computing resource instance 812 allocated to the customer, to another virtualized computing resource instance 812 that is also allocated to the customer.
  • a customer of the service provider such as the operator of customer network(s) 850A-850C may, for example, implement customer-specific applications and present the customer’s applications on an intermediate network 840, such as the Internet.
  • Other network entities 820 on the intermediate network 840 may then generate traffic to a destination public IP address 814 published by the customer network(s) 850A-850C; the traffic is routed to the service provider data center, and at the data center is routed, via a network substrate, to the local IP address 816 of the virtualized computing resource instance 812 currently mapped to the destination public IP address 814.
• Local IP addresses refer to the internal or "private" network addresses, for example, of resource instances in a provider network. Local IP addresses can be within address blocks reserved by Internet Engineering Task Force (IETF) Request for Comments (RFC) 1918 and/or of an address format specified by IETF RFC 4193, and may be mutable within the provider network. Network traffic originating outside the provider network is not directly routed to local IP addresses; instead, the traffic uses public IP addresses that are mapped to the local IP addresses of the resource instances.
  • the provider network may include networking devices or appliances that provide network address translation (NAT) or similar functionality to perform the mapping from public IP addresses to local IP addresses and vice versa.
• Public IP addresses are Internet mutable network addresses that are assigned to resource instances, either by the service provider or by the customer. Traffic routed to a public IP address is translated, for example via 1:1 NAT, and forwarded to the respective local IP address of a resource instance.
  • Some public IP addresses may be assigned by the provider network infrastructure to particular resource instances; these public IP addresses may be referred to as standard public IP addresses, or simply standard IP addresses.
  • the mapping of a standard IP address to a local IP address of a resource instance is the default launch configuration for all resource instance types.
  • At least some public IP addresses may be allocated to or obtained by customers of the provider network 800; a customer may then assign their allocated public IP addresses to particular resource instances allocated to the customer. These public IP addresses may be referred to as customer public IP addresses, or simply customer IP addresses. Instead of being assigned by the provider network 800 to resource instances as in the case of standard IP addresses, customer IP addresses may be assigned to resource instances by the customers, for example via an API provided by the service provider. Unlike standard IP addresses, customer IP addresses are allocated to customer accounts and can be remapped to other resource instances by the respective customers as necessary or desired. A customer IP address is associated with a customer’s account, not a particular resource instance, and the customer controls that IP address until the customer chooses to release it.
  • customer IP addresses allow the customer to mask resource instance or availability zone failures by remapping the customer’s public IP addresses to any resource instance associated with the customer’s account.
  • the customer IP addresses for example, enable a customer to engineer around problems with the customer’s resource instances or software by remapping customer IP addresses to replacement resource instances.
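• The public-to-local IP mapping and remapping behavior can be illustrated with a small 1:1 NAT-style table; the addresses below are documentation placeholders, not values from the text.

```python
class PublicIpMap:
    """Illustrative 1:1 mapping of public IP addresses to local IP addresses,
    with remapping as described for customer IP addresses."""

    def __init__(self):
        self._public_to_local = {}

    def associate(self, public_ip: str, local_ip: str) -> None:
        self._public_to_local[public_ip] = local_ip

    def remap(self, public_ip: str, new_local_ip: str) -> None:
        # e.g., masking an instance failure by pointing the public IP at a
        # replacement resource instance
        self._public_to_local[public_ip] = new_local_ip

    def translate_inbound(self, public_ip: str) -> str:
        return self._public_to_local[public_ip]

nat = PublicIpMap()
nat.associate("203.0.113.10", "10.0.1.5")    # placeholder addresses
nat.remap("203.0.113.10", "10.0.1.9")
nat.translate_inbound("203.0.113.10")        # -> "10.0.1.9"
```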
  • FIG. 9 is a block diagram of an example provider network that provides a storage service and a hardware virtualization service to customers according to at least some embodiments.
  • Hardware virtualization service 920 provides multiple computation resources 924 (e.g., VMs) to customers.
  • the computation resources 924 may, for example, be rented or leased to customers of the provider network 900 (e.g., to a customer that implements customer network 950).
  • Each computation resource 924 may be provided with one or more local IP addresses.
  • Provider network 900 may be configured to route packets from the local IP addresses of the computation resources 924 to public Internet destinations, and from public Internet sources to the local IP addresses of computation resources 924.
  • Provider network 900 may provide a customer network 950, for example coupled to intermediate network 940 via local network 956, the ability to implement virtual computing systems 992 via hardware virtualization service 920 coupled to intermediate network 940 and to provider network 900.
  • hardware virtualization service 920 may provide one or more APIs 902, for example a web services interface, via which a customer network 950 may access functionality provided by the hardware virtualization service 920, for example via a console 994 (e.g., a web-based application, standalone application, mobile application, etc.).
  • each virtual computing system 992 at customer network 950 may correspond to a computation resource 924 that is leased, rented, or otherwise provided to customer network 950.
  • a virtual computing system 992 and/or another customer device 990 may access the functionality of storage service 910, for example via one or more APIs 902, to access data from and store data to storage resources 918A- 918N of a virtual data store 916 (e.g., a folder or“bucket”, a virtualized volume, a database, etc.) provided by the provider network 900.
  • a virtualized data store gateway may be provided at the customer network 950 that may locally cache at least some data, for example frequently-accessed or critical data, and that may communicate with storage service 910 via one or more communications channels to upload new or modified data from a local cache so that the primary store of data (virtualized data store 916) is maintained.
• a user, via a virtual computing system 992 and/or on another customer device 990, may mount and access virtual data store 916 volumes via storage service 910 acting as a storage virtualization service, and these volumes may appear to the user as local (virtualized) storage 998.
• While not shown in FIG. 9, the virtualization service(s) may also be accessed from resource instances within the provider network 900 via API(s) 902.
  • a customer, appliance service provider, or other entity may access a virtualization service from within a respective virtual network on the provider network 900 via an API 902 to request allocation of one or more resource instances within the virtual network or within another virtual network.
  • FIG. 10 is a block diagram illustrating an example computer system that may be used in at least some embodiments.
  • a computer system can be used as a server that implements one or more of the control-plane and/or data-plane components that are used to support the provider substrate and/or PSE described herein and/or various virtualized components (e.g., virtual machines, containers, etc.).
  • Such a computer system can include a general- or special-purpose computer system that includes or is configured to access one or more computer-accessible media.
  • such a computer system can also be used to implement components outside of the provider substrate and/or provider substrate extension (e.g., the customer gateway/router 186, other customer resources 187, and the like).
• the illustrated computer system 1000 includes one or more processors 1010 coupled to a system memory 1020 via an input/output (I/O) interface 1030.
  • Computer system 1000 further includes a network interface 1040 coupled to I/O interface 1030. While FIG. 10 shows computer system 1000 as a single computing device, in various embodiments a computer system 1000 may include one computing device or any number of computing devices configured to work together as a single computer system 1000.
  • computer system 1000 may be a uniprocessor system including one processor 1010, or a multiprocessor system including several processors 1010 (e.g., two, four, eight, or another suitable number).
  • processors 1010 may be any suitable processors capable of executing instructions.
  • processors 1010 may be general-purpose or embedded processors implementing any of a variety of instruction set architectures (ISAs), such as the x86, ARM, PowerPC, SPARC, or MIPS ISAs, or any other suitable ISA.
  • each of processors 1010 may commonly, but not necessarily, implement the same ISA.
  • System memory 1020 may store instructions and data accessible by processor(s) 1010.
  • system memory 1020 may be implemented using any suitable memory technology, such as random-access memory (RAM), static RAM (SRAM), synchronous dynamic RAM (SDRAM), nonvolatile/Flash-type memory, or any other type of memory.
  • program instructions and data implementing one or more desired functions, such as those methods, techniques, and data described above are shown stored within system memory 1020 as code 1025 and data 1026.
  • I/O interface 1030 may be configured to coordinate I/O traffic between processor 1010, system memory 1020, and any peripheral devices in the device, including network interface 1040 or other peripheral interfaces.
  • I/O interface 1030 may perform any necessary protocol, timing, or other data transformations to convert data signals from one component (e.g., system memory 1020) into a format suitable for use by another component (e.g., processor 1010).
  • I/O interface 1030 may include support for devices attached through various types of peripheral buses, such as a variant of the Peripheral Component Interconnect (PCI) bus standard or the Universal Serial Bus (USB) standard, for example.
  • I/O interface 1030 may be split into two or more separate components, such as a north bridge and a south bridge, for example. Also, in some embodiments some or all of the functionality of I/O interface 1030, such as an interface to system memory 1020, may be incorporated directly into processor 1010.
  • Network interface 1040 may be configured to allow data to be exchanged between computer system 1000 and other devices 1060 attached to a network or networks 1050, such as other computer systems or devices as illustrated in FIG. 1, for example.
  • network interface 1040 may support communication via any suitable wired or wireless general data networks, such as types of Ethernet network, for example.
• network interface 1040 may support communication via telecommunications/telephony networks such as analog voice networks or digital fiber communications networks, via storage area networks (SANs) such as Fibre Channel SANs, or via any other suitable type of network and/or protocol.
  • a computer system 1000 includes one or more offload cards 1070 (including one or more processors 1075, and possibly including the one or more network interfaces 1040) that are connected using an I/O interface 1030 (e.g., a bus implementing a version of the Peripheral Component Interconnect - Express (PCI-E) standard, or another interconnect such as a QuickPath interconnect (QPI) or UltraPath interconnect (UPI)).
  • the computer system 1000 may act as a host electronic device (e.g., operating as part of a hardware virtualization service) that hosts compute instances, and the one or more offload cards 1070 execute a virtualization manager that can manage compute instances that execute on the host electronic device.
  • the offload card(s) 1070 can perform compute instance management operations such as pausing and/or un-pausing compute instances, launching and/or terminating compute instances, performing memory transfer/copying operations, etc. These management operations may, in some embodiments, be performed by the offload card(s) 1070 in coordination with a hypervisor (e.g., upon a request from a hypervisor) that is executed by the other processors 1010A-1010N of the computer system 1000.
  • the virtualization manager implemented by the offload card(s) 1070 can accommodate requests from other entities (e.g., from compute instances themselves), and may not coordinate with (or service) any separate hypervisor.
  • the PSE framework 202 and at least a portion of the functionality of the host manager 222 execute on the one or more processors 1075 of the offload cards 1070 while the instances (e.g., 232, 234, 236) execute on the one or more processors 1010.
  • system memory 1020 may be one embodiment of a computer- accessible medium configured to store program instructions and data as described above. However, in other embodiments, program instructions and/or data may be received, sent or stored upon different types of computer-accessible media.
  • a computer-accessible medium may include non-transitory storage media or memory media such as magnetic or optical media, e.g., disk or DVD/CD coupled to computer system 1000 via I/O interface 1030.
• a non-transitory computer-accessible storage medium may also include any volatile or non-volatile media such as RAM (e.g., SDRAM, double data rate (DDR) SDRAM, SRAM, etc.), read only memory (ROM), etc., that may be included in some embodiments of computer system 1000 as system memory 1020 or another type of memory.
  • a computer-accessible medium may include transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as a network and/or a wireless link, such as may be implemented via network interface 1040.
  • Various embodiments discussed or suggested herein can be implemented in a wide variety of operating environments, which in some cases can include one or more user computers, computing devices, or processing devices which can be used to operate any of a number of applications.
  • User or client devices can include any of a number of general-purpose personal computers, such as desktop or laptop computers running a standard operating system, as well as cellular, wireless, and handheld devices running mobile software and capable of supporting a number of networking and messaging protocols.
  • Such a system also can include a number of workstations running any of a variety of commercially-available operating systems and other known applications for purposes such as development and database management.
• the network(s) can support communications using any of a variety of commercially available protocols, such as Transmission Control Protocol/Internet Protocol (TCP/IP), File Transfer Protocol (FTP), Universal Plug and Play (UPnP), Network File System (NFS), Common Internet File System (CIFS), Extensible Messaging and Presence Protocol (XMPP), AppleTalk, etc.
  • the network(s) can include, for example, a local area network (LAN), a wide-area network (WAN), a virtual private network (VPN), the Internet, an intranet, an extranet, a public switched telephone network (PSTN), an infrared network, a wireless network, and any combination thereof.
  • the web server can run any of a variety of server or mid-tier applications, including HTTP servers, File Transfer Protocol (FTP) servers, Common Gateway Interface (CGI) servers, data servers, Java servers, business application servers, etc.
• the server(s) also may be capable of executing programs or scripts in response to requests from user devices, such as by executing one or more Web applications that may be implemented as one or more scripts or programs written in any programming language, such as Java®, C, C# or C++, or any scripting language, such as Perl, Python, PHP, or TCL, as well as combinations thereof.
  • the server(s) may also include database servers, including without limitation those commercially available from Oracle(R), Microsoft(R), Sybase(R), IBM(R), etc.
  • the database servers may be relational or non-relational (e.g.,“NoSQL”), distributed or non-distributed, etc.
  • the environment can include a variety of data stores and other memory and storage media as discussed above. These can reside in a variety of locations, such as on a storage medium local to (and/or resident in) one or more of the computers or remote from any or all of the computers across the network. In a particular set of embodiments, the information may reside in a storage-area network (SAN) familiar to those skilled in the art. Similarly, any necessary files for performing the functions attributed to the computers, servers, or other network devices may be stored locally and/or remotely, as appropriate.
  • each such device can include hardware elements that may be electrically coupled via a bus, the elements including, for example, at least one central processing unit (CPU), at least one input device (e.g., a mouse, keyboard, controller, touch screen, or keypad), and/or at least one output device (e.g., a display device, printer, or speaker).
  • Such a system may also include one or more storage devices, such as disk drives, optical storage devices, and solid-state storage devices such as random-access memory (RAM) or read-only memory (ROM), as well as removable media devices, memory cards, flash cards, etc.
  • Such devices can include a computer-readable storage media reader, a communications device (e.g., a modem, a network card (wireless or wired), an infrared communication device, etc.), and working memory as described above.
  • the computer-readable storage media reader can be connected with, or configured to receive, a computer-readable storage medium, representing remote, local, fixed, and/or removable storage devices as well as storage media for temporarily and/or more permanently containing, storing, transmitting, and retrieving computer-readable information.
  • the system and various devices also typically will include a number of software applications, modules, services, or other elements located within at least one working memory device, including an operating system and application programs, such as a client application or web browser.
• Storage media and computer readable media for containing code, or portions of code, can include any appropriate media known or used in the art, including storage media and communication media, such as but not limited to volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage and/or transmission of information such as computer readable instructions, data structures, program modules, or other data, including RAM, ROM, Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, Compact Disc-Read Only Memory (CD-ROM), Digital Versatile Disk (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a system device.
  • Bracketed text and blocks with dashed borders are used herein to illustrate optional operations that add additional features to some embodiments. However, such notation should not be taken to mean that these are the only options or optional operations, and/or that blocks with solid borders are not optional in certain embodiments.
  • Reference numerals with suffix letters may be used to indicate that there can be one or multiple instances of the referenced entity in various embodiments, and when there are multiple instances, each does not need to be identical but may instead share some general traits or act in common ways. Further, the particular suffixes used are not meant to imply that a particular amount of the entity exists unless specifically indicated to the contrary. Thus, two entities using the same or different suffix letters may or may not have the same number of instances in various embodiments.
• references to "one embodiment," "an embodiment," "an example embodiment," etc. indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
• disjunctive language such as the phrase "at least one of A, B, or C" is intended to be understood to mean either A, B, or C, or any combination thereof (e.g., A, B, and/or C). As such, disjunctive language is not intended to, nor should it be understood to, imply that a given embodiment requires at least one of A, at least one of B, or at least one of C to each be present.
  • a computer-implemented method comprising:
  • a second compute instance to proxy control plane traffic to a first substrate addressable device of the one or more substrate addressable devices, wherein the second compute instance is to:
  • a computer-implemented method comprising:
  • the first control plane message includes an identifier of a source of the first control plane message and a call to an application programming interface (API) of the first substrate addressable device;
  • updating the message state data store includes storing the identifier of the source and an indication of the call to the API.
  • a second operation to send an identification of the fourth compute instance to at least one of the one or more compute instances other than the third compute instance.
  • a system comprising:
  • the extension management service including instructions that upon execution cause the extension management service to:
• based on the identification, initiate a launch of one or more compute instances within the provider network via the instance management service, the one or more compute instances to connect the provider network to the extension of the provider network across at least a third-party network, the one or more compute instances to:
  • the first control plane message includes an identifier of a source of the first control plane message and a call to an application programming interface (API) of the first substrate addressable device, and wherein the update of the message state data store includes storing the identifier of the source and an indication of the call to the API.
  • extension management service includes further instructions that upon execution cause the extension management service to cause an attachment of a virtual network address to at least one compute instance of the one or more compute instances, wherein the virtual network address matches a substrate address of at least one substrate addressable device of the one or more substrate addressable devices.
  • the extension management service includes further instructions that upon execution cause the extension management service to send a request to execute a workflow to a workflow execution service of the provider network, the request including an operation to launch at least one compute instance of the one or more compute instances via the instance management service, wherein a workflow executor managed by the workflow execution service executes the workflow.
  • the extension management service includes further instructions that upon execution cause the extension management service to: monitor an actual state of the one or more compute instances;
  • a third compute instance of the one or more compute instances is causing the actual state of the one or more compute instances to not match a desired state of the one or more compute instances, wherein the desired state of the one or more compute instances is based at least in part on the identification; and generate a schedule that identifies one or more operations to modify at least one compute instance of the one or more compute instances to reconcile a difference between the actual state and the desired state.
  • to monitor the actual state of the one or more compute instances includes at least one of sending a request for a response to a first compute instance of the one or more compute instances or receiving a message from the first compute instance of the one or more compute instances.
  • a second operation to send an identification of the fourth compute instance to at least one of the one or more compute instances other than the third compute instance.
  • extension management service includes further instructions that upon execution cause the extension management service to: receive, from the extension of the provider network, a public key associated with a tunnel endpoint of the extension;
• a computer-implemented method comprising:
• receiving, in a provider network, a first packet including a first control plane message payload and a first destination address, wherein the first destination address matches a virtual network address of the provider network and a substrate address of a first device in an extension of the provider network, wherein the extension of the provider network is in communication with the provider network via at least a third-party network;
  • a computer-implemented method comprising:
  • the first payload includes a call to an application programming interface (API) of the first device;
  • API application programming interface
  • updating the message state data store includes storing the first source address, the first destination address, and an indication of the call to the API.
  • a system comprising:
  • a second one or more computing devices of an extension of the provider network wherein the extension of the provider network is in communication with the provider network via at least a third-party network;
  • the first one or more computing devices include instructions that upon execution on a processor cause the first one or more computing devices to: receive, in the provider network, a first message of a first type and having a first destination address, wherein the first destination address is associated with a virtual network address of the provider network and an address of a first device of the second one or more computing devices;
• first one or more computing devices include further instructions that upon execution on a processor cause the first one or more computing devices to: receive, in the provider network, a second message of the first type from the first device via the first secure tunnel;
  • the first message includes a first source address associated with a second device in the provider network
  • first one or more computing devices include further instructions that upon execution on a processor cause the first one or more computing devices to:
  • the first payload includes a call to an application programming interface (API) of the first device;
  • API application programming interface
  • update of the message state data store includes storing the first source address, the first destination address, and an indication of the call to the API.
  • the first one or more computing devices include further instructions that upon execution on a processor cause the first one or more computing devices to:
• first one or more computing devices include further instructions that upon execution on a processor cause the first one or more computing devices to: receive, in the provider network, a second message of a second type and having the first destination address, the second message including a second payload that includes an identifier of a first compute instance hosted by the first device; and send the second payload to the first device via a second secure tunnel through the third-party network.
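The claim language above describes compute instances that proxy control plane traffic toward a substrate addressable device in the extension: each inbound control plane message is recorded in a message state data store (capturing the identifier of its source and the API call it carries) and then forwarded to the device over a secure tunnel. The following Python sketch illustrates that flow under stated assumptions; the names (ControlPlaneMessage, MessageStateStore, ControlPlaneProxy) and the message fields are illustrative, not taken from the application, and the secure tunnel is stubbed out with a plain callable.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class ControlPlaneMessage:
    """Illustrative control plane message; the fields are assumptions."""
    source_id: str      # identifier of the source in the provider network
    destination: str    # substrate address of the target device in the extension
    api_call: str       # API of the substrate addressable device being invoked
    payload: dict = field(default_factory=dict)


class MessageStateStore:
    """Toy message state data store keyed by destination device."""

    def __init__(self) -> None:
        self._entries: Dict[str, List[dict]] = {}

    def record(self, msg: ControlPlaneMessage) -> None:
        # Store the identifier of the source and an indication of the API call,
        # as recited in the claim fragments above.
        self._entries.setdefault(msg.destination, []).append(
            {"source": msg.source_id, "api_call": msg.api_call}
        )


class ControlPlaneProxy:
    """Sketch of a compute instance proxying control plane traffic to a
    substrate addressable device over a secure tunnel."""

    def __init__(self, tunnel_send: Callable[[str, dict], None],
                 state_store: MessageStateStore) -> None:
        self._tunnel_send = tunnel_send  # writes into the secure tunnel (stubbed here)
        self._state_store = state_store

    def handle(self, msg: ControlPlaneMessage) -> None:
        # Update the message state data store based on the first control plane
        # message, then send a second control plane message via the tunnel.
        self._state_store.record(msg)
        self._tunnel_send(msg.destination, {"api_call": msg.api_call,
                                            "payload": msg.payload})


# Usage example with the tunnel replaced by a print statement.
proxy = ControlPlaneProxy(
    tunnel_send=lambda dest, body: print(f"tunnel -> {dest}: {body}"),
    state_store=MessageStateStore(),
)
proxy.handle(ControlPlaneMessage("api-frontend-1", "10.0.0.7", "DescribeInstance"))
```

Keeping the state store separate from the forwarding path mirrors the distinction in the claim fragments between updating the message state data store and sending the second control plane message; in practice the store would also be used to correlate responses returned over the tunnel.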

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Computing Systems (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

A first service of a provider network obtains an identification of one or more substrate addressable devices included in an extension of the provider network. Based on the identification, a launch of one or more compute instances within the provider network is initiated. The one or more compute instances are to connect the provider network to the extension of the provider network across at least a third-party network by receiving a first control plane message destined for a first substrate addressable device of the one or more substrate addressable devices, updating a message state data store based at least in part on the first control plane message, and sending a second control plane message to the first substrate addressable device via a secure tunnel.
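The abstract above couples two mechanisms: deriving, from the identified substrate addressable devices, the set of connectivity compute instances to launch, and (per the claim fragments earlier) keeping the actual state of those instances aligned with the desired state. The sketch below illustrates only that reconciliation step under stated assumptions; the one-instance-per-device convention and the function names (list_running_instances, launch_instance) are hypothetical and not taken from the application.

```python
from typing import Callable, Iterable, List, Set


def reconcile_connectivity_instances(
    substrate_device_ids: Iterable[str],
    list_running_instances: Callable[[], Set[str]],
    launch_instance: Callable[[str], None],
) -> List[str]:
    """Return and apply the schedule of launches needed so that the actual
    state (instances reported by an instance management service) matches the
    desired state (assumed here: one connectivity instance per identified
    substrate addressable device)."""
    desired = {f"connectivity-{device_id}" for device_id in substrate_device_ids}
    actual = list_running_instances()
    schedule = sorted(desired - actual)   # instances missing from the actual state
    for instance_name in schedule:
        launch_instance(instance_name)    # e.g. a call into the instance manager
    return schedule


# Usage example with stubbed services: one of two devices already has an instance.
launched = reconcile_connectivity_instances(
    substrate_device_ids=["sad-1", "sad-2"],
    list_running_instances=lambda: {"connectivity-sad-1"},
    launch_instance=lambda name: print(f"launching {name}"),
)
print(launched)  # ['connectivity-sad-2']
```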
PCT/US2020/039859 2019-06-28 2020-06-26 Gestion de connectivité de réseau fournisseur pour extensions de substrat de réseau de fournisseur WO2020264323A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP20742587.7A EP3987397A1 (fr) 2019-06-28 2020-06-26 Gestion de connectivité de réseau fournisseur pour extensions de substrat de réseau de fournisseur
CN202080047186.XA CN114026826B (zh) 2019-06-28 2020-06-26 提供商网络底层扩展的提供商网络连接管理

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US16/457,827 US11374789B2 (en) 2019-06-28 2019-06-28 Provider network connectivity to provider network substrate extensions
US16/457,827 2019-06-28
US16/457,824 2019-06-28
US16/457,824 US11659058B2 (en) 2019-06-28 2019-06-28 Provider network connectivity management for provider network substrate extensions

Publications (1)

Publication Number Publication Date
WO2020264323A1 (fr)

Family

ID=71662357

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2020/039859 WO2020264323A1 (fr) 2019-06-28 2020-06-26 Gestion de connectivité de réseau fournisseur pour extensions de substrat de réseau de fournisseur

Country Status (3)

Country Link
EP (1) EP3987397A1 (fr)
CN (1) CN114026826B (fr)
WO (1) WO2020264323A1 (fr)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6156652A (en) * 1998-10-09 2000-12-05 The United States Of America As Represented By The Secretary Of The Air Force Post-process metallization interconnects for microelectromechanical systems
US9438506B2 (en) * 2013-12-11 2016-09-06 Amazon Technologies, Inc. Identity and access management-based access control in virtual networks
WO2018020290A1 (fr) * 2016-07-25 2018-02-01 Telefonaktiebolaget Lm Ericsson (Publ) Chemin de commande rapide et convergence de chemin de données dans des réseaux de recouvrement de couche 2

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140108665A1 (en) * 2012-10-16 2014-04-17 Citrix Systems, Inc. Systems and methods for bridging between public and private clouds through multilevel api integration
US20150089034A1 (en) * 2013-09-23 2015-03-26 Amazon Technologies, Inc. Client-premise resource control via provider-defined interfaces

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11917446B1 (en) 2019-11-29 2024-02-27 Amazon Technologies, Inc. Mobility of cloud compute instances hosted within communications service provider networks
EP4049139B1 (fr) * 2019-11-29 2024-04-17 Amazon Technologies Inc. Placement basé sur la latence d'instances de calcul en nuage dans des réseaux de fournisseurs de services de communication

Also Published As

Publication number Publication date
CN114026826A (zh) 2022-02-08
CN114026826B (zh) 2023-07-14
EP3987397A1 (fr) 2022-04-27

Similar Documents

Publication Publication Date Title
US11539552B1 (en) Data caching in provider network substrate extensions
US11659058B2 (en) Provider network connectivity management for provider network substrate extensions
US9756018B2 (en) Establishing secure remote access to private computer networks
US10949125B2 (en) Virtualized block storage servers in cloud provider substrate extension
US8407366B2 (en) Interconnecting members of a virtual network
US9521037B2 (en) Providing access to configurable private computer networks
US11620081B1 (en) Virtualized block storage servers in cloud provider substrate extension
US20170099260A1 (en) Providing location-specific network access to remote services
US20160006610A1 (en) Providing local secure network access to remote services
US11431497B1 (en) Storage expansion devices for provider network substrate extensions
US10949131B2 (en) Control plane for block storage service distributed across a cloud provider substrate and a substrate extension
US11411771B1 (en) Networking in provider network substrate extensions
CN114026826B (zh) 提供商网络底层扩展的提供商网络连接管理
US11374789B2 (en) Provider network connectivity to provider network substrate extensions
JP7440195B2 (ja) クラウドプロバイダ基板拡張における仮想化ブロックストレージサーバー
US20240205051A1 (en) Resource sharing between cloud-hosted virtual networks

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20742587; Country of ref document: EP; Kind code of ref document: A1)

NENP Non-entry into the national phase (Ref country code: DE)

ENP Entry into the national phase (Ref document number: 2020742587; Country of ref document: EP; Effective date: 20220120)