WO2020161561A1 - Local service announcement in a stretched cluster - Google Patents
Local service announcement in a stretched cluster
- Publication number
- WO2020161561A1 (PCT application PCT/IB2020/050616)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- service
- multiple sites
- user
- site
- local
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
Definitions
- Computer clusters interconnected by a communication network are the basis for computer clouds where distributed applications can deploy services.
- Such computer clusters are typically centralized at one location with a single traffic gateway for ingress and egress.
- a service (reachable via an IP address or DNS entry) is announced via the gateway, so that traffic destined to the service passes through the gateway and is load-balanced over the compute nodes in the cluster.
- egress traffic from the nodes exits the cluster via the central gateway.
- a cloud can be distributed over multiple geographical sites forming a set of clusters, one at each site.
- instances of the same service can be deployed on several locations. Instances can be placed based on application criteria, such as latency, bandwidth, price of computing, etc. Specifically, some applications prefer placing service instances close to a user of a service so that network resources in terms of bandwidth and latency are optimized.
- a method includes providing a plurality of compute nodes at a single cluster that is geographically distributed over multiple sites that are connected by a communication network, deploying a service at the multiple sites via a shared control plane, each of the multiple sites including a local gateway to announce the service, announcing, via the local gateways at each of the multiple sites, a virtual Internet Protocol, VIP, address corresponding to the service, determining a closest instance of the service for a user of a site of the multiple sites, and assigning the closest instance of the service to the user.
- policies in the control plane are used to deploy the service based on one or more of geography, resources and/or cost.
- the service is announced on the multiple sites simultaneously.
- determining the closest instance of the service includes determining a cost corresponding to each of one or more routes from the service to the user, and assigning a lowest-cost route of the one or more routes to the user.
- the cost corresponding to each of the one or more routes includes a number of hops
- the lowest-cost route is a route of the one or more routes that is determined to have the fewest number of hops.
- the number of hops for each of the one or more routes is determined using a routing protocol, such as Border Gateway Protocol, BGP.
- a load-balancing function distributes traffic among local instances of a service at the site.
- egress traffic from the site is directed to a local gateway of the site by distributing a default route from the local gateway.
- the VIP address corresponding to the service is the same at each of the multiple sites.
- a system for managing a plurality of compute nodes at a single cluster that is geographically distributed over multiple sites that are connected by a communication network includes at least one processor circuit, and a non-transitory computer readable memory containing instructions executable by the at least one processor circuit to perform operations including deploying a service at the multiple sites via a shared control plane, each of the multiple sites including a local gateway to announce the service, announcing, via the local gateways at each of the multiple sites, a virtual Internet Protocol address (VIP) corresponding to the service, determining a closest instance of the service for a user of a site of the multiple sites, and assigning the closest instance of the service to the user.
- Some embodiments provide a non-transitory computer readable medium containing computer program instructions executable by at least one processor circuit to perform operations including providing a plurality of compute nodes at a single cluster that is geographically distributed over multiple sites that are connected by a communication network, deploying a service at the multiple sites via a shared control plane, each of the multiple sites including a local gateway to announce the service, announcing, via the local gateways at each of the multiple sites, a virtual Internet Protocol address, VIP, corresponding to the service, determining a closest instance of the service for a user of a site of the multiple sites, and assigning the closest instance of the service to the user.
- Figure 1 is a block diagram illustrating an example of a stretched cluster with a central site connected to two local sites.
- Figure 2 is a block diagram illustrating an example of a stretched cluster using open source technology.
- Figure 3 is a flow diagram illustrating a method for stretching a cluster.
- Figure 4A illustrates elements of a master node in a stretched cluster according to some embodiments.
- Figure 4B illustrates functional modules of a master node in a stretched cluster according to some embodiments.
- Figure 5 schematically illustrates elements of a wireless communication network in which some embodiments may be implemented.
- a computer cluster or simply cluster, is a set of compute nodes, or worker nodes, that work together so that they can be viewed from the perspective of a client as a single system.
- multiple worker nodes may be configured to perform the same task.
- the operation and scheduling of worker nodes is controlled by one or more master nodes in the cluster which provide a control plane for the cluster.
- the nodes in a cluster are usually connected to each other through fast local area networks, with each node running its own instance of an operating system. In some instances, all of the nodes in a cluster may use the same hardware and the same operating system, although in some cases, the nodes may have different hardware and/or use different operating systems.
- a distributed cluster is a cluster in which the worker nodes are situated at different geographic sites. Because the worker nodes at the distributed sites must work cooperatively, the sites are typically connected via internet protocol (IP) based high speed communication networks.
- the communication network connecting sites in a distributed cluster may employ an interior gateway routing protocol, such as the Open Shortest Path First (OSPF) protocol, or Intermediate System to Intermediate System (IS-IS) protocol.
- a distributed cloud can be designed to use multiple clusters with separate control planes and separate addressing. This is common in public clouds where different availability zones can be used to deploy a service. However, separate service addresses are used, and separate management of the different clusters is thus necessary.
- Advanced traffic steering techniques such as common naming, domain name server (DNS) or anycast, can be used to provide a unified service endpoint.
- the clusters are still separate, and workloads must be scheduled separately.
- the traffic steering techniques are often complicated.
- a single distributed cluster with a single control plane has the problem of a centralized ingress and egress, so that traffic must enter the distributed cluster at a central location. Even though a service instance is placed close to users, the traffic to and from the service instance will still have to travel via a single gateway. If all traffic passing in and out of the cluster passes a central gateway, this may result in long traffic paths with long latencies and extra bandwidth consumption due to conditions such as hairpinning. In the worst case, the same traffic may pass the same links multiple times. Thus, in distributed clusters with a centralized gateway and distributed users, a distributed application service may suffer from large network latencies and/or traffic congestion.
- a single computing cluster may be geographically distributed so that compute nodes are situated in multiple sites with a communication network connecting the sites.
- the network may have links with significant latency and bandwidth constraints.
- Such a cluster may be referred to as a "stretched" cluster, because the cluster is stretched across multiple sites with an internal network spanning more than one site and with a single control plane provided by one or more master nodes. This is in contrast to multiple cluster deployments in which several clusters with separate control planes are formed, one on each site.
- a network topology of a stretched cluster may be a central site with several geographically distributed sites (hub-and-spoke). However, the techniques described herein may also be applied to more general topologies.
- a distributed application may deploy a service on such a stretched cluster. Instances of the service may run at different sites that are geographically distributed. Because the service is operated in a single cluster with the same control plane, the service may be scheduled and managed uniformly. Policies can be used in the control plane to deploy the service in different ways, according to geography, resources or cost, for example.
- Users accessing the service are also geographically distributed and may access the service from different geographic locations.
- a service instance inside the cluster may access external services (such as a download server) residing in different locations. It is beneficial for a user to be able to access a service instance that is located closer to the user, so that the network latency and bandwidth consumption may be reduced.
- a service may be dynamically deployed over many sites, and announcement of routes is coupled to the service instance deployment.
- a local gateway is situated where the service is announced to users accessing the service.
- the service may be announced with a virtual IP address (VIP) that is not bound to any specific infrastructure. Since a service may be located at several sites, the same service, with the same VIP address, may be announced on multiple sites simultaneously.
- a cost is assigned to a route depending on the number of network hops.
- a service closer to a user (in hops) may therefore be preferred.
- a user accessing the service VIP address will therefore be directed to its closest instance. Since the deployment of services is performed dynamically, instances can come and go. For example, if a service is no longer available at a local site, the route announcing the service will be withdrawn, and a user accessing the service will instead be directed to a location further away. In contrast, if a new service is deployed at the local site, a route will be announced on the local gateway, and a user requesting that service will be directed to the local site.
- the techniques described herein allow for dynamic coupling of the distributed deployment of a service using a single cloud control plane with a dynamic routing protocol announcing a VIP address of that service. Services are announced locally in a geographically distributed computer cluster. Dynamic coupling of a service VIP to a routing protocol with local announcement enables more efficient use of network resources, while maintaining centralized Life Cycle Management (LCM) of the system. Furthermore, if several service instances are located at a local site, a load-balancing function can distribute the traffic among the local instances while keeping that traffic local. Finally, egress traffic may also be directed to a local gateway by distributing a default route from the local gateway. A minimal sketch of this dynamic behavior is given below.
- a stretched cluster provides a single cloud control plane so that services can be deployed uniformly using a single VIP address.
- the local gateways provide a means to access the service instances locally, so that users can access the services more efficiently compared to accessing the service via a central gateway. "Hairpinning," which leads to extra latency and bandwidth consumption, may be avoided.
- the dynamic coupling of service deployment and dynamic routing protocols leads to a dynamic solution that adapts to changes in the service deployment.
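- As an illustrative sketch only (not the claimed implementation), the dynamic coupling described above can be pictured as a small routing table in which announcing or withdrawing a VIP route at a site changes which gateway a user is directed to. The VIP, site names and hop counts below are assumptions made for the example; a real deployment would rely on a dynamic routing protocol such as BGP rather than an in-memory table.

```python
# Toy model of local VIP announcement/withdrawal in a stretched cluster.
# All values (VIP, site names, hop counts) are illustrative assumptions.

SERVICE_VIP = "10.96.0.10"

# Hop count from each user location to each site's local gateway.
HOPS = {
    "user-A": {"central": 6, "remote-1": 1, "remote-2": 4},
    "user-B": {"central": 2, "remote-1": 5, "remote-2": 7},
}

# Sites where an instance of the service currently runs, i.e. where the
# local gateway announces a route to the service VIP.
announced_sites = {"central", "remote-1"}

def announce(site: str) -> None:
    """A new instance is deployed at `site`; its gateway announces the VIP."""
    announced_sites.add(site)

def withdraw(site: str) -> None:
    """The last instance at `site` is removed; the VIP route is withdrawn."""
    announced_sites.discard(site)

def closest_site(user: str) -> str:
    """Pick the announcing site reachable with the fewest hops for `user`."""
    candidates = {s: h for s, h in HOPS[user].items() if s in announced_sites}
    if not candidates:
        raise RuntimeError(f"no route to {SERVICE_VIP}")
    return min(candidates, key=candidates.get)

print(closest_site("user-A"))  # 'remote-1' (1 hop away)
withdraw("remote-1")           # local instance disappears, route is withdrawn
print(closest_site("user-A"))  # 'central' (6 hops), the next-closest announcing site
announce("remote-2")           # a new instance is deployed nearby
print(closest_site("user-A"))  # 'remote-2' (4 hops)
```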
- FIG. 1 is a block diagram illustrating an example of a stretched cluster 100 with a central site 115 connected to two remote sites 125A, 125B via a communication network 105.
- Each site hosts one or more worker nodes 120 (also referred to as compute nodes 120).
- the central site 115 also hosts one or more master nodes 110 which provide single control plane functionality to the cluster 100.
- the master node(s) 110 perform functions such as hardware selection, deployment, load balancing and maintenance.
- the cluster 100 may in particular embodiments be a Kubernetes (K8s) cluster. Kubernetes is an open-source platform for automated deployment, scaling, and operation of application containers across clusters of nodes. Although a K8s cluster is shown for purposes of illustration, it will be appreciated that other types of cluster architectures could be used in some embodiments.
- the central site 115 and the remote sites 125A, 125B have local gateways through which services may be announced using virtual IP addresses (VIPs).
- Users such as clients 130A, 130B may access the service at the closest site.
- client 130B may access an instance of the service running on a worker 120 at remote site 125B via gateway 140B
- client 130A may access an instance of the service running on a worker 120 at central site 115 via gateway 140A.
- the cluster can attract traffic to different places within the cluster that may normally not attract traffic when using only a single ingress point.
- Edge nodes may include embedded systems, radio nodes, or other systems having very limited resources. Many edge nodes are limited such that they cannot run a Kubernetes master control function.
- the Kubernetes cluster 100 is stretched out, with a centralized control plane provided by the master nodes 110, to include remote sites 125A, 125B that include only Kubernetes workers 120.
- the stretched cluster 100 is characterized by edge placement of workloads at worker nodes 120 using labels.
- the cluster 100 avoids centralized ingress with the addition of a gateway 140B at the edge of the cluster at the remote site 125B, which provides traffic attraction to the site closest to the client. This arrangement also provides resiliency if service at the edge is disrupted.
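- As a hedged illustration of label-based edge placement (the label key, image, and names below are hypothetical and not defined by this disclosure), a Kubernetes Deployment can be constrained to the workers of one site with a nodeSelector, expressed here as a Python dictionary:

```python
# Illustrative only: a Deployment pinned to workers at one remote site via a
# nodeSelector. The label "topology.example.com/site" is an assumed label an
# operator would apply to the edge workers; it is not defined by this document.
import json

edge_deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "my-service-edge"},
    "spec": {
        "replicas": 2,
        "selector": {"matchLabels": {"app": "my-service"}},
        "template": {
            "metadata": {"labels": {"app": "my-service"}},
            "spec": {
                # Schedule these pods only on workers labeled for the remote site.
                "nodeSelector": {"topology.example.com/site": "remote-b"},
                "containers": [
                    {"name": "my-service", "image": "registry.example.com/my-service:1.0"},
                ],
            },
        },
    },
}

print(json.dumps(edge_deployment, indent=2))  # serialize for inspection or templating
```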
- Figure 2 is a block diagram illustrating an example of a stretched cluster 200 according to some embodiments that uses open source technology.
- the distributed cluster 200 includes a central K8s site 210 and remote K8s sites 220.
- the central site 210 includes a master node 110 and two worker nodes 120.
- Each remote site 220 includes a plurality of worker nodes 120.
- Each site is connected via a switch 140 to a router 150 at the edge of an OSPF network 205 for internal routing.
- the routers 150 also provide a gateway function for the cluster, so that services may be advertised to clients through the routers 150 via local breakout (LBO) nodes 160.
- Calico CNI is used for internal networking.
- Calico with IP-in-IP mode is used to establish an overlay network, such that there is no interaction with the IP infrastructure.
- Calico automatically establishes full mesh iBGP between Kubernetes nodes 120.
- equal-cost multi-path (ECMP) routing may be used to spread traffic over multiple routes.
- a load balancer may use BGP peering with certain nodes.
- Applications are deployed with an external traffic policy of "local" (externalTrafficPolicy: Local). Traffic policies direct how the load balancer should work.
- An external traffic policy indicates whether the load balancer should select services running on the same host, the same site or across different sites. The application's service IP address is advertised over eBGP, thereby attracting only the relevant traffic to the local Kubernetes worker node 120; a configuration sketch is given below.
- BGP and Kubernetes on distributed small edge nodes may be combined to enable local traffic breakout and geographic workload placement using Kubernetes labels.
- this architecture provides additional application resiliency if the edge nodes 120 are disrupted.
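- The following sketch shows the kind of Service configuration referred to above, expressed as a Python dictionary. Only the field externalTrafficPolicy: Local is taken from this description; the service name, ports, and LoadBalancer type are illustrative assumptions. Combined with a BGP-speaking load balancer, such a Service causes its IP address to be advertised only from nodes that host a local endpoint, so that only relevant traffic is attracted to the local worker node.

```python
# Illustrative Service definition; only externalTrafficPolicy: Local comes from
# the description above, the remaining values are assumptions for the example.
local_service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "my-service"},
    "spec": {
        "type": "LoadBalancer",
        # Keep traffic on the node that received it; nodes without a local
        # endpoint do not attract (or advertise) traffic for this service.
        "externalTrafficPolicy": "Local",
        "selector": {"app": "my-service"},
        "ports": [{"port": 80, "targetPort": 8080}],
    },
}
```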
- Figure 3 is a flow diagram illustrating operations of systems/methods according to some embodiments for providing a stretched cluster arrangement.
- a plurality of compute nodes are provided for a single cluster that is geographically distributed over multiple sites that are connected by a communication network.
- a service is deployed at the multiple sites via a shared control plane, each of the multiple sites including a local gateway to announce the service.
- the shared control plane includes policies to deploy the service based on one or more of geography, resources or cost.
- a load-balancing function may be applied at each site to distribute traffic among local instances of a service at the site.
- egress traffic from the site is directed to a local gateway of the site by distributing a default route from the local gateway.
- a virtual Internet Protocol address (VIP) corresponding to the service is announced via the local gateways at each of the multiple sites.
- the service is announced on the multiple sites simultaneously, where the same VIP is announced for the service at each of the multiple sites.
- the control plane determines a closest instance of the service for a user of a site of the multiple sites.
- the closest instance may be determined based on determining a cost of each route from the user to the service.
- the cost may be determined by a routing protocol, such as Border Gateway Protocol (BGP), that identifies a number of hops for each route and assigns a cost to each route based on the identified number of hops for the route. Accordingly, routes having fewer hops may be determined to have a lower cost and therefore be "closer," and routes having more hops may be determined to have a higher cost and be "farther."
- the control plane assigns the closest instance of the service to the user.
- the assigned closest instance may be the instance of the service having the fewest hops between the service and the user.
- FIGs 4A and 4B illustrate elements of a master node 110 in a stretched cluster according to some embodiments.
- a master node 110 includes a processing circuit 112 and a memory circuit 114 that stores computer readable program instructions that, when executed by the processing circuit 112, cause the master node 110 to perform operations described herein.
- the master node further includes a communication interface 116 for communicating with one or more worker nodes at a local or remote site in a cluster 100 via communication network 105 ( Figure 1).
- Figure 4B illustrates various functional modules that are stored in the memory circuit 114 and executed by the processing circuit 112.
- the functional modules include a service deployment module 122 for deploying services at worker nodes 120 in the cluster 100, a service announcement module 124 for announcing the service via gateways 140, and a service assignment module 126 for determining a closest instance of the service for a user and assigning the closest instance of the service to the user.
- a wireless network may further include any additional elements suitable to support communication between wireless devices or between a wireless device and another communication device, such as a landline telephone, a service provider, or any other network node or end device.
- network node 460 and wireless device (WD) 410 are depicted with additional detail.
- the wireless network may provide communication and other types of services to one or more wireless devices to facilitate the wireless devices' access to and/or use of the services provided by, or via, the wireless network.
- the wireless network may comprise and/or interface with any type of communication, telecommunication, data, cellular, and/or radio network or other similar type of system.
- the wireless network may be configured to operate according to specific standards or other types of predefined rules or procedures.
- particular embodiments of the wireless network may implement communication standards, such as Global System for Mobile Communications (GSM), Universal Mobile Telecommunications System (UMTS), Long Term Evolution (LTE), and/or other suitable wireless communication standards, such as wireless local area network (WLAN), Worldwide Interoperability for Microwave Access (WiMax), Bluetooth, Z-Wave and/or ZigBee standards.
- Network 406 may comprise one or more backhaul networks, core networks, IP networks, public switched telephone networks (PSTNs), packet data networks, optical networks, wide-area networks (WANs), local area networks (LANs), wireless local area networks (WLANs), wired networks, wireless networks, metropolitan area networks, and other networks to enable communication between devices.
- Network node 460 and WD 410 comprise various components described in more detail below. These components work together in order to provide network node and/or wireless device functionality, such as providing wireless connections in a wireless network.
- the wireless network may comprise any number of wired or wireless networks, network nodes, base stations, controllers, wireless devices, relay stations, and/or any other components or systems that may facilitate or participate in the communication of data and/or signals whether via wired or wireless connections.
- network node refers to equipment capable, configured, arranged and/or operable to communicate directly or indirectly with a wireless device and/or with other network nodes or equipment in the wireless network to enable and/or provide wireless access to the wireless device and/or to perform other functions (e.g., administration) in the wireless network.
- network nodes include, but are not limited to, access points (APs) (e.g., radio access points), base stations (BSs) (e.g., radio base stations, Node Bs, evolved Node Bs (eNBs) and NR NodeBs (gNBs)).
- Base stations may be categorized based on the amount of coverage they provide (or, stated differently, their transmit power level) and may then also be referred to as femto base stations, pico base stations, micro base stations, or macro base stations.
- a base station may be a relay node or a relay donor node controlling a relay.
- a network node may also include one or more (or all) parts of a distributed radio base station such as centralized digital units and/or remote radio units (RRUs), sometimes referred to as Remote Radio Heads (RRHs). Such remote radio units may or may not be integrated with an antenna as an antenna integrated radio.
- Parts of a distributed radio base station may also be referred to as nodes in a distributed antenna system (DAS).
- network nodes include multi-standard radio (MSR) equipment such as MSR BSs, network controllers such as radio network controllers (RNCs) or base station controllers (BSCs), base transceiver stations (BTSs), transmission points, transmission nodes, multi-cell/multicast coordination entities (MCEs), core network nodes (e.g., MSCs, MMEs), O&M nodes, OSS nodes, SON nodes, positioning nodes (e.g., E-SMLCs), and/or MDTs.
- network nodes may represent any suitable device (or group of devices) capable, configured, arranged, and/or operable to enable and/or provide a wireless device with access to the wireless network or to provide some service to a wireless device that has accessed the wireless network.
- network node 460 includes processing circuitry 470, device readable medium 480, interface 490, auxiliary equipment 484, power source 486, power circuitry 487, and antenna 462.
- network node 460 illustrated in the example wireless network of Figure 5 may represent a device that includes the illustrated combination of hardware components, other embodiments may comprise network nodes with different combinations of components. It is to be understood that a network node comprises any suitable combination of hardware and/or software needed to perform the tasks, features, functions and methods disclosed herein.
- network node 460 may comprise multiple different physical components that make up a single illustrated component (e.g., device readable medium 480 may comprise multiple separate hard drives as well as multiple RAM modules).
- network node 460 may be composed of multiple physically separate components (e.g., a NodeB component and a RNC component, or a BTS component and a BSC component, etc.), which may each have their own respective components.
- network node 460 comprises multiple separate components (e.g., BTS and BSC components)
- one or more of the separate components may be shared among several network nodes.
- a single RNC may control multiple NodeB's.
- each unique NodeB and RNC pair may in some instances be considered a single separate network node.
- network node 460 may be configured to support multiple radio access technologies (RATs).
- Network node 460 may also include multiple sets of the various illustrated components for different wireless technologies integrated into network node 460, such as, for example, GSM, WCDMA, LTE, NR, WiFi, or Bluetooth wireless technologies. These wireless technologies may be integrated into the same or different chip or set of chips and other components within network node 460.
- Processing circuitry 470 is configured to perform any determining, calculating, or similar operations (e.g., certain obtaining operations) described herein as being provided by a network node.
- the operations performed by processing circuitry 470 may include processing information obtained by processing circuitry 470 by, for example, converting the obtained information into other information, comparing the obtained information or converted information to information stored in the network node, and/or performing one or more operations based on the obtained information or converted information, and as a result of said processing making a determination.
- Processing circuitry 470 may comprise a combination of one or more of a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application-specific integrated circuit, field programmable gate array, or any other suitable computing device, resource, or combination of hardware, software and/or encoded logic operable to provide, either alone or in conjunction with other network node 460 components, such as device readable medium 480, network node 460 functionality.
- processing circuitry 470 may execute instructions stored in device readable medium 480 or in memory within processing circuitry 470. Such functionality may include providing any of the various wireless features, functions, or benefits discussed herein.
- processing circuitry 470 may include a system on a chip (SOC).
- processing circuitry 470 may include one or more of radio frequency (RF) transceiver circuitry 472 and baseband processing circuitry 474.
- RF transceiver circuitry 472 and baseband processing circuitry 474 may be on the same chip or set of chips, boards, or units
- in some embodiments, some or all of the functionality described herein as being provided by a network node may be provided by processing circuitry 470 executing instructions stored on device readable medium 480 or memory within processing circuitry 470.
- some or all of the functionality may be provided by processing circuitry 470 without executing instructions stored on a separate or discrete device readable medium, such as in a hard-wired manner.
- processing circuitry 470 can be configured to perform the described functionality. The benefits provided by such functionality are not limited to processing circuitry 470 alone or to other components of network node 460, but are enjoyed by network node 460 as a whole, and/or by end users and the wireless network generally.
- Device readable medium 480 may comprise any form of volatile or non-volatile computer readable memory including, without limitation, persistent storage, solid-state memory, remotely mounted memory, magnetic media, optical media, random access memory (RAM), read-only memory (ROM), mass storage media (for example, a hard disk), removable storage media (for example, a flash drive, a Compact Disk (CD) or a Digital Video Disk (DVD)), and/or any other volatile or non-volatile, non-transitory device readable and/or computer-executable memory devices that store information, data, and/or instructions that may be used by processing circuitry 470.
- Device readable medium 480 may store any suitable instructions, data or information, including a computer program, software, an application including one or more of logic, rules, code, tables, etc. and/or other instructions capable of being executed by processing circuitry 470 and utilized by network node 460.
- Device readable medium 480 may be used to store any calculations made by processing circuitry 470 and/or any data received via interface 490.
- processing circuitry 470 and device readable medium 480 may be considered to be integrated.
- Interface 490 is used in the wired or wireless communication of signalling and/or data between network node 460, network 406, and/or WDs 410. As illustrated, interface 490 comprises port(s)/terminal(s) 494 to send and receive data, for example to and from network 406 over a wired connection. Interface 490 also includes radio front end circuitry 492 that may be coupled to, or in certain embodiments a part of, antenna 462. Radio front end circuitry 492 comprises filters 498 and amplifiers 496. Radio front end circuitry 492 may be connected to antenna 462 and processing circuitry 470. Radio front end circuitry may be configured to condition signals communicated between antenna 462 and processing circuitry 470.
- Radio front end circuitry 492 may receive digital data that is to be sent out to other network nodes or WDs via a wireless connection. Radio front end circuitry 492 may convert the digital data into a radio signal having the appropriate channel and bandwidth parameters using a combination of filters 498 and/or amplifiers 496. The radio signal may then be transmitted via antenna 462. Similarly, when receiving data, antenna 462 may collect radio signals which are then converted into digital data by radio front end circuitry 492. The digital data may be passed to processing circuitry 470. In other embodiments, the interface may comprise different components and/or different combinations of components.
- network node 460 may not include separate radio front end circuitry 492, instead, processing circuitry 470 may comprise radio front end circuitry and may be connected to antenna 462 without separate radio front end circuitry 492. Similarly, in some embodiments, all or some of RF transceiver circuitry 472 may be considered a part of interface 490. In still other embodiments, interface 490 may include one or more ports or terminals 494, radio front end circuitry 492, and RF transceiver circuitry 472, as part of a radio unit (not shown), and interface 490 may communicate with baseband processing circuitry 474, which is part of a digital unit (not shown).
- Antenna 462 may include one or more antennas, or antenna arrays, configured to send and/or receive wireless signals. Antenna 462 may be coupled to radio front end circuitry 492 and may be any type of antenna capable of transmitting and receiving data and/or signals wirelessly. In some embodiments, antenna 462 may comprise one or more omni-directional, sector or panel antennas operable to transmit/receive radio signals between, for example, 2 GHz and 66 GHz.
- An omni-directional antenna may be used to transmit/receive radio signals in any direction
- a sector antenna may be used to transmit/receive radio signals from devices within a particular area
- a panel antenna may be a line of sight antenna used to transmit/receive radio signals in a relatively straight line.
- the use of more than one antenna may be referred to as MIMO.
- antenna 462 may be separate from network node 460 and may be connectable to network node 460 through an interface or port.
- Antenna 462, interface 490, and/or processing circuitry 470 may be configured to perform any receiving operations and/or certain obtaining operations described herein as being performed by a network node. Any information, data and/or signals may be received from a wireless device, another network node and/or any other network equipment. Similarly, antenna 462, interface 490, and/or processing circuitry 470 may be configured to perform any transmitting operations described herein as being performed by a network node. Any information, data and/or signals may be transmitted to a wireless device, another network node and/or any other network equipment.
- Power circuitry 487 may comprise, or be coupled to, power management circuitry and is configured to supply the components of network node 460 with power for performing the functionality described herein. Power circuitry 487 may receive power from power source 486. Power source 486 and/or power circuitry 487 may be configured to provide power to the various components of network node 460 in a form suitable for the respective components (e.g., at a voltage and current level needed for each respective component). Power source 486 may either be included in, or external to, power circuitry 487 and/or network node 460.
- network node 460 may be connectable to an external power source (e.g., an electricity outlet) via an input circuitry or interface such as an electrical cable, whereby the external power source supplies power to power circuitry 487.
- power source 486 may comprise a source of power in the form of a battery or battery pack which is connected to, or integrated in, power circuitry 487. The battery may provide backup power should the external power source fail.
- Other types of power sources such as photovoltaic devices, may also be used.
- network node 460 may include additional components beyond those shown in Figure 5 that may be responsible for providing certain aspects of the network node's functionality, including any of the functionality described herein and/or any functionality necessary to support the subject matter described herein.
- network node 460 may include user interface equipment to allow input of information into network node 460 and to allow output of information from network node 460. This may allow a user to perform diagnostic, maintenance, repair, and other administrative functions for network node 460.
- wireless device refers to a device capable, configured, arranged and/or operable to communicate wirelessly with network nodes and/or other wireless devices.
- the term WD may be used interchangeably herein with user equipment (UE).
- Communicating wirelessly may involve transmitting and/or receiving wireless signals using electromagnetic waves, radio waves, infrared waves, and/or other types of signals suitable for conveying information through air.
- a WD may be configured to transmit and/or receive information without direct human interaction.
- a WD may be designed to transmit information to a network on a predetermined schedule, when triggered by an internal or external event, or in response to requests from the network.
- Examples of a WD include, but are not limited to, a smart phone, a mobile phone, a cell phone, a voice over IP (VoIP) phone, a wireless local loop phone, a desktop computer, a personal digital assistant (PDA), a wireless camera, a gaming console or device, a music storage device, a playback appliance, a wearable terminal device, a wireless endpoint, a mobile station, a tablet, a laptop, a laptop-embedded equipment (LEE), a laptop-mounted equipment (LME), a smart device, a wireless customer-premise equipment (CPE), a vehicle-mounted wireless terminal device, etc.
- a WD may support device-to-device (D2D) communication, for example by implementing a 3GPP standard for sidelink communication, vehicle-to-vehicle (V2V), vehicle-to- infrastructure (V2I), vehicle-to-everything (V2X) and may in this case be referred to as a D2D communication device.
- a WD may represent a machine or other device that performs monitoring and/or measurements, and transmits the results of such monitoring and/or measurements to another WD and/or a network node.
- the WD may in this case be a machine-to-machine (M2M) device, which may in a 3GPP context be referred to as an MTC device.
- the WD may be a UE implementing the 3GPP narrow band internet of things (NB-IoT) standard.
- Examples of such machines or devices are sensors, metering devices such as power meters, industrial machinery, or home or personal appliances (e.g., refrigerators, televisions, etc.), or personal wearables (e.g., watches, fitness trackers, etc.).
- a WD may represent a vehicle or other equipment that is capable of monitoring and/or reporting on its operational status or other functions associated with its operation.
- a WD as described above may represent the endpoint of a wireless connection, in which case the device may be referred to as a wireless terminal.
- a WD as described above may be mobile, in which case it may also be referred to as a mobile device or a mobile terminal.
- wireless device 410 includes antenna 411, interface 414, processing circuitry 420, device readable medium 430, user interface equipment 432, auxiliary equipment 434, power source 436 and power circuitry 437.
- WD 410 may include multiple sets of one or more of the illustrated components for different wireless technologies supported by WD 410, such as, for example, GSM, WCDMA, LTE, NR, WiFi, WiMAX, or Bluetooth wireless technologies, just to mention a few. These wireless technologies may be integrated into the same or different chips or set of chips as other components within WD 410.
- Antenna 411 may include one or more antennas or antenna arrays, configured to send and/or receive wireless signals, and is connected to interface 414.
- antenna 411 may be separate from WD 410 and be connectable to WD 410 through an interface or port.
- Antenna 411, interface 414, and/or processing circuitry 420 may be configured to perform any receiving or transmitting operations described herein as being performed by a WD. Any information, data and/or signals may be received from a network node and/or another WD.
- radio front end circuitry and/or antenna 411 may be considered an interface.
- interface 414 comprises radio front end circuitry 412 and antenna 411.
- Radio front end circuitry 412 comprise one or more filters 418 and amplifiers 416.
- Radio front end circuitry 412 is connected to antenna 411 and processing circuitry 420, and is configured to condition signals communicated between antenna 411 and processing circuitry 420.
- Radio front end circuitry 412 may be coupled to or a part of antenna 411.
- WD 410 may not include separate radio front end circuitry 412; rather, processing circuitry 420 may comprise radio front end circuitry and may be connected to antenna 411.
- some or all of RF transceiver circuitry 422 may be considered a part of interface 414.
- Radio front end circuitry 412 may receive digital data that is to be sent out to other network nodes or WDs via a wireless connection. Radio front end circuitry 412 may convert the digital data into a radio signal having the appropriate channel and bandwidth parameters using a combination of filters 418 and/or amplifiers 416. The radio signal may then be transmitted via antenna 411. Similarly, when receiving data, antenna 411 may collect radio signals which are then converted into digital data by radio front end circuitry 412. The digital data may be passed to processing circuitry 420. In other embodiments, the interface may comprise different components and/or different combinations of components.
- Processing circuitry 420 may comprise a combination of one or more of a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application-specific integrated circuit, field programmable gate array, or any other suitable computing device, resource, or combination of hardware, software, and/or encoded logic operable to provide, either alone or in conjunction with other WD 410 components, such as device readable medium 430, WD 410 functionality. Such functionality may include providing any of the various wireless features or benefits discussed herein. For example, processing circuitry 420 may execute instructions stored in device readable medium 430 or in memory within processing circuitry 420 to provide the functionality disclosed herein.
- processing circuitry 420 includes one or more of RF transceiver circuitry 422, baseband processing circuitry 424, and application processing circuitry 426.
- the processing circuitry may comprise different components and/or different combinations of components.
- processing circuitry 420 of WD 410 may comprise a SOC.
- RF transceiver circuitry 422, baseband processing circuitry 424, and application processing circuitry 426 may be on separate chips or sets of chips.
- part or all of baseband processing circuitry 424 and application processing circuitry 426 may be combined into one chip or set of chips, and RF transceiver circuitry 422 may be on a separate chip or set of chips.
- part or all of RF transceiver circuitry 422 and baseband processing circuitry 424 may be on the same chip or set of chips, and application processing circuitry 426 may be on a separate chip or set of chips.
- part or all of RF transceiver circuitry 422, baseband processing circuitry 424, and application processing circuitry 426 may be combined in the same chip or set of chips.
- RF transceiver circuitry 422 may be a part of interface 414.
- RF transceiver circuitry 422 may condition RF signals for processing circuitry 420.
- in certain embodiments, some or all of the functionality described herein as being provided by a WD may be provided by processing circuitry 420 executing instructions stored on device readable medium 430, which in certain embodiments may be a computer-readable storage medium.
- some or all of the functionality may be provided by processing circuitry 420 without executing instructions stored on a separate or discrete device readable storage medium, such as in a hard-wired manner.
- processing circuitry 420 can be configured to perform the described functionality. The benefits provided by such functionality are not limited to processing circuitry 420 alone or to other components of WD 410, but are enjoyed by WD 410 as a whole, and/or by end users and the wireless network generally.
- Processing circuitry 420 may be configured to perform any determining, calculating, or similar operations (e.g., certain obtaining operations) described herein as being performed by a WD. These operations, as performed by processing circuitry 420, may include processing information obtained by processing circuitry 420 by, for example, converting the obtained information into other information, comparing the obtained information or converted information to information stored by WD 410, and/or performing one or more operations based on the obtained information or converted information, and as a result of said processing making a determination.
- processing information obtained by processing circuitry 420 by, for example, converting the obtained information into other information, comparing the obtained information or converted information to information stored by WD 410, and/or performing one or more operations based on the obtained information or converted information, and as a result of said processing making a determination.
- Device readable medium 430 may be operable to store a computer program, software, an application including one or more of logic, rules, code, tables, etc. and/or other instructions capable of being executed by processing circuitry 420.
- Device readable medium 430 may include computer memory (e.g., Random Access Memory (RAM) or Read Only Memory (ROM)), mass storage media (e.g., a hard disk), removable storage media (e.g., a Compact Disk (CD) or a Digital Video Disk (DVD)), and/or any other volatile or non-volatile, non-transitory device readable and/or computer executable memory devices that store information, data, and/or instructions that may be used by processing circuitry 420.
- processing circuitry 420 and device readable medium 430 may be considered to be integrated.
- User interface equipment 432 may provide components that allow for a human user to interact with WD 410. Such interaction may be of many forms, such as visual, audial, tactile, etc. User interface equipment 432 may be operable to produce output to the user and to allow the user to provide input to WD 410. The type of interaction may vary depending on the type of user interface equipment 432 installed in WD 410. For example, if WD 410 is a smart phone, the interaction may be via a touch screen; if WD 410 is a smart meter, the interaction may be through a screen that provides usage (e.g., the number of gallons used) or a speaker that provides an audible alert (e.g., if smoke is detected).
- User interface equipment 432 may include input interfaces, devices and circuits, and output interfaces, devices and circuits. User interface equipment 432 is configured to allow input of information into WD 410, and is connected to processing circuitry 420 to allow processing circuitry 420 to process the input information. User interface equipment 432 may include, for example, a microphone, a proximity or other sensor, keys/buttons, a touch display, one or more cameras, a USB port, or other input circuitry. User interface equipment 432 is also configured to allow output of information from WD 410, and to allow processing circuitry 420 to output information from WD 410. User interface equipment 432 may include, for example, a speaker, a display, vibrating circuitry, a USB port, a headphone interface, or other output circuitry. Using one or more input and output interfaces, devices, and circuits, of user interface equipment 432, WD 410 may communicate with end users and/or the wireless network, and allow them to benefit from the functionality described herein.
- Auxiliary equipment 434 is operable to provide more specific functionality which may not be generally performed by WDs. This may comprise specialized sensors for doing measurements for various purposes, interfaces for additional types of communication such as wired communications etc. The inclusion and type of components of auxiliary equipment 434 may vary depending on the embodiment and/or scenario.
- Power source 436 may, in some embodiments, be in the form of a battery or battery pack. Other types of power sources, such as an external power source (e.g., an electricity outlet), photovoltaic devices or power cells, may also be used.
- WD 410 may further comprise power circuitry 437 for delivering power from power source 436 to the various parts of WD 410 which need power from power source 436 to carry out any functionality described or indicated herein.
- Power circuitry 437 may in certain embodiments comprise power management circuitry.
- Power circuitry 437 may additionally or alternatively be operable to receive power from an external power source; in which case WD 410 may be connectable to the external power source (such as an electricity outlet) via input circuitry or an interface such as an electrical power cable.
- Power circuitry 437 may also in certain embodiments be operable to deliver power from an external power source to power source 436. This may be, for example, for the charging of power source 436. Power circuitry 437 may perform any formatting, converting, or other modification to the power from power source 436 to make the power suitable for the respective components of WD 410 to which power is supplied.
- any appropriate steps, methods, features, functions, or benefits disclosed herein may be performed through one or more functional units or modules of one or more virtual apparatuses.
- Each virtual apparatus may comprise a number of these functional units.
- These functional units may be implemented via processing circuitry, which may include one or more microprocessors or microcontrollers, as well as other digital hardware, which may include digital signal processors (DSPs), special-purpose digital logic, and the like.
- the processing circuitry may be configured to execute program code stored in memory, which may include one or several types of memory such as read-only memory (ROM), random-access memory (RAM), cache memory, flash memory devices, optical storage devices, etc.
- Program code stored in memory includes program instructions for executing one or more telecommunications and/or data communications protocols as well as instructions for carrying out one or more of the techniques described herein.
- the processing circuitry may be used to cause the respective functional unit to perform corresponding functions according to one or more embodiments of the present disclosure.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Mobile Radio Communication Systems (AREA)
Abstract
Systems and methods for providing service announcements in stretched clusters are provided. A method is disclosed that includes providing a plurality of compute nodes at a single cluster that is geographically distributed over multiple sites that are connected by a communication network, deploying a service at the multiple sites via a shared control plane, each of the multiple sites including a local gateway to announce the service, announcing, via the local gateways at each of the multiple sites, a virtual Internet Protocol address corresponding to the service, determining a closest instance of the service for a user of a site of the multiple sites, and assigning the closest instance of the service to the user.
Description
LOCAL SERVICE ANNOUNCEMENT IN A STRETCHED CLUSTER
RELATED APPLICATION
[0001] The present application claims the benefit of and priority to U.S. Provisional Patent Application No. 62/800,713, filed February 4, 2019, entitled "LOCAL SERVICE ANNOUNCEMENT IN A STRETCHED CLUSTER," the disclosure of which is hereby incorporated herein by reference in its entirety.
BACKGROUND
[0002] Computer clusters interconnected by a communication network are the basis for computer clouds where distributed applications can deploy services. Such computer clusters are typically centralized at one location with a single traffic gateway for ingress and egress. In such a cluster, a service (reachable via an IP address or DNS entry) is announced via the gateway, so that traffic destined to the service passes through the gateway and is load-balanced over the compute nodes in the cluster. Likewise, egress traffic from the nodes exits the cluster via the central gateway.
[0003] A cloud can be distributed over multiple geographical sites forming a set of clusters, one at each site. In a distributed cloud, instances of the same service can be deployed on several locations. Instances can be placed based on application criteria, such as latency, bandwidth, price of computing, etc. Specifically, some applications prefer placing service instances close to a user of a service so that network resources in terms of bandwidth and latency are optimized.
[0004] However, placing the same service in multiple locations or separate clusters may be problematic for traffic steering purposes. That is, conventional technology does not adequately ensure that a user of a single service reaches its closest instance, and improvements are needed.
SUMMARY
[0005] A method according to some embodiments includes providing a plurality of compute nodes at a single cluster that is geographically distributed over multiple sites that are connected by a communication network, deploying a service at the multiple sites via a shared control plane, each of the multiple sites including a local gateway to announce the service, announcing, via the local gateways at each of the multiple sites, a virtual Internet Protocol, VIP, address corresponding to the service, determining a closest instance of the service for a user of a site of the multiple sites, and assigning the closest instance of the service to the user.
[0006] In some embodiments, policies in the control plane are used to deploy the service based on one or more of geography, resources and/or cost.
[0007] In some embodiments, the service is announced on the multiple sites simultaneously.
[0008] In some embodiments, determining the closest instance of the service includes determining a cost corresponding to each of one or more routes from the service to the user, and assigning a lowest-cost route of the one or more routes to the user.
[0009] In some embodiments, the cost corresponding to each of the one or more routes includes a number of hops, and the lowest-cost route is a route of the one or more routes that is determined to have the fewest number of hops.
[0010] In some embodiments, the number of hops for each of the one or more routes is determined using a routing protocol, such as Border Gateway Protocol, BGP.
[0011] In some embodiments, a load-balancing function distributes traffic among local instances of a service at the site.
[0012] In some embodiments, egress traffic from the site is directed to a local gateway of the site by distributing a default route from the local gateway.
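As a minimal sketch (with invented addresses and next hops, and not part of the claimed subject matter), the reason distributing a default route from the local gateway steers egress traffic locally is ordinary longest-prefix matching: the default route is the least-specific entry in the routing table, so it catches all egress destinations, while a locally announced VIP remains the most-specific match.

```python
# Toy longest-prefix-match lookup: a default route distributed from the local
# gateway captures egress traffic, while a locally announced /32 VIP route
# stays the most specific match. Addresses and next hops are illustrative.
import ipaddress

routes = {
    ipaddress.ip_network("0.0.0.0/0"): "local gateway",       # default route
    ipaddress.ip_network("10.96.0.10/32"): "local service VIP",
}

def next_hop(destination: str) -> str:
    """Return the target of the most specific (longest-prefix) matching route."""
    dest = ipaddress.ip_address(destination)
    matching = [net for net in routes if dest in net]
    best = max(matching, key=lambda net: net.prefixlen)
    return routes[best]

print(next_hop("10.96.0.10"))   # 'local service VIP'  (the /32 wins)
print(next_hop("203.0.113.7"))  # 'local gateway'      (egress follows the default route)
```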
[0013] In some embodiments, the VIP address corresponding to the service is the same at each of the multiple sites.
[0014] A system for managing a plurality of compute nodes at a single cluster that is geographically distributed over multiple sites that are connected by a communication network, includes at least one processor circuit, and a non-transitory computer readable memory containing instructions executable by the at least one processor circuit to perform operations including deploying a service at the multiple sites via a shared control plane, each of the multiple sites including a local gateway to announce the service, announcing, via the local gateways at each of the multiple sites, a virtual Internet Protocol address (VIP) corresponding to the service, determining a closest instance of the service for a user of a site of the multiple sites, and assigning the closest instance of the service to the user.
[0015] Some embodiments provide a non-transitory computer readable medium containing computer program instructions executable by at least one processor circuit to perform operations including providing a plurality of compute nodes at a single cluster that is geographically distributed over multiple sites that are connected by a communication network, deploying a service at the multiple sites via a shared control plane, each of the multiple sites including a local gateway to announce the service, announcing, via the local gateways at each of the multiple sites, a virtual Internet Protocol address, VIP, corresponding to the service, determining a closest instance of the service for a user of a site of the multiple sites, and assigning the closest instance of the service to the user.
DESCRIPTION OF THE DRAWINGS
[0016] Figure 1 is a block diagram illustrating an example of a stretched cluster with a central site connected to two local sites.
[0017] Figure 2 is a block diagram illustrating an example of a stretched cluster using open source technology.
[0018] Figure 3 is a flow diagram illustrating a method for stretching a cluster.
[0019] Figure 4A illustrates elements of a master node in a stretched cluster according to some embodiments.
[0020] Figure 4B illustrates functional modules of a master node in a stretched cluster according to some embodiments.
[0021] Figure 5 schematically illustrates elements of a wireless communication network in which some embodiments may be implemented.
DETAILED DESCRIPTION OF EMBODIMENTS
[0022] Generally, all terms used herein are to be interpreted according to their ordinary meaning in the relevant technical field, unless a different meaning is clearly given and/or is implied from the context in which it is used. All references to a/an/the element, apparatus, component, means, step, etc. are to be interpreted openly as referring to at least one instance of the element, apparatus, component, means, step, etc., unless explicitly stated otherwise. The steps of any methods disclosed herein do not have to be performed in the exact order disclosed, unless a step is explicitly described as following or preceding another step and/or where it is implicit that a step must follow or precede another step. Any feature of any of the embodiments disclosed herein may be applied to any other embodiment, wherever appropriate. Likewise, any advantage of any of the embodiments may apply to any other
embodiments, and vice versa. Other objectives, features and advantages of the enclosed embodiments will be apparent from the following description.
[0023] A computer cluster, or simply cluster, is a set of compute nodes, or worker nodes, that work together so that they can be viewed from the perspective of a client as a single system. In a computer cluster, multiple worker nodes may be configured to perform the same task. The operation and scheduling of worker nodes is controlled by one or more master nodes in the cluster which provide a control plane for the cluster.
[0024] The nodes in a cluster are usually connected to each other through fast local area networks, with each node running its own instance of an operating system. In some instances, all of the nodes in a cluster may use the same hardware and the same operating system, although in some cases, the nodes may have different hardware and/or use different operating systems.
[0025] A distributed cluster is a cluster in which the worker nodes are situated at different geographic sites. Because the worker nodes at the distributed sites must work cooperatively, the sites are typically connected via internet protocol (IP) based high speed communication networks. The communication network connecting sites in a distributed cluster may employ an interior gateway routing protocol, such as the Open Shortest Path First (OSPF) protocol, or Intermediate System to Intermediate System (IS-IS) protocol. The OSPF routing protocol calculates the shortest route to a destination through the network based on an algorithm that analyzes a network topology map.
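For illustration only, the shortest-path computation performed by a link-state protocol such as OSPF can be sketched with Dijkstra's algorithm over a topology map. The Python sketch below uses hypothetical node names and unit link costs; an actual OSPF implementation derives link costs from link attributes rather than the unit weights assumed here.

```python
# Illustrative sketch only: Dijkstra-style shortest-path search over a topology
# map, as a simplified stand-in for the computation a link-state protocol performs.
import heapq

def shortest_path(topology, src, dst):
    """topology: {node: {neighbor: link_cost, ...}}; returns (cost, path) or None."""
    queue = [(0, src, [src])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, link_cost in topology.get(node, {}).items():
            if neighbor not in visited:
                heapq.heappush(queue, (cost + link_cost, neighbor, path + [neighbor]))
    return None

# Hypothetical topology: a client reachable from a remote-site gateway directly,
# and from the central-site gateway only via a transit router.
topology = {
    "client": {"gw-remote-b": 1, "transit": 1},
    "transit": {"client": 1, "gw-central": 1},
    "gw-remote-b": {"client": 1},
    "gw-central": {"transit": 1},
}
print(shortest_path(topology, "client", "gw-central"))  # (2, ['client', 'transit', 'gw-central'])
```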
[0026] There currently exist certain challenges for implementing distributed cloud networks using clusters. A distributed cloud can be designed to use multiple clusters with
separate control planes and separate addressing. This is common in public clouds where different availability zones can be used to deploy a service. However, separate service addresses are used, and separate management of the different clusters is thus necessary.
[0027] Advanced traffic steering techniques such as common naming, domain name server (DNS) or anycast, can be used to provide a unified service endpoint. However, the clusters are still separate, and workloads must be scheduled separately. Thus, the traffic steering techniques are often complicated.
[0028] A single distributed cluster with a single control plane, on the other hand, has the problem of a centralized ingress and egress, so that traffic must enter the distributed cluster at a central location. Even though a service instance is placed close to users, the traffic to and from the service instance will still have to travel via a single gateway. If all traffic passing in and out of the cluster passes a central gateway, this may result in long traffic paths with long latencies and extra bandwidth consumption due to conditions such as hairpinning. In the worst case, the same traffic may pass the same links multiple times. Thus, in distributed clusters with a centralized gateway and distributed users, a distributed application service may suffer from large network latencies and/or traffic congestion.
[0029] Certain aspects of the present disclosure and their embodiments may provide solutions to these or other challenges.
[0030] A single computing cluster may be geographically distributed so that compute nodes are situated in multiple sites with a communication network connecting the sites. The network may have links with significant latency and bandwidth constraints. Such a cluster may be referred to as a "stretched" cluster, because the cluster is stretched across multiple sites
with an internal network spanning more than one site and with a single control plane provided by one or more master nodes. This is in contrast to multiple cluster deployments in which several clusters with separate control planes are formed, one on each site. A network topology of a stretched cluster may be a central site with several geographically distributed sites (hub-and-spoke). However, the techniques described herein may also be applied to more general topologies.
[0031] A distributed application may deploy a service on such a stretched cluster. Instances of the service may run at different sites that are geographically distributed. Because the service is operated in a single cluster with the same control plane, the service may be scheduled and managed uniformly. Policies can be used in the control plane to deploy the service in different ways, according to geography, resources or cost, for example.
[0032] Users accessing the service are also geographically distributed and may access the service from different geographic locations. Moreover, a service instance inside the cluster may access external services (such as a download server) residing in different locations. It is beneficial for a user to be able to access a service instance that is located closer to the user, so that the network latency and bandwidth consumption may be reduced.
[0033] According to some embodiments, a service may be dynamically deployed over many sites, and announcement of routes is coupled to the service instance deployment. At each geographical site comprising the cluster, a local gateway is situated where the service is announced to users accessing the service. The service may be announced with a virtual IP address (VIP) that is not bound to any specific infrastructure. Since a service may be located at
several sites, the same service, with the same VIP address, may be announced on multiple sites simultaneously.
[0034] Using a routing protocol, such as Border Gateway Protocol (BGP), a cost is assigned to a route depending on the number of network hops. A service closer to a user (in hops) may therefore be preferred. A user accessing the service VIP address will therefore be directed to its closest instance. Since the deployment of services is performed dynamically, instances can come and go. For example, if a service is no longer available at a local site, the route announcing the service will be withdrawn, and a user accessing the service will instead be directed to a location further away. In contrast, if a new service is deployed at the local site, a route will be announced on the local gateway, and a user requesting that service will be directed to the local site.
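As a non-limiting sketch of the coupling described above, the following Python fragment models a site-local gateway that announces the service VIP while at least one local instance exists and withdraws the route otherwise. The class and function names, the site identifier and the VIP value are hypothetical; a real deployment would advertise and withdraw the route through a routing daemon speaking BGP or a similar protocol.

```python
# Illustrative sketch (names are hypothetical, not from the disclosure): couple
# service-instance lifecycle events at a site to route announcement/withdrawal
# of the service VIP at that site's local gateway.
class LocalGateway:
    def __init__(self, site):
        self.site = site
        self.announced = set()          # VIPs currently advertised from this site

    def announce(self, vip):
        self.announced.add(vip)         # in practice: advertise the VIP route via BGP

    def withdraw(self, vip):
        self.announced.discard(vip)     # in practice: withdraw the BGP route


def reconcile(gateway, vip, local_instance_count):
    """Announce the VIP only while at least one instance runs at the site."""
    if local_instance_count > 0 and vip not in gateway.announced:
        gateway.announce(vip)
    elif local_instance_count == 0 and vip in gateway.announced:
        gateway.withdraw(vip)


gw_b = LocalGateway("remote-site-b")
reconcile(gw_b, "10.96.0.10", local_instance_count=2)   # route announced locally
reconcile(gw_b, "10.96.0.10", local_instance_count=0)   # route withdrawn; users fall back to a farther site
```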
[0035] Accordingly, the techniques described herein allow for dynamic coupling of the distributed deployment of a service using a single cloud control plane with a dynamic routing protocol announcing a VIP address of that service. Services are announced locally in a geographically distributed computer cluster. Dynamic coupling of a service VIP to a routing protocol with local announcement enables more efficient use of network resources, while maintaining centralized Life Cycle Management (LCM) of the system. Furthermore, if several service instances are located at a local site, a load-balancing function can distribute the traffic among the local instances while keeping the traffic within the local site. Finally, egress traffic may also be directed to a local gateway by distributing a default route from the local gateway.
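The site-local load balancing mentioned above can be sketched as follows; the helper function, site names and addresses are hypothetical. A flow is hashed onto one of the endpoints at the user's own site, and the routed (possibly farther) path is used only when no local instance exists.

```python
# Minimal sketch (hypothetical helper, not from the disclosure): a site-local
# load balancer considers only endpoints at its own site, so cross-site links
# are not consumed by traffic that can be served locally.
import hashlib

def pick_local_endpoint(endpoints, site, flow_id):
    """endpoints: list of (site, address); keep traffic within the local site."""
    local = [addr for s, addr in endpoints if s == site]
    if not local:
        return None                      # no local instance: fall back to the routed VIP path
    index = int(hashlib.sha256(flow_id.encode()).hexdigest(), 16) % len(local)
    return local[index]

endpoints = [("remote-b", "192.168.2.11"), ("remote-b", "192.168.2.12"), ("central", "192.168.0.5")]
print(pick_local_endpoint(endpoints, "remote-b", flow_id="client-130B:443"))
```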
[0036] Certain embodiments may provide one or more technical advantages. A stretched cluster according to some embodiments provides a single cloud control plane so that
services can be deployed uniformly using a single VIP address. The local gateways provide a means to access the service instances locally, so that users can access the services more efficiently compared to accessing the service via a central gateway. "Hairpinning," which leads to extra latency and bandwidth consumption, may be avoided. Additionally, the dynamic coupling of service deployment and dynamic routing protocols leads to a dynamic solution that adapts to changes in the service deployment.
[0037] Some of the embodiments contemplated herein will now be described more fully with reference to the accompanying drawings. Other embodiments, however, are contained within the scope of the subject matter disclosed herein; the disclosed subject matter should not be construed as limited to only the embodiments set forth herein. Rather, these embodiments are provided by way of example to convey the scope of the subject matter to those skilled in the art.
[0038] Figure 1 is a block diagram illustrating an example of a stretched cluster 100 with a central site 115 connected to two remote sites 125A, 125B via a communication network 105. Each site hosts one or more worker nodes 120 (also referred to as compute nodes 120). The central site 115 also hosts one or more master nodes 110 which provide single control plane functionality to the cluster 100. The master node(s) 110 perform functions such as hardware selection, deployment, load balancing and maintenance. The cluster 100 may in particular embodiments be a Kubernetes (K8s) cluster. Kubernetes is an open-source platform for automated deployment, scaling, and operation of application containers across clusters of nodes. Although a K8s cluster is shown for purposes of illustration, it will be appreciated that other types of cluster architectures could be used in some embodiments.
[0039] The central site 115 and the remote sites 125A, 125B have local gateways
140A, 140B, 140C where service virtual IP addresses (VIPs) are announced. Users, such as clients 130A, 130B, may access the service at the closest site. For example, client 130B may access an instance of the service running on a worker 120 at remote site 125B via gateway 140B, while client 130A may access an instance of the service running on a worker 120 at central site 115 via gateway 140A.
[0040] By providing traffic ingress points, such as gateways 140A, 140B, 140C at different sites within the cluster, the cluster can attract traffic to different places within the cluster that may normally not attract traffic when using only a single ingress point.
[0041] In the example shown in Figure 1, geographically distributed applications are deployed using Kubernetes spanning across small edge nodes (i.e., the worker nodes 120 at the central and/or remote sites 115, 125A, 125B). Edge nodes may include embedded systems, radio nodes, or other systems having very limited resources. Many edge nodes are limited such that they cannot run a Kubernetes master control function.
[0042] Accordingly, the Kubernetes cluster 100 is stretched out, with a centralized control plane provided by the master nodes 110, to include remote sites 125A, 125B that include only Kubernetes workers 120. The stretched cluster 100 is characterized by edge placement of workloads at worker nodes 120 using labels. The cluster 100 avoids centralized ingress with the addition of a gateway 140B at the edge of the cluster at the remote site 125B, which provides traffic attraction to the site closest to the client. This arrangement also provides resiliency if service at the edge is disrupted.
[0043] Figure 2 is a block diagram illustrating an example of a stretched cluster 200 according to some embodiments that uses open source technology. In the example shown in Figure 2, the distributed cluster 200 includes a central K8s site 210 and remote K8s sites 220. The central site 210 includes a master node 110 and two worker nodes 120. Each remote site 220 includes a plurality of worker nodes 120. Each site is connected via a switch 140 to a router 150 at the edge of an OSPF network 205 for internal routing. The routers 150 also provide a gateway function for the cluster, so that services may be advertised to clients through the routers 150 via local breakout (LBO) nodes 160.
[0044] In the cluster 200, Calico CNI is used for internal networking. Calico with IP-in-IP mode is used to establish an overlay network, such that there is no interaction with the IP infrastructure. Calico automatically establishes full mesh iBGP between Kubernetes nodes 120.
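A hedged configuration sketch of such an IP-in-IP overlay is shown below, expressed as a Calico IPPool custom resource created through the Kubernetes Python client. The pool name and CIDR are example values, and the field names assume Calico's crd.projectcalico.org/v1 IPPool schema when the Kubernetes datastore is used; the disclosure itself does not prescribe this exact resource.

```python
# Hedged sketch: one possible IPPool definition enabling the IP-in-IP overlay
# described above. Names and CIDR are example values only.
from kubernetes import client, config

config.load_kube_config()
ippool = {
    "apiVersion": "crd.projectcalico.org/v1",
    "kind": "IPPool",
    "metadata": {"name": "cluster-pool"},
    "spec": {
        "cidr": "192.168.0.0/16",      # pod network CIDR (example value)
        "ipipMode": "Always",          # IP-in-IP overlay, no interaction with the IP fabric
        "natOutgoing": True,
    },
}
client.CustomObjectsApi().create_cluster_custom_object(
    group="crd.projectcalico.org", version="v1", plural="ippools", body=ippool
)
```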
[0045] In the example shown in Figure 2, services are announced at each site with standard routing protocols. Routers are configured with sticky equal-cost multi-path routing (ECMP) and dynamic BGP support. A load balancer may use BGP peering with certain nodes. Applications are deployed with an external traffic policy of "local" (externalTrafficPolicy: Local). Traffic policies direct how the load balancer should work. An external traffic policy indicates whether the load balancer should select services running on the same host, the same site or across different sites. The application's service IP address is advertised over eBGP, thereby attracting only the relevant traffic to the local Kubernetes worker node 120.
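By way of example, a Service of this kind may be created with the Kubernetes Python client roughly as follows; the service name, selector, ports and VIP are hypothetical. With externalTrafficPolicy set to Local, only nodes hosting a local endpoint accept (and, with a BGP-speaking load balancer, advertise) traffic for the service address.

```python
# Hedged sketch (service name, VIP, and selector are hypothetical): expose the
# application as a LoadBalancer Service with externalTrafficPolicy: Local.
from kubernetes import client, config

config.load_kube_config()
service = client.V1Service(
    metadata=client.V1ObjectMeta(name="my-service"),
    spec=client.V1ServiceSpec(
        selector={"app": "my-app"},
        ports=[client.V1ServicePort(port=80, target_port=8080)],
        type="LoadBalancer",
        external_traffic_policy="Local",   # keep traffic on nodes hosting local endpoints
        load_balancer_ip="198.51.100.10",  # service VIP announced by the local gateways (example)
    ),
)
client.CoreV1Api().create_namespaced_service(namespace="default", body=service)
```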
[0046] The architecture shown in Figure 2 provides a step towards running
Kubernetes on distributed small edge nodes. BGP, Kubernetes, and Calico may be combined to enable local traffic breakout and geographic workload placement using Kubernetes labels.
Moreover, this architecture provides additional application resiliency if the edge nodes 120 are disrupted.
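As a hypothetical sketch of label-based geographic placement, the workload below is pinned to a site by a nodeSelector matching a site label on the worker nodes; the label key, site value and container image are illustrative only and not taken from the disclosure.

```python
# Hedged sketch: geographic workload placement using a site label and nodeSelector.
from kubernetes import client, config

config.load_kube_config()
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="edge-service"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "edge-service"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "edge-service"}),
            spec=client.V1PodSpec(
                node_selector={"topology.example.com/site": "remote-b"},  # placement by site label (example key)
                containers=[client.V1Container(name="edge-service", image="example/edge-service:1.0")],
            ),
        ),
    ),
)
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```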
[0047] Figure 3 is a flow diagram illustrating operations of systems/methods according to some embodiments for providing a stretched cluster arrangement.
[0048] At step 302, a plurality of compute nodes are provided for a single cluster that is geographically distributed over multiple sites that are connected by a communication network.
[0049] At step 304, a service is deployed at the multiple sites via a shared control plane, each of the multiple sites including a local gateway to announce the service. The shared control plane includes policies to deploy the service based on one or more of geography, resources or cost. Moreover, a load-balancing function may be applied at each site to distribute traffic among local instances of a service at the site. In some examples, egress traffic from the site is directed to a local gateway of the site by distributing a default route from the local gateway.
[0050] At step 306, a virtual Internet Protocol address (VIP) corresponding to the service is announced via the local gateways at each of the multiple sites. In some examples, the service is announced on the multiple sites simultaneously, where the same VIP is announced for the service at each of the multiple sites.
[0051] At step 308, the control plane determines a closest instance of the service for a user of a site of the multiple sites. In some embodiments, the closest instance may be determined based on determining a cost of each route from the user to the service. The cost may be determined by a routing protocol, such as Border Gateway Protocol (BGP), that
identifies a number of hops for each route and assigns a cost to each route based on the identified amount of hops for the route. Accordingly, routes having fewer hops may be determined to have a lower cost and therefore be "closer," and routes having more hops may be determined to have a higher cost and be "farther."
[0052] At step 310, the control plane assigns the closest instance of the service to the user. In the above example, where BGP is used to determine a cost for each route based on the amount of hops for the route, the assigned closest instance may be the service instance having the least amount of hops between the service and the user.
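A minimal illustration of steps 308 and 310 is given below: among several routes announcing the same VIP, the instance behind the route with the fewest hops is assigned to the user. The site names and hop counts are hypothetical.

```python
# Illustrative sketch (values are hypothetical): when the same VIP is announced
# from several sites, the route with the fewest hops (as seen by a protocol such
# as BGP) wins, directing the user to the closest instance.
def closest_instance(routes):
    """routes: list of (site, hop_count) for the same service VIP."""
    return min(routes, key=lambda route: route[1])

routes_for_vip = [
    ("central-site", 4),    # e.g., path length toward the central gateway
    ("remote-site-b", 1),   # local gateway one hop from the user
]
print(closest_instance(routes_for_vip))  # ('remote-site-b', 1)
```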
[0053] Figures 4A and 4B illustrate elements of a master node 110 in a stretched cluster according to some embodiments. Referring to Figure 4A, a master node 110 includes a processing circuit 112 and a memory circuit 114 that stores computer readable program instructions that, when executed by the processing circuit 112, cause the master node 110 to perform operations described herein. The master node further includes a communication interface 116 for communicating with one or more worker nodes at a local or remote site in a cluster 100 via communication network 105 (Figure 1).
[0054] Figure 4B illustrates various functional modules that are stored in the memory circuit 114 and executed by the processing circuit 112. As shown therein, the functional modules include a service deployment module 122 for deploying services at worker nodes 120 in the cluster 100, a service announcement module 124 for announcing the service via gateways 140, and a service assignment module 126 for determining a closest instance of the service for a user and assigning the closest instance of the service to the user.
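A structural Python sketch of these modules is shown below; the class names, method signatures and placement logic are purely illustrative and are not part of the figures.

```python
# Structural sketch only: the three functional modules of the master node
# expressed as a thin control-plane facade with hypothetical interfaces.
class ServiceDeploymentModule:
    def deploy(self, service, sites):
        """Schedule one instance of the service per site (simplified)."""
        return {site: f"{service}-instance@{site}" for site in sites}

class ServiceAnnouncementModule:
    def announce(self, vip, gateways):
        """Record that the VIP is advertised from each local gateway."""
        return [f"announce {vip} via {gw}" for gw in gateways]

class ServiceAssignmentModule:
    def assign(self, user_site, instances):
        """Prefer an instance at the user's own site, else any other instance."""
        return instances.get(user_site) or next(iter(instances.values()))

instances = ServiceDeploymentModule().deploy("web", ["central", "remote-a", "remote-b"])
print(ServiceAssignmentModule().assign("remote-b", instances))
```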
[0055] Although the subject matter described herein may be implemented in any appropriate type of system using any suitable components, the embodiments disclosed herein are described in relation to a wireless network, such as the example wireless network illustrated in Figure 5. For simplicity, the wireless network of Figure 5 only depicts network 406, network nodes 460 and 460b, and WDs 410, 410b, and 410c. In practice, a wireless network may further include any additional elements suitable to support communication between wireless devices or between a wireless device and another communication device, such as a landline telephone, a service provider, or any other network node or end device. Of the illustrated components, network node 460 and wireless device (WD) 410 are depicted with additional detail. The wireless network may provide communication and other types of services to one or more wireless devices to facilitate the wireless devices' access to and/or use of the services provided by, or via, the wireless network.
[0056] The wireless network may comprise and/or interface with any type of communication, telecommunication, data, cellular, and/or radio network or other similar type of system. In some embodiments, the wireless network may be configured to operate according to specific standards or other types of predefined rules or procedures. Thus, particular embodiments of the wireless network may implement communication standards, such as Global System for Mobile Communications (GSM), Universal Mobile
Telecommunications System (UMTS), Long Term Evolution (LTE), and/or other suitable 2G, 3G, 4G, or 5G standards; wireless local area network (WLAN) standards, such as the IEEE 802.11 standards; and/or any other appropriate wireless communication standard, such as the
Worldwide Interoperability for Microwave Access (WiMax), Bluetooth, Z-Wave and/or ZigBee standards.
[0057] Network 406 may comprise one or more backhaul networks, core networks, IP networks, public switched telephone networks (PSTNs), packet data networks, optical networks, wide-area networks (WANs), local area networks (LANs), wireless local area networks (WLANs), wired networks, wireless networks, metropolitan area networks, and other networks to enable communication between devices.
[0058] Network node 460 and WD 410 comprise various components described in more detail below. These components work together in order to provide network node and/or wireless device functionality, such as providing wireless connections in a wireless network. In different embodiments, the wireless network may comprise any number of wired or wireless networks, network nodes, base stations, controllers, wireless devices, relay stations, and/or any other components or systems that may facilitate or participate in the communication of data and/or signals whether via wired or wireless connections.
[0059] As used herein, network node refers to equipment capable, configured, arranged and/or operable to communicate directly or indirectly with a wireless device and/or with other network nodes or equipment in the wireless network to enable and/or provide wireless access to the wireless device and/or to perform other functions (e.g., administration) in the wireless network. Examples of network nodes include, but are not limited to, access points (APs) (e.g., radio access points), base stations (BSs) (e.g., radio base stations, Node Bs, evolved Node Bs (eNBs) and NR NodeBs (gNBs)). Base stations may be categorized based on the amount of coverage they provide (or, stated differently, their transmit power level) and may
then also be referred to as femto base stations, pico base stations, micro base stations, or macro base stations. A base station may be a relay node or a relay donor node controlling a relay. A network node may also include one or more (or all) parts of a distributed radio base station such as centralized digital units and/or remote radio units (RRUs), sometimes referred to as Remote Radio Heads (RRHs). Such remote radio units may or may not be integrated with an antenna as an antenna integrated radio. Parts of a distributed radio base station may also be referred to as nodes in a distributed antenna system (DAS). Yet further examples of network nodes include multi-standard radio (MSR) equipment such as MSR BSs, network controllers such as radio network controllers (RNCs) or base station controllers (BSCs), base transceiver stations (BTSs), transmission points, transmission nodes, multi-cell/multicast coordination entities (MCEs), core network nodes (e.g., MSCs, MMEs), O&M nodes, OSS nodes, SON nodes, positioning nodes (e.g., E-SMLCs), and/or MDTs. As another example, a network node may be a virtual network node as described in more detail below. More generally, however, network nodes may represent any suitable device (or group of devices) capable, configured, arranged, and/or operable to enable and/or provide a wireless device with access to the wireless network or to provide some service to a wireless device that has accessed the wireless network.
[0060] In Figure 5, network node 460 includes processing circuitry 470, device readable medium 480, interface 490, auxiliary equipment 484, power source 486, power circuitry 487, and antenna 462. Although network node 460 illustrated in the example wireless network of Figure 5 may represent a device that includes the illustrated combination of hardware components, other embodiments may comprise network nodes with different combinations of components. It is to be understood that a network node comprises any
suitable combination of hardware and/or software needed to perform the tasks, features, functions and methods disclosed herein. Moreover, while the components of network node 460 are depicted as single boxes located within a larger box, or nested within multiple boxes, in practice, a network node may comprise multiple different physical components that make up a single illustrated component (e.g., device readable medium 480 may comprise multiple separate hard drives as well as multiple RAM modules).
[0061] Similarly, network node 460 may be composed of multiple physically separate components (e.g., a NodeB component and a RNC component, or a BTS component and a BSC component, etc.), which may each have their own respective components. In certain scenarios in which network node 460 comprises multiple separate components (e.g., BTS and BSC components), one or more of the separate components may be shared among several network nodes. For example, a single RNC may control multiple NodeB's. In such a scenario, each unique NodeB and RNC pair, may in some instances be considered a single separate network node. In some embodiments, network node 460 may be configured to support multiple radio access technologies (RATs). In such embodiments, some components may be duplicated (e.g., separate device readable medium 480 for the different RATs) and some components may be reused (e.g., the same antenna 462 may be shared by the RATs). Network node 460 may also include multiple sets of the various illustrated components for different wireless technologies integrated into network node 460, such as, for example, GSM, WCDMA, LTE, NR, WiFi, or Bluetooth wireless technologies. These wireless technologies may be integrated into the same or different chip or set of chips and other components within network node 460.
[0062] Processing circuitry 470 is configured to perform any determining, calculating, or similar operations (e.g., certain obtaining operations) described herein as being provided by a network node. These operations performed by processing circuitry 470 may include processing information obtained by processing circuitry 470 by, for example, converting the obtained information into other information, comparing the obtained information or converted information to information stored in the network node, and/or performing one or more operations based on the obtained information or converted information, and as a result of said processing making a determination.
[0063] Processing circuitry 470 may comprise a combination of one or more of a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application-specific integrated circuit, field programmable gate array, or any other suitable computing device, resource, or combination of hardware, software and/or encoded logic operable to provide, either alone or in conjunction with other network node 460 components, such as device readable medium 480, network node 460 functionality. For example, processing circuitry 470 may execute instructions stored in device readable medium 480 or in memory within processing circuitry 470. Such functionality may include providing any of the various wireless features, functions, or benefits discussed herein. In some embodiments, processing circuitry 470 may include a system on a chip (SOC).
[0064] In some embodiments, processing circuitry 470 may include one or more of radio frequency (RF) transceiver circuitry 472 and baseband processing circuitry 474. In some embodiments, radio frequency (RF) transceiver circuitry 472 and baseband processing circuitry
474 may be on separate chips (or sets of chips), boards, or units, such as radio units and digital
units. In alternative embodiments, part or all of RF transceiver circuitry 472 and baseband processing circuitry 474 may be on the same chip or set of chips, boards, or units.
[0065] In certain embodiments, some or all of the functionality described herein as being provided by a network node, base station, eNB or other such network device may be performed by processing circuitry 470 executing instructions stored on device readable medium 480 or memory within processing circuitry 470. In alternative embodiments, some or all of the functionality may be provided by processing circuitry 470 without executing instructions stored on a separate or discrete device readable medium, such as in a hard-wired manner. In any of those embodiments, whether executing instructions stored on a device readable storage medium or not, processing circuitry 470 can be configured to perform the described functionality. The benefits provided by such functionality are not limited to processing circuitry 470 alone or to other components of network node 460, but are enjoyed by network node 460 as a whole, and/or by end users and the wireless network generally.
[0066] Device readable medium 480 may comprise any form of volatile or non-volatile computer readable memory including, without limitation, persistent storage, solid-state memory, remotely mounted memory, magnetic media, optical media, random access memory (RAM), read-only memory (ROM), mass storage media (for example, a hard disk), removable storage media (for example, a flash drive, a Compact Disk (CD) or a Digital Video Disk (DVD)), and/or any other volatile or non-volatile, non-transitory device readable and/or computer-executable memory devices that store information, data, and/or instructions that may be used by processing circuitry 470. Device readable medium 480 may store any suitable instructions, data or information, including a computer program, software, an application including one or
more of logic, rules, code, tables, etc. and/or other instructions capable of being executed by processing circuitry 470 and, utilized by network node 460. Device readable medium 480 may be used to store any calculations made by processing circuitry 470 and/or any data received via interface 490. In some embodiments, processing circuitry 470 and device readable medium 480 may be considered to be integrated.
[0067] Interface 490 is used in the wired or wireless communication of signalling and/or data between network node 460, network 406, and/or WDs 410. As illustrated, interface 490 comprises port(s)/terminal(s) 494 to send and receive data, for example to and from network 406 over a wired connection. Interface 490 also includes radio front end circuitry 492 that may be coupled to, or in certain embodiments a part of, antenna 462. Radio front end circuitry 492 comprises filters 498 and amplifiers 496. Radio front end circuitry 492 may be connected to antenna 462 and processing circuitry 470. Radio front end circuitry may be configured to condition signals communicated between antenna 462 and processing circuitry 470. Radio front end circuitry 492 may receive digital data that is to be sent out to other network nodes or WDs via a wireless connection. Radio front end circuitry 492 may convert the digital data into a radio signal having the appropriate channel and bandwidth parameters using a combination of filters 498 and/or amplifiers 496. The radio signal may then be transmitted via antenna 462. Similarly, when receiving data, antenna 462 may collect radio signals which are then converted into digital data by radio front end circuitry 492. The digital data may be passed to processing circuitry 470. In other embodiments, the interface may comprise different components and/or different combinations of components.
[0068] In certain alternative embodiments, network node 460 may not include separate radio front end circuitry 492; instead, processing circuitry 470 may comprise radio front end circuitry and may be connected to antenna 462 without separate radio front end circuitry 492. Similarly, in some embodiments, all or some of RF transceiver circuitry 472 may be considered a part of interface 490. In still other embodiments, interface 490 may include one or more ports or terminals 494, radio front end circuitry 492, and RF transceiver circuitry 472, as part of a radio unit (not shown), and interface 490 may communicate with baseband processing circuitry 474, which is part of a digital unit (not shown).
[0069] Antenna 462 may include one or more antennas, or antenna arrays, configured to send and/or receive wireless signals. Antenna 462 may be coupled to radio front end circuitry 490 and may be any type of antenna capable of transmitting and receiving data and/or signals wirelessly. In some embodiments, antenna 462 may comprise one or more omni-directional, sector or panel antennas operable to transmit/receive radio signals between, for example, 2 GHz and 66 GHz. An omni-directional antenna may be used to transmit/receive radio signals in any direction, a sector antenna may be used to transmit/receive radio signals from devices within a particular area, and a panel antenna may be a line of sight antenna used to transmit/receive radio signals in a relatively straight line. In some instances, the use of more than one antenna may be referred to as MIMO. In certain embodiments, antenna 462 may be separate from network node 460 and may be connectable to network node 460 through an interface or port.
[0070] Antenna 462, interface 490, and/or processing circuitry 470 may be configured to perform any receiving operations and/or certain obtaining operations described
herein as being performed by a network node. Any information, data and/or signals may be received from a wireless device, another network node and/or any other network equipment. Similarly, antenna 462, interface 490, and/or processing circuitry 470 may be configured to perform any transmitting operations described herein as being performed by a network node. Any information, data and/or signals may be transmitted to a wireless device, another network node and/or any other network equipment.
[0071] Power circuitry 487 may comprise, or be coupled to, power management circuitry and is configured to supply the components of network node 460 with power for performing the functionality described herein. Power circuitry 487 may receive power from power source 486. Power source 486 and/or power circuitry 487 may be configured to provide power to the various components of network node 460 in a form suitable for the respective components (e.g., at a voltage and current level needed for each respective component). Power source 486 may either be included in, or external to, power circuitry 487 and/or network node 460. For example, network node 460 may be connectable to an external power source (e.g., an electricity outlet) via an input circuitry or interface such as an electrical cable, whereby the external power source supplies power to power circuitry 487. As a further example, power source 486 may comprise a source of power in the form of a battery or battery pack which is connected to, or integrated in, power circuitry 487. The battery may provide backup power should the external power source fail. Other types of power sources, such as photovoltaic devices, may also be used.
[0072] Alternative embodiments of network node 460 may include additional components beyond those shown in Figure 5 that may be responsible for providing certain
aspects of the network node's functionality, including any of the functionality described herein and/or any functionality necessary to support the subject matter described herein. For example, network node 460 may include user interface equipment to allow input of
information into network node 460 and to allow output of information from network node 460. This may allow a user to perform diagnostic, maintenance, repair, and other administrative functions for network node 460.
[0073] As used herein, wireless device (WD) refers to a device capable, configured, arranged and/or operable to communicate wirelessly with network nodes and/or other wireless devices. Unless otherwise noted, the term WD may be used interchangeably herein with user equipment (UE). Communicating wirelessly may involve transmitting and/or receiving wireless signals using electromagnetic waves, radio waves, infrared waves, and/or other types of signals suitable for conveying information through air. In some embodiments, a WD may be configured to transmit and/or receive information without direct human interaction. For instance, a WD may be designed to transmit information to a network on a predetermined schedule, when triggered by an internal or external event, or in response to requests from the network.
Examples of a WD include, but are not limited to, a smart phone, a mobile phone, a cell phone, a voice over IP (VoIP) phone, a wireless local loop phone, a desktop computer, a personal digital assistant (PDA), a wireless camera, a gaming console or device, a music storage device, a playback appliance, a wearable terminal device, a wireless endpoint, a mobile station, a tablet, a laptop, a laptop-embedded equipment (LEE), a laptop-mounted equipment (LME), a smart device, a wireless customer-premise equipment (CPE), a vehicle-mounted wireless terminal device, etc. A WD may support device-to-device (D2D) communication, for example by
implementing a 3GPP standard for sidelink communication, vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), vehicle-to-everything (V2X) and may in this case be referred to as a D2D communication device. As yet another specific example, in an Internet of Things (IoT) scenario, a WD may represent a machine or other device that performs monitoring and/or
measurements, and transmits the results of such monitoring and/or measurements to another WD and/or a network node. The WD may in this case be a machine-to-machine (M2M) device, which may in a 3GPP context be referred to as an MTC device. As one particular example, the WD may be a UE implementing the 3GPP narrow band internet of things (NB-IoT) standard. Particular examples of such machines or devices are sensors, metering devices such as power meters, industrial machinery, or home or personal appliances (e.g., refrigerators, televisions, etc.), personal wearables (e.g., watches, fitness trackers, etc.). In other scenarios, a WD may represent a vehicle or other equipment that is capable of monitoring and/or reporting on its operational status or other functions associated with its operation. A WD as described above may represent the endpoint of a wireless connection, in which case the device may be referred to as a wireless terminal. Furthermore, a WD as described above may be mobile, in which case it may also be referred to as a mobile device or a mobile terminal.
[0074] As illustrated, wireless device 410 includes antenna 411, interface 414, processing circuitry 420, device readable medium 430, user interface equipment 432, auxiliary equipment 434, power source 436 and power circuitry 437. WD 410 may include multiple sets of one or more of the illustrated components for different wireless technologies supported by
WD 410, such as, for example, GSM, WCDMA, LTE, NR, WiFi, WiMAX, or Bluetooth wireless
technologies, just to mention a few. These wireless technologies may be integrated into the same or different chips or set of chips as other components within WD 410.
[0075] Antenna 411 may include one or more antennas or antenna arrays, configured to send and/or receive wireless signals, and is connected to interface 414. In certain alternative embodiments, antenna 411 may be separate from WD 410 and be connectable to WD 410 through an interface or port. Antenna 411, interface 414, and/or processing circuitry 420 may be configured to perform any receiving or transmitting operations described herein as being performed by a WD. Any information, data and/or signals may be received from a network node and/or another WD. In some embodiments, radio front end circuitry and/or antenna 411 may be considered an interface.
[0076] As illustrated, interface 414 comprises radio front end circuitry 412 and antenna 411. Radio front end circuitry 412 comprises one or more filters 418 and amplifiers 416. Radio front end circuitry 412 is connected to antenna 411 and processing circuitry 420, and is configured to condition signals communicated between antenna 411 and processing circuitry 420. Radio front end circuitry 412 may be coupled to or a part of antenna 411. In some embodiments, WD 410 may not include separate radio front end circuitry 412; rather, processing circuitry 420 may comprise radio front end circuitry and may be connected to antenna 411. Similarly, in some embodiments, some or all of RF transceiver circuitry 422 may be considered a part of interface 414. Radio front end circuitry 412 may receive digital data that is to be sent out to other network nodes or WDs via a wireless connection. Radio front end circuitry 412 may convert the digital data into a radio signal having the appropriate channel and bandwidth parameters using a combination of filters 418 and/or amplifiers 416. The radio
signal may then be transmitted via antenna 411. Similarly, when receiving data, antenna 411 may collect radio signals which are then converted into digital data by radio front end circuitry 412. The digital data may be passed to processing circuitry 420. In other embodiments, the interface may comprise different components and/or different combinations of components.
[0077] Processing circuitry 420 may comprise a combination of one or more of a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application-specific integrated circuit, field programmable gate array, or any other suitable computing device, resource, or combination of hardware, software, and/or encoded logic operable to provide, either alone or in conjunction with other WD 410 components, such as device readable medium 430, WD 410 functionality. Such functionality may include providing any of the various wireless features or benefits discussed herein. For example, processing circuitry 420 may execute instructions stored in device readable medium 430 or in memory within processing circuitry 420 to provide the functionality disclosed herein.
[0078] As illustrated, processing circuitry 420 includes one or more of RF transceiver circuitry 422, baseband processing circuitry 424, and application processing circuitry 426. In other embodiments, the processing circuitry may comprise different components and/or different combinations of components. In certain embodiments processing circuitry 420 of WD 410 may comprise a SOC. In some embodiments, RF transceiver circuitry 422, baseband processing circuitry 424, and application processing circuitry 426 may be on separate chips or sets of chips. In alternative embodiments, part or all of baseband processing circuitry 424 and application processing circuitry 426 may be combined into one chip or set of chips, and RF transceiver circuitry 422 may be on a separate chip or set of chips. In still alternative
embodiments, part or all of RF transceiver circuitry 422 and baseband processing circuitry 424 may be on the same chip or set of chips, and application processing circuitry 426 may be on a separate chip or set of chips. In yet other alternative embodiments, part or all of RF transceiver circuitry 422, baseband processing circuitry 424, and application processing circuitry 426 may be combined in the same chip or set of chips. In some embodiments, RF transceiver circuitry 422 may be a part of interface 414. RF transceiver circuitry 422 may condition RF signals for processing circuitry 420.
[0079] In certain embodiments, some or all of the functionality described herein as being performed by a WD may be provided by processing circuitry 420 executing instructions stored on device readable medium 430, which in certain embodiments may be a computer- readable storage medium. In alternative embodiments, some or all of the functionality may be provided by processing circuitry 420 without executing instructions stored on a separate or discrete device readable storage medium, such as in a hard-wired manner. In any of those particular embodiments, whether executing instructions stored on a device readable storage medium or not, processing circuitry 420 can be configured to perform the described functionality. The benefits provided by such functionality are not limited to processing circuitry 420 alone or to other components of WD 410, but are enjoyed by WD 410 as a whole, and/or by end users and the wireless network generally.
[0080] Processing circuitry 420 may be configured to perform any determining, calculating, or similar operations (e.g., certain obtaining operations) described herein as being performed by a WD. These operations, as performed by processing circuitry 420, may include processing information obtained by processing circuitry 420 by, for example, converting the
obtained information into other information, comparing the obtained information or converted information to information stored by WD 410, and/or performing one or more operations based on the obtained information or converted information, and as a result of said processing making a determination.
[0081] Device readable medium 430 may be operable to store a computer program, software, an application including one or more of logic, rules, code, tables, etc. and/or other instructions capable of being executed by processing circuitry 420. Device readable medium 430 may include computer memory (e.g., Random Access Memory (RAM) or Read Only
Memory (ROM)), mass storage media (e.g., a hard disk), removable storage media (e.g., a Compact Disk (CD) or a Digital Video Disk (DVD)), and/or any other volatile or non-volatile, non-transitory device readable and/or computer-executable memory devices that store information, data, and/or instructions that may be used by processing circuitry 420. In some embodiments, processing circuitry 420 and device readable medium 430 may be considered to be integrated.
[0082] User interface equipment 432 may provide components that allow for a human user to interact with WD 410. Such interaction may be of many forms, such as visual, audial, tactile, etc. User interface equipment 432 may be operable to produce output to the user and to allow the user to provide input to WD 410. The type of interaction may vary depending on the type of user interface equipment 432 installed in WD 410. For example, if WD 410 is a smart phone, the interaction may be via a touch screen; if WD 410 is a smart meter, the interaction may be through a screen that provides usage (e.g., the number of gallons used) or a speaker that provides an audible alert (e.g., if smoke is detected). User interface equipment 432 may include input interfaces, devices and circuits, and output interfaces, devices and circuits.
User interface equipment 432 is configured to allow input of information into WD 410, and is connected to processing circuitry 420 to allow processing circuitry 420 to process the input information. User interface equipment 432 may include, for example, a microphone, a proximity or other sensor, keys/buttons, a touch display, one or more cameras, a USB port, or other input circuitry. User interface equipment 432 is also configured to allow output of information from WD 410, and to allow processing circuitry 420 to output information from WD 410. User interface equipment 432 may include, for example, a speaker, a display, vibrating circuitry, a USB port, a headphone interface, or other output circuitry. Using one or more input and output interfaces, devices, and circuits, of user interface equipment 432, WD 410 may communicate with end users and/or the wireless network, and allow them to benefit from the functionality described herein.
[0083] Auxiliary equipment 434 is operable to provide more specific functionality which may not be generally performed by WDs. This may comprise specialized sensors for doing measurements for various purposes, interfaces for additional types of communication such as wired communications etc. The inclusion and type of components of auxiliary equipment 434 may vary depending on the embodiment and/or scenario.
[0084] Power source 436 may, in some embodiments, be in the form of a battery or battery pack. Other types of power sources, such as an external power source (e.g., an electricity outlet), photovoltaic devices or power cells, may also be used. WD 410 may further comprise power circuitry 437 for delivering power from power source 436 to the various parts of WD 410 which need power from power source 436 to carry out any functionality described or indicated herein. Power circuitry 437 may in certain embodiments comprise power
management circuitry. Power circuitry 437 may additionally or alternatively be operable to receive power from an external power source; in which case WD 410 may be connectable to the external power source (such as an electricity outlet) via input circuitry or an interface such as an electrical power cable. Power circuitry 437 may also in certain embodiments be operable to deliver power from an external power source to power source 436. This may be, for example, for the charging of power source 436. Power circuitry 437 may perform any formatting, converting, or other modification to the power from power source 436 to make the power suitable for the respective components of WD 410 to which power is supplied.
[0085] Any appropriate steps, methods, features, functions, or benefits disclosed herein may be performed through one or more functional units or modules of one or more virtual apparatuses. Each virtual apparatus may comprise a number of these functional units. These functional units may be implemented via processing circuitry, which may include one or more microprocessors or microcontrollers, as well as other digital hardware, which may include digital signal processors (DSPs), special-purpose digital logic, and the like. The processing circuitry may be configured to execute program code stored in memory, which may include one or several types of memory such as read-only memory (ROM), random-access memory (RAM), cache memory, flash memory devices, optical storage devices, etc. Program code stored in memory includes program instructions for executing one or more telecommunications and/or data communications protocols as well as instructions for carrying out one or more of the techniques described herein. In some implementations, the processing circuitry may be used to cause the respective functional unit to perform corresponding functions according to one or more embodiments of the present disclosure.
Claims
1. A method comprising:
providing (302) a plurality of compute nodes (120) at a single cluster (100, 200) that is geographically distributed over multiple sites (115, 125, 210, 220) that are connected by a communication network (105, 205);
deploying (304) a service at the multiple sites via a shared control plane, each of the multiple sites including a local gateway (140A, 140B) to announce the service;
announcing (306), via the local gateways at each of the multiple sites, a virtual Internet Protocol, VIP, address corresponding to the service;
determining (308) a closest instance of the service for a user of a site of the multiple sites; and
assigning (310) the closest instance of the service to the user.
2. The method of claim 1, wherein policies in the control plane are used to deploy the service based on one or more of geography, resources and/or cost.
3. The method of claim 1, wherein the service is announced on the multiple sites simultaneously.
4. The method of claim 1, wherein determining the closest instance of the service comprises:
determining a cost corresponding to each of one or more routes from the service to the user; and
assigning a lowest-cost route of the one or more routes to the user.
5. The method of claim 4, wherein the cost corresponding to each of the one or more routes includes an amount of hops, and wherein the lowest-cost route is a route of the one or more routes that is determined to have a fewest amount of hops.
6. The method of claim 5, wherein the amount of hops for each of the one or more routes is determined using a routing protocol, such as Border Gateway Protocol, BGP.
7. The method of claim 1, wherein a load-balancing function distributes traffic among local instances of a service at the site.
8. The method of claim 1, wherein egress traffic from the site is directed to a local gateway of the site by distributing a default route from the local gateway.
9. The method of claim 1, wherein the VIP address corresponding to the service is the same at each of the multiple sites.
10. A system for managing a plurality of compute nodes (120) at a single cluster (100,
200) that is geographically distributed over multiple sites that are connected by a
communication network (105, 205), the system comprising:
at least one processor circuit (112); and
a non-transitory computer readable memory (114) containing instructions executable by the at least one processor circuit to perform operations comprising:
deploying (304) a service at the multiple sites via a shared control plane, each of the multiple sites including a local gateway to announce the service;
announcing (306), via the local gateways at each of the multiple sites, a virtual Internet Protocol (VIP) address corresponding to the service;
determining (308) a closest instance of the service for a user of a site of the multiple sites; and
assigning (310) the closest instance of the service to the user.
11. The system of claim 10, wherein policies in the control plane are used to deploy the service based on one or more of: geography, resources or cost.
12. The system of claim 10, wherein the service is announced on the multiple sites simultaneously.
13. The system of claim 10, wherein determining the closest instance of the service comprises:
determining a cost corresponding to each of one or more routes from the service to the user; and
assigning a lowest-cost route of the one or more routes to the user.
14. The system of claim 13, wherein the cost corresponding to each of the one or more routes includes an amount of hops, and wherein the lowest-cost route is a route of the one or more routes that is determined to have a fewest amount of hops.
15. The system of claim 14, wherein the amount of hops for each of the one or more routes is determined using a Border Gateway Protocol (BGP) routing protocol.
16. The system of claim 10, wherein a load-balancing function distributes traffic among local instances of a service at the site.
17. The system of claim 10, wherein egress traffic from the site is directed to a local gateway of the site by distributing a default route from the local gateway.
18. The system of claim 10, wherein the virtual Internet Protocol address (VIP) corresponding to the service is the same at each of the multiple sites.
19. A non-transitory computer readable medium containing computer program instructions executable by at least one processor circuit to perform operations comprising:
providing (302) a plurality of compute nodes (120) at a single cluster (100, 200) that is geographically distributed over multiple sites (115, 125, 210, 220) that are connected by a communication network (105, 205);
deploying (304) a service at the multiple sites via a shared control plane, each of the multiple sites including a local gateway (140A, 140B) to announce the service;
announcing (306), via the local gateways at each of the multiple sites, a virtual Internet Protocol, VIP, address corresponding to the service;
determining (308) a closest instance of the service for a user of a site of the multiple sites; and
assigning (310) the closest instance of the service to the user.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201962800713P | 2019-02-04 | 2019-02-04 | |
US62/800,713 | 2019-02-04 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020161561A1 true WO2020161561A1 (en) | 2020-08-13 |
Family
ID=69423362
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/IB2020/050616 WO2020161561A1 (en) | 2019-02-04 | 2020-01-27 | Local service announcement in a stretched cluster |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2020161561A1 (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120131146A1 (en) * | 2010-11-23 | 2012-05-24 | Edgecast Networks, Inc. | Scalable Content Streaming System with Server-Side Archiving |
US20150046575A1 (en) * | 2013-08-08 | 2015-02-12 | Level 3 Communications, Llc | Content delivery methods and systems |
US20180191793A1 (en) * | 2016-12-30 | 2018-07-05 | Akamai Technologies, Inc. | Multicast overlay network for delivery of real-time video |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112162828A (en) * | 2020-10-29 | 2021-01-01 | 杭州谐云科技有限公司 | Container network cooperation system and method based on cloud side scene |
US20230156074A1 (en) * | 2021-11-12 | 2023-05-18 | Electronics And Telecommunications Research Institute | Multi-cloud edge system |
US11916998B2 (en) * | 2021-11-12 | 2024-02-27 | Electronics And Telecommunications Research Institute | Multi-cloud edge system |
CN114390101A (en) * | 2022-01-04 | 2022-04-22 | 上海弘积信息科技有限公司 | Kubernetes load balancing method based on BGP networking |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10492119B2 (en) | Next generation intelligent mesh network with fronthaul and backhaul services | |
US10390348B2 (en) | System and method for an agile wireless access network | |
US11671335B2 (en) | First node, second node, and methods performed thereby for managing a network slice instance | |
US9461729B2 (en) | Software-defined network infrastructure having virtual range extenders | |
US9788211B2 (en) | System and method for a customized fifth generation (5G) network | |
JP2024026269A (en) | Frequency or radio access technology (RAT) selection based on slice availability | |
JP7219817B2 (en) | Communication technology selection method and apparatus | |
JP7569409B2 (en) | Method for updating background data transmission policy negotiated between an application function and a core network, policy control function, and application function | |
TWI745851B (en) | Service delivery with joint network and cloud resource management | |
KR102554326B1 (en) | Methods, apparatus and computer readable medium for discovery of application server and/or services for V2X communications | |
CN113475123A (en) | Method and system for Local Area Data Network (LADN) selection based on dynamic network conditions | |
JP2022528801A (en) | Set the HARQ timing for PDSCH with the pending PDSCH-HARQ timing indicator. | |
WO2020161561A1 (en) | Local service announcement in a stretched cluster | |
US11336513B2 (en) | Network nodes with intelligent integration | |
CN112823564B (en) | Method for providing dynamic NEF tunnel allocation and related network node | |
WO2021227833A1 (en) | Method and apparatus for providing edge service | |
KR20200087828A (en) | MCS and CQI table identification | |
KR20200019044A (en) | Method and apparatus for providing 5g ethernet service | |
US20230337056A1 (en) | Coordination of Edge Application Server Reselection using Edge Client Subnet | |
CN114531959A (en) | Providing and opening User Equipment (UE) communication modes associated with an application to request traffic for the application to be analyzed in a core network (ON) | |
CN114503458A (en) | System, method and apparatus for managing network resources | |
JP7202453B2 (en) | Methods for Signaling Reserved Resources for Ultra Reliable Low Latency Communications (URLLC) Traffic | |
US12089217B2 (en) | Apparatuses and methods for time domain resource scheduling for group transmissions | |
JP2024518389A (en) | Deterministic network entity for a communication network | |
WO2023219618A1 (en) | Transport slice identifier for end-to-end 5g network slicing mapping |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20703297; Country of ref document: EP; Kind code of ref document: A1 |
 | NENP | Non-entry into the national phase | Ref country code: DE |
 | 122 | Ep: pct application non-entry in european phase | Ref document number: 20703297; Country of ref document: EP; Kind code of ref document: A1 |