EP4049413A1 - Method and system for distributed edge cloud computing - Google Patents

Method and system for distributed edge cloud computing

Info

Publication number
EP4049413A1
Authority
EP
European Patent Office
Prior art keywords
cloud computing
computing device
edge cloud
edge
microservices
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP20878137.7A
Other languages
English (en)
French (fr)
Other versions
EP4049413A4 (de)
Inventor
Siavash M. Alamouti
Fay Arjomandi
Michael Burger
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mimik Technology Inc
Original Assignee
Mimik Technology Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US 16/841,380 (published as US 2020/0322225 A1)
Application filed by Mimik Technology Inc filed Critical Mimik Technology Inc
Publication of EP4049413A1
Publication of EP4049413A4

Classifications

    • G06F 9/5055: Allocation of resources (e.g. of the CPU) to service a request, the resource being a machine (e.g. CPUs, servers, terminals), considering software capabilities, i.e. software resources associated with or available to the machine
    • G06F 9/45533: Hypervisors; virtual machine monitors
    • G06F 9/5072: Grid computing (partitioning or combining of resources)
    • H04L 41/12: Discovery or management of network topologies
    • H04L 67/02: Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
    • H04L 67/568: Storing data temporarily at an intermediate stage, e.g. caching (provisioning of proxy services)
    • H04L 67/60: Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • G06F 2209/509: Offload (indexing scheme relating to G06F 9/50)
    • H04L 12/1859: Arrangements for broadcast or conference services adapted to provide push services, e.g. data channels
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • the disclosure relates to cloud computing in general.
  • the disclosure relates to methods and systems for distributed edge cloud computing.
  • Cloud computing has been essential for enabling applications like Facebook ® , YouTube ® , Instagram ® , DropBox ® , etc.
  • the underlying architecture corresponds to a client-server architecture where certain nodes or computing devices act as “servers” and other nodes or computing devices act as “clients”.
  • the vast majority of computing devices or nodes today operate in a client-server mode where most of the servers are located in data centers made up of server farms scattered around the world.
  • Such a fixed and hierarchical client-server architecture may be efficient for hosting of applications that provide access to content and information from remote servers to a large number of client devices.
  • solutions’ backends are hosted on servers that handle compute-intensive tasks, and the solutions’ client application software (frontends) is hosted on “edge devices” used for simpler functions, such as entering commands, caching content, and rendering information for the end user.
  • the system implements decentralization of the cloud by turning any computing device or edge node into a cloud server.
  • By turning edge computing devices into cloud servers, it is possible to reduce the role of digital middlemen and third-party trust elements because central hosting services are not necessary for many applications.
  • a physical “edge cloud fabric” is created that is potentially orders of magnitude larger than the current “central cloud” fabric.
  • the edge cloud computing device includes an edge node activation module that is configured to receive a request from an application running in the edge cloud computing device and determine a type of one or more microservices required to service the received request.
  • the edge node activation module is further configured to process the request locally in the edge cloud computing device when the determined type corresponds to one or more microservices locally hosted in the edge cloud computing device.
  • the edge node activation module is further configured to provide a microservice runtime environment to execute the locally hosted one or more microservices.
  • the edge node activation module is further configured to provide a locally hosted API gateway to send the request to the locally hosted one or more microservices.
  • the one or more microservices are configured to service the request and send back a response to the application.
  • the edge node activation module is further configured to send, an http/https request corresponding to the received request from the application, to an API gateway hosted in a central cloud computing device when the determined type of one or more microservices, required to service the received request, corresponds to one or more microservices globally hosted in the central cloud computing device.
  • the edge node activation module is further configured to receive, an http/https response for the http/https request, from the API gateway hosted in the central cloud computing device, and wherein the http/https request is serviced by the one or more microservices globally hosted in the central cloud computing device.
  • the edge node activation module is further configured to provide a locally hosted API gateway to send the http/https request to the API gateway hosted in the central cloud computing device.
  • the edge node activation module is further configured to send the request directly to one or more microservices hosted in another edge cloud computing device when the determined type of one or more microservices, required to service the received request, corresponds to one or more microservices hosted in the another edge cloud computing device.
  • the edge node activation module is further configured to implement a sidecar pattern to form a service mesh corresponding to the one or more microservices locally hosted in the edge cloud computing device and the one or more microservices hosted in the another edge cloud computing device.
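  • As a non-authoritative illustration of the routing behaviour described above, the following TypeScript sketch shows a request being dispatched either to a locally hosted microservice, to an API gateway in the central cloud, or directly to a microservice on another edge node. The service names, addresses, and the routing-table shape are assumptions for illustration only, not part of the disclosed module.

```typescript
// Hypothetical sketch of the routing decision described above; all names are illustrative.
type MicroserviceLocation = "local" | "central-cloud" | "other-edge-node";

interface ServiceRoute {
  location: MicroserviceLocation;
  baseUrl: string; // e.g. a locally hosted API gateway, the central API gateway, or a peer node
}

// A toy routing table standing in for the module's determination of the microservice type.
const routes: Record<string, ServiceRoute> = {
  "media.thumbnail": { location: "local", baseUrl: "http://127.0.0.1:8083" },
  "account.billing": { location: "central-cloud", baseUrl: "https://api.example-central-cloud.com" },
  "storage.backup": { location: "other-edge-node", baseUrl: "http://192.168.1.42:8083" },
};

async function serviceRequest(serviceType: string, path: string, body: unknown): Promise<unknown> {
  const route = routes[serviceType];
  if (!route) throw new Error(`No microservice known for type: ${serviceType}`);

  // Local and remote cases alike are plain HTTP(S) calls; only the target differs.
  const response = await fetch(`${route.baseUrl}${path}`, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(body),
  });
  return response.json();
}
```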
  • the edge node activation module is further configured to discover one or more other edge cloud computing devices based on a first set of parameters to establish a connection therebetween and provide a microservice runtime environment to execute the locally hosted one or more microservices associated with the connection established between one or more edge cloud computing devices.
  • the first set of parameters includes a user account associated with each of the one or more edge cloud computing devices, a network associated with the one or more edge cloud computing devices, and a proximity of the one or more edge cloud computing devices.
  • the edge node activation module is further configured to discover one or more microservices supported by the one or more edge cloud computing devices.
  • the edge node activation module is further configured to dynamically form one or more clusters with the one or more edge cloud computing devices and communicate with the one or more edge cloud computing devices at a microservice level either directly or through other edge cloud computing devices across the one or more clusters.
  • the edge node activation module is further configured to expose the locally hosted one or more microservices to one or more edge cloud computing devices through a common embedded web server.
  • the edge node activation module includes a Webserver embedded therein, wherein the Webserver is configured to provide container management APIs using a language specific to the operating system associated with the edge cloud computing device.
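  • The common embedded web server mentioned above can be pictured roughly as follows. This is a minimal sketch using Node's built-in http module, and the endpoint paths (including the container-management path) are invented for illustration rather than taken from the disclosure.

```typescript
// Minimal sketch: an edge node exposing microservice and container-management endpoints.
import { createServer } from "node:http";

const server = createServer((req, res) => {
  // Microservice endpoint exposed to other edge nodes in the cluster (illustrative path).
  if (req.url?.startsWith("/api/v1/media/thumbnail")) {
    res.writeHead(200, { "content-type": "application/json" });
    res.end(JSON.stringify({ status: "ok", servedBy: "edge-node" }));
    return;
  }
  // Hypothetical container-management endpoint (e.g. start a locally hosted microservice).
  if (req.url === "/containers/start" && req.method === "POST") {
    res.writeHead(202, { "content-type": "application/json" });
    res.end(JSON.stringify({ accepted: true }));
    return;
  }
  res.writeHead(404);
  res.end();
});

server.listen(8083, () => console.log("edge node webserver listening on :8083"));
```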
  • Example computer readable media may comprise tangible, non-transitory computer readable storage media having computer executable instructions executable by a processor, the instructions that, when executed by the processor, cause the processor to carry out any combination of the various methods and approaches provided herein.
  • Example computing devices may include a server or a client device comprising a processor, a memory, a client application and/or a network service configured to carry out the methods described herein.
  • Fig.1 shows an example cloud architecture 100 using microservices.
  • Fig.2 shows another example of cloud architecture 200 using microservices.
  • Fig. 3 shows an exemplary embodiment 300 of an edge cloud computing network.
  • Fig. 4 illustrates fundamental building blocks of edge cloud architecture 400 in accordance with an embodiment.
  • Fig. 5 shows an edge cloud computing device 500 in accordance with an embodiment.
  • Fig. 6 shows an exemplary backend microservice distribution 600 in accordance with an embodiment.
  • Fig. 7 shows an exemplary edge cloud computing architecture 700 in accordance with an embodiment.
  • Fig. 8 shows exemplary embodiment of discovery, connection, and communication for two edge cloud computing devices belonging to same user ID in an edge cloud architecture 800 in accordance with an embodiment.
  • Fig. 9 shows an exemplary edge cloud architecture 900 implemented using serverless microservices in a sidecar pattern in accordance with an embodiment.
  • Fig. 10 shows an exemplary serverless microservice 1000 for applications taking advantage of microservices hosted locally and globally in accordance with an embodiment.
  • Fig. 11 shows an exemplary embodiment of a method 1100 of providing cloud computing infrastructure.
  • Fig. 12 shows another embodiment of a method 1200 of providing cloud computing infrastructure.

DETAILED DESCRIPTION OF THE FIGURES
  • Fig. 1 shows an example cloud architecture 100 using microservices.
  • a computing device (client device or node) 102 runs a client application 104 that sends an http/https request 106 to an API gateway 108.
  • the API gateway 108 sends an http/https response 110 from the cloud backend 112 hosted in a central cloud computing device 114.
  • the http/https response 110 can correspond to one of the microservices (e.g. 120) that was launched in response to the http/https request 106.
  • Such an architecture usually includes a client application (e.g. 104) on a computing device (e.g. 102) and a collection of central cloud functions to support hosting the solutions’ backend that is usually composed of a series of microservices (e.g. 116, 118, 120) reachable through an API gateway (e.g. 108).
  • every http request is sent from the “client device” to the servers (e.g. 114) in the central cloud, as in the case of a typical client-server architecture.
  • Yet another example of a cloud architecture 200 for client-to-client communication is shown in Fig. 2.
  • a 1st client device 202 running a client application 204 wishes to send information to a 2nd client device 230 running a client application 232.
  • the client application 204 sends an http request 206 that ends up at an API gateway 208 hosted in the central cloud 212.
  • the http request 206 corresponds to an appropriate microservice (e.g. 216, 218, 220) hosted on the central cloud 212 that gets launched in response by a request 210.
  • the launched microservice (e.g. 216) sends a trigger 214 to a push notification service 222 to communicate information available from the 1st client device (228) to the 2nd client device 230.
  • the client application 232 running in the 2nd client device 230 responds with a (get info) request 224 to the API gateway 208 that is serviced again by a microservice (e.g. 216) hosted on the central cloud.
  • the servicing microservice (e.g. 216) sends the information from the 1st client device (226) to the 2nd client device 230. Therefore, even if the two client devices (the 1st and 2nd client devices) are in close proximity and on the same local network, all communications and data would need to go through servers in a data center that may be hundreds of miles away, which may be suboptimal and undesirable.
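  • For contrast with the disclosed approach, the relayed prior-art flow of Fig. 2 can be sketched as below; the URLs and payload shape are hypothetical, and the point is only that every byte traverses the central data center even when both clients share a local network.

```typescript
// Illustrative only: the relayed, central-cloud flow of Fig. 2, with invented URLs.
async function sendViaCentralCloud(info: unknown): Promise<void> {
  // The 1st client posts the information (or a reference to it) to the central API gateway.
  await fetch("https://api.example-central-cloud.com/v1/messages", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ to: "client-2", payload: info }),
  });
  // The backend microservice then triggers a push notification to the 2nd client,
  // which in turn issues a GET back to the same central gateway to retrieve the data.
}

async function fetchAfterPushNotification(): Promise<unknown> {
  const res = await fetch("https://api.example-central-cloud.com/v1/messages?to=client-2");
  return res.json(); // all data has made a round trip through the data center
}
```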
  • An effective and feasible approach to address this issue is to enable any given computing device to act as a cloud server. Enabling computing devices to act as cloud servers may potentially reduce the reliance on third-party cloud services that are not necessary for the applications. Further, this approach may also allow microservice-based solutions to be more flexible by dynamically moving microservices from the backend to the computing device (now acting as a server). Many of the functions performed in the central cloud may be performed on edge devices “acting” or “configured” as servers. Once the computing devices are configured to act as servers, a decentralized edge cloud computing (architecture) may be provided that is orders of magnitude larger than the existing central cloud.
  • the first step to accomplish this is to remove the constraint that servers can only exist in data centers. This is a fundamental constraint that defines the dominant fixed and hierarchical client-server infrastructure for internet today.
  • An alternative architecture is disclosed herein that follows a pragmatic approach by enabling any computing device to act as either a client and/or a server based on the real-time needs of an application.
  • the first trend is the explosion of computing devices and embedded computing in all things, and the increasing capabilities of the edge devices. For instance, there is more computing power, memory, and storage available in today’s smart phones than in powerful servers just a decade ago.
  • the second trend is the enormous amounts of data generated on these (edge) devices.
  • With the advent of social media on mobile devices orders of magnitude more personal multimedia content is generated on devices (photos, videos, sensor data, etc.) than premium content from major studios and broadcasters hosted on central servers in the cloud.
  • Today, most of the data generated on (edge) devices is sent back to the central cloud for processing and to facilitate sharing.
  • the third trend is the decomposition of solutions into collections of microservices and the automation of deployment, which make backend solutions much more dynamic (serverless), with a scalability that closely fits the demand, either in volume or even in geography.
  • the disclosed decentralized cloud architecture does not require the creation of new network nodes with dedicated hardware. Instead, the disclosed architecture enables existing computing devices such as PCs, tablets, set-top-boxes (STBs), or even home routers to act as cloud server nodes at the edge of the cloud network when plausible.
  • the disclosed approach does not require any change to the low-level design of these devices. All that is required is a downloadable application (e.g. edge node activation module) that runs on top of existing operating systems without any changes to the hardware or the OS Kernel of existing devices.
  • the disclosed architecture provides consumers with more control over their personal data. Furthermore, amongst other things, the disclosed approach minimizes the cost of hosting and delivery of application and services, improves network performance and minimizes latency.
  • Embodiments of edge cloud computing platform are disclosed.
  • the disclosed cloud platform accelerates the decentralization as the next revolution in cloud computing.
  • the primary step in cloud decentralization is to remove the constraint that servers can only exist in data centers. This is a fundamental constraint that defines the dominant client-server infrastructure for internet today.
  • the present disclosure provides for an alternative architecture/platform and a pragmatic approach to achieve this by enabling any computing device to act as either a client or a server based on the real-time needs of an application.
  • a cloud platform to create the edge cloud fabric using edge node activation modules and one or more backend services.
  • the benefits and advantages of disclosed architecture and platform include reduced cloud hosting costs, reduced communication bandwidth, increased network efficiency, reduced energy consumption and carbon emission, reduced latency, increased privacy and better control over consumer and enterprise data.
  • Embodiments of a method of providing edge cloud computing infrastructure in a communication network are disclosed.
  • the communication network includes one or more edge cloud computing devices in communication with at least one server computing device.
  • the method includes determining, by a first edge cloud computing device, a type of one or more microservices corresponding to a request from an application running in the first edge cloud computing device.
  • the method further includes processing, by the first edge cloud computing device, the request locally in the first edge cloud computing device when the determined type corresponds to one or more microservices locally hosted in the first edge cloud computing device.
  • the method further includes providing, by the first edge cloud computing device, a microservice runtime environment to execute the locally hosted one or more microservices.
  • the method further includes providing, by the first edge cloud computing device, a locally hosted API gateway to send the request to the locally hosted one or more microservices.
  • the method further includes sending, by the first edge cloud computing device, an http/https request corresponding to the request from the application, to an API gateway hosted in a central cloud computing device when the determined type of one or more microservices corresponds to one or more microservices globally hosted in the central cloud computing device.
  • the method further includes receiving, by the first edge cloud computing device, an http/https response to the http/https request, from the API gateway hosted in the central cloud computing device, and wherein the http/https request is serviced by the one or more microservices globally hosted in the central cloud computing device.
  • the method further includes providing, by the first edge cloud computing device, a locally hosted API gateway to send the http/https request to the API gateway hosted in the central cloud computing device.
  • the method further includes sending, by the first edge cloud computing device, a data request from the locally hosted one or more microservices directly to one or more microservices hosted in a second edge cloud computing device when the determined type of the request from the application corresponds to a data request for the second edge cloud computing device.
  • the method further includes providing, by the first edge cloud computing device, a sidecar pattern to form a service mesh to support the application running in the first edge cloud computing device.
  • the method further includes exposing, by the first edge cloud computing device, the locally hosted one or more microservices through a common embedded web server to one or more edge cloud computing devices.
  • the method further includes providing, by the first edge cloud computing device, container management APIs using a language specific to the operating system associated with the edge cloud computing device.
  • the method further includes discovering, by the first edge cloud computing device, one or more other edge cloud computing devices to establish a connection therebetween and providing, by the first edge cloud computing device, a microservice runtime environment to execute the locally hosted one or more microservices associated with the connection established between one or more edge cloud computing devices.
  • the method further includes discovering, by the first edge cloud computing device, one or more microservices hosted in the discovered one or more other edge cloud computing devices and establishing, by the first edge cloud computing device, a direct microservice level connection between the locally hosted one or more microservices and the discovered one or more microservices in the one or more edge cloud computing devices.
  • the method further includes loading and executing, by the first edge cloud computing device, one or more microservices required to service the request from the application.
  • the method also includes stopping, by the first edge cloud computing device, the loaded one or more microservices once the request from the application has been serviced.
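  • The load/execute/stop steps above suggest an on-demand microservice lifecycle roughly like the following sketch; the EdgeRuntime interface is a stand-in invented here for illustration, not the module's actual API.

```typescript
// Hedged sketch of the on-demand microservice lifecycle implied by the method steps above.
interface EdgeRuntime {
  load(image: string): Promise<string>;             // returns a microservice instance id
  invoke(id: string, request: unknown): Promise<unknown>;
  stop(id: string): Promise<void>;
}

async function serveOnce(runtime: EdgeRuntime, image: string, request: unknown): Promise<unknown> {
  const id = await runtime.load(image);   // load and execute only when a request arrives
  try {
    return await runtime.invoke(id, request);
  } finally {
    await runtime.stop(id);               // stop once the request has been serviced
  }
}
```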
  • Fig. 3 depicts an embodiment of an edge cloud computing network 300.
  • a cloud fabric can be created that scales with the number of edge devices. This reduces the need for additional servers in data centers as the number of edge devices and content generated by edge devices grow.
  • edge devices are interchangeably referred to as “nodes” or “edge nodes” or “edge computing devices” or “edge cloud computing devices”. Accordingly, the “cloud” capacity is increased as the number of edge cloud computing devices grow. In addition, given that most of the data is produced at the edge, transport costs and latencies for applications are minimized. In the disclosed approach, most of the processing is performed at the edge, communication is kept as local as possible, and edge cloud computing devices collaborate and share computing and other resources.
  • the “central cloud” or “central cloud computing device” refer to one or more servers in data centers, that remain as valuable resources as they may be indispensable for many applications that require central storage or processing.
  • the central cloud will no longer be a bottleneck or the “necessary” trust element and does not need to grow in proportion with edge nodes. It may be noted that data center resources may need to increase, but at a reasonable pace to accommodate the needs for central processing only. All the other possible tasks and functions can be relegated to the edge nodes, where most of the data is generated today.
  • the edge cloud computing network 300 includes a plurality of edge cloud computing devices, such as, a laptop 302, a tablet PC 304, a central “cloud” 306, a car infotainment system 308, a security camera 310, a server computing device 312, a mobile device 314, and a gaming console 316.
  • each of the edge cloud computing devices can be configured to act as a client or a server as per the need of the edge cloud computing network 300.
  • Fig. 3 shows connection or communication paths between the edge cloud computing devices as dashed lines.
  • the architecture does not follow the conventional client-server mode where one or more devices are designated to always act as “servers” and the other devices always act as “clients”.
  • each of the edge cloud computing devices may use different operating systems, such as, multiple variants of Linux®, Android, iOS®, macOS®, Windows®, Fedora™, etc.
  • the edge cloud computing devices may be configured to operate using different networking technologies, such as, fixed (Ethernet, fiber, xDSL, DOCSIS®, USB, etc.), mobile WAN (2G, 3G, 4G, etc.), Wireless LAN (WiFi®, etc.), Wireless PAN (Bluetooth®, WiGig, ZWave®, ZigBee®, IrDA, etc.), and machine networks (SigFox®, LoRa®, RPMA, etc.).
  • the proposed cloud architecture includes edge cloud computing devices (e.g. 314) that when “activated” are configured to connect, communicate, and collaborate with other edge cloud computing devices across many fragmented operating systems and network technologies.
  • the availability of network resources may be a challenge in the edge cloud computing network 300.
  • edge cloud computing devices (e.g. 312, 314) can connect and communicate with other edge nodes using uplink network resources.
  • while network connectivity is gradually becoming more symmetrical, typically there are more downlink than uplink resources available.
  • posting a video from an edge node to the central cloud to be consumed by three other edge nodes requires different uplink/downlink resources compared to streaming the video directly from the source node to the destination nodes, as the sketch below illustrates.
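  • A back-of-the-envelope comparison for that example, assuming a 1 GB video and three viewers, is shown below; the figures are purely illustrative.

```typescript
// Back-of-the-envelope comparison for the example above (figures are illustrative).
const videoSizeGB = 1;
const viewers = 3;

// Via the central cloud: one upload plus one download per viewer, all across the WAN.
const wanTrafficGB = videoSizeGB + viewers * videoSizeGB;   // 4 GB of WAN traffic

// Streaming directly from the source node: traffic can stay on local links.
const directTrafficGB = viewers * videoSizeGB;              // 3 GB, none of it via the WAN

console.log({ wanTrafficGB, directTrafficGB });
```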
  • edge nodes may be nonpersistent in nature. There may be less control over their availability and reliability, especially with battery operated mobile devices.
  • the proposed edge cloud computing architecture overcomes this challenge by a “microservice” approach explained below.
  • the non-persistent nature of the edge nodes is considered when building certain applications where persistent nodes are essential. Persistent nodes can always be provisioned using other collaborating edge nodes, or in the worst case, central cloud can be used.
  • in a data center, distribution management deals with resource availability based on simpler criteria such as CPU load, memory constraints and IO.
  • the scope of such distribution management is the specific data center where the solution (backend) is running.
  • in the disclosed edge cloud, the criteria for distribution management are much more diverse and include power availability and consumption, reliability, and device capabilities.
  • in the disclosed edge cloud, distribution scopes expand to network, proximity and account, since most devices belong to specific users.
  • all nodes including the “central cloud” can act as cloud servers and there is no designated permanent trust element.
  • Edge nodes or edge cloud computing devices are configured to communicate directly, collaborate, and share resources directly without resorting to a third-party trust (central) element unless necessary.
  • central cloud resources are used only when needed, for instance, when there is a need for global storage, archiving, updating of centralized databases, centralized registration, etc. Any other function that can be handled by the edge nodes can be assigned to them, for instance, messaging between devices, or handshaking of control signals between machines, or transmitting data between nodes within a small cluster.
  • A good software design practice is to develop solutions as a collection of many instances of single-purpose, well-defined components referred to hereinafter as “microservices”.
  • the cloud is extended to the edge by recognizing and exposing computing resources and utilizing them in an opportunistic way when available. Further, adding analytics to the way ephemeral microservices are deployed, based on availability, policy, and context (including social and other application-level events), enables optimal deployment of applications on the edge cloud computing network 300.
  • the disclosed architecture assumes that existing edge cloud computing devices can be easily turned into edge cloud servers (or edge cloud server computing devices). It is envisaged under the scope of the description that developers should be able to build applications (supported by the edge cloud) with as little effort as possible. Given the heterogeneous nature of the edge cloud computing devices, the disclosed approach assigns functional roles based on device capabilities.
  • an edge node or an edge cloud computing device is configured to demonstrate a plurality of capabilities to become a potential edge cloud server or edge cloud server computing device.
  • the plurality of capabilities includes the ability to discover the existence of other edge nodes or edge cloud computing devices regardless of the operating system (OS) or network associated with them.
  • the plurality of capabilities also includes the ability to discover other nodes’ capabilities and behavior (e.g. hardware specs, OS, persistency, etc.).
  • the plurality of capabilities further includes the ability to discover one or more microservices supported by other edge nodes or edge cloud computing devices and dynamically form clusters along with other edge nodes or edge cloud computing devices especially around network, proximity, and user account.
  • the plurality of capabilities further includes the ability to communicate with other nodes at the microservice level, either directly or through other nodes across different clusters, and to connect with other nodes if they choose to share data, services, and/or resources.
  • the plurality of capabilities further includes the ability to adapt to assigned functions and roles based on resources and capabilities and process and analyze data locally.
  • the plurality of capabilities further includes the ability to be as secure and trustable as the central cloud.
  • the configuration of the edge node or the edge cloud computing device to demonstrate the plurality of capabilities is achieved in a platform-agnostic approach.
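  • One plausible way to represent such a node's capabilities and to derive roles from them, in the spirit of the meritocracy principle described later, is sketched below; the field names and thresholds are assumptions, not the patent's schema.

```typescript
// Illustrative node profile; field names and thresholds are assumptions for illustration.
interface NodeProfile {
  nodeId: string;                 // globally unique ID
  os: "linux" | "android" | "ios" | "macos" | "windows";
  persistent: boolean;            // e.g. mains-powered vs. battery-operated
  storageFreeGB: number;
  uplinkMbps: number;
  supportedMicroservices: string[];
}

// A toy "meritocracy" rule: roles follow from capabilities rather than fixed designations.
function suggestRoles(p: NodeProfile): string[] {
  const roles: string[] = [];
  if (p.storageFreeGB > 64) roles.push("cache-node");
  if (p.uplinkMbps > 50) roles.push("proxy-node");
  if (p.persistent) roles.push("cluster-knowledge-holder");
  return roles;
}
```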
  • this configuration is provided by downloadable application-level software (e.g. the edge node activation module).
  • the proposed approach requires no changes to the device hardware, OS Kernel, or drivers and works on most modern hardware (PCs, STBs, routers, tablets, smart phones, etc.).
  • the proposed software-level application has a very small memory footprint and supports microservices that can be easily loaded, run, and stopped across the edge cloud computing devices.
  • the disclosed approach supports multi-tenancy, multiple applications and microservices with a single instance of software to support multiple customers.
  • the disclosed cloud platform has a light, but highly scalable backend (services) hosted on a “central cloud” (e.g. 306 in Fig. 3) and uses a bootstrap mechanism for registration of the nodes or other edge cloud computing devices.
  • the disclosed cloud platform provides the ability to create dynamic clusters of edge nodes within a same network, proximity and (user) account and to manage mobility characteristics (appearing and disappearing) of nodes inter and intra clusters.
  • the edge cloud computing network 300 provides for management of communication between the edge nodes or edge cloud computing devices either directly or through intermediate nodes and dynamic instantiation of backend resources or services based on demands from the edge nodes.
  • edge cloud computing network 300 creates effective persistence by pulling collaborating edge nodes and/or resources dynamically.
  • To utilize the power of edge nodes and create a massive decentralized edge cloud, the disclosed approach considers and implements various principles in the edge cloud architecture.
  • the first principle of decentralization implemented by the disclosed approach is “meritocracy”. All nodes have an equal opportunity to participate in the edge cloud computing network 300. Nodes may take any role based on their capabilities. Capabilities that are enabled by the node owner are stored in the node profile. For instance, a node with large storage can become a “cache node” or a “backup storage node”, a node with great network connectivity can be a “proxy node”, and a persistent node can become the holder of knowledge (e.g., device and capability/role profiles) for a cluster of nodes and so on. Meritocracy prevents the need to provision central elements with predefined roles which leads to a hierarchical structure of nodes.
  • the nodes should tell the truth regarding their profiles in a transparent manner or else the principle of meritocracy cannot be applied effectively.
  • the disclosed architecture removes incentives to lie (e.g. by not providing any node-specific privileges or rights). Even when there is no apparent incentive to lie (e.g., to provide false information, misleading information, or disinformation), the disclosed architecture implements a mechanism to blacklist nodes that lie about their profile to harm the operations of a cluster in the edge cloud computing network 300.
  • the meritocracy may change with time and nodes may upgrade or downgrade their capabilities and profiles. The disclosed architecture accommodates any such changes to the nodes in real-time.
  • the central cloud architecture can be considered a special case of the edge cloud computing architecture where the edge nodes are used only as clients. Therefore, it may be desirable to discontinue the existing bad practice of falling back on readily available resources in the central cloud in order to speed up development while sacrificing hosting costs, latency, and privacy.
  • all nodes are considered as a potential “server” to other nodes and all requests need to be kept local to the cluster where a node is active.
  • the second principle of decentralization implemented by the disclosed approach is “distributed discovery”.
  • a node in the edge cloud computing network 300 needs to discover other nodes.
  • discovery is intended to be a “filtered search” operation based on a scope.
  • Illustrative and non-limiting examples of a scope include a user account (nodes registered under the same account ID), network (nodes that are members of the same link-local cluster network), proximity (nodes which are reporting themselves as physically present at a geographical location or within an area defined by a geospatial query).
  • the discovery process uses any combination of these or other scopes without a dedicated central node, for instance, a central node acting as a presence server.
  • the discovery process includes information about how to connect and communicate to a device, important characteristics, roles, and personas that an edge node can adopt.
  • the roles can include cache node (a node with spare storage), proxy node (good connectivity to internet), CPU resources (node with spare CPU to run microservices), etc.
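  • Treating discovery as a filtered search over known peers, scoped by account, network, and proximity, might look roughly like the sketch below; the record shape and the geospatial check are illustrative assumptions.

```typescript
// Sketch of discovery as a "filtered search" over known peers; types are illustrative.
type Scope =
  | { kind: "account"; accountId: string }
  | { kind: "network"; networkId: string }
  | { kind: "proximity"; lat: number; lon: number; radiusKm: number };

interface PeerRecord {
  nodeId: string;
  accountId: string;
  networkId: string;
  lat: number;
  lon: number;
  roles: string[];               // e.g. "cache-node", "proxy-node"
}

// Great-circle distance used by the proximity scope.
function haversineKm(lat1: number, lon1: number, lat2: number, lon2: number): number {
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a = Math.sin(dLat / 2) ** 2 +
            Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 6371 * 2 * Math.asin(Math.sqrt(a));
}

function matches(p: PeerRecord, s: Scope): boolean {
  if (s.kind === "account") return p.accountId === s.accountId;
  if (s.kind === "network") return p.networkId === s.networkId;
  return haversineKm(p.lat, p.lon, s.lat, s.lon) <= s.radiusKm;
}

// Discovery: keep only the peers that satisfy every requested scope.
function discover(peers: PeerRecord[], scopes: Scope[]): PeerRecord[] {
  return peers.filter((p) => scopes.every((s) => matches(p, s)));
}
```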
  • the third principle of decentralization implemented by the disclosed approach is “clustering”. Humans and machines communicate in clusters. Robert Dunbar, an anthropologist, suggested a cognitive limit of 150 for the people with whom humans can have a stable relationship. In other words, humans communicate in constrained clusters. Additionally, humans seldom communicate with everyone in their close relationship circle regularly or frequently. In fact, daily communication may be limited to a handful of very close relationships. Therefore, it makes logical sense that the proposed communication framework for the edge cloud computing architecture takes this into account when assigning roles and responsibilities to nodes within a cluster.
  • the above characteristic of communication is however not limited to humans.
  • communication between machines is very similar, and most of the communication is often between a very small set of nodes in a cluster at any given instance of time. Therefore, all communications can be optimized to occur local to the cluster as much as possible.
  • one node (supernode) in the cluster is given a special role as the knowledge holder of the cluster.
  • the supernode is assigned the role of communicating this knowledge for/to global discovery or nodes in other clusters.
  • the proposed approach allows nodes to dynamically form their own ad-hoc clusters based on the three scopes described earlier.
  • Nodes dynamically take roles via election or selection by other nodes based on series of characteristics of nodes and rules. By doing so, the nodes dynamically form the fabric of edge cloud (i.e. Software Defined Cloud Infrastructure). As nodes enter and exit clusters, the roles are dynamically reassigned.
  • the disclosed communication framework in the edge cloud therefore, takes this into account when assigning roles and responsibilities to nodes within a cluster.
  • a cluster is formed by a first active node (or a first edge cloud computing device) based on a given scope.
  • When a node is “activated”, it first looks for a “supernode” (also referred to as a “super edge cloud computing device” in the ongoing description). The supernode oversees global discovery and holds knowledge of the edge cloud. If no supernode is found, the first node (or the first edge cloud computing device) declares or designates itself as the supernode.
  • the supernode then informs global discovery of its existence and receives the list of nodes within the defined scope. To maintain efficiency, the supernode informs other nodes within its scope. Subsequently, a better supernode may be identified, and that better supernode can then inform the global discovery of its existence and then function as the supernode.
  • the disclosed edge cloud implements this bootstrap model to avoid overloading any nodes, whether global or local, and therefore reduces traffic and chattiness and creates a light and scalable architecture.
  • presence notification is a function of the node itself, along with the responsibility to decide which other nodes it wants to notify. Therefore, the disclosed edge cloud architecture does not implement a single global presence server or point of registration in the disclosed edge cloud computing network. Similarly, the disclosed architecture does not have a “keep alive” mechanism at the infrastructure level between the nodes. In an embodiment, such a mechanism can be delegated to microservices if needed in certain scenarios.
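  • The supernode bootstrap described in the preceding paragraphs can be summarised in a simplified sketch. The election criterion used here (persistence, then uptime) is an assumption; the disclosure only states that roles are taken based on node characteristics and rules.

```typescript
// Simplified bootstrap sketch for the clustering principle above; criteria are assumed.
interface ClusterNode {
  nodeId: string;
  persistent: boolean;
  uptimeHours: number;
  isSupernode: boolean;
}

function bootstrap(self: ClusterNode, visiblePeers: ClusterNode[]): ClusterNode {
  const existing = visiblePeers.find((n) => n.isSupernode);
  if (existing) return existing;          // join the cluster under the current supernode

  // No supernode found: the first active node declares itself the supernode and
  // would then inform global discovery of its existence (if the internet is reachable).
  self.isSupernode = true;
  return self;
}

// Later, a "better" candidate (e.g. more persistent, longer-lived) may take over the role.
function betterCandidate(current: ClusterNode, candidate: ClusterNode): boolean {
  if (candidate.persistent !== current.persistent) return candidate.persistent;
  return candidate.uptimeHours > current.uptimeHours;
}
```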
  • the fourth principle of decentralization implemented by the disclosed approach is “microservice to microservice communications”.
  • applications on edge cloud computing devices or nodes may communicate directly without a third-party trust element unless absolutely necessary. This can allow the devices to connect the edge nodes together at the network level. However, it is not sufficient to connect devices or edge nodes at the physical network level. Microservices running on the edge nodes need to communicate directly.
  • the edge node activation module in edge nodes provides a light container that enables deploying and hosting microservices on edge nodes to utilize the formed edge “cloud fabric” and communicate directly with other microservices, thereby creating a “service mesh”.
  • edge nodes are configured to load, start, and stop microservices on any other edge node in the edge cloud computing network 300. This configuration ensures that microservice management across the disclosed cloud platform remains distributed without the need for a central entity.
  • the microservices enabled on the edge nodes expose their services through a common embedded Webserver. API endpoints for each service are accessible from all other edge nodes in an edge cluster.
  • the edge cloud enables seamless reachability of microservices across edge nodes to form a service mesh either directly or via a “sidecar pattern” described later in more detail.
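  • A rough sketch of the sidecar idea follows: a microservice addresses peers by logical name through a local proxy, which resolves the name against a mesh registry and forwards the call. The ports, registry shape, and service names are invented for illustration.

```typescript
// Hedged sketch of the sidecar pattern: callers talk to a local proxy, which forwards
// to the target microservice wherever it lives in the mesh. All values are illustrative.
const SIDECAR_PORT = 15001;

interface MeshEntry {
  service: string;        // logical microservice name
  nodeAddress: string;    // edge node currently hosting it
  port: number;
}

const mesh: MeshEntry[] = [
  { service: "photos.index", nodeAddress: "192.168.1.10", port: 8083 },
  { service: "photos.store", nodeAddress: "192.168.1.42", port: 8083 },
];

// A microservice calls its sidecar by service name only; the sidecar resolves the peer.
async function callViaSidecar(service: string, path: string): Promise<unknown> {
  const res = await fetch(`http://127.0.0.1:${SIDECAR_PORT}/${service}${path}`);
  return res.json();
}

// Inside the sidecar: resolve the logical name to a concrete edge node and forward.
async function forward(service: string, path: string): Promise<Response> {
  const entry = mesh.find((e) => e.service === service);
  if (!entry) throw new Error(`Service not in mesh: ${service}`);
  return fetch(`http://${entry.nodeAddress}:${entry.port}${path}`);
}
```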
  • the disclosed edge cloud platform provides functionalities to manage ad- hoc clusters of edge nodes.
  • the disclosed edge cloud platform provides additional “light” container capabilities with the ability to download, deploy and operate microservices.
  • the fifth principle of decentralization implemented by the disclosed approach is “dynamic resource instantiation”.
  • For decentralization to be efficient, it is desirable to have very little overhead associated with nodes joining a cluster, leaving a cluster, or getting assigned resources.
  • the solution implemented by the disclosed edge cloud architecture is referred to as “dynamic resource instantiation”.
  • signaling and data resources are deployed dynamically (by a backend service) based on the network’s condition and demand from edge nodes within one or more clusters, thereby eliminating the need to reserve computing resources. This increases efficiency and reduces cost by dynamically deploying the end points (e.g. SEP, BEP), which are instantiated only when needed.
  • the disclosed cloud platform assists the edge nodes to setup tunneling opportunistically to increase signaling and data bandwidth efficiency.
  • Resources are deployed based on parameters derived from the network topology and the demand by the application running on the edge node.
  • the parameters include time to go-live, number of concurrent connections, and communication protocols (HTTP, SSH, Web socket or UDP tunneling).
  • end points can be deployed on available computing resources within the closest proximity of a given cluster.
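  • A dynamic endpoint-instantiation request built from the parameters named above (time to go-live, concurrent connections, protocol) might look like the following sketch; the request/response shapes and the backend URL are assumptions, not the platform's actual API.

```typescript
// Sketch of a dynamic endpoint-instantiation request using the parameters named above.
interface EndpointRequest {
  kind: "SEP" | "BEP";                       // signaling or bearer endpoint
  clusterId: string;
  timeToGoLiveSeconds: number;
  maxConcurrentConnections: number;
  protocol: "http" | "ssh" | "websocket" | "udp-tunnel";
}

interface EndpointGrant {
  address: string;                            // where the endpoint was instantiated
  expiresAt: string;
}

async function requestEndpoint(req: EndpointRequest): Promise<EndpointGrant> {
  const res = await fetch("https://signaling.example-backend.com/v1/endpoints", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(req),
  });
  return res.json() as Promise<EndpointGrant>;
}
```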
  • the sixth principle of decentralization implemented by the disclosed approach is “collaboration”.
  • the edge nodes collaborate and share resources.
  • the sharing of decentralized cloud resources is desirable to be as seamless as in the case of a central cloud.
  • the disclosed cloud architecture is able to use the collective resources of all the edge cloud computing devices. For instance, a video is recorded in HD format on the mobile phone 314 and the recorded content is seamlessly stored on the laptop 302 or even a connected storage dongle.
  • the disclosed architecture enables sharing of resources with friends and family. For instance, allowing family members to share a Network Attached Storage (NAS) as a family resource.
  • the disclosed architecture also provides the ability to lease computing resources to strangers and to create an even larger edge cloud. This way, a cloud fabric is created from numerous edge nodes that is orders of magnitude larger than the central cloud.
  • Edge cloud provides an opportunity to take advantage of collaboration and resource sharing across edge nodes.
  • edge cloud can provide many of the benefits described above.
  • any application built on any edge device prioritizes using its local resources (instead of central or global resources) to host microservices to service other nodes in its cluster based on the requirements of the application. For instance, Jack’s device should be used as a server to host Jack’s app.
  • the approach can be expanded further to use resources on other nodes.
  • Jill’s phone can run a microservice for Jack’s application even when they’re not in an active session, or Jack can provide spare storage for Jill’s videos on his device, or Jill can use Jack’s fiber connection instead of her poor cellular connection at the time.
  • Such collaboration can significantly improve efficiencies and scaling, but may not be necessary to make the edge cloud useful.
  • the seventh principle of decentralization implemented by the disclosed approach is “infrastructure independence”. As described earlier, for cloud decentralization it is desirable that the disclosed cloud platform is agnostic to operating systems, network (type and technology) and location. Due to various reasons, there have been many failed industry attempts to standardize decentralized communication between nodes. Therefore, the proposed decentralized cloud platform is independent of the evolution of operating systems and networks. In other words, the disclosed cloud platform operates on top of existing operating systems and networking standards at the application layer. This principle ensures that the disclosed cloud platform can be deployed and maintained in the long term with minimal or no dependencies. The disclosed cloud platform also avoids issues with legacy protocols, modules, libraries, data, etc.
  • Fig. 4 illustrates fundamental building blocks of edge cloud computing architecture in accordance with an embodiment of a distributed edge cloud platform 400.
  • the disclosed distributed edge cloud platform 400 is designed and developed as a pragmatic way of enabling the edge cloud by configuring every edge cloud computing device to function as an edge cloud server. As described earlier, such configuration is performed in a completely distributed fashion, agnostic to hardware platforms, operating systems, and underlying networking technologies.
  • the disclosed cloud platforms, microservices, edge nodes (or edge cloud computing devices), and cloud clusters are configured to run on any operating system and to communicate over any network.
  • the disclosed cloud platform and distributed cloud services are independent of any infrastructure.
  • the distributed edge cloud platform 400 is an end-to-end system that includes central and edge elements that are its fundamental building blocks.
  • the central element includes a backend services module 402 provided by a server computing device and the edge element includes the edge node activation module 426, and one or more microservices (e.g., 518, 520, 522 as described later with reference to Fig. 5).
  • the disclosed architecture is intended to be distributed and that the elements (central or edge) can reside anywhere on any reachable edge cloud computing device (e.g., 302, 304, 306, 312).
  • an edge cloud is defined as a collection of nodes (e.g. 302, 304), each with a globally unique ID, based on a context or a scope of capability of the particular device.
  • a given node may be a member of multiple clusters (e.g. see node 730 in Fig. 7).
  • a first cluster can correspond to a user account cluster, which is the cluster of nodes belonging to the user that registered them.
  • a second cluster can correspond to a network cluster (e.g. 726) which is the link-local network cluster it is physically connected to.
  • a third cluster can correspond to a proximity cluster (e.g. 736) which is the cluster of nodes within a certain surrounding area.
  • the backend services module 402 is configured to provide one or more backend services that include a discovery service 406, a signaling service 408, and an identity service 410.
  • the signaling service 408 further provides resources such as a signaling endpoint (SEP) 412, and a bearer endpoint (BEP) 414.
  • the one or more backend services further include a server token service 416 and a registry service 418.
  • the server token service 416 may be associated with security token authentication/authorization functions for services.
  • the backend services module 402 is hosted using cloud web services 420 such as, but not limited to Amazon Web Services® (AWS) in the server computing device (e.g. 312) or in the cloud 306.
  • fragments, or parts of the discovery service 406 and the signaling service 408 are implemented both on the backend server (e.g. 312) and on edge nodes (e.g. 302).
  • network proxies (or nodes) in each cluster are parts of the signaling service 408, and supernodes (or super edge cloud computing devices) in each cluster are part of the discovery service 406.
  • the disclosed cloud architecture departs from the existing notion of “service in the cloud - client on the edge”. Its value comes from distribution of services over the entire range, from central cloud (e.g. 306) all the way to the edge nodes (as explained later with reference to Fig. 7).
  • the discovery service 406 is configured to hold and provide the knowledge to form one or more clusters, the overall status of the clusters, and the nodes within them. Once a cluster is formed, any new node registers with the supernode, which subsequently informs the discovery service 406. In order to reduce traffic for scalability, updates from the supernode to the discovery service 406 happen in an opportunistic fashion and only when a change occurs in the one or more clusters.
  • the discovery service 406 is configured to perform a reachability test to a supernode.
  • the discovery service 406 tests for reachability.
  • the supernode might be behind a firewall, and while it can initiate a call to the discovery service 406, the discovery service or other external nodes might not be able to initiate a call to the supernode.
  • the discovery service 406 will then request the signaling service 408 to dynamically deploy a signaling endpoint (SEP) (e.g. 412) for the cluster. Subsequently, the discovery service 406 returns the SEP address to the supernode.
  • the discovery service 406 is configured to store a complete inventory of nodes and cluster profiles. This inventory includes details of the computing resources on all the nodes, the status of each node, the location of each node, and the services available on each node. The inventory further includes the end-to-end network topology to reach each node and the clusters, the reachability of the clusters, the availability of resources, and other pertinent information. In other words, the discovery service 406 has complete visibility of all resources across the edge cloud computing network 300 and can supply this information to dynamically deploy services on any available resource within the network in real-time. In an embodiment, the disclosed architecture uses standard Amazon semantics to make it easier for developers to expose the resources in a similar fashion as in the case of central cloud resources.
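  • The inventory and reachability behaviour of the discovery service 406 described above can be pictured with the sketch below; the record fields, the health-check path, and the signaling-service call are illustrative assumptions.

```typescript
// Illustrative per-cluster inventory and reachability check; all names are assumptions.
interface ClusterInventory {
  clusterId: string;
  supernodeAddress: string;
  nodes: Array<{ nodeId: string; services: string[]; status: "online" | "offline" }>;
  sepAddress?: string;          // set only if the supernode is not directly reachable
}

async function ensureReachable(inv: ClusterInventory): Promise<ClusterInventory> {
  try {
    // Try to initiate a call to the supernode; a firewalled node may never answer.
    await fetch(`http://${inv.supernodeAddress}/healthz`, { signal: AbortSignal.timeout(2000) });
    return inv;
  } catch {
    // Not reachable from outside: ask the signaling service to deploy a signaling
    // endpoint (SEP) for the cluster and hand its address back to the supernode.
    const res = await fetch("https://signaling.example-backend.com/v1/endpoints", {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify({ kind: "SEP", clusterId: inv.clusterId }),
    });
    const sep = (await res.json()) as { address: string };
    return { ...inv, sepAddress: sep.address };
  }
}
```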
  • the identity service 410 corresponds to a third-party identity software as a service (SaaS), for example based on OAuth 2.0, which resides in the public cloud and creates and maintains authentication profiles of nodes.
  • the disclosed cloud platform uses the identity service 410 (along with the server token service 416) for authorization of nodes by means of token generation and management for one or more token holders.
  • the token holder can be the edge node activation module (e.g. 426, 508), the microservice (e.g. 518, 520, 522) using the edge node activation module, the application developer using the edge node activation module as well as the end-user of the application.
  • the disclosed cloud platform uses the tokens to verify the credentials, legitimacy of the token holder, and authorize access to the one or more backend services provided by the backend services module 402.
  • the authorization is performed through the use of JSON Web Tokens (JWT) and a subset of standard “claims” for verifying the identity of the token holder.
  • the signaling endpoint (SEP) 412 and the bearer endpoint (BEP) 414 are both resources deployed dynamically and on demand based on a request received from, for example, the discovery service 406 or the signaling service 408.
  • the SEP is used for signaling communication while BEP is used for data communications and jointly they assist the nodes to setup tunneling opportunistically to increase signaling and data bandwidth efficiency.
  • SEP and BEP are deployed based on parameters such as, but not limited to, time to go-live, number of concurrent connections, and communication protocols (HTTP, SSH, WebSocket or UDP tunneling). If desired, end points can be deployed on available computing resources within the closest proximity of the cluster.
  • the server token service 416 is a SaaS-based solution based on OAuth 2.0.
  • the server token service 416 delivers tokens to services making requests to other services.
  • the server token service 416 resides in the public cloud and issues service tokens according to a system map.
  • the server token service 416 implements the “client_credentials” and “refresh_tokens” flows.
  • When a microservice needs to invoke another microservice, it either already has a valid token and therefore can make the request directly, or it requests a token which includes a list of permissions (or scopes).
  • the receiving service will validate the token signature and scopes in order to fulfill the incoming/received request. In an embodiment, such service-to-service tokens are short lived.
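  • The receiving service's check on such a service-to-service token might look roughly like the following sketch; signature verification is left to a standard JWT library, and the claim names follow common JWT usage rather than the platform's exact schema.

```typescript
// Minimal sketch of a scope/expiry check on already-decoded service-token claims.
interface ServiceTokenClaims {
  iss: string;           // issuing token service
  sub: string;           // calling microservice
  exp: number;           // expiry (seconds since epoch); such tokens are short-lived
  scope: string;         // space-separated permissions, e.g. "photos.read photos.write"
}

function authorize(claims: ServiceTokenClaims, requiredScopes: string[]): boolean {
  const now = Math.floor(Date.now() / 1000);
  if (claims.exp <= now) return false;                       // expired token
  const granted = new Set(claims.scope.split(" "));
  return requiredScopes.every((s) => granted.has(s));        // all required scopes present
}

// Example: the receiving microservice fulfills the request only if authorized.
const ok = authorize(
  { iss: "server-token-service", sub: "photos.index", exp: Math.floor(Date.now() / 1000) + 300, scope: "photos.read" },
  ["photos.read"],
);
console.log(ok); // true
```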
  • the registry service 418 (also referred to as IT repository) is a SaaS solution which resides in the public cloud and maintains the list of all the backend microservices and the clusters to which they belong. Usually used for administrative purposes, the registry service 418 maintains the cluster knowledge and allows clusters to be self-managed for configuration purposes. In an embodiment, the registry service 418 provides geo-located lists of clusters (or configurations as described later with reference to fig. 6) that can be used by other services (e.g. the discovery service 406) to identify the signaling service 408 to invoke the SEP 412 or a BEP 414 when required.
  • the edge cloud computing device 404 includes an edge node activation module 426.
  • the edge node activation module 426 sits on top of an OS layer 428 and provides a microservice runtime environment for executing the one or more microservices using the microservice runtime environment module 424.
  • One or more 3rd party applications 422 are also hosted in the edge cloud computing device 404 that are serviced by the edge node activation module 426.
  • developers can develop their own microservices that can be hosted on the edge devices or nodes using the container manager provided by the edge node activation module 426.
  • the edge node activation module 426 is configured to turn any edge device (or edge cloud computing device) into a cloud server and extend the cloud computing infrastructure to that new edge.
  • Edge devices can be any device with basic computing capability such as a laptop (e.g. 302), a set-top-box, a residential or IoT gateway, a game console, a connected TV, a car infotainment system (e.g. 308), a smart phone (e.g. 314), etc.
  • Any edge device can download the edge node activation module 426 and execute it to “become” a cloud server.
  • any edge device that has executed the edge node activation module 426 is referred to as “node”.
  • Such nodes have one or more characteristics that are intended for the disclosed edge cloud platform and architecture.
  • the one or more characteristics include the ability to dynamically discover each other (or other nodes) independent of the OS and network and include the ability to expose their available computing capability and functionality to each other.
  • the one or more characteristics further include the ability to form and organize into clusters (edge clusters) and communicate within the clusters even with no Internet availability, and across clusters.
  • the disclosed edge cloud platform operates by the formation of cluster nodes in accordance with the third principle of clustering as described supra.
  • One or more clusters are formed by a first active node (or first edge cloud computing device) based on a particular scope.
  • When a node (e.g. the first edge cloud computing device) is activated (enabled with the edge node activation module 426), it first looks for a supernode which oversees global discovery and holds the knowledge of the edge cloud. If no supernode is found, the first node declares itself as the supernode. If the Internet is available, the supernode then informs global discovery of its existence and receives the list of nodes within the defined scope. To maintain efficiency, the supernode informs other nodes within its scope.
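  • A minimal, non-limiting sketch of this activation flow is shown below, assuming an in-memory view of the nodes within scope and a stubbed global discovery service; the class and function names are illustrative assumptions rather than the disclosed implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: str
    is_supernode: bool = False
    known_nodes: list = field(default_factory=list)

class StubGlobalDiscovery:
    """Illustrative stand-in for the backend discovery service (e.g. 406)."""
    def register_supernode(self, node_id):
        print("global discovery: registered supernode", node_id)
    def nodes_in_scope(self, node_id):
        return []  # node ids already registered in this scope

def find_supernode(nodes_in_scope):
    """Return an existing supernode within the scope, if any."""
    return next((n for n in nodes_in_scope if n.is_supernode), None)

def activate(node, nodes_in_scope, internet_available, global_discovery):
    """On activation, join an existing supernode or declare self as the supernode."""
    supernode = find_supernode(nodes_in_scope)
    if supernode is None:
        node.is_supernode = True
        supernode = node
        if internet_available:
            # inform global discovery and fetch nodes already registered in this scope
            global_discovery.register_supernode(node.node_id)
            node.known_nodes = global_discovery.nodes_in_scope(node.node_id)
    else:
        supernode.known_nodes.append(node.node_id)  # supernode tracks cluster membership
    return supernode

# Illustrative usage: the first node in an empty scope becomes its own supernode
# first = Node("node-1")
# activate(first, [], True, StubGlobalDiscovery())
```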
  • the edge node activation module 426 can reside on any edge cloud computing device or server and can be made available for various hardware platforms and operating systems.
  • the edge node activation module 426 corresponds to an application-level software and can therefore be downloaded on many types of edge cloud computing devices.
  • the backend services module 402 provides one or more backend services hosted on the central cloud (e.g. 306) or on any reachable and reliable computing resource with sufficient computing and memory resources to provide the necessary services to support the edge nodes.
  • Fig. 5 shows an edge cloud computing device 500 in accordance with an embodiment.
  • the edge cloud computing device 500 includes a processor 502 coupled to a memory 504.
  • the memory corresponds to a non-transitory computer readable medium having instructions implementing the various technologies described herein.
  • Example computer readable media may comprise tangible, non-transitory computer readable storage media having computer executable instructions that, when executed by the processor 502, cause the processor to carry out any combination of the various methods and approaches provided herein.
  • all the edge cloud computing devices (302, 304, 308, 310, 312, 314, 316, 404) and the central cloud (e.g. 306) include at least a processor (e.g. 502), a memory (e.g. 504), and/or various other applications or modules stored in the memory which when executed by the processor(s) carry out the methods and approaches described herein.
  • the memory 504 includes an OS layer 506 and an edge node activation module 508.
  • the edge node activation module 508 further includes a Net module 510 having an API gateway.
  • the edge node activation module 508 also includes a container manager microservice (μS) image repository 512, an HTTP request wrapper (lib.) 514, and an embedded Webserver 516.
  • the edge node activation module 508 is configured to expose one or more microservices to one or more edge nodes.
  • the edge node activation module 508 is configured to start/stop, download, deploy any service in the edge cloud and expose the services using the API gateway.
  • the edge node activation module 508 is configured to discover, connect, and communicate with other edge nodes in one or more clusters (within or across).
  • the memory 504 also includes one or more microservices (μS) depicted as 518, 520 and 522 in Fig. 5.
  • the microservice 522 is shown to be a part of user interface (UI) apps 524.
  • the memory 504 also includes other UI apps 526 without a microservice therein. All the microservices (518, 520 & 522) and the UI apps (524 & 526) are accessible through a 3rd party exposed API depicted as 528 in Fig. 5.
  • the edge node activation module 508 corresponds to a collection of software libraries and corresponding APIs. It is intended that developers can also use the software libraries and APIs to efficiently solve the fundamental challenge of networking nodes in the new hyper-connected and highly mobile distributed edge computing world.
  • the edge node activation module 508 can be delivered in a heterogeneous environment, regardless of the OS, manufacturer, and connected network associated with any edge cloud computing device.
  • the edge node activation module 508 can run (be executed) on any PC, server, mobile device, fixed gateway, autonomous car gateway, connected TV or even in the cloud, depending on the application use case. As described earlier, once the edge node activation module 508 is loaded onto an edge device, it becomes an edge cloud node.
  • the edge node activation module 508 resides between the operating system layer 506 and the end-user applications (e.g. 524, 526). There are several microservices (e.g. 518, 520, 522) available from the edge node 500 and the edge node activation module 508 provides the ability for 3rd parties to develop their own microservices.
  • the edge node activation module 508 also provides a microservice runtime environment. As described earlier, by incorporating the edge node activation module 508, computing devices are transformed into intelligent network nodes or edge nodes, that can form one or more clusters.
  • the edge node activation module 508 takes away complexity of networking among distributed edge cloud nodes thereby enabling developers to focus on their solution in a microservice model even on small mobile devices (e.g. 314).
  • Nodes in a cluster are configured to take a specific role or combination of roles, depending on physical hardware capability, OS, attached network connectivity, types of microservices running on each node and usage/privacy policy settings. Some roles are assigned through a process of election, considering other nodes within the cluster at any given time, while others are assigned through a process of selection. As described earlier, one of the most important roles in a cluster is that of the supernode (or a super edge cloud computing device), to which a node is elected by all member nodes. In the trivial case of a single-node cluster, a node serves as its own supernode. A supernode is configured to be the bearer of information regarding a cluster and all its member nodes.
  • the supernode is configured to maintain information related to other nodes, microservices deployed on each node, as well as historical artifacts from the operation of the edge node activation module 508.
  • the supernode is configured to assign roles such as link-local proxy and link-local cache to other nodes in the cluster.
  • a link-local proxy node supports communication in cases where cluster nodes reside behind a firewall.
  • a node with large amounts of physical storage can be assigned the link-local cache role for the cluster.
  • the edge node activation module 508 supports a unique user and multiple microservices and application providers (otherwise called “tenants”). In other words, even if a user has loaded multiple applications on a mobile device all of which employ the edge node activation module 508, functionality and capabilities are related to (and authorized for) that user.
  • the edge node activation module 508 provides discovery, connection, and communication among edge devices, both at physical and microservice levels. For example, the edge node activation module 508 provides for node and service discovery by auto-discovery and auto-routing for all nodes with instances of edge node activation module in local and global network(s).
  • the edge node activation module 508 provides for node and service connection in an ad-hoc edge cloud where nodes form a self-organizing cluster.
  • the edge node activation module 508 is configured to provide a light container to manage the one or more microservices by (remotely/locally) loading, running, and managing microservice instances.
  • the edge node activation module 508 includes an edge web server for providing a microservices runtime environment.
  • nodes with the edge node activation module 508 are configured to discover, connect, and communicate with each other.
  • discovery is a “filtered search” operation, based on one or more scopes. A scope may correspond to a user account, i.e. nodes registered under the same account ID.
  • the edge node activation module 508 employs the OAuth 2.0 based OpenID standard through a third-party Identity SaaS provider (used as the identity service 410 provided by the backend services module 402).
  • the scope may also correspond to a network, such as nodes that are members of the same link-local cluster network.
  • the link-local identifier in this case is formed by combining the public IP address and the link-local network address.
  • the scope may also correspond to proximity, such as nodes within physical proximity of each other.
  • the discovery process executed by the edge node activation module 508 can use any combination of the above described scopes.
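  • The following non-limiting sketch illustrates discovery as a filtered search over the scopes described above (user account, link-local network, and proximity); the node attributes and the way the link-local identifier is formed from the public IP address and link-local network address are illustrative assumptions.

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class NodeRecord:
    node_id: str
    account_id: str
    public_ip: str
    link_local_net: str   # e.g. "192.168.1.0/24"
    location: tuple       # (x, y) coordinates in arbitrary units, illustrative

def link_local_id(record):
    """Form a link-local cluster identifier from the public IP and link-local network address."""
    return record.public_ip + "/" + record.link_local_net

def discover(records, account_id=None, link_local=None, near=None, radius=None):
    """Filtered search over any combination of account, network, and proximity scopes."""
    result = list(records)
    if account_id is not None:
        result = [r for r in result if r.account_id == account_id]
    if link_local is not None:
        result = [r for r in result if link_local_id(r) == link_local]
    if near is not None and radius is not None:
        result = [r for r in result
                  if hypot(r.location[0] - near[0], r.location[1] - near[1]) <= radius]
    return result
```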
  • Microservices on each of these nodes and across clusters can use the edge cloud to form their own service mesh by calling each other via APIs.
  • nodes and microservices running on nodes have unique identifiers, such that a specific microservice (e.g. a drive) on a specific node is addressable uniquely, locally, and globally.
  • the edge node activation module 508 provides a microservice runtime environment (light container) to expose the services associated with microservices through a common embedded Webserver. API endpoints for each service are accessible from all other nodes in an edge cluster through the API gateway which is part of the net module 510.
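  • As a non-limiting sketch of how a common embedded web server could expose microservice endpoints behind a local API gateway, the following example uses Python's standard http.server; the route names and handler functions are illustrative assumptions and do not represent the disclosed net module 510.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Illustrative local microservices registered with the gateway
def drive_handler(path):
    return {"service": "drive", "path": path, "status": "ok"}

def beam_handler(path):
    return {"service": "beam", "path": path, "status": "ok"}

ROUTES = {"/drive": drive_handler, "/beam": beam_handler}

class GatewayHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # route the request to the first registered microservice whose prefix matches
        for prefix, handler in ROUTES.items():
            if self.path.startswith(prefix):
                body = json.dumps(handler(self.path)).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)
                return
        self.send_response(404)
        self.end_headers()

if __name__ == "__main__":
    # Other nodes in the cluster would reach these endpoints through the API gateway
    HTTPServer(("0.0.0.0", 8080), GatewayHandler).serve_forever()
```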
  • the edge node activation module 508 complements container daemons (or Docker®) in two different ways. In environments (e.g. Linux®) that can run container daemons, the edge node activation module 508 provides functionalities to manage ad-hoc clusters of edge nodes as described earlier. In environments that cannot run container daemons, the edge node activation module 508 provides additional “light” container capabilities with the ability to download, deploy and operate microservices.
  • the embedded Webserver (e.g. 516) provides a subset of container management (e.g. Docker®) APIs with one or more constraints.
  • the one or more constraints include use of a specific language based on the underlying OS (Java for Android®, Objective-C for iOS®, etc.).
  • the one or more constraints further include use of the web server provided by the edge node activation module 508 by the microservices that run on the “light” container environment (provided by edge node activation module 508) to optimize the usage of limited resources on the underlying platform.
  • the edge node activation module 508 allows developers to build and host microservices on any node.
  • the disclosed cloud architecture also offers various microservices, utilizing the edge node activation module 508, to speed up application development and enable developers to immediately take advantage of the distributed edge cloud platform.
  • a drive microservice abstracts access to storage available on edge nodes, and distributed file management can be provided via a popular API.
  • a beam microservice is provided that beams content from a node to node(s) and/or to service(s), in a peer-to-peer, one-to-one and one-to-many fashion.
  • the edge node activation module 508 implements a sidecar pattern that allows an application to be decomposed into components built using different technologies. Using the sidecar pattern, any component of an application can be built and deployed in isolation. Latency is reduced due to the proximity of the sidecar to the application, and components or functionality can be added without changing the application itself.
  • the sidecar pattern abstracts many of the complexities of dealing with the service mesh. This is possible in the disclosed edge cloud computing architecture since many of these complexities are independent of the type of microservices deployed across the edge cloud. However, the sidecar pattern may not hide the distributed nature of the network. As an example, an API gateway or security token management may be built using a sidecar pattern.
  • the API gateway is part of the net module 510 within the edge node activation module 508.
  • the API gateway makes the API end-points for each service accessible from all other nodes in a cluster.
  • the edge node activation module 508 provides functionalities that abstract the complexity of dealing with other microservices in different clusters.
  • edge node security becomes a crucial aspect of how microservices communicate. Certain elements like firewalls and network partitioning are very common in the central cloud but may not generally exist on the edge. Therefore, it may be necessary to handle multiple levels of security. For instance, on the link-local cluster, it is not possible to use https because nodes in the cluster do not have domain names. The communication between nodes within the same link-local network is therefore encrypted. In addition, the API of each microservice is protected via tokens. Generally, the edge node activation module 508 runs in a trustless network environment. Therefore, it cannot be assumed that firewalls protect the microservices running on edge nodes. In an embodiment, dealing with having a valid and non-expired token is abstracted by the sidecar pattern.
  • user payload may need to be encrypted so that it is only visible to authorized parties.
  • acquiring the key, encrypting, and decrypting of user payload are also abstracted by the sidecar.
  • routing to the proper node is a complex operation that requires dealing with the supernode and a link-local proxy node.
  • the sidecar hides this complexity from the developer of the microservice and the developers only need to invoke the appropriate microservice within the cluster. Distributed systems require retry mechanisms to ensure fault tolerance.
  • the sidecar handles retry calls and retry strategies. Developers can focus on developing their microservice rather than on the complexity of distributed systems. Similar to backend technologies like Istio which helps developers handle a service mesh, the edge node activation module 508 handles the service mesh at the edge and deals with all the constraints of using edge devices as servers.
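  • A minimal, non-limiting sketch of such a sidecar-style call wrapper is shown below: it injects a valid token and retries transient failures so that the microservice developer does not deal with these concerns directly. The retry policy, token source, and target URL are illustrative assumptions, and the requests library is assumed to be available.

```python
import time
import requests  # third-party HTTP client, assumed available

class Sidecar:
    """Illustrative sidecar: injects a bearer token and retries transient failures."""

    def __init__(self, token_provider, max_retries=3, backoff_seconds=0.5):
        self._token_provider = token_provider  # callable returning a fresh token
        self._token = None
        self._max_retries = max_retries
        self._backoff = backoff_seconds

    def call(self, method, url, **kwargs):
        base_headers = kwargs.pop("headers", {})
        for attempt in range(self._max_retries):
            if self._token is None:
                self._token = self._token_provider()
            headers = dict(base_headers, Authorization="Bearer " + self._token)
            try:
                resp = requests.request(method, url, headers=headers, timeout=5, **kwargs)
            except requests.RequestException:
                time.sleep(self._backoff * (attempt + 1))  # transient network error: back off and retry
                continue
            if resp.status_code == 401:
                self._token = None  # token rejected: refresh on the next attempt
                continue
            return resp
        raise RuntimeError("request to " + url + " failed after retries")

# Illustrative usage:
# sidecar = Sidecar(lambda: "token-from-server-token-service")
# sidecar.call("GET", "http://node-2.local:8080/drive/files")
```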
  • Fig. 6 shows an exemplary backend microservice distribution 600 in accordance with an embodiment.
  • the backend system of the edge cloud computing platform is designed and deployed using a microservice based architecture as shown in Fig. 6.
  • each element is composed of a group 602 of geo-deployed clusters of microservices 604, 606, 608, 610, and 612 that are linked to a geo-distributed data store 614.
  • these microservices can include, for example, the discovery service (e.g. 406), the registry service (e.g. 418), the server token service (e.g. 416), and the identity service (e.g. 410), each linked to the data store (e.g. 614).
  • each microservice cluster is geo-independent.
  • the signaling service (e.g. 408) is used to provide APIs to launch SEP (e.g. 412) and BEP (e.g. 414) components.
  • the signaling service 408 keeps track of the existing BEP 414 and SEP 412 in a cluster of the signaling service 408 and provides information needed to properly load balance the BEP and SEP.
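  • A small, non-limiting sketch of how such tracking and load balancing might look is given below; the data structures and the fewest-concurrent-connections selection rule are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Endpoint:
    endpoint_id: str
    kind: str        # "SEP" or "BEP"
    cluster_id: str
    concurrent_connections: int

class SignalingRegistry:
    """Tracks deployed SEP/BEP instances per cluster for load balancing."""

    def __init__(self):
        self._endpoints = []

    def register(self, endpoint):
        self._endpoints.append(endpoint)

    def least_loaded(self, cluster_id, kind):
        candidates = [e for e in self._endpoints
                      if e.cluster_id == cluster_id and e.kind == kind]
        if not candidates:
            return None  # caller would then ask for dynamic deployment of a new endpoint
        return min(candidates, key=lambda e: e.concurrent_connections)

# Illustrative usage:
# registry = SignalingRegistry()
# registry.register(Endpoint("sep-1", "SEP", "cluster-1", 12))
# registry.register(Endpoint("sep-2", "SEP", "cluster-1", 3))
# registry.least_loaded("cluster-1", "SEP")  # -> sep-2
```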
  • the signaling service 408 is independently geo distributed.
  • the geo-deployed clusters of microservices can correspond to respective clusters of edge cloud computing devices.
  • the microservices hosted in edge cloud computing devices in a cluster can form a cluster of microservices available to the edge nodes in the cluster.
  • the geo-deployed clusters of microservices can correspond to multiple clusters of edge cloud computing devices.
  • the microservices hosted in edge cloud computing devices in different clusters (e.g. 2 clusters) can form a cluster of microservices available to the edge nodes across those clusters.
  • Fig. 7 shows an exemplary edge cloud architecture 700 in accordance with an embodiment.
  • the value of decentralized cloud comes from distribution of services over the entire range, from central cloud (e.g. 306) all the way to the edge nodes.
  • Fig. 7 shows a backend services module 702 that is configured to provide one or more backend services that include a discovery service 704, a signaling service 706, an identity service 712, a server token service 714 and a registry service 716.
  • the signaling service 706 is configured to provide a signaling endpoint (SEP) 708 and a bearer endpoint (BEP) 710.
  • SEP signaling endpoint
  • BEP bearer endpoint
  • the one or more backend services are hosted on cloud web services 718.
  • the disclosed cloud architecture allows collaboration between the backend services module 702 and the one or more nodes in the cloud to form one or more clusters.
  • Fig. 7 shows 3 clusters: a network cluster 1 (726), network cluster 2 (732) and a proximity cluster 3 (736).
  • the network cluster 1 (726) includes 3 nodes: node 1 which is a supernode (720), node 2 (722) and node 3 which is a network proxy node (724).
  • the network cluster 2 (732) includes 2 nodes: node 5 which is a supernode and network proxy node (728) and node 6 which is a cache proxy node (730).
  • the proximity cluster 3 (736) includes 2 nodes: node 4 (734) and node 6 which is a cache proxy node 730.
  • each of these nodes includes an edge node activation module (e.g. 426, 508).
  • the above-mentioned clusters are formed based on one or more scopes as described earlier. For instance, the network clusters 1 and 2 (726, 732) are formed based on the network scope, while the proximity cluster 3 (736) is formed based on the proximity scope.
  • a given node can be a part of 2 clusters; for example, node 6, which is a cache proxy node (730), is a part of both network cluster 2 (732) and proximity cluster 3 (736).
  • Fig. 8 shows exemplary embodiment of a system 800 having discovery, connection and communication for two edge cloud computing devices belonging to same user ID. Similar to Fig. 7, Fig. 8 depicts a backend services module 802 configured to provide one or more backend services that include a discovery service 804, a signaling service 806, an identity service 812, a server token service 814, a registry service 816 hosted on cloud web services 818.
  • the signaling service 806 is configured to dynamically deploy resources such as a signaling endpoint (SEP) 808 and a bearer endpoint (BEP) 810.
  • Fig. 8 also shows 2 clusters: a network cluster 1 (826) and network cluster 2 (832).
  • the network cluster 1 (826) includes 3 nodes: node 1 which is a supernode (820), node 2 (822) and node 3 which is a network proxy node (824).
  • the network cluster 2 (832) includes 2 nodes: node 5 which is a supernode and network proxy node (828) and node 6 which is a cache proxy node (830).
  • node 2 shown as 822 in network cluster 1 and node 6 shown as 830 in network cluster 2 belong to the same user (account) and have already registered with their respective link-local network clusters. It is to be noted that these two nodes although belonging to the same user account are part of two different clusters.
  • the disclosed edge architecture provides the SEP 808 as a reachable endpoint for node 6 (830), that it can use to communicate with node 2 (822) as if it were directly accessible. The communication between these two nodes is performed in an inter-cluster fashion using the SEP 808. After the signaling is established, the BEP 810 is provided for the bulk of the exchange among the two nodes 822 and 830.
  • the flexibility of separating signaling, and bearer channels allows the creation of “service-specific” BEPs that are not restricted to HTTP based service delivery.
  • the process of discovery, connection and communication amongst nodes includes the first step of sending discovery requests (by a new node) to the supernode (e.g. 820) for nodes that belong to a scope (e.g. network).
  • the process further includes the step of obtaining a list of nodes together with appropriate signaling information from the supernode.
  • the process further includes sending requests to remote nodes (in different clusters) via the SEP (e.g. 808).
  • the process also includes having remote nodes request BEP (e.g. 810) for providing a service.
  • the process concludes with the step of connecting and communicating to consume the service through the BEP provisioned.
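  • The steps above can be sketched end to end as in the following non-limiting example, in which the supernode lookup, SEP relay, and BEP provisioning are replaced by in-memory stubs; all names and URLs are illustrative assumptions.

```python
class StubSupernode:
    """Illustrative supernode holding signaling information for nodes in its scope."""
    def __init__(self, nodes):
        self._nodes = nodes  # {node_id: {"scope": ..., "sep_url": ...}}

    def discover(self, scope):
        return {nid: info for nid, info in self._nodes.items() if info["scope"] == scope}

class StubSEP:
    """Illustrative signaling endpoint relaying a request to a remote node."""
    def relay(self, target_node, request):
        print("SEP: relaying", request, "to", target_node)
        # the remote node requests a BEP for providing the service
        return {"bep_url": "https://bep.example.com/" + target_node}

def consume_remote_service(local_node, supernode, sep, scope, request):
    # steps 1-2: discover nodes in scope and obtain signaling information from the supernode
    peers = supernode.discover(scope)
    target = next(iter(peers))
    # step 3: send the request to the remote node via the SEP
    signaling = sep.relay(target, request)
    # steps 4-5: connect and communicate to consume the service through the provisioned BEP
    print(local_node, "consuming service through", signaling["bep_url"])

# Illustrative usage:
# sn = StubSupernode({"node-2": {"scope": "account-42", "sep_url": "https://sep.example.com"}})
# consume_remote_service("node-6", sn, StubSEP(), "account-42", "GET /drive/files")
```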
  • One of the major advantages of the edge node activation module 426 is the ability to develop frontend applications on typical client devices using the microservice concept and architecture.
  • the move to microservices is triggered by three major trends.
  • microservices implement and expose RESTful APIs (HTTP REST based).
  • a set of easy-to-use APIs can hide internal complexities and facilitate communication between microservices within a system.
  • deployment scripts (e.g. Ansible®) and a pipeline infrastructure (e.g. Jenkins®) enable automated deployment, which can help build flexible systems by providing the ability to decide where deployments happen.
  • the ability to request IT resources such as CPU, storage, and network through simple APIs in a near real-time fashion makes creation of large and scalable systems more feasible.
  • the transition to microservices and edge cloud may require development teams to work more closely because it blends different knowledge and expertise together. For instance, it may require the skills of backend developers.
  • In order to support billions of small clients (e.g. IoT), there is a significant burden on the central cloud. On the one hand, too many resources may remain idle waiting for signals from clients on the edge.
  • On the other hand, fulfilling the performance demands of an application may not be feasible. For instance, deploying a backend system in the US to support a client in Europe may not satisfy latency constraints for many applications. Therefore, backend developers need to better leverage client resources to help support these new demands. They may be forced to offload many of the functions closer to the application even if it requires deploying part of the backend system in the “client” device running the application.
  • Yet another expertise needed for the transition is that of IT/DevOps.
  • IT teams have been responsible for figuring out and managing the infrastructure where solutions are to be deployed. They have to consider many constraints and parameters such as deployment and operations costs, scalability, and elasticity.
  • the scope of the cloud infrastructure is a single data center, and the main task is to address computing and networking resource constraints.
  • the scope should be expanded to deploying IT resources at the right time and the right place (generally beyond the scope of a data center). New scopes such as proximity, account and link-local presence need to be considered to ensure efficient deployment and operations.
  • frontend applications used to perform simple tasks such as inputting and sending information to the backend and/or rendering information coming from the backend.
  • Most of the complex functions are generally relegated to the backend servers. However, increasingly complex functions are being performed on the client device, such as caching, augmented reality (AR), image recognition, authorization, and authentication.
  • frontend applications are becoming larger and more complex (e.g. the Facebook® app on iOS® has tripled in size to over 300 Mbytes in less than 2 years). Therefore, there is an opportunity to transition from a monolithic frontend app design to a microservice architecture and decompose the frontend app subsystem into microservices. The app can then seamlessly call microservices that are local on the device along with those running on the backend (hosted on central cloud).
  • One of the many consequences of a microservices based system is the choice between multi-tenancy and single-tenancy.
  • a major benefit of the public cloud is multi-tenancy, where multiple applications can share public cloud resources and the microservices deployed on them.
  • certain applications may have to deploy microservices that need to remain as single tenant for a variety of reasons such as security or data privacy. Therefore, a hybrid approach where one can choose whether a microservice is multi-tenant or single tenant may be a better approach.
  • another consequence is the choice of whether a microservice is single-user or multi-user.
  • multi-user microservices are more desirable. However, this may not always be the case. For instance, if a microservice is to always serve a single user within a “client device” or a pair of “client devices” where only one acts as a client and the other as a server, a multi-user platform may be inefficient. Therefore, a hybrid approach where one can choose whether a microservice is multi-user or single-user may be a better approach.
  • the edge node activation module (e.g. 426) may be developed from scratch to provide flexibility and ease of implementation of a hybrid approach to benefit backend, frontend, and DevOps.
  • the benefits can include simplicity, flexibility, repeated deploy-ability and scalability of the development as will be described with reference to Fig. 9 and Fig. 10.
  • Fig. 9 shows an exemplary edge cloud architecture 900 implemented using serverless microservices in a sidecar pattern in accordance with an embodiment.
  • the architecture 900 includes a client device 902 running a 3rd party application or a client application 904.
  • the client device 902 includes an edge node activation module 922 and one or more locally hosted microservices 926, 928, and 930.
  • the edge node activation module 922 includes an API gateway 924 communicating with an API gateway 908 hosted in the central cloud 912 or in the cloud computing device 914.
  • the edge node activation module 922 receives a request from the client application 904 and determines a type of one or more microservices required to service the request.
  • the API gateway 924 sends the request to the appropriate microservice that is instantiated or launched.
  • the locally hosted microservices may be loaded from a remote device or can be dynamically instantiated (in runtime) based on the demand from the client application 904.
  • the launched microservice (e.g. 926) services the request and sends a response back to the client application 904 through the API gateway 924.
  • the API gateway 924 sends an http/https request 906 to the API gateway 908.
  • the API gateway 908 launches an appropriate microservice (e.g. 916, 918, 920) that is globally or centrally hosted on the central cloud 912 to service the http/https request 906.
  • the API gateway 908 sends an http/https response 910 to the API gateway 924.
  • the client application 904 can take advantage of locally hosted microservices that are exposed by the edge node activation module 922 and also the globally hosted microservices that are exposed by the API gateway 908.
  • backend developers can easily, where plausible, transition from a multi-user microservice to a single-user microservice that resides on the closest resource to the application i.e., on the same resource where the frontend application is running.
  • the resource exists as long as the application exists, and the microservice exists only if the application makes a request through the API gateway provided by the edge node activation module. This reduces the complexity of developing multi-user microservices and brings the serverless microservice model to all kinds of edge resources beyond the central cloud.
  • since serverless microservices (e.g. 926) expose their RESTful APIs, microservices can be utilized cross-domain.
  • the IT/DevOps teams will have a smaller number of microservices to manage in the central cloud, which helps reduce complexity and operational cost.
  • when microservices reside closer to the application that needs them (for instance on the client device 902), horizontal scalability with minimal or even no hosting cost is achieved.
  • the complexity is also reduced because there is no need for different infrastructure knowledge since resources at the edge appear the same (albeit with different constraints) as the resources on the central cloud.
  • the frontend application developers can follow backend development methodologies and decompose the complexity of the frontend application into serverless microservices and a sidecar pattern as illustrated in Fig. 9.
  • Developing applications with the edge node activation module (e.g. 426, 508, 922) provides the flexibility for the developer to decide where an application is active and decide what microservices need to run where, within a cluster of nodes: on central cloud, on a local device, or another device within the cluster.
  • the developer has more options to break down a client application, usually written as a monolithic block, into microservices and enjoy all the benefits of microservice architecture common in backend development: scalability, flexibility, choice of technologies, isolated impact on other modules or functions, ease of deployment, etc.
  • Fig. 10 shows an exemplary serverless microservice architecture 1000 for applications taking advantage of microservices hosted locally and globally in accordance with an embodiment.
  • the client application can make requests not only to the API gateway in the central cloud but also locally to the same device.
  • the application can take advantage of microservices hosted locally for local functions and globally on the central cloud for those functions that cannot be hosted locally.
  • This concept can be expanded to multiple devices and edge nodes as shown in Fig. 10 for the example of client to client communication.
  • the architecture 1000 includes two client devices 1002 and 1038 running 3rd party applications or client applications 1004 and 1040, respectively.
  • the client devices 1002 and 1038 include an edge node activation module 1022 and 1042, respectively.
  • Each of the client devices locally host one or more microservices.
  • the client device 1002 hosts microservices 1026, 1028, and 1030.
  • the client device 1038 hosts microservices 1046, 1048, and 1050.
  • the edge node activation module 1022 includes an API gateway 1024 configured to communicate with an API gateway 1008 hosted in the central cloud 1012.
  • the edge node activation module 1022 receives a request 1020 from the client application 1004 and determines a type of one or more microservices required to service the request.
  • the API gateway 1024 sends the service request 1032 to the appropriate microservice that is instantiated or launched.
  • the locally hosted microservices may be loaded from a remote device or can be instantiated based on the demand from the client application 1004.
  • the microservice (e.g. 1026) services the request and sends a response back to the client application 1004 through the API gateway 1024.
  • the API gateway 1024 sends an http/https request 1006 to the API gateway 1008.
  • the API gateway 1008 launches an appropriate microservice (e.g. 1014, 1016, 1018) that is globally or centrally hosted on the central cloud 1012 to service the request.
  • the API gateway 1008 sends an http/https response 1010 to the API gateway 1024.
  • the edge node activation module 1022 determines that the type of one or more microservices required to service the request 1020 corresponds to microservices hosted on another client device (e.g. 1038).
  • the edge node activation module 1022 enables a direct communication with the API gateway of the edge node activation module 1042.
  • the edge node activation module 1022 enables a direct microservice to microservice communication between 1030 and 1046.
  • the microservice 1030 sends a data request 1034 to the microservice 1046.
  • the microservice 1046 services the data request and sends a response 1036 to the microservice 1030.
  • the client to client communication can happen directly between edge devices/client devices (or through servers in central cloud) as described above. This gives the developer an opportunity to optimize all aspects of deployment, such as, cloud hosting costs, latency, bandwidth usage, data privacy and all other benefits that come with the microservice architecture for typical backend functions.
  • the edge node activation module benefits the developers by seamlessly expanding the notion of on-demand IT resources to the edge by using the same models and APIs. It further expands the notion of clustering by adding new cluster scopes: user account, proximity, and network. It further expands on the notion of service mesh by providing a sidecar pattern at the edge to handle the API gateway, security, and routing for communication with other microservices, whether locally on the edge, globally, or in the central cloud.
  • developing applications with the edge node activation module (e.g. 426, 508, 922, 1022) also lets solution developers make the decision on where the data resides based on the solution business logic. Consequently, what is disclosed herein is a pragmatic approach for building an edge cloud with orders of magnitude more processing power, storage, and memory, leveraging edge resources that are currently unused or seriously underutilized. This can create a cloud fabric that is orders of magnitude larger, cheaper, faster, and can provide better data privacy for all consumer and enterprise applications.
  • Fig. 11 shows an exemplary embodiment of a method 1100 of providing cloud computing infrastructure or a platform.
  • the edge cloud computing infrastructure is implemented in a communication network (e.g. edge cloud computing network 300) that includes one or more edge cloud computing devices (e.g. 302, 304) in communication with a server computing device (e.g. 312).
  • the method includes executing as in step 1102, by a first edge cloud computing device (e.g. 404, 500), an edge node activation module (e.g. 426, 508).
  • the edge node activation module is application-level software downloadable by the first edge cloud computing device.
  • the method further includes discovering dynamically as in step 1104, by the first edge cloud computing device, other edge cloud computing devices (e.g. 310) independent of the operating system and network associated with the other edge cloud computing devices.
  • the method further includes exposing as in step 1106, by the first edge cloud computing device, resource availability, capability, and functionality of the discovered other edge cloud computing devices (e.g. 310).
  • the method further includes forming and organizing as in step 1108, by the first edge cloud computing device, one or more clusters (e.g. 722, 732) of the discovered other edge cloud computing devices.
  • the method also includes communicating as in step 1110, by the first edge cloud computing device, within the one or more clusters and across the one or more clusters.
  • the method further includes, subsequent to executing the edge node activation module (e.g. 426), searching, by the first edge cloud computing device, for a super edge cloud computing device (or a supernode).
  • the super edge cloud computing device is configured to manage global discovery.
  • the method further includes in an event of not finding a super edge cloud computing device during the searching, designating, by the first edge cloud computing device, itself as the super edge cloud computing device.
  • the method includes communicating, by the first edge cloud computing device, global discovery of its existence and receiving, by the first edge cloud computing device, a list of one or more edge cloud computing devices within a scope of the first edge cloud computing device.
  • the method further includes receiving, by the first edge cloud computing device, a request for registration from one or more edge cloud computing devices entering subsequently in the one or more clusters.
  • the method also includes transmitting, by the first edge cloud computing device, to the registered one or more edge cloud computing devices a list of one or more other edge cloud computing devices within the scope of the first edge cloud computing device.
  • Fig. 12 shows an exemplary embodiment of a method 1200 of providing cloud computing infrastructure or a platform.
  • the edge cloud computing infrastructure is implemented in a communication network (e.g. edge cloud computing network 300) that includes one or more edge cloud computing devices (e.g. 302, 304, 500, 902, 1002) in communication with a server computing device (e.g. 312).
  • the method is performed by a first edge cloud computing device (e.g. 902, 1002) and includes determining, as in step 1202, a type of microservice corresponding to a request from a client application (e.g. 904, 1004) running in the first edge cloud computing device.
  • the method further includes determination of whether the type of microservice is global as in step 1204.
  • if the type is global, the request from the client application can be serviced only by a globally or centrally hosted microservice (e.g. 916, 918, 920, 1014, 1016).
  • in that case, the first edge cloud computing device sends an http/https request (e.g. 906, 1006) to the API gateway (e.g. 908, 1008) in the central cloud (912, 1012) as in step 1206.
  • the method further includes launching the globally hosted microservice (e.g. 916, 1014) and returning a response (e.g. http/https response 910, 1010) to the first edge cloud computing device as in step 1208.
  • the method further includes a determination of whether the type of microservice corresponding to the request from the client application is local or not, as in step 1210. If yes, the method further includes processing the request by launching a locally hosted microservice (e.g. 1026, 926) as in step 1212. If not, the method includes sending the request directly to a microservice hosted in another (second) edge cloud computing device (e.g. 1038) as in step 1214. The method further includes launching a microservice (e.g. 1046) hosted in the other (second) edge cloud computing device (1038) and returning a response to the request.
  • the edge node activation module enables the edge cloud computing device or client device to dynamically create or instantiate microservices locally.
  • the edge node activation module also discovers the other edge nodes present in a given cluster or across clusters and exposes one or more microservices hosted in the discovered edge nodes. This way, any edge node can act as a “server” or a “client” and a given request from client application can be serviced either locally or globally or by other edge nodes as per the demand (type) of the service request.
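  • A condensed, non-limiting sketch of the routing decision in method 1200 is shown below: depending on the determined microservice type, the request is handled locally, forwarded to the API gateway in the central cloud, or sent directly to a microservice on another edge device. The type values and handler stubs are illustrative assumptions.

```python
def handle_locally(request):
    return "locally hosted microservice handled: " + request    # e.g. steps 1210/1212

def forward_to_central_cloud(request):
    return "central cloud API gateway handled: " + request      # e.g. steps 1204-1208

def forward_to_peer_node(request, peer):
    return peer + " hosted microservice handled: " + request    # e.g. step 1214

def route_request(request, service_type, peer="second-edge-device"):
    """Route a client application request based on the determined microservice type."""
    if service_type == "global":
        return forward_to_central_cloud(request)
    if service_type == "local":
        return handle_locally(request)
    return forward_to_peer_node(request, peer)

# Illustrative usage:
# route_request("GET /ar/overlay", "local")
# route_request("GET /profile", "global")
# route_request("GET /drive/files", "remote", peer="node-1038")
```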
  • Embodiments of a server computing device are disclosed.
  • the server computing device is configured for operation in a communication network that includes one or more edge cloud computing devices in communication with the server computing device.
  • the server computing device includes a backend services module configured to provide one or more backend services to support the one or more edge cloud computing devices.
  • the one or more backend services include a discovery service configured to provide knowledge to form one or more clusters of the one or more edge cloud computing devices, wherein each of the one or more clusters comprise at least one super edge cloud computing device.
  • the backend services further include a signaling service configured to dynamically deploy a Signaling Endpoint (SEP) and a Bearer Endpoint (BEP) for the one or more clusters upon receiving a request from the discovery service.
  • the backend services further includes a server token service configured to deliver a token to a microservice, in a first edge cloud computing device in the one or more clusters, making requests to another microservice, in a second edge cloud computing device in the one or more clusters.
  • the one or more backend services further include an identity service configured to create and maintain authentication profiles of the one or more edge cloud computing devices.
  • the one or more backend services further include a registry service configured to maintain a list of all microservices provided in the one or more clusters and associated cluster information.
  • the registry service is further configured to maintain cluster knowledge of the one or more clusters to allow the one or more clusters to be self-managed for configuration purposes.
  • the registry service is further configured to provide geo-located lists of clusters and associated configuration details to be used by the one or more backend services.
  • the knowledge to form one or more clusters includes profiles of the one or more clusters, details of computing resources associated with the one or more edge cloud computing devices forming the one or more clusters, status and/or location of the one or more edge cloud computing devices forming the one or more clusters, one or more microservices available on the one or more edge cloud computing devices forming the one or more clusters, end-to-end network topology to reach each edge cloud computing device forming the one or more clusters, and reachability of the one or more clusters.
  • the discovery service is further configured to provide information associated with resources available in the communication network to dynamically deploy the one or more microservices on any available edge cloud computing device within the communication network in real-time.
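  • A non-limiting sketch of the kind of per-node inventory the discovery service could maintain, together with a query that selects an available node for deploying a microservice, is given below; the record fields and the free-CPU selection criterion are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class NodeInventoryRecord:
    node_id: str
    cluster_id: str
    status: str                                  # e.g. "online" or "offline"
    location: str
    services: list = field(default_factory=list)
    cpu_free_percent: float = 0.0
    route: list = field(default_factory=list)    # end-to-end topology to reach the node

class DiscoveryInventory:
    """Holds node and cluster profiles for real-time service deployment decisions."""

    def __init__(self):
        self._records = {}

    def upsert(self, record):
        self._records[record.node_id] = record

    def candidate_for_deployment(self, cluster_id, required_service=None):
        """Pick an online node in the cluster with the most free CPU."""
        candidates = [r for r in self._records.values()
                      if r.cluster_id == cluster_id and r.status == "online"
                      and (required_service is None or required_service in r.services)]
        return max(candidates, key=lambda r: r.cpu_free_percent, default=None)
```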
  • the identity service is configured to generate and maintain a token for one or more of: an edge node activation module in each edge cloud computing device, a microservice using the edge node activation module, an application developer using the edge node activation module, and an end-user of an application supported by the edge node activation module.
  • the edge cloud computing device includes an edge node activation module configured to discover one or more other edge cloud computing devices based on a first set of parameters to establish a connection therebetween.
  • the edge node activation module is further configured to provide a microservice runtime environment to execute one or more microservices associated with the connection established between one or more edge cloud computing devices.
  • the edge node activation module is configured to discover an existence of the one or more edge cloud computing devices regardless of an operating system and/or network type associated with the one or more edge cloud computing devices.
  • the edge node activation module is further configured to discover capabilities and behavior associated with the one or more edge cloud computing devices and discover the one or more microservices supported by the one or more edge cloud computing devices.
  • the first set of parameters include a user account associated with each of the one or more edge cloud computing devices, a network associated with the one or more edge cloud computing devices, and a proximity of the one or more edge cloud computing devices.
  • the edge node activation module is further configured to dynamically form one or more clusters with the one or more edge cloud computing devices and communicate with the one or more edge cloud computing devices at a microservice level either directly or through other edge cloud computing devices across the one or more clusters.
  • the edge node activation module is further configured to connect with the discovered one or more edge cloud computing devices if the discovered one or more edge cloud computing devices chose to share data, services, and/or resources.
  • the edge node activation module is further configured to expose the one or more microservices through a common embedded web server.
  • one or more API endpoints for each microservice are accessible from the one or more edge cloud computing devices in a cluster through an API gateway.
  • the edge node activation module is further configured to provide flexible container capabilities based at least in part on the respective computing environments associated with the one or more edge cloud computing devices. The respective computing environments run a container daemon to download, deploy, and operate the one or more microservices.
  • the computing environment runs a container daemon to manage ad- hoc clusters of the one or more edge cloud computing devices.
  • the edge node activation module further includes a Webserver embedded within.
  • the Webserver is configured to provide container management APIs using specific language based on an operating system associated with the edge cloud computing device.
  • the edge node activation module further includes one or more software libraries and corresponding APIs.
  • Embodiments of a server computing device relate to a communication network that includes one or more edge cloud computing devices in communication with the server computing device.
  • the server computing device includes a backend services module configured to provide one or more services to support the one or more edge cloud computing devices.
  • the one or more backend services include a discovery service configured to provide knowledge to form one or more clusters of the one or more edge cloud computing devices. Each of the one or more clusters include at least one super edge cloud computing device (or a super node).
  • the one or more backend services further include a signaling service configured to dynamically deploy a Signaling Endpoint (SEP) and a Bearer Endpoint (BEP) for the one or more clusters upon receiving a request from the discovery service.
  • the one or more backend services further include an identity service configured to create and maintain authentication profiles of the one or more edge cloud computing devices.
  • the discovery service is configured to allow a new edge cloud computing device that is not part of the first cluster to register with the super edge cloud computing device corresponding to the first cluster.
  • the discovery service is further configured to allow each of the super edge cloud computing devices to register itself.
  • the knowledge to form one or more clusters includes profiles of the one or more clusters, details of computing resources associated with the one or more edge cloud computing devices forming the one or more clusters, status & location of the one or more edge cloud computing devices forming the one or more clusters, one or more services available on the one or more edge cloud computing devices forming the one or more clusters, end-to-end network topology to reach each edge cloud computing device forming the one or more clusters, and reachability of the one or more clusters.
  • the discovery service is further configured to provide information associated with resources available in the communication network to dynamically deploy the one or more services on any available edge cloud computing device within the communication network in real-time.
  • the signaling service is configured to dynamically deploy the Signaling Endpoint (SEP) and the Bearer Endpoint (BEP) based on a demand for computing resources within the one or more clusters.
  • the Signaling Endpoint (SEP) is used for signaling communication and the Bearer Endpoint (BEP) is used for data communications.
  • the dynamic deployment of the Signaling Endpoint (SEP) and the Bearer Endpoint (BEP) increases signaling bandwidth and data bandwidth for the one or more edge cloud computing devices in the one or more clusters.
  • the signaling service is further configured to dynamically deploy the Signaling Endpoint (SEP) and the Bearer Endpoint (BEP) based on one or more parameters.
  • the one or more parameters include time to go-live for the one or more services, number of concurrent connections in the one or more clusters, and one or more communication protocols associated with the one or more edge cloud computing devices in the one or more clusters.
  • the signaling service is further configured to dynamically deploy the Signaling Endpoint (SEP) and the Bearer Endpoint (BEP) on an available edge cloud computing device within the closest proximity of the one or more clusters.
  • the identity service is configured to generate and maintain a token for one or more of: an edge node activation module in each edge cloud computing device, a microservice using the edge node activation module, an application developer using the edge node activation module and an end-user of an application supported by the edge node activation module.
  • the identity service is configured to verify credentials and legitimacy of a token holder and authorize the token holder’s access to the one or more services provided by the backend services module.
  • Embodiments of a method of providing edge cloud computing infrastructure are disclosed.
  • the method is implemented in a communication network that includes one or more edge cloud computing devices in communication with a server computing device or a central cloud.
  • the method includes executing, by a first edge cloud computing device, an edge node activation module.
  • the method further includes discovering dynamically, by the first edge cloud computing device, other edge cloud computing devices independent of the operating system and network associated with the other edge cloud computing devices.
  • the method further includes exposing, by the first edge cloud computing device, resource availability, capability, and functionality of the discovered other edge cloud computing devices.
  • the method further includes forming and organizing, by the first edge cloud computing device, one or more clusters of the discovered other edge cloud computing devices.
  • the method also includes communicating, by the first edge cloud computing device, within the one or more clusters and across the one or more clusters.
  • the method includes, subsequent to executing the edge node activation module, searching, by the first edge cloud computing device, for a super edge cloud computing device (also referred to as a “supernode” in the ongoing description).
  • the super edge cloud computing device is configured to manage global discovery of nodes or edge cloud computing devices.
  • the method further includes designating, by the first edge cloud computing device, itself as the super edge cloud computing device.
  • the method further includes communicating, by the first edge cloud computing device, global discovery of its existence and receiving, by the first edge cloud computing device, a list of one or more edge cloud computing devices within a scope of the first edge cloud computing device.
  • the method further includes receiving, by the first edge cloud computing device, a request for registration from one or more edge cloud computing devices entering subsequently in the one or more clusters and transmitting, by the first edge cloud computing device, to the registered one or more edge cloud computing devices a list of one or more other edge cloud computing devices within the scope of the first edge cloud computing device and/or within the scope of the registered one or more edge cloud computing devices.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Computer And Data Communications (AREA)
  • Information Transfer Between Computers (AREA)
EP20878137.7A 2019-10-26 2020-10-26 Verfahren und system zur verteilten kanten-cloud-berechnung Pending EP4049413A4 (de)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201962926455P 2019-10-26 2019-10-26
US16/841,380 US20200322225A1 (en) 2019-04-05 2020-04-06 Method and system for distributed edge cloud computing
PCT/IB2020/060038 WO2021079357A1 (en) 2019-10-26 2020-10-26 Method and system for distributed edge cloud computing

Publications (2)

Publication Number Publication Date
EP4049413A1 true EP4049413A1 (de) 2022-08-31
EP4049413A4 EP4049413A4 (de) 2023-07-05

Family

ID=75619946

Family Applications (1)

Application Number Title Priority Date Filing Date
EP20878137.7A Pending EP4049413A4 (de) 2019-10-26 2020-10-26 Verfahren und system zur verteilten kanten-cloud-berechnung

Country Status (5)

Country Link
EP (1) EP4049413A4 (de)
JP (1) JP7426636B2 (de)
KR (1) KR20220091487A (de)
CA (1) CA3152892A1 (de)
WO (1) WO2021079357A1 (de)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113556390A (zh) * 2021-07-15 2021-10-26 深圳市高德信通信股份有限公司 一种分布式边缘计算系统
WO2023035147A1 (en) * 2021-09-08 2023-03-16 Siemens Aktiengesellschaft Data processing method of industry edge product and distributed computing protocol engine thereof
KR102553079B1 (ko) * 2021-10-19 2023-07-10 아콘소프트 주식회사 운영 자동화 기능을 가진 엣지 클라우드 기반 컴퓨팅 시스템
WO2023115522A1 (en) * 2021-12-24 2023-06-29 Huawei Technologies Co., Ltd. Systems and methods for enabling network-based reusable computing
CN114024967B (zh) * 2022-01-10 2022-03-25 广东电力信息科技有限公司 一种基于云边和边边协同架构的iaas数据处理系统及方法
KR20230136458A (ko) * 2022-03-18 2023-09-26 한국과학기술원 영상 분석을 위한 마이크로서비스 기반 엣지 디바이스 아키텍처
WO2024086008A1 (en) * 2022-10-20 2024-04-25 Fisher-Rosemount Systems, Inc. Authentication/authorization framework for a process control or automation system

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010191482A (ja) * 2009-02-13 2010-09-02 Fujitsu Ltd クライアントサーバシステム、クライアント装置および業務処理振分プログラム
US9887884B2 (en) * 2013-03-15 2018-02-06 Mentor Graphics Corporation Cloud services platform
US10893100B2 (en) * 2015-03-12 2021-01-12 International Business Machines Corporation Providing agentless application performance monitoring (APM) to tenant applications by leveraging software-defined networking (SDN)
EP3304285A1 (de) 2015-06-03 2018-04-11 Telefonaktiebolaget LM Ericsson (publ) Implantierter agent in einem ersten dienstbehälter zur ermöglichung eines umgekehrten proxy auf einem zweiten behälter
US10516672B2 (en) * 2016-08-05 2019-12-24 Oracle International Corporation Service discovery for a multi-tenant identity and data security management cloud service
US10489275B2 (en) * 2016-10-20 2019-11-26 Cisco Technology, Inc. Agentless distributed monitoring of microservices through a virtual switch
WO2018089417A1 (en) * 2016-11-09 2018-05-17 Interdigital Patent Holdings, Inc. Systems and methods to create slices at a cell edge to provide computing services
US10574736B2 (en) * 2017-01-09 2020-02-25 International Business Machines Corporation Local microservice development for remote deployment
DE112017006994T5 (de) * 2017-02-05 2019-10-17 Intel Corporation Provisioning and management of microservices
CN110832808B (zh) * 2017-06-09 2023-06-20 环球互连及数据中心公司 Method, storage medium, and computing system for a messaging service
US10303450B2 (en) * 2017-09-14 2019-05-28 Cisco Technology, Inc. Systems and methods for a policy-driven orchestration of deployment of distributed applications
CN111566620A (zh) * 2018-01-08 2020-08-21 赫尔环球有限公司 Distributed processing system and method for providing location-based services
US10735509B2 (en) * 2018-01-31 2020-08-04 Ca, Inc. Systems and methods for synchronizing microservice data stores
US11423254B2 (en) * 2019-03-28 2022-08-23 Intel Corporation Technologies for distributing iterative computations in heterogeneous computing environments

Also Published As

Publication number Publication date
JP2022554220A (ja) 2022-12-28
EP4049413A4 (de) 2023-07-05
CA3152892A1 (en) 2021-04-29
WO2021079357A1 (en) 2021-04-29
KR20220091487A (ko) 2022-06-30
JP7426636B2 (ja) 2024-02-02
CN114731296A (zh) 2022-07-08

Similar Documents

Publication Publication Date Title
US20210042160A1 (en) Method and system for distributed edge cloud computing
US20200322225A1 (en) Method and system for distributed edge cloud computing
EP4049413A1 (de) Method and system for distributed edge cloud computing
CN110462589B (zh) On-demand code execution in a local device coordinator
US9507630B2 (en) Application context transfer for distributed computing resources
KR102084104B1 (ko) End-to-end M2M service layer sessions
US10318550B2 (en) Systems and methods for autonomous resource discovery, management, and stitching
JP2019514118A (ja) Pool-based M2M service layer establishment through NFV
Shang et al. Breaking out of the cloud: Local trust management and rendezvous in named data networking of things
JPWO2020202126A5 (de)
NL2033580B1 (en) End-to-end network slicing (ens) from ran to core network for next-generation (ng) communications
EP3545408A1 (de) Localized device coordinator with on-demand code execution capabilities
Alamouti et al. Hybrid edge cloud: A pragmatic approach for decentralized cloud computing
US20230179522A1 (en) Executing workloads across multiple cloud service providers
US20230134683A1 (en) Memory interleaving coordinated by networked processing units
Li et al. 6G cloud-native system: Vision, challenges, architecture framework and enabling technologies
Benomar et al. A Stack4Things-based web of things architecture
US20230370416A1 (en) Exposure of ue id and related service continuity with ue and service mobility
US20230362683A1 (en) Operator platform instance for mec federation to support network-as-a-service
Porras et al. Dynamic resource management and cyber foraging
Drost et al. Zorilla: a peer‐to‐peer middleware for real‐world distributed systems
Pal Extending mobile cloud platforms using opportunistic networks: survey, classification and open issues
US11200331B1 (en) Management of protected data in a localized device coordinator
US20240147404A1 (en) Multi-access edge computing (mec) application registry in mec federation
WO2023081202A1 (en) Mec dual edge apr registration on behalf of edge platform in dual edge deployments

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20220316

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
A4 Supplementary search report drawn up and despatched

Effective date: 20230607

RIC1 Information provided on ipc code assigned before grant

Ipc: H04L 12/18 20060101ALI20230601BHEP

Ipc: G06F 9/455 20180101ALI20230601BHEP

Ipc: G06F 9/50 20060101ALI20230601BHEP

Ipc: G06F 9/44 20180101ALI20230601BHEP

Ipc: G06F 15/16 20060101ALI20230601BHEP

Ipc: H04L 12/16 20060101AFI20230601BHEP