WO2023129981A1 - Smart edge hypervisor system and method of use - Google Patents


Info

Publication number
WO2023129981A1
Authority
WO
WIPO (PCT)
Prior art keywords
edge
hypervisor
devices
distributed
IoT
Prior art date
Application number
PCT/US2022/082493
Other languages
French (fr)
Inventor
Jeroen GROENER
Januar HIMANTONO
Original Assignee
Pentair, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Pentair, Inc. filed Critical Pentair, Inc.
Publication of WO2023129981A1 publication Critical patent/WO2023129981A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/50Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F21/52Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity ; Preventing unwanted data erasure; Buffer overflow
    • G06F21/53Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity ; Preventing unwanted data erasure; Buffer overflow by executing in a restricted environment, e.g. sandbox or secure virtual machine
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/4557Distribution of virtual machine instances; Migration and load balancing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/45595Network integration; Enabling network access in virtual machine instances
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/5011Pool
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/504Resource capping

Definitions

  • This disclosure generally relates to the field of virtual machine monitoring platforms. More particularly, the disclosure relates to a distributed edge hypervisor system that implements virtualization using an operating method.
  • Virtual machines have been used to allow computing functions to be carried out at a desired location, which may be remote from where elements of a computing system were originally installed. Virtualization at the level of the operating system in an edge-computing system enables location-independent computing.
  • One approach to virtualization is the use of “containers.”
  • The aim of employing a container is to isolate an application and its dependencies into a self-contained unit that can be run or executed anywhere.
  • The container wraps up a piece of software in a complete file system.
  • The file system then contains everything that the software needs to run, such as code, runtime, system tools, and system libraries.
  • Containers have thus been used to provide operating-system-level virtualization.
  • The containerized application includes all the dependencies, thus enabling the containerized application to run on any node where it has been loaded.
  • Containers can be spread to new nodes relatively easily and may be easier to update once deployed to multiple nodes.
  • Containers are lightweight in comparison to more traditional virtual machines.
  • Virtualization creates a level of indirection or an abstraction layer between, for example, a physical object and a managing application. Virtualization may be a framework or a methodology for dividing the resources of a computer into multiple execution environments. A key benefit of virtualization is the ability to run multiple operating systems (OS) on a single physical server and share underlying hardware resources.
  • A host and computing environment included a hardware infrastructure of a processor core, input/output devices, memory units, and fixed storage, the combination of which supported only a single operating system, which in turn supported the execution of a single application at a time.
  • As processor power has increased exponentially, advanced forms of multiple operating systems have enabled both simulated and actual multitasking, such that multiple applications may be executed within the same host computing environment.
  • An example of DNETC technology is OpenThread, where the individual nodes of the Thread network can become disconnected without losing the functionality of the application. All computing resources are shared and distributed over a single OS. Applications are self-contained packages of logic that mostly rely on core object files and related resource files.
  • A hypervisor runs directly on the hardware platform, similar to how an operating system runs directly on hardware.
  • A hosted hypervisor runs within a host operating system.
  • A smart edge hypervisor that containerizes multiple operating systems running distributed on micro-edge devices, as well as IoT devices, using a multi-access edge computing (MEC) technology to optimize IoT device operations and edge device computations, is provided.
  • A connected system having a multi-core multi-operating system distributed edge hypervisor and an operating method of the connected distributed edge hypervisor system is disclosed.
  • The system and method, which use the distributed edge hypervisor, allow a single OS to span across multiple micro-edge devices by sharing multiple hypervisors with the same predefined layer. Further, multiple predefined layers can be constructed on the pool of available hypervisors in that layer.
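As a minimal sketch of this pooling idea (all names and the core-count resource model are illustrative assumptions, not taken from the disclosure), a predefined layer built on a pool of shared hypervisors might look like:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Hypervisor:
    """One hypervisor running on a single micro-edge device (assumed model)."""
    device_id: str
    free_cores: int

@dataclass
class Layer:
    """A predefined layer constructed on a pool of shared hypervisors."""
    name: str
    pool: List[Hypervisor] = field(default_factory=list)

    def add(self, hv: Hypervisor) -> None:
        self.pool.append(hv)

    def total_cores(self) -> int:
        # A single OS spanning this layer would see the aggregate
        # capacity of every hypervisor in the pool.
        return sum(hv.free_cores for hv in self.pool)

layer = Layer("layer-1")
layer.add(Hypervisor("edge-a", free_cores=2))
layer.add(Hypervisor("edge-b", free_cores=4))
```

Here a spanning OS would see six cores even though no single micro-edge device has more than four.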
  • A system and method are provided that have a novel hardware and software interaction, by way of a single edge hypervisor containerizing multiple operating systems running distributed on the micro-edge and IoT devices using MEC technology to optimize IoT device operations and edge device computations.
  • A micro-edge device “adds” its resources and OS to the multi-OS distributed edge hypervisor. After an IoT device subscribes, the device can “auto-select” the best available access point/router within the distributed edge hypervisor using MEC technology.
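The subscribe-then-auto-select flow described above can be pictured with a short sketch (the latency metric and all names are assumptions; a real MEC deployment would evaluate its own selection criteria):

```python
def subscribe(pool: dict, device_id: str, resources: int) -> None:
    """Add a subscribing device's resources to the shared pool."""
    pool[device_id] = pool.get(device_id, 0) + resources

def auto_select(access_points: dict) -> str:
    """Pick the access point/router with the lowest measured latency (ms).

    Latency stands in here for whatever metric (load, hops, signal)
    the MEC technology would actually evaluate.
    """
    return min(access_points, key=access_points.get)

pool = {}
subscribe(pool, "iot-sensor-1", resources=1)
best = auto_select({"router-a": 12.0, "mec-server-b": 3.5, "gateway-c": 8.1})
```

After subscription, the device's resources sit in the shared pool and traffic flows through the best-evaluated access point.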
  • Micro-edge devices can be provided in the form of hosting access points.
  • Another aspect can be implemented on, but is not limited to, field-programmable gate arrays (FPGAs), personal computers (PCs), microcontrollers, and other known processors, with computer algorithm and instruction upgrades to support many applications.
  • FIG. 1 is a block diagram of a system architecture for a multi-core multi-operating system distributed edge hypervisor.
  • FIG. 2 is a flow diagram of an operating method for implementing the distributed edge hypervisor of FIG. 1.
  • Devices, methods, and systems discussed herein can optimize micro-edge and IoT device aggregation, processing, and service provisioning by dynamically assigning the smart-edge hypervisor to the particular MEC server hosting IoT gateways, or to the local gateways responsible for IoT traffic aggregation, processing, and storage.
  • The gateways can be arranged hierarchically to reduce latencies between edge compute servers or gateways and IoT devices.
  • These devices, methods, and techniques can provide dynamic processing and provide for processing near the edge (i.e., closer to IoT data sources), which in turn optimizes network resources.
  • The devices, methods, and systems that are discussed can provide resource-optimizing edge-computing capacity (e.g., IoT traffic aggregation, processing, and service provisioning) by dynamically assigning a needs-based smart-edge hypervisor that containerizes multiple operating systems running distributed on micro-edge devices and IoT devices using MEC technology to optimize IoT device operations and edge device computations.
  • The micro-edge and IoT devices can be located at different levels of the network depending on the network density, deployment requirements, and application needs.
  • The system can include a plurality of edge gateways implementing edge-based computing that communicate between the micro-edge and IoT devices arranged geographically to reduce latency.
  • A hypervisor is a software module that provides the virtual machines on a host device, allowing multiple operating systems within different guests to operate on the same resources, virtual computing, and storage platform of the host machine.
  • A hypervisor allows an operating system to run independently from the underlying hardware through the use of virtual machines.
  • Hypervisors can include “Type 1” (or “bare-metal”) or “Type 2” (or “hosted”) hypervisors.
  • a Type 1 hypervisor acts as a lightweight operating system and runs directly on the host’s hardware, while a Type 2 hypervisor runs as a software layer on an operating system, like other computer programs.
  • Bare-metal hypervisors run directly on the computing hardware, and hosted hypervisors run on top of the OS of the host machine.
  • Lightweight hypervisors offer the benefits of traditional hypervisors while also respecting the limited resources found at the device edge.
  • Because hosted hypervisors run within the OS, additional (and different) operating systems can be installed on top of the hypervisor.
  • Embodiments herein utilize a smart edge or distributed edge hypervisor to overcome the limitations of existing technologies.
  • The smart edge hypervisor can further implement virtualization by providing a virtual machine monitoring platform.
  • A physical host can execute a virtual machine monitor (VMM) that initiates a source virtual machine.
  • The virtual machine monitoring platform that optimizes IoT devices and micro-edge devices can comprise one or more devices, in the same way as the system described in connection with FIG. 1. Further, the virtual machine monitoring platform can be provided with a computer memory communicatively coupled with the processors, holding computer program instructions that, when executed by the processor, perform an operating method described in connection with FIG. 2.
  • The above-described virtual machine monitoring platform can reduce the redundancy of edge gateways.
  • The redundancy can be reduced by virtualizing the edge devices such that the critical processes continue to function in case of single or multiple edge device hardware failures. Even upgrades and patches to single or multiple edge devices can be executed without downtime.
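One way to picture this failover behavior (a sketch under assumed names; the disclosure does not specify a placement algorithm, so first-healthy-device is an illustrative choice):

```python
def place(process: str, devices: list, failed: set) -> str:
    """Re-place a critical process on the first healthy virtualized edge device."""
    for device in devices:
        if device not in failed:
            return device
    raise RuntimeError(f"no healthy edge device available for {process}")

devices = ["edge-1", "edge-2", "edge-3"]
# edge-1 fails (or is taken down for a patch); the process migrates.
survivor = place("pump-control", devices, failed={"edge-1"})
```

Because the process runs against the virtualized layer rather than a specific device, the same call re-places it during a rolling upgrade with no downtime for the process itself.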
  • The multiple virtual networks can be managed by creating secure virtual LANs (VLANs) to isolate operations, processes, and devices while maintaining full functionality if the cloud connection or uplink is interrupted.
  • Edge devices are also referred to as edge gateways.
  • An edge device can act as an interface between two computer networks.
  • The edge device can serve as an interface between a first computer network, to which certain automation devices are communicatively connected at the field level for controlling and/or monitoring a technical process, and a second network such as the cloud.
  • Edge computing is a distributed, open IT architecture that features decentralized processing power, enabling mobile computing and IoT technologies.
  • IoT devices are devices in a network that often include sensors and limited computing power.
  • Edge computing is a distributed computing paradigm that brings computation and data storage closer to the location where it is needed to improve response. In edge computing, data is processed by the device itself or by a local computer or server, rather than being transmitted to a data center.
  • Edge computing optimizes internet devices and web applications by bringing computing closer to the source of the data. This minimizes the need for long-distance communications between client and server, which reduces latency and bandwidth usage.
  • Edge computing harnesses the concept of the cloud, in that the servers connect to the user via the internet, but it shifts the servers closer to the end-user.
  • the “edge” in this case refers to the edges of high-density population centers and the “edges” of networks, the outer periphery of both.
  • Edge devices can include many different components, such as an IoT sensor, an employee’s notebook computer, a smartphone, a security camera, or even the internet-connected microwave oven in the office break room.
  • Other examples include IoT-enabled pumps, heaters, lights, water filtration equipment, biogas upgrading systems, brewery systems, reverse osmosis systems, agricultural products, tanks, valves, and other IoT-connected products for aquatic applications.
  • Edge gateways themselves are considered edge devices within an edge-computing infrastructure.
  • Edge access networks are also evolving to include virtualization and mobile networks.
  • The rising domain of multi-access edge computing provides virtualization at a large scale.
  • MEC offers cloud-computing capabilities and an IT service environment at the edge of the network, implemented with data centers that are distributed at the edge.
  • The network edge analyzes, processes, and stores the data. Collecting and processing data closer to the customer reduces latency and brings real-time performance to high-bandwidth applications.
  • The resources that make up a cloud can reside anywhere, from a centralized data center to a cell site, a central office, an aggregation site, a metro data center, and/or the customer premises.
  • The MEC platform enables distributed edge computing by processing content at the edge using either a server or customer premises equipment (CPE).
  • A software-defined access layer may also be used as an extension of a distributed cloud.
  • A multi-core multi-operating system distributed edge hypervisor 100 and an operating method 200 thereof can be used to overcome the limitations of existing edge computing technologies by virtualizing the edge devices.
  • The critical processes continue to function in case of single or multiple edge device hardware failures.
  • Upgrades and patches to single or multiple edge devices can be executed without downtime.
  • FIG. 1 illustrates a multi-core multi-operating system distributed edge hypervisor 100.
  • The multi-core multi-operating system distributed edge hypervisor 100 can include one or more processors 102, a memory 104 device, a storage interface 106, and an I/O interface 108 in communication with input devices 112 and output devices 114.
  • A cloud server 110 connected to a network 116 can be used to host a virtual platform for the multi-core multi-operating system distributed edge hypervisor 100.
  • The multi-core multi-operating system distributed edge hypervisor 100 can include the one or more processors 102 on the computer network 116, and the computer memory 104 communicatively coupled with the processors 102, holding computer program instructions. When the instructions are executed by the processor 102, the system performs a method for operating the distributed edge hypervisor 100, as described in more detail in connection with FIG. 2.
  • The distributed edge hypervisor 100 can further include one or more platform interfaces 120 to connect an operating system (OS) 122 across multiple edge devices by sharing one or more hypervisors 124 on a dedicated hypervisor layer 126.
  • Multiple of the same dedicated layers 126 can be configured to provide the pool of the shared multiple virtual machine monitoring platforms. Further, the maximum resource claim of a single hypervisor 124 can be configured to be less than the sum of the minimum resource requirements of the other hypervisor subscriptions running on the same micro-edge device.
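One plausible reading of this resource cap (an interpretation, not the disclosure's own formula) is that a hypervisor's maximum claim is bounded so that the minimum requirements of every other subscription on the same micro-edge device remain satisfiable:

```python
def max_claim(device_capacity: int, other_minimums: list) -> int:
    """Largest resource claim one hypervisor may make while leaving the
    minimum requirement of every other subscription on the same device
    satisfiable. (Assumed interpretation; resource units are abstract.)"""
    return device_capacity - sum(other_minimums)

def claim_allowed(claim: int, device_capacity: int, other_minimums: list) -> bool:
    """Admission check for a proposed resource claim."""
    return claim <= max_claim(device_capacity, other_minimums)
```

For a device with 8 units of capacity and two other subscriptions needing minimums of 1 and 2 units, a claim of 5 would be admitted and a claim of 6 rejected.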
  • The distributed edge hypervisor 100 can further include multiple virtual networks 116, which are managed by creating secure VLANs to isolate operations, processes, and devices while maintaining full functionality. Further, the distributed edge hypervisor can be configured to allow a single OS 122 to span across multiple micro-edge devices by sharing multiple hypervisors 124 with the same dedicated layer 126. In some embodiments, the dedicated layer 126 may be provided in the form of a predefined layer.
  • The multiple predefined layers 126 can be constructed, on demand, on the pool of available hypervisors 124 in the dedicated layer.
  • The distributed edge hypervisor 100 can be adapted to allow a single operating system 122 to span across multiple micro-edge devices by sharing multiple L0 hypervisors 124 with the same L1 layer 126. Multiple L1 layers 126 can be constructed on the pool of available L0 hypervisors 124.
  • The method disclosed herein can apply to any computing resource (e.g., a device with a chip), which then becomes part of the smart edge hypervisor 100 and contributes to the shared resource capability of the smart edge.
  • The smart edge hypervisor 100 can be provided in the form of an offline brain (e.g., AI or similar machine-learning technology) of the application field where it is applied and be a distributed cloud computing network for the nodes (e.g., devices) it is serving.
  • The subscription model of the distributed edge hypervisor 100 can make the resource pool almost unlimited, allowing the distributed edge hypervisor 100 to expand and enabling dynamic scaling.
  • The computing resources, as well as the storage capacity, can easily be expanded.
  • The distributed edge hypervisor 100 can be integrated into a product application further comprising an advanced, more powerful controller, increased storage capacity, and additional memory to support next-generation analytics capabilities for the expanded cluster, ensuring continuous improvement and continuous development without disrupting the operations of the distributed edge hypervisor 100.
  • The term “module” refers to logic embodied in hardware or firmware, or to a collection of software instructions written in a programming language such as, for example, Java, C, Python, or assembly.
  • One or more software instructions in the modules may be embedded in firmware, such as an EPROM.
  • Modules may comprise connected logic units, such as gates and flip-flops, and may comprise programmable units, such as programmable gate arrays or processors.
  • The modules described herein may be implemented as software and/or hardware modules and may be stored in any type of computer-readable medium or other computer storage device.
  • A data repository can be cloud-based storage, a hard disk drive (HDD), a solid-state drive (SSD), a flash drive, ROM, or any other data storage means.
  • The processor 102 can be one of, but is not limited to, a general-purpose processor, an application-specific integrated circuit (ASIC), and an FPGA processor.
  • The processor 102 can include one or more specialized processing units such as FPGAs, integrated system (e.g., bus) controllers, memory management control units, floating-point units, graphics processing units, digital signal processing units, etc.
  • The processor 102 can also include a microprocessor, such as an AMD Athlon, Duron, or Opteron, ARM’s application, embedded, or secure processors, IBM PowerPC, Intel’s Core, Itanium, Xeon, Celeron, or other processors, etc.
  • The processor 102 may be implemented using a mainframe, distributed processor, multi-core, parallel, grid, or other architectures.
  • An apparatus having a multi-core multi-operating system distributed edge hypervisor 100 to optimize IoT devices and edge devices connected in a cloud computing environment can be implemented with and operated on multiple types of computing systems.
  • The computing system can include a central processing unit (“CPU”) or a similar processing unit.
  • The processing unit can comprise at least one data processor for executing program components for executing user- or system-generated requests.
  • A user can include a person, a person using a device such as those included in this disclosure, or such a device itself.
  • The cloud server 110 can include the processor 102 in communication with the one or more input devices 112 and the output devices 114 via the I/O interface 108.
  • The I/O interface 108 can employ one or more communication protocols/methods such as, in a non-limiting example, audio, analog, digital, monaural, RCA, stereo, IEEE-1394, serial bus, universal serial bus (USB), infrared, PS/2, BNC, coaxial, component, composite, digital visual interface (DVI), high-definition multimedia interface (HDMI), RF antennas, S-Video, VGA, IEEE 802.11a/b/g/n/x, Bluetooth®, cellular (e.g., code-division multiple access (CDMA), high-speed packet access (HSPA+), global system for mobile communications (GSM), long-term evolution (LTE), WiMAX), etc.
  • A computing device such as a desktop, laptop, mobile device, PDA, remote, etc. can be utilized for computing. All of these devices can be arranged on the network 116. Therefore, all the components for a specific type of network 116 can be included.
  • The smart edge hypervisor 100 can be implemented by using the cloud server 110 machine connected to a real-time network 116.
  • The processor 102 can be in communication with one or more memory 104 devices (e.g., RAM, ROM, etc.) via the storage interface 106.
  • The storage interface 106 can connect to memory 104 devices including, without limitation, memory drives, removable disc drives, etc., employing connection protocols such as serial advanced technology attachment (SATA), integrated drive electronics (IDE), IEEE-1394, universal serial bus (USB), fiber channel, small computer systems interface (SCSI), etc.
  • The memory 104 devices can further include a drum, magnetic disc drive, magneto-optical drive, optical drive, redundant array of independent disks (RAID), solid-state memory devices, solid-state drives, etc.
  • The memory 104 devices can store a collection of program or database components, including, without limitation, the operating system 122, user interface application, web browser, mail server, mail client, user/application data (e.g., any data variables, data elements, data records, or similar), etc.
  • The operating system 122 can facilitate resource management and operation of the computer system.
  • Examples of operating systems 122 include, without limitation, RTOS / FREERTOS, Apple Macintosh OS X, UNIX, Unix-like system distributions (e.g., Berkeley Software Distribution (BSD), FreeBSD, NetBSD, OpenBSD, etc.), Linux distributions (e.g., Red Hat, Ubuntu, Kubuntu, etc.), IBM OS/2, Microsoft Windows (XP, Vista/7/8, etc.), Apple iOS, Google Android, Blackberry OS, or the like.
  • FIG. 2 illustrates a flow diagram for an operating method 200 for a multi-core multi-operating system distributed edge hypervisor 100 to optimize IoT devices and edge devices connected to the cloud server 110 computing environment.
  • The distributed edge hypervisor 100 can include at least the one or more processors 102 provided on the computer network 116, as described in connection with FIG. 1.
  • The operating method can execute programmable instructions by the one or more processors 102 to auto-select the best available access point and/or router within the distributed edge hypervisor 100 using MEC.
  • The computer memory 104 is communicatively coupled with the processor 102 and holds the programmable instructions that, when executed by the processor 102, perform the following steps of the method 200.
  • The operating method 200 further includes, at step 206, the processor 102 subscribing and adding the resources of the IoT devices and the edge devices to expand the range of the services provided by the distributed edge hypervisor 100.
  • The processors 102 select a best-evaluated services platform interface (i.e., access point) 120 within the distributed edge hypervisor 100 using multi-access edge computing; further, the edge devices can be provided in the form of an access point.
  • The operating method 200 further includes, at step 210, permitting the operating system 122 to be provided across the multiple edge devices, by the multiple access edge devices, and by sharing the multiple hypervisors 124 on the same dedicated layer 126.
  • The same dedicated layer 126 is optionally provided on the pool of the shared multiple virtual machine monitoring platforms.
  • The operating method 200 can further include permitting the operating system 122 across the multiple edge devices by sharing multiple virtual machine monitoring platforms (not shown) on the same dedicated layer 126. Multiple dedicated layers 126 are optionally provided on the pool of the shared multiple virtual machine monitoring platforms, wherein the virtual machine monitoring platform is a distributed edge hypervisor 100.
  • The method 200 can also expand the range of the “Swarm or MESH” of this particular virtual machine monitoring platform.
  • The operating method 200 can also be provided in the form of a subscription model that uses the multi-access edge technology to perform computations.
  • The micro-edge devices can also be the hosting access points/routers, etc., in the virtual machine monitoring platform.
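Tying the steps of method 200 together, the flow can be sketched as follows (a non-authoritative illustration; function and field names are assumptions, only steps 206 and 210 are numbered in the text above, and lowest latency stands in for the unspecified "best-evaluated" criterion):

```python
def operate(hypervisor: dict, device: dict) -> str:
    """Sketch of method 200 for one subscribing device."""
    # Step 206: subscribe the device and add its resources to the pool.
    hypervisor["resources"] += device["resources"]
    # Select the best-evaluated access point (lowest latency here).
    best_ap = min(device["visible_aps"], key=device["visible_aps"].get)
    # Step 210: attach the device's OS to the shared dedicated layer.
    hypervisor["layer"].append((device["id"], best_ap))
    return best_ap

hv = {"resources": 4, "layer": []}
dev = {"id": "iot-1", "resources": 2,
       "visible_aps": {"ap-a": 9.0, "ap-b": 2.5}}
chosen = operate(hv, dev)
```

Each additional subscription grows the shared resource pool, which is how the subscription model yields the dynamic scaling described above.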

Abstract

A multi-core multi-operating system distributed edge hypervisor and an operating method are provided. The distributed edge hypervisor can use a multi-access edge computing (MEC) unit and one or more processors on a connected network to subscribe and add the resources of one or more Internet of Things (IoT) devices and/or micro-edge devices to expand the range of the services provided by the distributed edge hypervisor. The one or more processors can automatically select a best-evaluated services access point within the distributed edge hypervisor using multiple access edge devices as access points to permit an operating system across multiple edge devices to share one or more hypervisors on a dedicated layer.

Description

SMART EDGE HYPERVISOR SYSTEM AND METHOD OF USE
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims priority to U.S. Provisional Patent Application No. 63/266,131 filed December 29, 2021, the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
[0002] This disclosure generally relates to the field of virtual machine monitoring platforms. More particularly, the disclosure relates to a distributed edge hypervisor system that implements virtualization using an operating method.
BACKGROUND
[0003] In some computing systems, virtual machines have been used to allow computing functions to be carried out at a desired location, which may be remote from where elements of a computing system were originally installed. Virtualization at the level of the operating system in an edge-computing system enables location-independent computing.
[0004] One approach to virtualization is the use of “containers.” The aim of employing a container is to try to isolate an application and its dependencies into a self-contained unit that can be run or executed anywhere. The container wraps up a piece of software in a complete file system. The file system then contains everything that it needs to run such as code, runtime, system tools, and system libraries. Containers have thus been used to provide operating-system-level virtualization.
[0005] The containerized application includes all the dependencies, thus enabling the containerized application to run on any node where it has been loaded. In turn, containers can be spread to new nodes relatively easily and may be easier to update once deployed to multiple nodes. In terms of size and the requirements for transfer through a network, containers are lightweight in comparison to more traditional virtual machines.
[0006] Virtualization creates a level of indirection or an abstraction layer between, for example, a physical object and a managing application. Virtualization may be a framework or a methodology for dividing the resources of a computer into multiple execution environments. A key benefit of virtualization is the ability to run multiple operating systems (OS) on a single physical server and share underlying hardware resources.
[0007] Until the late twentieth century, a host and computing environment included a hardware infrastructure of a processor core, input/output devices, memory units, and fixed storage, the combination of which supported only a single operating system, which in turn supported the execution of a single application at a time. Progressively, as processor power has increased exponentially, advanced forms of multiple operating systems enable both simulated and actual multitasking such that multiple applications may be executed within the same host computing environment.
[0008] In recent years, distributed network computing (DNETC) has been gaining popularity. An example of DNETC technology is OpenThread, where the individual nodes of the Thread network can become disconnected without losing the functionality of the application. All computing resources are shared and distributed over a single OS. Applications are self-contained packages of logic that mostly rely on core object files and related resource files.
[0009] As powerful computing became an integral part of modern industry, however, applications and resources became co-dependent on the presence of other applications, such that the necessary environment for an application included not only the essential and compatible operating system and supporting hardware platform, but also other key applications such as, but not limited to, application servers, database management servers, collaboration servers, communicative logic, and the like, commonly referred to as middleware. With respect to the complexity of application and platform interoperability, though, different amalgamations of applications performing on a single hardware platform can demonstrate divergent degrees of performance and stability.
[0010] In modern computing, these issues have been resolved by using a virtual machine (VM) monitor, known in the art as a “hypervisor”, which manages the interaction between each VM operating system and the underlying resources provided by the hardware platform. In this regard, a hypervisor runs directly on the hardware platform similar to how an operating system runs directly on hardware. By association, a hosted hypervisor runs within a host operating system.

[0011] A present shift from cloud computing to edge computing for Internet of Things (IoT) services is driven by both the need to have more processing power closer to the IoT devices and the reduced cost of providing the proximately located computing capabilities.
[0012] Standards, such as for mobile edge computing, are addressing this paradigm shift by aiming to offer a different services environment and cloud-computing capabilities within the radio access network (RAN) in close proximity to wireless and mobile subscribers. Mobile edge computing can allow better performance for IoT services and applications because of the increased responsiveness between the edge and IoT devices over cloud-based processing. However, as the trend for IoT data storage, processing, and analytics moves toward the network edge, current systems often fail to optimize mobile edge computing resources supporting IoT services.
[0013] An existing core challenge in edge computing is the extreme diversity in hardware that applications are expected to run on. This, in turn, creates challenges in producing secure, maintainable, scalable applications capable of running across different targets. Accordingly, there remains a need for technology convergence to make the system, apparatus, and method compact. It can be appreciated from the foregoing that there is a need for an improved solution that addresses the difficulties described above.
SUMMARY
[0014] A smart edge hypervisor that containerizes multiple operating systems running distributed on micro-edge devices, as well as IoT devices, using a multi-access edge computing (MEC) technology to optimize IoT device operations and edge device computations, is provided.
[0015] A connected system having a multi-core multi-operating system distributed edge hypervisor and an operating method of the connected distributed edge hypervisor system are disclosed. The system and method, which use the distributed edge hypervisor, allow a single OS to span across multiple micro edge devices by sharing multiple hypervisors with the same predefined layer. Further, multiple predefined layers can be constructed on the pool of available hypervisors in that layer.
[0016] A system and method are provided that have a novel hardware and software interaction, by way of a single edge hypervisor containerizing multiple OS running distributed on the micro-edge and IoT devices using MEC technology to optimize IoT device operations and edge device computations.
[0017] According to another aspect, subscribing a micro-edge device or an IoT device to an available edge hypervisor “adds” the device resources and OS to the multi-OS distributed edge hypervisor. After the subscription of an IoT device, the device can “auto-select” the best available access point/router within the distributed edge hypervisor using MEC technology. Micro-edge devices can be provided in the form of hosting access points.

[0018] Another aspect can be implemented on, but not limited to, field programmable gate arrays (FPGAs), personal computers (PCs), microcontrollers, and other known processors to have computer algorithms and instruction upgrades for supporting many applications.
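The subscription and auto-selection flow described above can be sketched, purely as a non-limiting illustration outside the disclosure itself, with hypothetical device names and a latency-based selection metric standing in for whatever MEC evaluation an implementation would use:

```python
class DistributedEdgeHypervisor:
    """Sketch of the subscription model: subscribing a device pools its
    resources and OS; a subscribed IoT device then auto-selects the best
    available access point/router (here judged by a hypothetical
    measured latency, in milliseconds)."""

    def __init__(self):
        self.resource_pool = {}   # device id -> (cpu, memory, os)
        self.access_points = {}   # access point id -> latency (ms)

    def subscribe(self, device_id, cpu, memory, os_name):
        # Subscribing "adds" the device resources and OS to the
        # multi-OS distributed edge hypervisor.
        self.resource_pool[device_id] = (cpu, memory, os_name)

    def register_access_point(self, ap_id, latency_ms):
        self.access_points[ap_id] = latency_ms

    def auto_select(self):
        # Auto-select the best available access point/router; in this
        # sketch, "best" simply means lowest latency.
        return min(self.access_points, key=self.access_points.get)

edge = DistributedEdgeHypervisor()
edge.subscribe("iot-pump-1", cpu=1, memory=256, os_name="freertos")
edge.register_access_point("ap-garage", 12.0)
edge.register_access_point("ap-roof", 4.5)
assert edge.auto_select() == "ap-roof"
```

A real MEC evaluation would weigh more than latency (load, signal quality, capacity), but the shape of the interaction — subscribe, then auto-select — is what the sketch shows.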
DESCRIPTION OF THE DRAWINGS
[0019] FIG. 1 is a block diagram of a system architecture for a multi-core multi-operating system distributed edge hypervisor; and
[0020] FIG. 2 is a flow diagram of an operating method for implementing the distributed edge hypervisor of FIG. 1.
DETAILED DESCRIPTION
[0021] The following discussion is presented to enable a person skilled in the art to make and use embodiments of the invention. Various modifications to the illustrated embodiments will be readily apparent to those skilled in the art, and the generic principles herein can be applied to other embodiments and applications without departing from embodiments of the invention. Thus, embodiments of the invention are not intended to be limited to embodiments shown but are to be accorded the widest scope consistent with the principles and features disclosed herein. The following detailed description is to be read with reference to the figures, in which like elements in different figures have like reference numerals. The figures, which are not necessarily to scale, depict selected embodiments and are not intended to limit the scope of embodiments of the invention. Skilled artisans will recognize the examples provided herein have many useful alternatives and fall within the scope of embodiments of the invention.
[0022] Before any embodiments of the invention are explained in detail, it is to be understood that the invention is not limited in its application to the details of construction and the arrangement of components outlined in the following description or illustrated in the attached drawings. The invention is capable of other embodiments and of being practiced or of being carried out in various ways. Also, it is to be understood that the phraseology and terminology used herein are for the purpose of description and should not be regarded as limiting. For example, the use of “including,” “comprising,” or “having” and variations thereof herein is meant to encompass the items listed thereafter and equivalents thereof as well as additional items.
[0023] As used herein, unless otherwise specified or limited, the terms “mounted,” “connected,” “supported,” and “coupled” and variations thereof are used broadly and encompass both direct and indirect mountings, connections, supports, and couplings. Further, unless otherwise specified or limited, “connected” and “coupled” are not restricted to physical or mechanical connections or couplings.
[0024] Devices, methods, and systems discussed herein can optimize micro-edge and IoT device aggregation, processing, and service provisioning by dynamically assigning the smart-edge hypervisor to the particular MEC server hosting IoT gateways, or to the local gateways responsible for IoT traffic aggregation, processing, and storage. The gateways can be arranged hierarchically to reduce latencies between edge compute servers or gateways and IoT devices.
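As a non-limiting sketch, not drawn from the disclosure, the hierarchical gateway arrangement above can be modeled as a tree in which a device attaches to the lowest-latency local gateway and traffic aggregates upward; the gateway names and latency figures are hypothetical:

```python
# Hypothetical gateway hierarchy: each gateway has a parent (None at the
# top) and a one-way latency (ms) to the IoT devices it serves.
gateways = {
    "metro-dc":   {"parent": None,       "latency_ms": 40.0},
    "site-gw":    {"parent": "metro-dc", "latency_ms": 8.0},
    "local-gw-1": {"parent": "site-gw",  "latency_ms": 1.5},
    "local-gw-2": {"parent": "site-gw",  "latency_ms": 2.0},
}

def assign_gateway(candidates):
    """Assign an IoT device to the candidate gateway with the lowest
    latency, keeping processing as close to the data source as possible."""
    return min(candidates, key=lambda g: gateways[g]["latency_ms"])

def path_to_root(gw):
    """Aggregation path upward through the hierarchy toward the core."""
    path = [gw]
    while gateways[gw]["parent"] is not None:
        gw = gateways[gw]["parent"]
        path.append(gw)
    return path

assert assign_gateway(["local-gw-1", "local-gw-2"]) == "local-gw-1"
assert path_to_root("local-gw-1") == ["local-gw-1", "site-gw", "metro-dc"]
```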
[0025] These devices, methods, and techniques can provide dynamic processing and provide for processing near the edge (i.e., closer to IoT data sources), which in turn optimizes network resources. The devices, methods, and systems that are discussed can provide resource-optimizing edge-computing capacity (e.g., IoT traffic aggregation, processing, and service provisioning) by dynamically assigning a needs-based smart-edge hypervisor containerizing multiple operating systems running distributed on micro edge devices and IoT devices using MEC technology to optimize IoT device operations and edge device computations.

[0026] The micro-edge and IoT devices can be located at different levels of the network depending on the network density, deployment requirements, and application needs. In a non-limiting example, the system can include a plurality of edge gateways implementing edge-based computing that communicate between the micro-edge and IoT devices arranged geographically to reduce latency.
[0027] Implementing multi-access edge computing requires the use of different operating systems working simultaneously and independently of each other to perform various functions, which in turn requires the use of a hypervisor that helps to run the system free from dependency on the hardware. Therefore, the software module that provides the virtual machines on a host device, allowing multiple operating systems within different guests to operate on the same resources, virtual computing, and storage platform of the host machine, is termed a “hypervisor”. The hypervisor allows an operating system to run independently from the underlying hardware through the use of virtual machines.
[0028] Hypervisors can include “Type 1” (or “bare-metal”) or “Type 2” (or “hosted”) hypervisors. A Type 1 hypervisor acts as a lightweight operating system and runs directly on the host’s hardware, while a Type 2 hypervisor runs as a software layer on an operating system, like other computer programs. The bare-metal hypervisors run directly on the computing hardware, and the hosted hypervisors run on top of the OS of the host machine. Lightweight hypervisors offer the benefits of traditional hypervisors while also respecting the limited resources found on the device edge. Although hosted hypervisors run within the OS, additional (and different) operating systems can be installed on top of the hypervisor. Hosted hypervisors are sometimes known as client hypervisors because they are most often used with end-users and software testing.

[0029] Embodiments herein utilize a smart edge or distributed edge hypervisor to overcome the limitations of existing technologies. The smart edge hypervisor can further implement virtualization by providing a virtual machine monitoring platform. A physical host can execute a virtual machine monitor (VMM) that initiates a source virtual machine.
[0030] The virtual machine monitoring platform to optimize IoT devices and micro-edge devices can comprise one or more devices, in the same way as the system described in connection with FIG. 1. Further, the virtual machine monitoring platform can be provided with a computer memory communicatively coupled with the processors and holding computer program instructions that, when executed by the processor, perform an operating method described in connection with FIG. 2.
[0031] Another aspect reduces the downtime and latency issues in a computing environment. The above-described virtual machine monitoring platform can reduce the redundancy of edge gateways. In particular, the redundancy can be reduced by virtualizing the edge devices such that the critical processes continue to function in case of single or multiple edge device hardware failures. Even upgrades and patches to single or multiple edge devices can be executed without downtime. Furthermore, the multiple virtual networks can be managed by creating secure virtual LANs (VLANs) to isolate operations, processes, and devices while maintaining full functionality if the cloud connection or uplink is interrupted.
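The failure-tolerance behavior described above — critical processes surviving a hardware failure because they are virtualized rather than pinned to one device — can be sketched as follows. This is a non-limiting illustration outside the disclosure; the device names, process names, and migration policy (restart on the first surviving device) are all hypothetical:

```python
class VirtualizedEdgeCluster:
    """Sketch: virtualized processes are re-placed on a healthy edge
    device when their current host fails, so critical work continues."""

    def __init__(self, devices):
        self.healthy = set(devices)
        self.placement = {}   # process name -> hosting device

    def start(self, process, device):
        self.placement[process] = device

    def fail(self, device):
        """Simulate a hardware failure and migrate affected processes."""
        self.healthy.discard(device)
        for process, host in self.placement.items():
            if host == device and self.healthy:
                # Hypothetical policy: move to the first surviving device.
                self.placement[process] = sorted(self.healthy)[0]

cluster = VirtualizedEdgeCluster(["edge-1", "edge-2"])
cluster.start("water-filtration-control", "edge-1")
cluster.fail("edge-1")
# The critical process keeps running on the surviving device.
assert cluster.placement["water-filtration-control"] == "edge-2"
```

The same re-placement mechanism is what allows upgrades and patches to be applied to individual edge devices without downtime: a device is drained, updated, and rejoined.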
[0032] The embodiments herein relate to subscribing and adding edge devices or micro-edge devices and a method for operating an edge device or IoT device in a virtual machine monitoring platform. Edge devices are also referred to as edge gateways. An edge device can act as an interface between two computer networks. By way of example, the edge device can serve as an interface between a first computer network, by which certain automation devices are communicatively connected at a field level for controlling and/or monitoring a technical process, and a second network such as the cloud.
[0033] Edge computing is a distributed, open IT architecture that features decentralized processing power, enabling mobile computing and IoT technologies. IoT devices are devices in a network that often include sensors and limited computing power. Edge computing is a distributed computing paradigm that brings computation and data storage closer to the location where it is needed to improve response times. In edge computing, data is processed by the device itself or by a local computer or server, rather than being transmitted to a data center.
[0034] Edge computing optimizes internet devices and web applications by bringing computing closer to the source of the data. This minimizes the need for long-distance communications between client and server, which reduces latency and bandwidth usage. Edge computing harnesses the concept of the cloud, in that the servers connect to the user via the internet, but it shifts the servers closer to the end-user. The “edge,” in this case, refers to the edges of high-density population centers and the “edges” of networks, the outer periphery of both.
[0035] Furthermore, these edge devices can include many different components, such as an IoT sensor, an employee’s notebook computer, a smartphone, a security camera, or even the internet-connected microwave oven in the office break room. Other examples include IoT-enabled pumps, heaters, lights, water filtration equipment, biogas upgrading systems, brewery systems, reverse osmosis systems, agricultural products, tanks, valves, and other IoT-connected products for aquatic applications. Edge gateways themselves are considered edge devices within an edge-computing infrastructure.
[0036] Edge access networks are also evolving to include virtualization and mobile networks. The rising domain of multi-access edge computing provides virtualization at a large scale. MEC offers cloud-computing capabilities and an IT service environment at the edge of the network by implementing MEC with data centers that are distributed at the edge.
[0037] Instead of sending all data to a cloud for processing, the network edge analyzes, processes, and stores the data. Collecting and processing data closer to the customer reduces latency and brings real-time performance to high-bandwidth applications. Further, the resources that make up a cloud can reside anywhere, from a centralized data center to a cell site, a central office, an aggregation site, a metro data center, and/or on the customer premises. The MEC platform enables distributed edge computing by processing content at the edge using either a server or a CPE. A software-defined access layer may also be used as an extension of a distributed cloud.

[0038] A multi-core multi-operating system distributed edge hypervisor 100 and an operating method 200 thereof can be used to overcome the limitations of existing edge computing technologies by virtualizing the edge devices. The critical processes continue to function in case of single or multiple edge device hardware failures. Thus, even upgrades and patches to single or multiple edge devices can be executed without downtime.
[0039] FIG. 1 illustrates a multi-core multi-operating system distributed edge hypervisor 100. The multi-core multi-operating system distributed edge hypervisor 100 can include one or more processors 102, a memory 104 device, a storage interface 106, and an I/O interface 108 in communication with input devices 112 and output devices 114. A cloud server 110 connected to a network 116 can be used to host a virtual platform for the multi-core multi-operating system distributed edge hypervisor 100.
[0040] The multi-core multi-operating system distributed edge hypervisor 100 (hereinafter “distributed edge hypervisor”) can include the one or more processors 102 on the computer network 116, and the computer memory 104 communicatively coupled with the processors 102 holding computer program instructions. When executed by the processor 102, the system performs a method for operating the distributed edge hypervisor 100, as described in more detail in connection with FIG. 2. The distributed edge hypervisor 100 can further include one or more platform interfaces 120 to connect an operating system (OS) 122 across multiple edge devices by sharing one or more hypervisors 124 on a dedicated hypervisor layer 126.
[0041] In accordance with an embodiment, multiple same dedicated layers 126 can be configured to provide the pool of the shared multiple virtual machine monitoring platform. Further, the maximum resource claim of a single hypervisor 124 can be configured to be less than the sum of minimum resource requirements of the other hypervisor subscriptions running on the same micro edge device.
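One plausible reading of the resource-claim constraint above — not stated in these terms in the disclosure — is an admission check ensuring that a single hypervisor's maximum claim leaves enough capacity on the shared micro edge device to honor every other subscription's minimum. The function name, parameters, and figures below are hypothetical:

```python
def claim_is_admissible(max_claim, other_min_requirements, device_capacity):
    """Hypothetical admission check: a single hypervisor's maximum
    resource claim must leave room, within the shared micro edge
    device's capacity, for the minimum resource requirements of every
    other hypervisor subscription running on the same device."""
    return max_claim < device_capacity - sum(other_min_requirements)

# Illustrative numbers: a device with 8 CPU cores and two other
# subscriptions whose minimum requirements are 2 and 1 cores.
assert claim_is_admissible(4, [2, 1], device_capacity=8)      # 4 < 8 - 3
assert not claim_is_admissible(6, [2, 1], device_capacity=8)  # 6 >= 5
```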
[0042] In another embodiment, the distributed edge hypervisor 100 can further include multiple virtual networks 116, which are managed to create secure VLANs that isolate operations, processes, and devices while maintaining full functionality. Further, the distributed edge hypervisor can be configured to allow a single OS 122 to span across multiple micro edge devices, by sharing multiple hypervisors 124 with the same dedicated layer 126. In some embodiments, the dedicated layer 126 may be provided in the form of a predefined layer.
[0043] In a further embodiment, the multiple predefined layers 126 can be constructed, on demand, on the pool of available hypervisors 124 in the dedicated layer.
[0044] The distributed edge hypervisor 100 can be adapted to allow a single operating system 122 to span across multiple micro edge devices, by sharing multiple L0 hypervisors 124 with the same L1 layer 126. Multiple L1 layers 126 can be constructed on the pool of available L0 hypervisors 124.
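The L0/L1 layering above can be modeled, again purely as a hypothetical sketch rather than the disclosed implementation, with L0 hypervisors bound to individual micro edge devices and L1 layers constructed on a shared L0 pool:

```python
class L0Hypervisor:
    """An L0 hypervisor running on one micro edge device."""
    def __init__(self, device):
        self.device = device

class L1Layer:
    """An L1 layer constructed on a pool of shared L0 hypervisors; a
    single OS placed on the layer spans every device in the pool."""
    def __init__(self, pool):
        self.pool = list(pool)
        self.spanned_os = None

    def span(self, os_name):
        self.spanned_os = os_name
        return [h.device for h in self.pool]

pool = [L0Hypervisor("micro-edge-%d" % i) for i in range(3)]
layer = L1Layer(pool)
# One OS now spans all three micro edge devices through the shared layer.
assert layer.span("edge-linux") == ["micro-edge-0", "micro-edge-1",
                                    "micro-edge-2"]
# Multiple L1 layers can be constructed on the same L0 pool.
second = L1Layer(pool)
assert second.span("rtos") == [h.device for h in pool]
```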
[0045] The method disclosed herein can apply to any computing resource (e.g., a device with a chip), which further becomes part of the smart edge hypervisor 100 and contributes to the shared resource capability of the smart edge.
[0046] Further, in an alternative embodiment, the smart edge hypervisor 100 can be provided in the form of an offline brain (e.g., AI, or similar machine-learning technology) of the application field where it is applied and be a distributed cloud computing network for the nodes (e.g., devices) it is serving.
[0047] The subscription model of the distributed edge hypervisor 100 can make the resource pool almost unlimited, thus enabling dynamic scaling through expansion of the distributed edge hypervisor 100. The computing resources, as well as the storage capacity, can easily be expanded.
[0048] In one embodiment, the distributed edge hypervisor 100 can be integrated into a product application further comprising an advanced, more powerful controller, increased storage capacity, and additional memory to support next-generation analytics capabilities for the expanded cluster, ensuring continuous improvement and continuous development without disrupting the operations of the distributed edge hypervisor 100.
[0049] The word “module,” as used herein, refers to logic embodied in hardware or firmware, or to a collection of software instructions, written in a programming language, such as, for example, Java, C, Python, or assembly. One or more software instructions in the modules may be embedded in firmware, such as an EPROM. It will be appreciated that modules may comprise connected logic units, such as gates and flip-flops, and may comprise programmable units, such as programmable gate arrays or processors. The modules described herein may be implemented as either software and/or hardware modules and may be stored in any type of computer-readable medium or other computer storage device. A data repository can be cloud-based storage or a hard disk drive (HDD), solid-state drive (SSD), flash drive, ROM, or any other data storage means.

[0050] In various embodiments, the processor 102 can be one of, but not limited to, a general-purpose processor, an application-specific integrated circuit (ASIC), and an FPGA processor. The processor 102 can include one or more specialized processing units such as FPGAs, integrated system (e.g., bus) controllers, memory management control units, floating-point units, graphics processing units, digital signal processing units, etc. The processor 102 can also include a microprocessor, such as AMD Athlon, Duron, or Opteron, ARM’s application, embedded or secure processors, IBM PowerPC, Intel’s Core, Itanium, Xeon, Celeron, or other processors, etc. The processor 102 may be implemented using a mainframe, distributed processor, multi-core, parallel, grid, or other architectures.
[0051] Further, an apparatus having a multi-core multi-operating system distributed edge hypervisor 100 to optimize IoT devices and edge devices connected in a cloud computing environment can be implemented with and operated on multiple types of computing systems.
[0052] The computing system can include a central processing unit (“CPU”) or a similar processing unit. The processing unit can comprise at least one data processor for executing program components for executing user or system-generated requests. A user can include a person, a person using a device such as those included in this disclosure, or such a device itself.
[0053] Returning to FIG. 1, the cloud server 110 can include the processor 102 in communication with the one or more input devices 112 and the output devices 114 via the I/O interface 108. The I/O interface 108 can employ one or more communication protocols/methods such as, in a non-limiting example, audio, analog, digital, monaural, RCA, stereo, IEEE-1394, serial bus, universal serial bus (USB), infrared, PS/2, BNC, coaxial, component, composite, digital visual interface (DVI), high-definition multimedia interface (HDMI), RF antennas, S-Video, VGA, IEEE 802.11 b/g/n/x, Bluetooth®, cellular (e.g., code-division multiple access (CDMA), high-speed packet access (HSPA+), global system for mobile communications (GSM), long-term evolution (LTE), WiMAX, etc.).
[0054] A computing device (not shown) such as a desktop, laptop, mobile device, PDA, remote terminal, etc. can be utilized for computing. All of these devices can be arranged on the network 116. Therefore, all the components for a specific type of network 116 can be included. Preferably, the smart edge hypervisor 100 can be implemented by using the cloud server 110 machine connected to a real-time network 116.
[0055] In some embodiments, the processor 102 can be in communication with one or more memory 104 devices (e.g., RAM, ROM, etc.) via the storage interface 106. The storage interface 106 can connect to memory 104 devices including, without limitation, memory drives, removable disc drives, etc., employing connection protocols such as serial advanced technology attachment (SATA), integrated drive electronics (IDE), IEEE-1394, universal serial bus (USB), fiber channel, small computer systems interface (SCSI), etc.
[0056] The memory 104 devices can further include a drum, magnetic disc drive, magneto-optical drive, optical drive, redundant array of independent disks (RAID), solid-state memory devices, solid-state drives, etc. The memory 104 devices can store a collection of program or database components, including, without limitation, the operating system 122, user interface application, web browser, mail server, mail client, user/application data (e.g., any data variables, data elements, data records, or similar), etc. The operating system 122 can facilitate resource management and operation of the computer system. Examples of operating systems 122 include, without limitation, RTOS / FREERTOS, Apple Macintosh OS X, UNIX, Unix-like system distributions (e.g., Berkeley Software Distribution (BSD), FreeBSD, NetBSD, OpenBSD, etc.), Linux distributions (e.g., Red Hat, Ubuntu, Kubuntu, etc.), IBM OS/2, Microsoft Windows (XP, Vista/7/8, etc.), Apple iOS, Google Android, Blackberry OS, or the like.
[0057] The above-mentioned embodiments are applicable not only to a monitoring platform and machines but to smart devices in general. Any computing resource or device with a chip could in principle become part of the smart edge hypervisor 100 and contribute to the shared resource capability of the smart edge hypervisor 100.
[0058] FIG. 2 illustrates a flow diagram for an operating method 200 for a multi-core multi-operating system distributed edge hypervisor 100 to optimize IoT devices and edge devices connected to the cloud server 110 computing environment. The distributed edge hypervisor 100 can include at least the one or more processors 102 provided on the computer network 116, as described in connection with FIG. 1.
[0059] At step 202, the operating method can execute programmable instructions by the one or more processors 102 to auto-select the best available access point and/or router within the distributed edge hypervisor 100 using MEC. At this step, the computer memory 104 is communicatively coupled with the processor 102 holding the programmable instructions that, when executed by the processor 102, perform the following steps of the method 200.
[0060] At step 204, the operating method 200 further includes the processor 102 subscribing and adding the resources of the IoT devices and the edge devices for expanding the range of the services provided by the distributed edge hypervisor 100 at step 206.
[0061] At step 208, the processors 102 select a best-evaluated services platform interface (i.e., access point) 120 within the distributed edge hypervisor 100 using multiple access edge computing, and further, the edge devices can be provided in the form of an access point.
[0062] The operating method 200 further includes, at step 210, permitting the operating system 122 provided across the multiple edge devices, by the multiple access edge devices, and by sharing the multiple hypervisors 124 on the same dedicated layer 126. In some embodiments, the same dedicated layer 126 is optionally provided on the pool of the shared multiple virtual machine monitoring platform.
[0063] The operating method 200 can further include permitting the operating system 122 across the multiple edge devices, and by sharing multiple virtual machine monitoring platforms (not shown) on the same dedicated layer 126. Multiple dedicated layers 126 are optionally provided on the pool of the shared multiple virtual machine monitoring platforms, wherein the virtual machine monitoring platform is a distributed edge hypervisor 100. The method 200 can also expand the range of the “Swarm or MESH” of this particular virtual machine monitoring platform.
[0064] In some embodiments, the operating method 200 can also be provided in the form of a subscription model which uses the multi-access edge technology to perform computations. The micro-edge devices can also be the hosting access points/routers etc. in the virtual machine monitoring platform.
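The sequence of steps 202 through 210 described above can be condensed into a single sketch. This is a non-limiting illustration outside the disclosure: the data shapes, the latency-based selection, and the one-hypervisor-per-device layering are all hypothetical simplifications:

```python
def operate_distributed_edge_hypervisor(devices, access_points):
    """Sketch of method 200, steps 202-210, with hypothetical inputs:
    `devices` is a list of {"id", "resources"} dicts and `access_points`
    maps access point ids to a measured latency (ms)."""
    # Step 202: auto-select the best available access point/router
    # within the distributed edge hypervisor (here: lowest latency).
    best_ap = min(access_points, key=access_points.get)
    # Steps 204-206: subscribe and add IoT/edge device resources to
    # expand the range of services the hypervisor provides.
    resource_pool = {d["id"]: d["resources"] for d in devices}
    # Step 208: select the best-evaluated services platform interface
    # (treated here as the same access point chosen in step 202).
    services_interface = best_ap
    # Step 210: share multiple hypervisors on the same dedicated layer.
    dedicated_layer = ["hypervisor-%s" % d["id"] for d in devices]
    return best_ap, resource_pool, services_interface, dedicated_layer

devices = [{"id": "pump", "resources": 2}, {"id": "valve", "resources": 1}]
aps = {"ap-1": 9.0, "ap-2": 3.0}
best, pool, _, layer = operate_distributed_edge_hypervisor(devices, aps)
assert best == "ap-2" and pool["pump"] == 2 and len(layer) == 2
```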
[0065] It is to be understood that the above description is intended to be illustrative, and not restrictive. For example, the above-discussed embodiments may be used in combination with each other. Upon reviewing the above description, many other embodiments will be apparent to those of skill in the art.
[0066] The benefits and advantages which may be provided by the present invention have been described above concerning specific embodiments. These benefits and advantages, and any elements or limitations that may cause them to occur or to become more pronounced are not to be construed as critical, required, or essential features of any or all of the embodiments.
[0067] While the present invention has been described with reference to particular embodiments, it should be understood that the embodiments are illustrative and that the scope of the invention is not limited to these embodiments. Many variations, modifications, additions, and improvements to the embodiments described above are possible. It is contemplated that these variations, modifications, additions, and improvements fall within the scope of the invention.
[0068] Specific embodiments of a smart edge hypervisor according to the present invention have been described to illustrate the manner in which the invention can be made and used. It should be understood that the implementation of other variations and modifications of this invention and its different aspects will be apparent to one skilled in the art and that this invention is not limited by the specific embodiments described. Features described in one embodiment can be implemented in other embodiments. The subject disclosure is understood to encompass the present invention and any modifications, variations, or equivalents that fall within the spirit and scope of the basic underlying principles disclosed and claimed herein.

Claims

CLAIMS

What is claimed is:
1. A multi-core multi-operating system distributed edge hypervisor to efficiently operate Internet of Things (IoT) devices and edge devices, comprising: one or more processors provided on a cloud service of a computer network; a computer memory communicatively coupled with the one or more processors, wherein the computer memory is designed to hold programmable instructions to be executed by the processor; an interface designed to communicatively couple to one or more input devices and one or more output devices; one or more access points designed to allow an operating system to span across multiple micro-edge devices; and a dedicated layer shared among one or more hypervisors, wherein the one or more hypervisors can be accessed by the operating system.
2. The system of claim 1, wherein the distributed edge hypervisor is configured to allow the operating system to span across the multiple micro-edge devices by sharing the one or more hypervisors with the dedicated layer.
3. The system of claim 2, wherein the dedicated layer is designed to construct on- demand, a pool of available hypervisors in the dedicated layer.
4. The system of claim 1, wherein a subscription model of the distributed edge hypervisor is configured to enable dynamic scaling and capability expansions for distributed edge analytics.
5. The system of claim 1, wherein the distributed edge hypervisor further comprises one or more virtual networks including a secure virtual LAN (VLAN).
6. The system of claim 1, wherein a maximum resource claim of a single hypervisor is configured to be less than a sum of minimum resource requirements of other hypervisor subscriptions running on a common micro edge device.
7. The system of claim 1, wherein the system is implemented on a field programmable gate array (FPGA).
8. A method for operating a distributed edge hypervisor system to efficiently operate Internet of Things (IoT) devices and edge devices connected in a cloud computing environment, comprising: auto-selecting the best available access point within the distributed edge hypervisor using a multi-access edge computing (MEC) unit including a processor provided in a computer network; storing programmable instructions on a computer memory communicatively coupled with the processor; and executing the programmable instructions using the processor, wherein the programmable instructions are configured to use the processor to perform the steps of: subscribing resources of the IoT devices and the edge devices for expanding a range of the services provided by the distributed edge hypervisor; selecting a best-evaluated services access point within the distributed edge hypervisor using multiple access edge devices; permitting the operating system provided across the multiple access edge devices by the multiple access edge devices; and sharing multiple hypervisors on a dedicated layer.
9. The method of claim 8, wherein the edge devices are provided in the form of access points.
10. The method of claim 8, wherein the distributed edge hypervisor further comprises multiple managed virtual networks, the method further comprising creating a secure virtual LAN (VLAN) to isolate operations, processes, and devices.
11. The method of claim 8, wherein the distributed edge hypervisor is configured to allow a single operating system to span across multiple micro-edge devices by sharing multiple hypervisors with the dedicated layer.
12. The method of claim 8, wherein the dedicated layer is configured to construct, on demand, a pool of available hypervisors in the dedicated layer.
13. The method of claim 8, wherein the dedicated layer is optionally provided on a shared pool of multiple virtual machine monitoring platforms.
14. A method for operating a distributed edge hypervisor system, comprising: providing one or more processors across a connected network via a cloud server to operate one or more Internet of Things (IoT) devices and one or more edge devices; automatically selecting a best available access point within the distributed edge hypervisor using the one or more processors; subscribing resources of the one or more IoT devices and the one or more edge devices for expanding a range of the services provided by the distributed edge hypervisor; selecting a best-evaluated services access point within the distributed edge hypervisor using multiple access edge devices; and spanning the operating system across multiple access edge devices by sharing multiple hypervisors on a dedicated layer.
15. The method of claim 14, wherein the one or more edge devices are provided in the form of access points.
16. The method of claim 14, wherein the distributed edge hypervisor further comprises a virtual network including a secure virtual LAN (VLAN).
17. The method of claim 16, further comprising isolating operations, processes, and devices of the distributed edge hypervisor system using the virtual network.
18. The method of claim 14, further comprising constructing, on demand, a pool of available hypervisors in the dedicated layer.
19. The method of claim 14, further comprising adding the resources of the one or more IoT devices and the one or more edge devices to an available edge hypervisor.
20. The method of claim 14, further comprising dynamically assigning the distributed edge hypervisor to a multi-access edge computing (MEC) unit server hosting IoT gateways to reduce latencies between the IoT gateways and the one or more IoT devices.
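The method claims above repeatedly combine three operations: maintaining an on-demand pool of hypervisors across micro-edge devices, subscribing resources into that pool, and selecting a "best-evaluated" access point. The sketch below is purely illustrative and not taken from the application; the device names, the unit-based capacity model, and the use of lowest latency as the "best-evaluated" criterion are all assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class EdgeDevice:
    """A micro-edge device acting as an access point (hypothetical model)."""
    name: str
    capacity: int                 # total resource units this device can host
    latency_ms: float             # measured latency to the requesting client
    subscriptions: list = field(default_factory=list)

    def free_capacity(self) -> int:
        # Capacity not yet claimed by existing hypervisor subscriptions.
        return self.capacity - sum(self.subscriptions)

class DistributedEdgeHypervisor:
    """Conceptual pool of hypervisors shared across micro-edge devices."""

    def __init__(self, devices):
        self.devices = list(devices)

    def best_access_point(self, units: int = 1):
        # "Best-evaluated" is assumed here to mean lowest latency among
        # devices that can still accommodate the requested resources.
        candidates = [d for d in self.devices if d.free_capacity() >= units]
        return min(candidates, key=lambda d: d.latency_ms) if candidates else None

    def subscribe(self, units: int):
        # Add a resource subscription on the best available device,
        # expanding the range of services the pool can provide.
        device = self.best_access_point(units)
        if device is None:
            raise RuntimeError("no capacity available in the edge pool")
        device.subscriptions.append(units)
        return device
```

Under this model, successive subscriptions naturally spill over from the lowest-latency device to the next candidate once its capacity is claimed, which mirrors the claims' notion of spanning workloads across multiple micro-edge devices.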
PCT/US2022/082493 2021-12-29 2022-12-28 Smart edge hypervisor system and method of use WO2023129981A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202163266131P 2021-12-29 2021-12-29
US63/266,131 2021-12-29

Publications (1)

Publication Number Publication Date
WO2023129981A1 true WO2023129981A1 (en) 2023-07-06

Family

ID=87000282

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2022/082493 WO2023129981A1 (en) 2021-12-29 2022-12-28 Smart edge hypervisor system and method of use

Country Status (1)

Country Link
WO (1) WO2023129981A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100107162A1 (en) * 2008-03-07 2010-04-29 Aled Edwards Routing across a virtual network
US20120011254A1 (en) * 2010-07-09 2012-01-12 International Business Machines Corporation Network-aware virtual machine migration in datacenters
US20140013328A1 (en) * 2009-07-22 2014-01-09 Broadcom Corporation Method And System For Abstracting Virtual Machines In A Network
US20210117242A1 (en) * 2020-10-03 2021-04-22 Intel Corporation Infrastructure processing unit


Similar Documents

Publication Publication Date Title
US11714672B2 (en) Virtual infrastructure manager enhancements for remote edge cloud deployments
US11184438B2 (en) Omnichannel approach to application sharing across different devices
US10169028B2 (en) Systems and methods for on demand applications and workflow management in distributed network functions virtualization
US10469600B2 (en) Local Proxy for service discovery
US10146563B2 (en) Predictive layer pre-provisioning in container-based virtualization
US10416996B1 (en) System and method for translating affliction programming interfaces for cloud platforms
US10445121B2 (en) Building virtual machine disk images for different cloud configurations from a single generic virtual machine disk image
CN110720091B (en) Method for coordinating infrastructure upgrades with hosted application/Virtual Network Functions (VNFs)
US11301762B1 (en) High perforamance machine learning inference framework for edge devices
US10162735B2 (en) Distributed system test automation framework
US10318314B2 (en) Techniques for managing software container dependencies
US9882775B1 (en) Dependent network resources
US20150095473A1 (en) Automatic configuration of applications based on host metadata using application-specific templates
US20150058461A1 (en) Image management in cloud environments
US11119675B2 (en) Polymorphism and type casting in storage volume connections
JP2013536518A (en) How to enable hypervisor control in a cloud computing environment
US10412190B1 (en) Device multi-step state transitions
US20190012212A1 (en) Distributed Computing Mesh
US20150288777A1 (en) Startup of message-passing-interface (mpi) based applications in a heterogeneous environment
US11467835B1 (en) Framework integration for instance-attachable accelerator
WO2023129981A1 (en) Smart edge hypervisor system and method of use
US10417254B2 (en) Intelligent content synchronization between content libraries
US9866451B2 (en) Deployment of enterprise applications
US20230019200A1 (en) Interactive access to headless cluster managers
US11698755B1 (en) Physical hardware controller for provisioning dynamic storage services on processing devices

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22917544

Country of ref document: EP

Kind code of ref document: A1