US20220129296A1 - Service network approach for dynamic container network management - Google Patents
- Publication number
- US20220129296A1 (application Ser. No. 17/507,359)
- Authority
- US
- United States
- Prior art keywords
- address
- virtualization environment
- base
- nested
- environment
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/30—Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
- A63F13/35—Details of game servers
- A63F13/355—Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an encoded video stream for transmitting to a mobile phone or a thin client
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5061—Partitioning or combining of resources
- G06F9/5072—Grid computing
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/30—Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
- A63F13/35—Details of game servers
- A63F13/352—Details of game servers involving special game server arrangements, e.g. regional servers connected to a national server or a plurality of servers managing partitions of the game world
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/40—Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/45—Controlling the progress of the video game
- A63F13/48—Starting a game, e.g. activating a game device or waiting for other players to join a multiplayer session
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/45—Controlling the progress of the video game
- A63F13/49—Saving the game status; Pausing or ending the game
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/50—Controlling the output signals based on the game progress
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/60—Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/70—Game security or game management aspects
- A63F13/79—Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/10—File systems; File servers
- G06F16/16—File or folder operations, e.g. details of user interfaces specifically adapted to file systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/10—File systems; File servers
- G06F16/18—File system types
- G06F16/182—Distributed file systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/451—Execution arrangements for user interfaces
- G06F9/452—Remote windowing, e.g. X-Window System, desktop virtualisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/461—Saving or restoring of program or task context
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/485—Task life-cycle, e.g. stopping, restarting, resuming execution
- G06F9/4856—Task life-cycle, e.g. stopping, restarting, resuming execution resumption being on a different machine, e.g. task migration, virtual machine migration
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L61/00—Network arrangements, protocols or services for addressing or naming
- H04L61/50—Address allocation
- H04L61/5007—Internet protocol [IP] addresses
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/60—Network streaming of media packets
- H04L65/61—Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio
- H04L65/612—Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio for unicast
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/60—Network streaming of media packets
- H04L65/65—Network streaming protocols, e.g. real-time transport protocol [RTP] or real-time control protocol [RTCP]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1097—Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/131—Protocols for games, networked simulations or virtual reality
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/14—Session management
- H04L67/141—Setup of application sessions
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/14—Session management
- H04L67/142—Managing session states for stateless protocols; Signalling session states; State transitions; Keeping-state mechanisms
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/14—Session management
- H04L67/143—Termination or inactivation of sessions, e.g. event-controlled end of session
- H04L67/145—Termination or inactivation of sessions, e.g. event-controlled end of session avoiding end of session, e.g. keep-alive, heartbeats, resumption message or wake-up for inactive or interrupted session
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/2866—Architectures; Arrangements
- H04L67/30—Profiles
- H04L67/306—User profiles
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/34—Network arrangements or protocols for supporting network services or applications involving the movement of software or configuration parameters
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/56—Provisioning of proxy services
- H04L67/568—Storing data temporarily at an intermediate stage, e.g. caching
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/50—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers
- A63F2300/53—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers details of basic data processing
- A63F2300/538—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers details of basic data processing for performing operations on behalf of the game client, e.g. rendering
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/50—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers
- A63F2300/55—Details of game data or player data management
- A63F2300/5546—Details of game data or player data management using player registration data, e.g. identification, account, preferences, game history
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/60—Methods for processing data by generating or executing the game program
- A63F2300/63—Methods for processing data by generating or executing the game program for controlling the execution of the game in time
- A63F2300/636—Methods for processing data by generating or executing the game program for controlling the execution of the game in time involving process of starting or resuming a game
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/45566—Nested virtual machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/45587—Isolation or security of virtual machine instances
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/45595—Network integration; Enabling network access in virtual machine instances
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L2101/00—Indexing scheme associated with group H04L61/00
- H04L2101/60—Types of network addresses
- H04L2101/668—Internet protocol [IP] address subnets
Definitions
- FIG. 1 is a flow diagram of an exemplary method for dynamic container network management.
- FIG. 2 is a block diagram of an exemplary system for dynamic container network management.
- FIG. 3 is a block diagram of an exemplary network for dynamic container network management.
- FIG. 4 is a block diagram of an exemplary cloud-based software distribution platform.
- FIG. 5 is a block diagram of exemplary network addresses for dynamic container network management.
- the cloud-based software distribution platform may provide such inter-device application access via nested containers and/or virtual machines (“VM”).
- the platform may run one or more containers as base virtualization environments, each of which may host a VM as nested virtualization environments.
- a container may provide an isolated application environment that virtualizes at least an OS of the base host machine by sharing a system or OS kernel with the host machine.
- a virtual machine may provide an isolated application environment that virtualizes hardware as well as an OS.
- although a VM may be more resource-intensive than a container, a VM may virtualize hardware and/or an OS different from that of the base host machine.
- the platform may utilize several containers, with a virtual machine running in each container.
- the use of containers may facilitate scaling of independent virtualized environments, whereas the use of virtual machines may facilitate running applications designed for different application environments.
- network management of the various virtualization environments may require assigning network addresses (e.g., internet protocol (“IP”) addresses) to each virtualization environment.
- a conventional dynamic host configuration protocol (“DHCP”) addressing scheme may assign unique network addresses to each virtualization environment without accounting for the nested architecture of the platform.
- a lookup table or other similar additional network address management may be required to manage the network addresses and correlate VMs to their corresponding containers.
- enforcing network policies or logging/investigating network behavior may require additional overhead for using the lookup table.
- the present disclosure is generally directed to dynamic container network management.
- embodiments of the present disclosure may use an addressing scheme for assigning IP addresses to base virtualization environments and their corresponding nested virtualization environments.
- the addressing scheme may provide for IP addresses that may correlate, using the IP addresses themselves, the base virtualization environment with the nested virtualization environment.
- the addressing scheme described herein may not require using a lookup table to determine which IP address corresponds to which base or nested virtualization environment.
- the systems and methods described herein may improve the functioning of a computer by providing more efficient network management that may obviate the overhead associated with using a lookup table.
- the systems and methods described herein may improve the field of network management by providing an efficient network addressing scheme for nested virtualization environments.
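The core idea above — that the base environment's IP address and the nested environment's IP address can each be derived from the other, with no lookup table — can be sketched as follows. This is a hypothetical scheme for illustration only: the patent does not specify these address ranges, and the paired-prefix layout (containers in 10.1.0.0/16, VMs at the same offset in 10.2.0.0/16) is an assumption.

```python
import ipaddress

# Hypothetical layout (not specified verbatim in the patent): containers
# live in 10.1.0.0/16 and each nested VM sits at the same offset inside a
# paired 10.2.0.0/16 range, so either address can be computed from the
# other without any lookup table.
CONTAINER_NET = ipaddress.ip_network("10.1.0.0/16")
VM_NET = ipaddress.ip_network("10.2.0.0/16")

def vm_ip_for(container_ip: str) -> str:
    """Derive the nested VM's IP from its base container's IP."""
    addr = ipaddress.ip_address(container_ip)
    if addr not in CONTAINER_NET:
        raise ValueError(f"{container_ip} is not a container address")
    offset = int(addr) - int(CONTAINER_NET.network_address)
    return str(ipaddress.ip_address(int(VM_NET.network_address) + offset))

def container_ip_for(vm_ip: str) -> str:
    """Invert the mapping: recover the container's IP from the VM's IP."""
    addr = ipaddress.ip_address(vm_ip)
    if addr not in VM_NET:
        raise ValueError(f"{vm_ip} is not a VM address")
    offset = int(addr) - int(VM_NET.network_address)
    return str(ipaddress.ip_address(int(CONTAINER_NET.network_address) + offset))
```

Because the correlation is purely arithmetic, network policy enforcement or log investigation can map a VM address to its container (or vice versa) in constant time, which is the overhead saving the disclosure claims over a DHCP-plus-lookup-table approach.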
- FIG. 1 illustrates a method for dynamic container network management.
- FIG. 2 illustrates a system for performing the methods described herein.
- FIG. 3 illustrates a network environment for dynamic container network management.
- FIG. 4 illustrates a cloud-based software distribution platform.
- FIG. 5 illustrates an exemplary addressing scheme.
- FIG. 1 is a flow diagram of an exemplary computer-implemented method 100 for dynamic container network management.
- the steps shown in FIG. 1 may be performed by any suitable computer-executable code and/or computing system, including the system(s) illustrated in FIGS. 2 and/or 3 .
- each of the steps shown in FIG. 1 may represent an algorithm whose structure includes and/or is represented by multiple sub-steps, examples of which will be provided in greater detail below.
- the term “virtual machine” may refer to an isolated application environment that virtualizes hardware as well as an OS. Because a VM may virtualize hardware, an OS for the VM may not be restricted by the base host machine OS. For example, even if the base host machine is running Windows (or another desktop OS), a VM on the base host machine may be configured to run Android (or other mobile OS) by emulating mobile device hardware. In other examples, other combinations of OSes may be used.
- the cloud-based software distribution host may host software applications for cloud-based access.
- the cloud-based software distribution host described herein may provide cloud-based access to games designed for a particular OS on a device running an otherwise incompatible OS for the games.
- the platform may host a desktop game and allow a mobile device (or other device running an OS that is not supported by the game) to interact with an instance of the desktop game as if running on the mobile device.
- the platform may host a mobile game and allow a desktop computer (or other device running an OS that is not supported by the game) to interact with an instance of the mobile game as if running on the desktop computer.
- the software applications may correspond to any software application that may not be supported or is otherwise incompatible with another computing device, including but not limited to OS, hardware, etc.
- one or more of modules 202 in FIG. 2 may represent one or more software applications or programs that, when executed by a computing device, may cause the computing device to perform one or more tasks.
- one or more of modules 202 may represent modules stored and configured to run on one or more computing devices, such as the devices illustrated in FIG. 3 (e.g., computing device 302 and/or server 306 ).
- One or more of modules 202 in FIG. 2 may also represent all or portions of one or more special-purpose computers configured to perform one or more tasks.
- example system 200 may also include one or more physical processors, such as physical processor 230 .
- Physical processor 230 generally represents any type or form of hardware-implemented processing unit capable of interpreting and/or executing computer-readable instructions.
- physical processor 230 may access and/or modify one or more of modules 202 stored in memory 240. Additionally or alternatively, physical processor 230 may execute one or more of modules 202 to facilitate maintaining the mapping system.
- Examples of physical processor 230 include, without limitation, microprocessors, microcontrollers, Central Processing Units (CPUs), Field-Programmable Gate Arrays (FPGAs) that implement softcore processors, Application-Specific Integrated Circuits (ASICs), portions of one or more of the same, variations or combinations of one or more of the same, and/or any other suitable physical processor.
- example system 200 may also include one or more additional elements 220 , such as a container 222 , a container IP address 224 , a virtual machine 226 , and a VM IP address 228 .
- Container 222 , container IP address 224 , VM 226 , and/or VM IP address 228 may be stored on and/or executed from a local storage device, such as memory 240 , or may be accessed remotely.
- Container 222 may represent a base virtualization environment, as will be explained further below.
- Container IP address 224 may represent a network address assigned to container 222 according to an addressing scheme described herein.
- VM 226 may represent a nested virtualization environment running in container 222 .
- VM IP address 228 may represent a network address assigned to VM 226 according to the addressing scheme, as will be explained further below.
- Example system 200 in FIG. 2 may be implemented in a variety of ways. For example, all or a portion of example system 200 may represent portions of example network environment 300 in FIG. 3 .
- FIG. 3 illustrates an exemplary network environment 300 implementing aspects of the present disclosure.
- the network environment 300 includes computing device 302 , a network 304 , and server 306 .
- Computing device 302 may be a client device or user device, such as a mobile device, a desktop computer, laptop computer, tablet device, smartphone, or other computing device.
- Computing device 302 may include a physical processor 230 , which may be one or more processors, and a memory 240 , which may store data such as one or more of additional elements 220 and/or modules 202 .
- Server 306 may represent or include one or more servers capable of hosting a cloud-based software distribution platform. Server 306 may provide cloud-based access to software applications running in nested virtualization environments. Server 306 may include a physical processor 230 , which may include one or more processors, memory 240 , which may store modules 202 , and one or more of additional elements 220 .
- Network 304 may represent any type or form of communication network, such as the Internet, and may comprise one or more physical connections, such as a LAN, and/or wireless connections, such as a WLAN.
- the systems described herein may perform step 102 in a variety of ways.
- the base virtualization environment may correspond to a container (e.g., container 222 ) that shares an OS kernel with the cloud-based software distribution host and, as will be described further below, the nested virtualization environment may correspond to a virtual machine (e.g., VM 226 ) running in the container.
- the base virtualization environment may have been previously initiated and may require an IP address, if not previously assigned, or may require a new IP address, for instance due to changes to a network topology or changes to the nested virtualization environment.
- Identifying module 206 may identify container 222 as requiring assignment of an IP address.
- identifying the base virtualization environment may include initiating the base virtualization environment.
- virtualization module 204 which may correspond to a virtualization environment management system such as a hypervisor or other virtualization or container management software, may initiate container 222 .
- identifying module 206 may identify container 222 as requiring assignment of an IP address.
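The identification step described in the bullets above might be sketched as follows. The `Container` data model and `needs_address` helper are hypothetical stand-ins for identifying module 206; the patent does not specify an API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Container:
    """Minimal stand-in for a base virtualization environment."""
    name: str
    ip: Optional[str] = None  # None until the addressing scheme assigns one

def needs_address(container: Container, topology_changed: bool = False) -> bool:
    """A base environment requires assignment when no IP has been assigned,
    or when a new IP is required, for instance due to changes to the
    network topology (per the description of identifying module 206)."""
    return container.ip is None or topology_changed
```

A newly initiated container would start with `ip=None` and be picked up for assignment; an already-addressed container is revisited only on topology changes.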
- FIG. 4 illustrates an exemplary cloud-based software distribution platform 400 .
- the platform 400 may include a host 406 , a network 404 (which may correspond to network 304 ), and computing devices 402 and 403 .
- Host 406 which may correspond to server 306 , may include containers 440 and 442 , which may respectively include a virtual machine 430 and a virtual machine 432 .
- VM 430 may run an application 420 and VM 432 may run an application 422 .
- Host 406 may utilize nested virtualization environments (e.g., VM 430 running in container 440 and VM 432 running in container 442 ) to more efficiently manage virtualization environments.
- the nested virtualization may facilitate management of virtualization environments for various types of VMs as well as more efficiently scale the number of VMs running concurrently. Certain aspects which may be global across certain VMs may be better managed via containers.
- Computing device 402 , which may correspond to an instance of computing device 302 , may access application 420 via network 404 .
- Computing device 403 , which may correspond to an instance of computing device 302 , may access application 422 via network 404 .
- one or more of the systems described herein may assign, based on an addressing scheme, a first internet protocol (IP) address to the base virtualization environment.
- addressing module 208 may assign container IP address 224 to container 222 .
- IP address may refer to a numerical label assigned to a device on a network for identification and location addressing.
- IP addresses include, without limitation, static IP addresses, which are fixed and may remain the same each time a system connects to a network, and dynamic IP addresses, which may be reassigned as needed for a network topology.
- addressing module 208 may assign container IP address 224 to container 222 based on the addressing scheme described herein.
- FIG. 5 illustrates a first IP address 500 , which may correspond to container IP address 224 , and a second IP address 502 , which may correspond to VM IP address 228 .
- IP address 500 may include a network identifier 510 , a subnet identifier 512 , and a host identifier 514 .
- IP address 502 may include a network identifier 520 , a subnet identifier 522 , and a host identifier 524 .
- the term “network identifier” may refer to a network number or routing prefix for routing network traffic to associated routers.
- the term “subnet identifier” may refer to an identifier for a subnetwork of a particular network (e.g., a network identified by the network identifier).
- the term “host identifier” may refer to an identifier for a particular host device on the subnetwork. This host device may correspond to a physical host device as well as virtual host devices, such as a container, VM, etc.
- the addressing scheme may correlate IP address 500 with IP address 502 based on one or more of network identifiers 510 and 520 , subnet identifiers 512 and 522 , and host identifiers 514 and 524 .
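To make the identifier decomposition concrete, the sketch below splits an IPv4 address into the three identifiers with bit masks. The field widths here (16-bit network, 8-bit subnet, 8-bit host) are assumptions for illustration only; the disclosure does not fix a particular bit layout:

```python
import ipaddress

# Assumed bit layout for illustration only: 16-bit network identifier,
# 8-bit subnet identifier, 8-bit host identifier. The disclosure does not
# mandate specific field widths.
NETWORK_BITS, SUBNET_BITS, HOST_BITS = 16, 8, 8

def split_identifiers(ip: str) -> tuple[int, int, int]:
    """Decompose an IPv4 address into (network, subnet, host) identifiers."""
    value = int(ipaddress.IPv4Address(ip))
    host = value & ((1 << HOST_BITS) - 1)
    subnet = (value >> HOST_BITS) & ((1 << SUBNET_BITS) - 1)
    network = value >> (HOST_BITS + SUBNET_BITS)
    return network, subnet, host

# 10.0.5.17 -> network 10.0 (2560), subnet 5, host 17
print(split_identifiers("10.0.5.17"))  # (2560, 5, 17)
```

Under such a layout, correlating IP address 500 with IP address 502 reduces to comparing the tuples field by field.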
- one or more of the systems described herein may identify a nested virtualization environment running in the base virtualization environment.
- the cloud-based software distribution host may serve an application running in the nested virtualization environment.
- identifying module 206 may identify VM 226 running in container 222 .
- identifying module 206 as part of host 406 , may identify VM 430 running in container 440 , and/or VM 432 running in container 442 .
- Host 406 may serve application 420 via VM 430 and may also serve application 422 via VM 432 . As further illustrated in FIG. 4 , computing device 402 may access and virtually run application 420 served by host 406 .
- application 420 may be an application that is not configured to run in a native application environment of computing device 402 .
- application 420 may be a mobile app for running on a mobile device OS
- computing device 402 may be a desktop computer or a mobile device with an OS incompatible with application 420 .
- host 406 may run VM 430 capable of running application 420 .
- Host 406 may provide computing device 402 with cloud-based access to application 420 , for instance, by receiving inputs (e.g., user inputs, device information, commands, etc.) from computing device 402 , converting the inputs for use with application 420 , and providing outputs (e.g., graphical outputs, commands, etc.) from application 420 to computing device 402 .
- host 406 may provide computing device 403 with cloud-based access to application 422 .
- Identifying module 206 may identify VM 226 as requiring assignment of an IP address.
- identifying the nested virtualization environment may include initiating the nested virtualization environment.
- virtualization module 204 may initiate VM 226 .
- identifying module 206 may identify VM 226 as requiring assignment of an IP address.
- one or more of the systems described herein may assign, based on the addressing scheme, a second IP address to the nested virtualization environment distinct from the first IP address.
- the addressing scheme may correlate the second IP address to the first IP address.
- addressing module 208 may assign VM IP address 228 to VM 226 .
- the addressing scheme may involve using the first IP address to assign the second IP address.
- Addressing module 208 may use container IP address 224 to determine a value for VM IP address 228 .
- the value for container IP address 224 may be directly used for assigning the value for VM IP address 228 .
- all or a subset of container IP address 224 may directly identify VM 226 .
- the value for container IP address 224 may indirectly identify VM 226 .
- all or a subset of container IP address 224 may be transformed (e.g., with a hash or similar function) to identify VM 226 .
- addressing module 208 may use VM IP address 228 to determine a value for container IP address 224 .
- VM IP address 228 may directly or indirectly identify container 222 .
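The “indirect” variant above might be realized as follows. This is only a sketch: the disclosure says a subset of the address may be transformed “with a hash or similar function” without specifying one, so the SHA-256 fold and the 8-bit host-identifier width below are assumptions:

```python
import hashlib
import ipaddress

def derived_vm_host_id(container_ip: str, host_bits: int = 8) -> int:
    """Hypothetical indirect correlation: transform the container address
    with a hash and fold the digest into the host-identifier width. Both
    sides applying the same deterministic function yields the same VM
    identifier without any lookup table (function and width are assumed)."""
    canonical = str(ipaddress.IPv4Address(container_ip)).encode()
    digest = hashlib.sha256(canonical).digest()
    return int.from_bytes(digest[:4], "big") % (1 << host_bits)

# Deterministic: repeated calls give the same identifier for a container.
print(derived_vm_host_id("10.0.1.17") == derived_vm_host_id("10.0.1.17"))  # True
```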
- a subset of first IP address 500 (which may correspond to container IP address 224 or VM IP address 228 ) may correlate to second IP address 502 (which may correspond to VM IP address 228 or container IP address 224 ).
- a subset of first IP address 500 (e.g., subnet identifier 512 and/or host identifier 514 ) may correspond to at least a portion of second IP address 502 .
- the addressing scheme may reserve a separate subnetwork address range for IP addresses of nested virtualization environments to distinguish from base virtualization environments.
- VM 430 and VM 432 in FIG. 4 may be assigned to a subnetwork address range separate from that of container 440 and container 442 .
- the subnet identifier may indicate whether the IP address corresponds to a virtual machine or a container. In such examples, the host identifiers may match for corresponding containers and VMs.
- subnet identifier 512 may indicate that first IP address 500 corresponds to a container and subnet identifier 522 may indicate that second IP address 502 corresponds to a VM.
- Host identifier 514 may match (e.g., be the same as or otherwise complement) host identifier 524 to indicate a nested pair of virtualization environments.
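A minimal sketch of this subnet-based pairing, assuming hypothetical subnet values (1 for containers, 2 for VMs) and the matching-host-identifier convention just described. Because the mapping is symmetric, the same function recovers either address from the other:

```python
# Hypothetical subnet identifiers: the third octet distinguishes base
# environments (containers) from nested environments (VMs).
CONTAINER_SUBNET = 1
VM_SUBNET = 2

def peer_address(ip: str) -> str:
    """Map a container address to its nested VM's address, or vice versa,
    by swapping the subnet octet while preserving the host octet."""
    o1, o2, subnet, host = (int(part) for part in ip.split("."))
    if subnet == CONTAINER_SUBNET:
        subnet = VM_SUBNET
    elif subnet == VM_SUBNET:
        subnet = CONTAINER_SUBNET
    else:
        raise ValueError(f"{ip} is not in a recognized subnet")
    return f"{o1}.{o2}.{subnet}.{host}"

print(peer_address("10.0.1.17"))  # 10.0.2.17 (container -> its nested VM)
print(peer_address("10.0.2.17"))  # 10.0.1.17 (VM -> its base container)
```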
- a subset or portion of the second IP address may identify the base virtualization environment.
- a subset or portion of the second IP address may identify the nested virtualization environment.
- the first IP address may directly correlate to the second IP address.
- the addressing scheme may forego a lookup table for correlating the first IP address with the second IP address.
- host 406 may more efficiently perform network management functions for containers 440 and 442 and VMs 430 and 432 .
- host 406 may independently filter network traffic for the base virtualization environments (e.g., containers 440 and 442 ) and network traffic for the nested virtualization environments (e.g., VMs 430 and 432 ).
- a subset of the particular IP address may indicate whether the address corresponds to a container or a VM.
- Because host 406 may distinguish between base virtualization environments and nested virtualization environments using the IP addresses themselves, host 406 may efficiently apply a first filter protocol to base virtualization environments and independently apply a second filter protocol to nested virtualization environments. In addition, host 406 may independently enforce different network policies for base virtualization environments and nested virtualization environments. For example, host 406 may enforce a first network policy for the base virtualization environments and a second network policy for the nested virtualization environments. Additionally, tracing of network behavior may be simplified because a subset of a particular IP address may identify a nested container/VM pair without requiring a lookup table to identify such pairs.
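For illustration, address-based classification of this kind might look like the following sketch, where the subnet values and policy names are assumptions rather than anything mandated by the disclosure:

```python
# Assumed subnet identifiers, as in the other sketches herein.
CONTAINER_SUBNET = 1
VM_SUBNET = 2

def classify(ip: str) -> str:
    """Classify an address as base (container) or nested (VM) traffic
    from the subnet octet alone, with no lookup table."""
    subnet = int(ip.split(".")[2])
    if subnet == CONTAINER_SUBNET:
        return "container"
    if subnet == VM_SUBNET:
        return "vm"
    return "unknown"

# Hypothetical per-class policies selected purely from the address.
POLICIES = {"container": "base filter protocol", "vm": "nested filter protocol"}

for addr in ("10.0.1.40", "10.0.2.40"):
    print(addr, "->", POLICIES[classify(addr)])
```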
- the systems and methods described herein provide dynamic container network management via an addressing scheme that correlates virtual machines to their corresponding containers.
- a cloud application architecture may run virtual machines on top of an existing container platform for running instances of particular hosting environments.
- Although the container platform may assign an IP address to each container, each virtual machine may require its own IP address to facilitate access to external services.
- a conventional DHCP scheme may assign IP addresses to virtual machines in a way that may not account for the cloud application architecture such that a lookup table may be needed to determine which virtual machine address corresponds to which container address. Thus, enforcing network policies or logging and investigating network behavior may require using the lookup table.
- the systems and methods described herein may provide an addressing scheme that may simplify correlation between containers and virtual machines without requiring the lookup table. For example, virtual machine addresses may be assigned to a separate subnetwork address range to facilitate independent filtering of container program traffic and virtual machine traffic. In addition, the addressing scheme may allow determining the address of a container from the address of the corresponding virtual machine and vice versa.
- a computer-implemented method may include: (i) identifying a base virtualization environment on a cloud-based software distribution host, (ii) assigning, based on an addressing scheme, a first internet protocol (IP) address to the base virtualization environment, (iii) identifying a nested virtualization environment running in the base virtualization environment, wherein: the cloud-based software distribution host serves an application running in the nested virtualization environment, and each of the base and nested virtualization environments comprise an isolated application environment that virtualizes at least an operating system (OS), and (iv) assigning, based on the addressing scheme, a second IP address to the nested virtualization environment distinct from the first IP address, wherein the addressing scheme correlates the second IP address to the first IP address.
- Example 2 The method of Example 1, wherein the addressing scheme uses a value of the first IP address to assign a value for the second IP address.
- Example 3 The method of Example 1 or 2, wherein a portion of the second IP address identifies the base virtualization environment.
- Example 4 The method of Example 1, 2, or 3, wherein a portion of the second IP address identifies the nested virtualization environment.
- Example 5 The method of any of Examples 1-4, wherein the addressing scheme directly correlates the first IP address to the second IP address.
- Example 6 The method of any of Examples 1-5, wherein the second IP address includes a subnetwork address based on a separate subnetwork address range reserved by the addressing scheme for IP addresses of nested virtualization environments.
- Example 7 The method of any of Examples 1-6, further comprising applying a first filter protocol to the base virtualization environment and a second filter protocol to the nested virtualization environment to independently filter network traffic for the base virtualization environment and network traffic for the nested virtualization environment.
- Example 8 The method of any of Examples 1-7, further comprising enforcing a first network policy for the base virtualization environment and a second network policy, different from the first network policy, for the nested virtualization environment.
- Example 9 The method of any of Examples 1-8, wherein the base virtualization environment corresponds to a container that shares an OS kernel with the cloud-based software distribution host and the nested virtualization environment corresponds to a virtual machine (VM).
- Example 10 The method of any of Examples 1-9, wherein the VM corresponds to a mobile OS environment, the application corresponds to a mobile game, and the cloud-based software distribution host provides cloud-based access to an instance of the mobile game.
- a system may include: at least one physical processor, physical memory comprising computer-executable instructions that, when executed by the physical processor, may cause the physical processor to: (i) identify a base virtualization environment on a cloud-based software distribution host, (ii) assign, based on an addressing scheme, a first internet protocol (IP) address to the base virtualization environment, (iii) identify a nested virtualization environment running in the base virtualization environment, wherein: the cloud-based software distribution host serves an application running in the nested virtualization environment, and each of the base and nested virtualization environments comprise an isolated application environment that virtualizes at least an operating system (OS), and (iv) assign, based on the addressing scheme, a second IP address to the nested virtualization environment distinct from the first IP address, wherein the addressing scheme correlates the second IP address to the first IP address.
- Example 12 The system of Example 11, wherein the addressing scheme uses a value of the first IP address to assign a value for the second IP address.
- Example 13 The system of Example 11 or 12, wherein a portion of the second IP address identifies the base virtualization environment, or the portion of the second IP address identifies the nested virtualization environment.
- Example 14 The system of Example 11, 12, or 13, wherein the addressing scheme directly correlates the first IP address with the second IP address.
- Example 15 The system of any of Examples 11-14, further comprising instructions that, when executed by the physical processor, cause the physical processor to: apply a first filter protocol to the base virtualization environment and a second filter protocol to the nested virtualization environment to independently filter network traffic for the base virtualization environment and network traffic for the nested virtualization environment.
- Example 16 The system of any of Examples 11-15, further comprising enforcing a first network policy for the base virtualization environment and a second network policy, different from the first network policy, for the nested virtualization environment.
- Example 17 A non-transitory computer-readable medium that may include one or more computer-executable instructions that, when executed by at least one processor of a computing device, may cause the computing device to: (i) identify a base virtualization environment on a cloud-based software distribution host, (ii) assign, based on an addressing scheme, a first internet protocol (IP) address to the base virtualization environment, (iii) identify a nested virtualization environment running in the base virtualization environment, wherein: the cloud-based software distribution host serves an application running in the nested virtualization environment, and each of the base and nested virtualization environments comprise an isolated application environment that virtualizes at least an operating system (OS), and (iv) assign, based on the addressing scheme, a second IP address to the nested virtualization environment distinct from the first IP address, wherein the addressing scheme correlates the second IP address to the first IP address.
- Example 18 The computer-readable medium of Example 17, wherein a portion of the second IP address identifies the base virtualization environment, or the portion of the second IP address identifies the nested virtualization environment.
- Example 19 The computer-readable medium of Example 17 or 18, wherein the addressing scheme directly correlates the first IP address to the second IP address.
- Example 20 The computer-readable medium of Example 17, 18, or 19, further comprising instructions that, when executed by the at least one processor of the computing device, may cause the computing device to: enforce a first network policy for the base virtualization environment and a second network policy, different from the first network policy, for the nested virtualization environment.
- computing devices and systems described and/or illustrated herein broadly represent any type or form of computing device or system capable of executing computer-readable instructions, such as those contained within the modules described herein.
- these computing device(s) may each include at least one memory device and at least one physical processor.
- the term “memory device” generally refers to any type or form of volatile or non-volatile storage device or medium capable of storing data and/or computer-readable instructions.
- a memory device may store, load, and/or maintain one or more of the modules described herein. Examples of memory devices include, without limitation, Random Access Memory (RAM), Read Only Memory (ROM), flash memory, Hard Disk Drives (HDDs), Solid-State Drives (SSDs), optical disk drives, caches, variations or combinations of one or more of the same, or any other suitable storage memory.
- the term “physical processor” generally refers to any type or form of hardware-implemented processing unit capable of interpreting and/or executing computer-readable instructions.
- a physical processor may access and/or modify one or more modules stored in the above-described memory device.
- Examples of physical processors include, without limitation, microprocessors, microcontrollers, Central Processing Units (CPUs), Field-Programmable Gate Arrays (FPGAs) that implement softcore processors, Application-Specific Integrated Circuits (ASICs), portions of one or more of the same, variations or combinations of one or more of the same, or any other suitable physical processor.
- modules described and/or illustrated herein may represent portions of a single module or application.
- one or more of these modules may represent one or more software applications or programs that, when executed by a computing device, may cause the computing device to perform one or more tasks.
- one or more of the modules described and/or illustrated herein may represent modules stored and configured to run on one or more of the computing devices or systems described and/or illustrated herein.
- One or more of these modules may also represent all or portions of one or more special-purpose computers configured to perform one or more tasks.
- one or more of the modules described herein may transform data, physical devices, and/or representations of physical devices from one form to another.
- one or more of the modules recited herein may receive network address data to be transformed, transform the network address data, use the result of the transformation to assign network addresses, and store the result of the transformation to manage network addresses.
- one or more of the modules recited herein may transform a processor, volatile memory, non-volatile memory, and/or any other portion of a physical computing device from one form to another by executing on the computing device, storing data on the computing device, and/or otherwise interacting with the computing device.
- the term “computer-readable medium” generally refers to any form of device, carrier, or medium capable of storing or carrying computer-readable instructions.
- Examples of computer-readable media include, without limitation, transmission-type media, such as carrier waves, and non-transitory-type media, such as magnetic-storage media (e.g., hard disk drives, tape drives, and floppy disks), optical-storage media (e.g., Compact Disks (CDs), Digital Video Disks (DVDs), and BLU-RAY disks), electronic-storage media (e.g., solid-state drives and flash media), and other distribution systems.
Description
- This application claims the benefit of U.S. Provisional Application No. 63/105,320, filed 25 Oct. 2020, and U.S. Provisional Application No. 63/194,821, filed 28 May 2021, the disclosures of each of which are incorporated, in their entirety, by this reference. Co-pending U.S. application Ser. No. 17/506,640, filed 20 Oct. 2021, is incorporated, in its entirety, by this reference.
- The accompanying drawings illustrate a number of exemplary embodiments and are a part of the specification. Together with the following description, these drawings demonstrate and explain various principles of the present disclosure.
- FIG. 1 is a flow diagram of an exemplary method for dynamic container network management.
- FIG. 2 is a block diagram of an exemplary system for dynamic container network management.
- FIG. 3 is a block diagram of an exemplary network for dynamic container network management.
- FIG. 4 is a block diagram of an exemplary cloud-based application platform.
- FIG. 5 is a block diagram of exemplary network addresses for dynamic container network management.
- Throughout the drawings, identical reference characters and descriptions indicate similar, but not necessarily identical, elements. While the exemplary embodiments described herein are susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and will be described in detail herein. However, the exemplary embodiments described herein are not intended to be limited to the particular forms disclosed. Rather, the present disclosure covers all modifications, equivalents, and alternatives falling within the scope of the appended claims.
- A cloud-based software distribution platform may provide users with cloud-based access to applications running remotely on the platform. The cloud-based software distribution platform may allow a user to use his or her own device to connect with the platform and access applications as if running on the user's device. The platform may further allow the user to run applications regardless of a type or operating system (“OS”) of the user's device as well as an intended operating environment of the application. For example, the user may use a mobile device to run applications designed for a desktop computing environment. Even if the application may not natively be run on the user's device, the platform may provide cloud-based access to the application.
- The cloud-based software distribution platform may provide such inter-device application access via nested containers and/or virtual machines (“VM”). For example, the platform may run one or more containers as base virtualization environments, each of which may host a VM as nested virtualization environments. A container may provide an isolated application environment that virtualizes at least an OS of the base host machine by sharing a system or OS kernel with the host machine. A virtual machine may provide an isolated application environment that virtualizes hardware as well as an OS. Although a VM may be more resource-intensive than a container, a VM may virtualize hardware and/or an OS different from the base host machine.
- In order to scale cloud-based access to applications, the platform may utilize several containers, with a virtual machine running in each container. The use of containers may facilitate scaling of independent virtualized environments, whereas the use of virtual machines may facilitate running applications designed for different application environments. However, network management of the various virtualization environments (e.g., containers and/or VMs) may require assigning network addresses (e.g., internet protocol (“IP”) addresses) to each virtualization environment. A conventional dynamic host configuration protocol (“DHCP”) addressing scheme may assign unique network addresses to each virtualization environment without accounting for the nested architecture of the platform. Thus, a lookup table or other similar additional network address management may be required to manage the network addresses and correlate VMs to their corresponding containers. However, enforcing network policies or logging/investigating network behavior may require additional overhead for using the lookup table.
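To make the overhead concrete, the conventional approach amounts to maintaining a table like the one below (all addresses hypothetical) that every policy check or trace must consult and keep synchronized:

```python
# Conventional DHCP-style assignment gives VM and container addresses no
# structural relationship, so an explicit table must pair them. The
# addresses below are hypothetical.
vm_to_container = {
    "10.23.7.112": "10.4.0.9",
    "10.190.3.41": "10.4.0.10",
}

def container_for(vm_ip: str) -> str:
    """Every correlation costs a table lookup, and the table must be kept
    in sync as virtualization environments start and stop."""
    return vm_to_container[vm_ip]

print(container_for("10.23.7.112"))  # 10.4.0.9
```

The addressing scheme described below removes this table by encoding the pairing in the addresses themselves.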
- The present disclosure is generally directed to dynamic container network management. As will be explained in greater detail below, embodiments of the present disclosure may use an addressing scheme for assigning IP addresses to base virtualization environments and their corresponding nested virtualization environments. The addressing scheme may provide for IP addresses that may correlate, using the IP addresses themselves, the base virtualization environment with the nested virtualization environment. Thus, the addressing scheme described herein may not require using a lookup table to determine which IP address corresponds to which base or nested virtualization environment. The systems and methods described herein may improve the functioning of a computer by providing more efficient network management that may obviate the overhead associated with using a lookup table. In addition, the systems and methods described herein may improve the field of network management by providing an efficient network addressing scheme for nested virtualization environments.
- Features from any of the embodiments described herein may be used in combination with one another in accordance with the general principles described herein. These and other embodiments, features, and advantages will be more fully understood upon reading the following detailed description in conjunction with the accompanying drawings and claims.
- The following will provide, with reference to FIGS. 1-5, detailed descriptions of dynamic container network management. FIG. 1 illustrates a method for dynamic container network management. FIG. 2 illustrates a system for performing the methods described herein. FIG. 3 illustrates a network environment for dynamic container network management. FIG. 4 illustrates a cloud-based software distribution platform. FIG. 5 illustrates an exemplary addressing scheme. -
FIG. 1 is a flow diagram of an exemplary computer-implemented method 100 for dynamic container network management. The steps shown in FIG. 1 may be performed by any suitable computer-executable code and/or computing system, including the system(s) illustrated in FIGS. 2 and/or 3. In one example, each of the steps shown in FIG. 1 may represent an algorithm whose structure includes and/or is represented by multiple sub-steps, examples of which will be provided in greater detail below. - As illustrated in
FIG. 1, at step 102 one or more of the systems described herein may identify a base virtualization environment on a cloud-based software distribution host. The virtualization environment may be an isolated application environment that virtualizes at least an OS. For example, identifying module 206 may identify container 222 (e.g., the base virtualization environment). - In some embodiments, the term “virtualization environment” may refer to an isolated application environment that may virtualize at least some aspects of the application environment such that an application may interface with the virtualized aspects as if running on the application's native environment. Examples of virtualization environments include, without limitation, containers and VMs. In some embodiments, the term “container” may refer to an isolated application environment that virtualizes at least an OS of the base host machine by sharing a system or OS kernel with the host machine. For example, if the base host machine runs Windows (or other desktop OS), the container may also run Windows (or other desktop OS) by sharing the OS kernel such that the container may not require a complete set of OS binaries and libraries. In some embodiments, the term “virtual machine” may refer to an isolated application environment that virtualizes hardware as well as an OS. Because a VM may virtualize hardware, an OS for the VM may not be restricted by the base host machine OS. For example, even if the base host machine is running Windows (or another desktop OS), a VM on the base host machine may be configured to run Android (or other mobile OS) by emulating mobile device hardware. In other examples, other combinations of OSes may be used.
- In some embodiments, the cloud-based software distribution host may host software applications for cloud-based access. Conventionally, software applications, particularly games, are often developed for a specific OS and require porting to run on other OSes. However, the cloud-based software distribution host described herein (also referred to as the cloud-based software distribution platform herein) may provide cloud-based access to games designed for a particular OS on a device running an otherwise incompatible OS for the games. For example, the platform may host a desktop game and allow a mobile device (or other device running an OS that is not supported by the game) to interact with an instance of the desktop game as if running on the mobile device. Similarly, the platform may host a mobile game and allow a desktop computer (or other device running an OS that is not supported by the game) to interact with an instance of the mobile game as if running on the desktop computer. Although the examples herein refer to games as well as OS incompatibility, in other examples the software applications may correspond to any software application that may not be supported or is otherwise incompatible with another computing device, including but not limited to OS, hardware, etc.
- Various systems described herein may perform step 110.
FIG. 2 is a block diagram of an example system 200 for dynamic container network management. As illustrated in this figure, example system 200 may include one or more modules 202 for performing one or more tasks. As will be explained in greater detail herein, modules 202 may include a virtualization module 204, an identifying module 206, an addressing module 208, and a networking module 210. Although illustrated as separate elements, one or more of modules 202 in FIG. 2 may represent portions of a single module or application. - In certain embodiments, one or more of
modules 202 in FIG. 2 may represent one or more software applications or programs that, when executed by a computing device, may cause the computing device to perform one or more tasks. For example, and as will be described in greater detail below, one or more of modules 202 may represent modules stored and configured to run on one or more computing devices, such as the devices illustrated in FIG. 3 (e.g., computing device 302 and/or server 306). One or more of modules 202 in FIG. 2 may also represent all or portions of one or more special-purpose computers configured to perform one or more tasks. - As illustrated in
FIG. 2, example system 200 may also include one or more memory devices, such as memory 240. Memory 240 generally represents any type or form of volatile or non-volatile storage device or medium capable of storing data and/or computer-readable instructions. In one example, memory 240 may store, load, and/or maintain one or more of modules 202. Examples of memory 240 include, without limitation, Random Access Memory (RAM), Read Only Memory (ROM), flash memory, Hard Disk Drives (HDDs), Solid-State Drives (SSDs), optical disk drives, caches, variations or combinations of one or more of the same, and/or any other suitable storage memory. - As illustrated in
FIG. 2, example system 200 may also include one or more physical processors, such as physical processor 230. Physical processor 230 generally represents any type or form of hardware-implemented processing unit capable of interpreting and/or executing computer-readable instructions. In one example, physical processor 230 may access and/or modify one or more of modules 202 stored in memory 240. Additionally or alternatively, physical processor 230 may execute one or more of modules 202 to facilitate dynamic container network management. Examples of physical processor 230 include, without limitation, microprocessors, microcontrollers, Central Processing Units (CPUs), Field-Programmable Gate Arrays (FPGAs) that implement softcore processors, Application-Specific Integrated Circuits (ASICs), portions of one or more of the same, variations or combinations of one or more of the same, and/or any other suitable physical processor. - As illustrated in
FIG. 2, example system 200 may also include one or more additional elements 220, such as a container 222, a container IP address 224, a virtual machine 226, and a VM IP address 228. Container 222, container IP address 224, VM 226, and/or VM IP address 228 may be stored on and/or executed from a local storage device, such as memory 240, or may be accessed remotely. Container 222 may represent a base virtualization environment, as will be explained further below. Container IP address 224 may represent a network address assigned to container 222 according to an addressing scheme described herein. VM 226 may represent a nested virtualization environment running in container 222. VM IP address 228 may represent a network address assigned to VM 226 according to the addressing scheme, as will be explained further below. -
Example system 200 in FIG. 2 may be implemented in a variety of ways. For example, all or a portion of example system 200 may represent portions of example network environment 300 in FIG. 3. -
FIG. 3 illustrates an exemplary network environment 300 implementing aspects of the present disclosure. The network environment 300 includes computing device 302, a network 304, and server 306. Computing device 302 may be a client device or user device, such as a mobile device, a desktop computer, a laptop computer, a tablet device, a smartphone, or other computing device. Computing device 302 may include a physical processor 230, which may be one or more processors, and a memory 240, which may store data such as one or more of additional elements 220 and/or modules 202. -
Server 306 may represent or include one or more servers capable of hosting a cloud-based software distribution platform. Server 306 may provide cloud-based access to software applications running in nested virtualization environments. Server 306 may include a physical processor 230, which may include one or more processors, memory 240, which may store modules 202, and one or more of additional elements 220. -
Computing device 302 may be communicatively coupled to server 306 through network 304. Network 304 may represent any type or form of communication network, such as the Internet, and may comprise one or more physical connections, such as a wired local area network (LAN), and/or wireless connections, such as a wireless wide area network (WWAN). - Returning to
FIG. 1, the systems described herein may perform step 102 in a variety of ways. In one example, the base virtualization environment may correspond to a container (e.g., container 222) that shares an OS kernel with the cloud-based software distribution host and, as will be described further below, the nested virtualization environment may correspond to a virtual machine (e.g., VM 226) running in the container. The base virtualization environment may have been previously initiated and may require an IP address, if not previously assigned, or may require a new IP address, for instance due to changes to a network topology or changes to the nested virtualization environment. Identifying module 206 may identify container 222 as requiring assignment of an IP address. - In some examples, identifying the base virtualization environment may include initiating the base virtualization environment. For example,
virtualization module 204, which may correspond to a virtualization environment management system such as a hypervisor or other virtualization or container management software, may initiate container 222. As part of initiating container 222, identifying module 206 may identify container 222 as requiring assignment of an IP address. -
FIG. 4 illustrates an exemplary cloud-based software distribution platform 400. The platform 400 may include a host 406, a network 404 (which may correspond to network 304), and computing devices 402 and 403. Host 406, which may correspond to server 306, may include containers 440 and 442, which may respectively run a virtual machine 430 and a virtual machine 432. VM 430 may run an application 420 and VM 432 may run an application 422. Host 406 may utilize nested virtualization environments (e.g., VM 430 running in container 440 and VM 432 running in container 442) to more efficiently manage virtualization environments. For instance, as a number of VMs are initiated and/or closed, the nested virtualization may facilitate management of virtualization environments for various types of VMs as well as more efficiently scale the number of VMs running concurrently. Certain aspects that may be global across certain VMs may be better managed via containers. -
Computing device 402, which may correspond to an instance of computing device 302, may access application 420 via network 404. Computing device 403, which may correspond to an instance of computing device 302, may access application 422 via network 404. - Returning to
FIG. 1, at step 104 one or more of the systems described herein may assign, based on an addressing scheme, a first internet protocol (IP) address to the base virtualization environment. For example, addressing module 208 may assign container IP address 224 to container 222.
- In some embodiments, the term "IP address" may refer to a numerical label assigned to a device on a network for identification and location addressing. Examples of IP addresses include, without limitation, static IP addresses, which are fixed and may remain the same each time a system connects to a network, and dynamic IP addresses, which may be reassigned as needed for a network topology.
- The systems described herein may perform
step 104 in a variety of ways. In one example, addressing module 208 may assign container IP address 224 to container 222 based on the addressing scheme described herein. FIG. 5 illustrates a first IP address 500, which may correspond to container IP address 224, and a second IP address 502, which may correspond to VM IP address 228. - As illustrated in
FIG. 5, IP address 500 may include a network identifier 510, a subnet identifier 512, and a host identifier 514. IP address 502 may include a network identifier 520, a subnet identifier 522, and a host identifier 524. In some embodiments, the term "network identifier" may refer to a network number or routing prefix for routing network traffic to associated routers. In some embodiments, the term "subnet identifier" may refer to an identifier for a subnetwork of a particular network (e.g., a network identified by the network identifier). In some embodiments, the term "host identifier" may refer to an identifier for a particular host device on the subnetwork. This host device may correspond to a physical host device as well as to virtual host devices, such as a container, VM, etc. - As will be described further herein, the addressing scheme may correlate
IP address 500 with IP address 502 based on one or more of network identifiers 510 and 520, subnet identifiers 512 and 522, and/or host identifiers 514 and 524. - Turning back to
FIG. 1, at step 106 one or more of the systems described herein may identify a nested virtualization environment running in the base virtualization environment. The cloud-based software distribution host may serve an application running in the nested virtualization environment. For example, identifying module 206 may identify VM 226 running in container 222. In another example, identifying module 206, as part of host 406, may identify VM 430 running in container 440, and/or VM 432 running in container 442. - Host 406 may serve
application 420 via VM 430 and may also serve application 422 via VM 432. As further illustrated in FIG. 4, computing device 402 may access and virtually run application 420 served by host 406. In some examples, application 420 may be an application that is not configured to run in a native application environment of computing device 402. For example, application 420 may be a mobile app for running on a mobile device OS, and computing device 402 may be a desktop computer or a mobile device with an OS incompatible with application 420. However, as seen in FIG. 4, host 406 may run VM 430 capable of running application 420. Host 406 may provide computing device 402 with cloud-based access to application 420, for instance by receiving inputs (e.g., user inputs, device information, commands, etc.) from computing device 402, converting the inputs for use with application 420, and providing outputs (e.g., graphical outputs, commands, etc.) from application 420 to computing device 402. Thus, a user of computing device 402 may use application 420 as if running on computing device 402 even if much of the processing for application 420 is performed on host 406. Similarly, host 406 may provide computing device 403 with cloud-based access to application 422. - The systems described herein may perform
step 106 in a variety of ways. In one example, the nested virtualization environment may have been previously initiated and may require an IP address, if not previously assigned, or may require a new IP address, for instance due to changes to the network topology. Identifying module 206 may identify VM 226 as requiring assignment of an IP address. - In some examples, identifying the nested virtualization environment may include initiating the nested virtualization environment. For example,
virtualization module 204 may initiate VM 226. As part of initiating VM 226, identifying module 206 may identify VM 226 as requiring assignment of an IP address. - At
step 108 one or more of the systems described herein may assign, based on the addressing scheme, a second IP address to the nested virtualization environment distinct from the first IP address. The addressing scheme may correlate the second IP address to the first IP address. For example, addressing module 208 may assign VM IP address 228 to VM 226. - The systems described herein may perform
step 108 in a variety of ways. In one example, the addressing scheme may involve using the first IP address to assign the second IP address. Addressing module 208 may use container IP address 224 to determine a value for VM IP address 228. The value for container IP address 224 may be directly used for assigning the value for VM IP address 228. For example, all or a subset of container IP address 224 may directly identify VM 226. In other examples, the value for container IP address 224 may indirectly identify VM 226. For example, all or a subset of container IP address 224 may be transformed (e.g., with a hash or similar function) to identify VM 226. - Alternatively, addressing
module 208 may use VM IP address 228 to determine a value for container IP address 224. For example, all or a subset of VM IP address 228 may directly or indirectly identify container 222. - As illustrated in
FIG. 5, a subset of first IP address 500 (which may correspond to container IP address 224 or VM IP address 228) may correlate to second IP address 502 (which may correspond to VM IP address 228 or container IP address 224). For example, a subset of first IP address 500 (e.g., subnet identifier 512 and/or host identifier 514) may correspond to at least a portion of second IP address 502. - In some examples, the addressing scheme may reserve a separate subnetwork address range for IP addresses of nested virtualization environments to distinguish them from base virtualization environments. For example,
VM 430 and VM 432 in FIG. 4 may be assigned to a subnetwork address range separate from that of container 440 and container 442. The subnet identifier may indicate whether the IP address corresponds to a virtual machine or a container. In such examples, the host identifiers may match for corresponding containers and VMs. For instance, if first IP address 500 is assigned to container 440 and second IP address 502 is assigned to VM 430, then subnet identifier 512 may indicate that first IP address 500 corresponds to a container and subnet identifier 522 may indicate that second IP address 502 corresponds to a VM. Host identifier 514 may match (e.g., be the same as or otherwise complement) host identifier 524 to indicate a nested pair of virtualization environments.
- A subset or portion of the second IP address may identify the base virtualization environment. Alternatively or additionally, a subset or portion of the second IP address may identify the nested virtualization environment. Thus, using the addressing scheme, the first IP address may directly correlate to the second IP address. Advantageously, the addressing scheme may forego a lookup table for correlating the first IP address with the second IP address.
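The reserved-subnet arrangement just described can be sketched in a few lines. The concrete layout below is purely an illustrative assumption, not part of the disclosure: a /16 network in which the third octet is the subnet identifier (subnet 1 reserved for containers, subnet 2 for VMs) and the fourth octet is the host identifier.

```python
# Illustrative sketch only: octet positions and subnet numbers are
# assumptions for the example, not the disclosed scheme itself.
CONTAINER_SUBNET = 1  # reserved range for base environments (containers)
VM_SUBNET = 2         # reserved range for nested environments (VMs)

def parse(ip: str):
    """Split a dotted-quad address into (network, subnet, host) parts."""
    o = [int(x) for x in ip.split(".")]
    return (o[0], o[1]), o[2], o[3]

def environment_type(ip: str) -> str:
    """Classify an address from its subnet identifier alone -- no lookup table."""
    _, subnet, _ = parse(ip)
    return {CONTAINER_SUBNET: "container", VM_SUBNET: "vm"}[subnet]

def is_nested_pair(container_ip: str, vm_ip: str) -> bool:
    """A nested container/VM pair is indicated by matching host identifiers."""
    c_net, c_sub, c_host = parse(container_ip)
    v_net, v_sub, v_host = parse(vm_ip)
    return (c_net == v_net and c_sub == CONTAINER_SUBNET
            and v_sub == VM_SUBNET and c_host == v_host)
```

Under these assumptions, `environment_type("10.0.1.7")` classifies the address as a container, and `is_nested_pair("10.0.1.7", "10.0.2.7")` recognizes a nested pair because the host identifier (7) matches across the two reserved subnets.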
- By using the addressing scheme described herein, host 406 may more efficiently perform network management functions for
containers 440 and 442 and VMs 430 and 432. For example, host 406 may independently filter network traffic for the base virtualization environments (e.g., containers 440 and 442) and network traffic for the nested virtualization environments (e.g., VMs 430 and 432). Rather than using a lookup table to determine whether a particular IP address corresponds to a container or a VM, a subset of the particular IP address may distinguish between a container and a VM. Because host 406 may distinguish between base virtualization environments and nested virtualization environments using the IP addresses themselves, host 406 may efficiently apply a first filter protocol to base virtualization environments and independently apply a second filter protocol to nested virtualization environments. In addition, host 406 may independently enforce different network policies for base virtualization environments and nested virtualization environments. For example, host 406 may enforce a first network policy for the base virtualization environments and a second network policy for the nested virtualization environments. Additionally, tracing of network behavior may be simplified because a subset of a particular IP address may identify a nested container/VM pair without requiring a lookup table to identify such pairs. - The systems and methods described herein provide dynamic container network management via an addressing scheme that correlates virtual machines to their corresponding containers. A cloud application architecture may run virtual machines on top of an existing container platform for running instances of particular hosting environments. Although the container platform may assign IP addresses for each container, each virtual machine may require its own IP address to facilitate access to external services. A conventional DHCP scheme may assign IP addresses to virtual machines in a way that may not account for the cloud application architecture such that a lookup table may be needed to determine which virtual machine address corresponds to which container address.
Thus, enforcing network policies or logging and investigating network behavior may require using the lookup table. The systems and methods described herein may provide an addressing scheme that may simplify correlation between containers and virtual machines without requiring the lookup table. For example, virtual machine addresses may be assigned to a separate subnetwork address range to facilitate independent filtering of container program traffic and virtual machine traffic. In addition, the addressing scheme may allow determining the address of a container from the address of the corresponding virtual machine and vice versa.
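The bidirectional derivation summarized above can be sketched as follows. As before, the /16 layout, the particular subnet numbers, and the function names are assumptions made for illustration; they show one way a peer address could be computed by rewriting the subnet identifier while keeping the host identifier, so that no lookup table is needed in either direction.

```python
# Hypothetical sketch of the direct correlation between container and VM
# addresses. Octet layout and subnet numbers are illustrative assumptions.
CONTAINER_SUBNET = 1
VM_SUBNET = 2

def vm_ip_for(container_ip: str) -> str:
    """Derive the nested VM's address from its container's address."""
    a, b, subnet, host = (int(x) for x in container_ip.split("."))
    if subnet != CONTAINER_SUBNET:
        raise ValueError("not a container address")
    return f"{a}.{b}.{VM_SUBNET}.{host}"  # swap subnet, keep host identifier

def container_ip_for(vm_ip: str) -> str:
    """Derive the base container's address from its VM's address."""
    a, b, subnet, host = (int(x) for x in vm_ip.split("."))
    if subnet != VM_SUBNET:
        raise ValueError("not a VM address")
    return f"{a}.{b}.{CONTAINER_SUBNET}.{host}"
```

Because the mapping is a pure function of the address itself, it is trivially invertible: `container_ip_for(vm_ip_for(x))` returns `x` for any container address, which is the "vice versa" property noted above.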
- Example 1: A computer-implemented method may include: (i) identifying a base virtualization environment on a cloud-based software distribution host, (ii) assigning, based on an addressing scheme, a first internet protocol (IP) address to the base virtualization environment, (iii) identifying a nested virtualization environment running in the base virtualization environment, wherein: the cloud-based software distribution host serves an application running in the nested virtualization environment, and each of the base and nested virtualization environments comprise an isolated application environment that virtualizes at least an operating system (OS), and (iv) assigning, based on the addressing scheme, a second IP address to the nested virtualization environment distinct from the first IP address, wherein the addressing scheme correlates the second IP address to the first IP address.
- Example 2: The method of Example 1, wherein the addressing scheme uses a value of the first IP address to assign a value for the second IP address.
- Example 3: The method of Example 1 or 2, wherein a portion of the second IP address identifies the base virtualization environment.
- Example 4: The method of Example 1, 2, or 3, wherein a portion of the second IP address identifies the nested virtualization environment.
- Example 5: The method of any of Examples 1-4, wherein the addressing scheme directly correlates the first IP address to the second IP address.
- Example 6: The method of any of Examples 1-5, wherein the second IP address includes a subnetwork address based on a separate subnetwork address range reserved by the addressing scheme for IP addresses of nested virtualization environments.
- Example 7: The method of any of Examples 1-6, further comprising applying a first filter protocol to the base virtualization environment and a second filter protocol to the nested virtualization environment to independently filter network traffic for the base virtualization environment and network traffic for the nested virtualization environment.
- Example 8: The method of any of Examples 1-7, further comprising enforcing a first network policy for the base virtualization environment and a second network policy, different from the first network policy, for the nested virtualization environment.
- Example 9: The method of any of Examples 1-8, wherein the base virtualization environment corresponds to a container that shares an OS kernel with the cloud-based software distribution host and the nested virtualization environment corresponds to a virtual machine (VM).
- Example 10: The method of any of Examples 1-9, wherein the VM corresponds to a mobile OS environment, the application corresponds to a mobile game, and the cloud-based software distribution host provides cloud-based access to an instance of the mobile game.
- Example 11: A system may include: at least one physical processor, physical memory comprising computer-executable instructions that, when executed by the physical processor, may cause the physical processor to: (i) identify a base virtualization environment on a cloud-based software distribution host, (ii) assign, based on an addressing scheme, a first internet protocol (IP) address to the base virtualization environment, (iii) identify a nested virtualization environment running in the base virtualization environment, wherein: the cloud-based software distribution host serves an application running in the nested virtualization environment, and each of the base and nested virtualization environments comprise an isolated application environment that virtualizes at least an operating system (OS), and (iv) assign, based on the addressing scheme, a second IP address to the nested virtualization environment distinct from the first IP address, wherein the addressing scheme correlates the second IP address to the first IP address.
- Example 12: The system of Example 11, wherein the addressing scheme uses a value of the first IP address to assign a value for the second IP address.
- Example 13: The system of Example 11 or 12, wherein a portion of the second IP address identifies the base virtualization environment, or the portion of the second IP address identifies the nested virtualization environment.
- Example 14: The system of Example 11, 12, or 13, wherein the addressing scheme directly correlates the first IP address with the second IP address.
- Example 15: The system of any of Examples 11-14, further comprising instructions that, when executed by the physical processor, cause the physical processor to: apply a first filter protocol to the base virtualization environment and a second filter protocol to the nested virtualization environment to independently filter network traffic for the base virtualization environment and network traffic for the nested virtualization environment.
- Example 16: The system of any of Examples 11-15, further comprising enforcing a first network policy for the base virtualization environment and a second network policy, different from the first network policy, for the nested virtualization environment.
- Example 17: A non-transitory computer-readable medium that may include one or more computer-executable instructions that, when executed by at least one processor of a computing device, may cause the computing device to: (i) identify a base virtualization environment on a cloud-based software distribution host, (ii) assign, based on an addressing scheme, a first internet protocol (IP) address to the base virtualization environment, (iii) identify a nested virtualization environment running in the base virtualization environment, wherein: the cloud-based software distribution host serves an application running in the nested virtualization environment, and each of the base and nested virtualization environments comprise an isolated application environment that virtualizes at least an operating system (OS), and (iv) assign, based on the addressing scheme, a second IP address to the nested virtualization environment distinct from the first IP address, wherein the addressing scheme correlates the second IP address to the first IP address.
- Example 18: The non-transitory computer-readable medium of Example 17, wherein a portion of the second IP address identifies the base virtualization environment, or the portion of the second IP address identifies the nested virtualization environment.
- Example 19: The non-transitory computer-readable medium of Example 17 or 18, wherein the addressing scheme directly correlates the first IP address to the second IP address.
- Example 20: The non-transitory computer-readable medium of Example 17, 18, or 19, further comprising instructions that, when executed by the at least one processor of the computing device, may cause the computing device to: enforce a first network policy for the base virtualization environment and a second network policy, different from the first network policy, for the nested virtualization environment.
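The per-environment filtering and policy enforcement recited in Examples 7-8 and 15-16 can be illustrated with a small dispatch sketch. The subnet numbering, policy names, and dictionary shape are all invented for this example; the point is only that the subnet identifier alone selects the policy, with no container/VM lookup table.

```python
# Illustrative policy dispatch keyed on the subnet identifier.
# Subnet numbers and policy contents are hypothetical.
CONTAINER_SUBNET, VM_SUBNET = 1, 2

POLICIES = {
    "container": {"filter": "host-side-filter", "egress": "restricted"},
    "vm":        {"filter": "guest-traffic-filter", "egress": "nat-only"},
}

def policy_for(src_ip: str) -> dict:
    """Select a filter protocol / network policy from the address itself."""
    subnet = int(src_ip.split(".")[2])  # third octet assumed to be subnet id
    kind = "container" if subnet == CONTAINER_SUBNET else "vm"
    return POLICIES[kind]
```

A host applying this sketch would route container traffic (subnet 1) through one filter protocol and VM traffic (subnet 2) through another, enforcing the two independent network policies described in the examples.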
- As detailed above, the computing devices and systems described and/or illustrated herein broadly represent any type or form of computing device or system capable of executing computer-readable instructions, such as those contained within the modules described herein. In their most basic configuration, these computing device(s) may each include at least one memory device and at least one physical processor.
- In some examples, the term “memory device” generally refers to any type or form of volatile or non-volatile storage device or medium capable of storing data and/or computer-readable instructions. In one example, a memory device may store, load, and/or maintain one or more of the modules described herein. Examples of memory devices include, without limitation, Random Access Memory (RAM), Read Only Memory (ROM), flash memory, Hard Disk Drives (HDDs), Solid-State Drives (SSDs), optical disk drives, caches, variations or combinations of one or more of the same, or any other suitable storage memory.
- In some examples, the term “physical processor” generally refers to any type or form of hardware-implemented processing unit capable of interpreting and/or executing computer-readable instructions. In one example, a physical processor may access and/or modify one or more modules stored in the above-described memory device. Examples of physical processors include, without limitation, microprocessors, microcontrollers, Central Processing Units (CPUs), Field-Programmable Gate Arrays (FPGAs) that implement softcore processors, Application-Specific Integrated Circuits (ASICs), portions of one or more of the same, variations or combinations of one or more of the same, or any other suitable physical processor.
- Although illustrated as separate elements, the modules described and/or illustrated herein may represent portions of a single module or application. In addition, in certain embodiments one or more of these modules may represent one or more software applications or programs that, when executed by a computing device, may cause the computing device to perform one or more tasks. For example, one or more of the modules described and/or illustrated herein may represent modules stored and configured to run on one or more of the computing devices or systems described and/or illustrated herein. One or more of these modules may also represent all or portions of one or more special-purpose computers configured to perform one or more tasks.
- In addition, one or more of the modules described herein may transform data, physical devices, and/or representations of physical devices from one form to another. For example, one or more of the modules recited herein may receive network address data to be transformed, transform the network address data, use the result of the transformation to assign network addresses, and store the result of the transformation to manage network addresses. Additionally or alternatively, one or more of the modules recited herein may transform a processor, volatile memory, non-volatile memory, and/or any other portion of a physical computing device from one form to another by executing on the computing device, storing data on the computing device, and/or otherwise interacting with the computing device.
- In some embodiments, the term “computer-readable medium” generally refers to any form of device, carrier, or medium capable of storing or carrying computer-readable instructions. Examples of computer-readable media include, without limitation, transmission-type media, such as carrier waves, and non-transitory-type media, such as magnetic-storage media (e.g., hard disk drives, tape drives, and floppy disks), optical-storage media (e.g., Compact Disks (CDs), Digital Video Disks (DVDs), and BLU-RAY disks), electronic-storage media (e.g., solid-state drives and flash media), and other distribution systems.
- The process parameters and sequence of the steps described and/or illustrated herein are given by way of example only and can be varied as desired. For example, while the steps illustrated and/or described herein may be shown or discussed in a particular order, these steps do not necessarily need to be performed in the order illustrated or discussed. The various exemplary methods described and/or illustrated herein may also omit one or more of the steps described or illustrated herein or include additional steps in addition to those disclosed.
- The preceding description has been provided to enable others skilled in the art to best utilize various aspects of the exemplary embodiments disclosed herein. This exemplary description is not intended to be exhaustive or to be limited to any precise form disclosed. Many modifications and variations are possible without departing from the spirit and scope of the present disclosure. The embodiments disclosed herein should be considered in all respects illustrative and not restrictive. Reference should be made to the appended claims and their equivalents in determining the scope of the present disclosure.
- Unless otherwise noted, the terms “connected to” and “coupled to” (and their derivatives), as used in the specification and claims, are to be construed as permitting both direct and indirect (i.e., via other elements or components) connection. In addition, the terms “a” or “an,” as used in the specification and claims, are to be construed as meaning “at least one of.” Finally, for ease of use, the terms “including” and “having” (and their derivatives), as used in the specification and claims, are interchangeable with and have the same meaning as the word “comprising.”
Claims (20)
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/507,359 US20220129296A1 (en) | 2020-10-25 | 2021-10-21 | Service network approach for dynamic container network management |
EP21816576.9A EP4232897A1 (en) | 2020-10-25 | 2021-10-23 | Service network approach for dynamic container network management |
PCT/US2021/056373 WO2022087503A1 (en) | 2020-10-25 | 2021-10-23 | Service network approach for dynamic container network management |
CN202180073112.8A CN116830084A (en) | 2020-10-25 | 2021-10-23 | Service network method for dynamic container network management |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US202063105320P | 2020-10-25 | 2020-10-25 | |
US202163194821P | 2021-05-28 | 2021-05-28 | |
US17/507,359 US20220129296A1 (en) | 2020-10-25 | 2021-10-21 | Service network approach for dynamic container network management |
Publications (1)
Publication Number | Publication Date |
---|---|
US20220129296A1 true US20220129296A1 (en) | 2022-04-28 |
Family
ID=81257183
Family Applications (8)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/506,640 Pending US20220129295A1 (en) | 2020-10-25 | 2021-10-20 | Server-side hosted environment for a cloud gaming system |
US17/507,310 Abandoned US20220131943A1 (en) | 2020-10-25 | 2021-10-21 | Session reconnects and dynamic resource allocation |
US17/507,303 Active US11583768B2 (en) | 2020-10-25 | 2021-10-21 | Systems and methods for secure concurrent streaming of applications |
US17/507,041 Abandoned US20230336624A1 (en) | 2020-10-25 | 2021-10-21 | Persistent storage overlay |
US17/507,292 Active US11638870B2 (en) | 2020-10-25 | 2021-10-21 | Systems and methods for low-latency initialization of streaming applications |
US17/507,359 Pending US20220129296A1 (en) | 2020-10-25 | 2021-10-21 | Service network approach for dynamic container network management |
US17/507,299 Abandoned US20220126203A1 (en) | 2020-10-25 | 2021-10-21 | Systems and methods for distributing compiled shaders |
US17/508,293 Pending US20240269549A1 (en) | 2020-10-25 | 2021-10-22 | Systems and methods for measuring input latency for cloud gaming applications |
Family Applications Before (5)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/506,640 Pending US20220129295A1 (en) | 2020-10-25 | 2021-10-20 | Server-side hosted environment for a cloud gaming system |
US17/507,310 Abandoned US20220131943A1 (en) | 2020-10-25 | 2021-10-21 | Session reconnects and dynamic resource allocation |
US17/507,303 Active US11583768B2 (en) | 2020-10-25 | 2021-10-21 | Systems and methods for secure concurrent streaming of applications |
US17/507,041 Abandoned US20230336624A1 (en) | 2020-10-25 | 2021-10-21 | Persistent storage overlay |
US17/507,292 Active US11638870B2 (en) | 2020-10-25 | 2021-10-21 | Systems and methods for low-latency initialization of streaming applications |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/507,299 Abandoned US20220126203A1 (en) | 2020-10-25 | 2021-10-21 | Systems and methods for distributing compiled shaders |
US17/508,293 Pending US20240269549A1 (en) | 2020-10-25 | 2021-10-22 | Systems and methods for measuring input latency for cloud gaming applications |
Country Status (4)
Country | Link |
---|---|
US (8) | US20220129295A1 (en) |
EP (6) | EP4232899A1 (en) |
CN (6) | CN116964559A (en) |
WO (6) | WO2022087500A1 (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11921592B2 (en) * | 2020-07-20 | 2024-03-05 | Google Llc | Restoration of a computing session |
US11803413B2 (en) * | 2020-12-03 | 2023-10-31 | International Business Machines Corporation | Migrating complex legacy applications |
US20230116110A1 (en) * | 2021-10-08 | 2023-04-13 | BlueStack Systems, Inc. | Methods, Systems and Computer Program Products for Selective Routing of Software Instructions Between a Client Device and a Cloud Services Server |
CN114996004B (en) * | 2022-05-30 | 2024-06-28 | 杭州迪普科技股份有限公司 | Method and device for continuously deleting session |
US11984999B2 (en) | 2022-09-12 | 2024-05-14 | International Business Machines Corporation | Smarter collaborative conferences |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150081764A1 (en) * | 2013-09-13 | 2015-03-19 | Curious Olive, Inc. | Remote Virtualization of Mobile Apps |
US20170170990A1 (en) * | 2015-12-15 | 2017-06-15 | Microsoft Technology Licensing, Llc | Scalable Tenant Networks |
US20200167175A1 (en) * | 2018-11-26 | 2020-05-28 | Red Hat, Inc. | Filtering based containerized virtual machine networking |
Family Cites Families (50)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6363409B1 (en) | 1995-04-24 | 2002-03-26 | Microsoft Corporation | Automatic client/server translation and execution of non-native applications |
US7548238B2 (en) * | 1997-07-02 | 2009-06-16 | Nvidia Corporation | Computer graphics shader systems and methods |
US20070174429A1 (en) | 2006-01-24 | 2007-07-26 | Citrix Systems, Inc. | Methods and servers for establishing a connection between a client system and a virtual machine hosting a requested computing environment |
US20100146506A1 (en) * | 2008-12-08 | 2010-06-10 | Electronics And Telecommunications Research Institute | SYSTEM AND METHOD FOR OFFERING SYSTEM ON DEMAND (SoD) VIRTUAL-MACHINE |
US8410994B1 (en) | 2010-08-23 | 2013-04-02 | Matrox Graphics Inc. | System and method for remote graphics display |
KR102003007B1 (en) | 2010-09-13 | 2019-07-23 | 소니 인터랙티브 엔터테인먼트 아메리카 엘엘씨 | A Method and System of Providing a Computer Game at a Computer Game System Including a Video Server and a Game Server |
JP5520190B2 (en) * | 2010-10-20 | 2014-06-11 | 株式会社ソニー・コンピュータエンタテインメント | Image processing system, image processing method, moving image transmitting apparatus, moving image receiving apparatus, program, and information storage medium |
JP2012125451A (en) * | 2010-12-16 | 2012-07-05 | Sony Computer Entertainment Inc | Game system, method for controlling the game system, program, and information storage medium |
US9412193B2 (en) * | 2011-06-01 | 2016-08-09 | Apple Inc. | Run-time optimized shader program |
US9773344B2 (en) * | 2012-01-11 | 2017-09-26 | Nvidia Corporation | Graphics processor clock scaling based on idle time |
JP5620433B2 (en) * | 2012-04-30 | 2014-11-05 | 泰章 岩井 | Information processing system and information processing method |
US9152449B2 (en) | 2012-07-13 | 2015-10-06 | International Business Machines Corporation | Co-location of virtual machines with nested virtualization |
US11181938B2 (en) * | 2012-08-31 | 2021-11-23 | Blue Goji Llc | Full body movement control of dual joystick operated devices |
DE112013005688B4 (en) * | 2012-11-28 | 2024-07-18 | Nvidia Corporation | Apparatus for providing graphics processing and network-attached GPU device |
US9566505B2 (en) * | 2012-12-27 | 2017-02-14 | Sony Interactive Entertainment America Llc | Systems and methods for generating and sharing video clips of cloud-provisioned games |
US20140196054A1 (en) * | 2013-01-04 | 2014-07-10 | International Business Machines Corporation | Ensuring performance of a computing system |
US20140274408A1 (en) * | 2013-03-14 | 2014-09-18 | Zynga Inc. | Methods and systems for provisioning a game container within a cloud computing system |
US9295915B2 (en) | 2013-05-20 | 2016-03-29 | Microsoft Technology Licensing, Llc | Game availability in a remote gaming environment |
US9304877B2 (en) * | 2014-01-24 | 2016-04-05 | International Business Machines Corporation | Mobile agent based memory replication |
US10296391B2 (en) * | 2014-06-30 | 2019-05-21 | Microsoft Technology Licensing, Llc | Assigning a player to a machine |
US10007965B2 (en) * | 2014-12-16 | 2018-06-26 | Intel Corporation | Dynamic kernel modification for graphics processing units |
WO2016144657A1 (en) * | 2015-03-06 | 2016-09-15 | Sony Computer Entertainment America Llc | Predictive instant play for an application over the cloud |
US10062181B1 (en) | 2015-07-30 | 2018-08-28 | Teradici Corporation | Method and apparatus for rasterizing and encoding vector graphics |
MX2018001256A (en) | 2015-07-30 | 2018-09-28 | Wix Com Ltd | System integrating a mobile device application creation, editing and distribution system with a website design system. |
US10268493B2 (en) | 2015-09-22 | 2019-04-23 | Amazon Technologies, Inc. | Connection-based resource management for virtual desktop instances |
US10019360B2 (en) * | 2015-09-26 | 2018-07-10 | Intel Corporation | Hardware predictor using a cache line demotion instruction to reduce performance inversion in core-to-core data transfers |
US10037221B2 (en) * | 2015-12-28 | 2018-07-31 | Amazon Technologies, Inc. | Management of virtual desktop instance pools |
JP2017174038A (en) | 2016-03-23 | 2017-09-28 | 富士通株式会社 | Information processing system, information processing method, and program |
US10972574B2 (en) * | 2016-04-27 | 2021-04-06 | Seven Bridges Genomics Inc. | Methods and systems for stream-processing of biomedical data |
US11327566B2 (en) * | 2019-03-29 | 2022-05-10 | Facebook Technologies, Llc | Methods and apparatuses for low latency body state prediction based on neuromuscular data |
CN108023167A (en) * | 2016-11-04 | 2018-05-11 | 深圳富泰宏精密工业有限公司 | The radio communication device of antenna structure and the application antenna structure |
US10049426B2 (en) | 2017-01-03 | 2018-08-14 | Qualcomm Incorporated | Draw call visibility stream |
US10341198B2 (en) | 2017-03-17 | 2019-07-02 | Verizon Patent And Licensing Inc. | Configuring a back-end container and a corresponding front-end proxy container on a network device |
US10491666B2 (en) | 2017-04-03 | 2019-11-26 | Sony Interactive Entertainment America Llc | Systems and methods for using a distributed game engine |
US20180321981A1 (en) * | 2017-05-04 | 2018-11-08 | Huawei Technologies Co., Ltd. | System and method for self organizing data center |
US10838920B2 (en) * | 2017-05-05 | 2020-11-17 | Esoptra NV | Plug-in function platform and methods |
US10610779B2 (en) | 2017-06-19 | 2020-04-07 | Sony Interactive Entertainment LLC | Methods and systems for scheduling game play of a video game |
US10721214B2 (en) * | 2017-10-18 | 2020-07-21 | Citrix Systems, Inc. | Method to track SSL session states for SSL optimization of SaaS based applications |
US10668378B2 (en) | 2018-01-26 | 2020-06-02 | Valve Corporation | Distributing shaders between client machines for precaching |
US10560349B2 (en) | 2018-01-31 | 2020-02-11 | Salesforce.Com, Inc. | Data consistency of policy enforcement for distributed applications |
US11077364B2 (en) * | 2018-04-02 | 2021-08-03 | Google Llc | Resolution-based scaling of real-time interactive graphics |
EP3701489B1 (en) * | 2018-04-10 | 2022-10-26 | Google LLC | Memory management in gaming rendering |
US10848571B2 (en) | 2018-09-24 | 2020-11-24 | Citrix Systems, Inc. | Systems and methods for consistent enforcement policy across different SaaS applications via embedded browser |
US11077362B2 (en) | 2018-12-03 | 2021-08-03 | Sony Interactive Entertainment LLC | Machine learning driven resource allocation |
WO2020141427A2 (en) * | 2019-01-02 | 2020-07-09 | BlueStack Systems, Inc. | Methods, systems and computer program products for optimizing computer system resource utilization during in-game resource farming |
US10908771B2 (en) | 2019-01-31 | 2021-02-02 | Rypplzz, Inc. | Systems and methods for augmented reality with precise tracking |
US10918941B2 (en) * | 2019-03-27 | 2021-02-16 | Electronic Arts Inc. | Predictive execution of distributed game engines |
US11297116B2 (en) | 2019-12-04 | 2022-04-05 | Roblox Corporation | Hybrid streaming |
US20210208918A1 (en) | 2020-01-07 | 2021-07-08 | Citrix Systems, Inc. | Intelligent session timeouts for virtual workspace |
US11418852B2 (en) | 2020-05-28 | 2022-08-16 | Nvidia Corporation | Detecting latency anomalies from pipeline components in cloud-based systems |
- 2021
- 2021-10-20 US US17/506,640 patent/US20220129295A1/en active Pending
- 2021-10-21 US US17/507,310 patent/US20220131943A1/en not_active Abandoned
- 2021-10-21 US US17/507,303 patent/US11583768B2/en active Active
- 2021-10-21 US US17/507,041 patent/US20230336624A1/en not_active Abandoned
- 2021-10-21 US US17/507,292 patent/US11638870B2/en active Active
- 2021-10-21 US US17/507,359 patent/US20220129296A1/en active Pending
- 2021-10-21 US US17/507,299 patent/US20220126203A1/en not_active Abandoned
- 2021-10-22 US US17/508,293 patent/US20240269549A1/en active Pending
- 2021-10-23 CN CN202180073109.6A patent/CN116964559A/en active Pending
- 2021-10-23 WO PCT/US2021/056370 patent/WO2022087500A1/en active Application Filing
- 2021-10-23 WO PCT/US2021/056369 patent/WO2022087499A1/en active Application Filing
- 2021-10-23 EP EP21807458.1A patent/EP4232899A1/en not_active Withdrawn
- 2021-10-23 CN CN202180073085.4A patent/CN116802610A/en active Pending
- 2021-10-23 EP EP21810213.5A patent/EP4232900A1/en not_active Withdrawn
- 2021-10-23 EP EP21816576.9A patent/EP4232897A1/en not_active Withdrawn
- 2021-10-23 CN CN202180073074.6A patent/CN116802604A/en active Pending
- 2021-10-23 EP EP21811187.0A patent/EP4232896A1/en not_active Withdrawn
- 2021-10-23 WO PCT/US2021/056387 patent/WO2022087514A1/en active Application Filing
- 2021-10-23 WO PCT/US2021/056371 patent/WO2022087501A1/en unknown
- 2021-10-23 EP EP21810490.9A patent/EP4232901A1/en not_active Withdrawn
- 2021-10-23 WO PCT/US2021/056373 patent/WO2022087503A1/en active Application Filing
- 2021-10-23 EP EP21811188.8A patent/EP4232902A1/en not_active Withdrawn
- 2021-10-23 CN CN202180073112.8A patent/CN116830084A/en active Pending
- 2021-10-23 WO PCT/US2021/056372 patent/WO2022087502A1/en active Application Filing
- 2021-10-23 CN CN202180073114.7A patent/CN116348854A/en active Pending
- 2021-10-23 CN CN202180073113.2A patent/CN116802611A/en active Pending
Also Published As
Publication number | Publication date |
---|---|
US20220129295A1 (en) | 2022-04-28 |
EP4232900A1 (en) | 2023-08-30 |
WO2022087502A1 (en) | 2022-04-28 |
CN116348854A8 (en) | 2023-09-22 |
WO2022087499A1 (en) | 2022-04-28 |
EP4232896A1 (en) | 2023-08-30 |
US20220126203A1 (en) | 2022-04-28 |
CN116830084A (en) | 2023-09-29 |
WO2022087503A1 (en) | 2022-04-28 |
CN116802611A (en) | 2023-09-22 |
WO2022087500A1 (en) | 2022-04-28 |
US11583768B2 (en) | 2023-02-21 |
CN116964559A (en) | 2023-10-27 |
WO2022087514A1 (en) | 2022-04-28 |
WO2022087501A1 (en) | 2022-04-28 |
EP4232902A1 (en) | 2023-08-30 |
US11638870B2 (en) | 2023-05-02 |
CN116802604A (en) | 2023-09-22 |
EP4232901A1 (en) | 2023-08-30 |
CN116802610A (en) | 2023-09-22 |
EP4232899A1 (en) | 2023-08-30 |
US20220126202A1 (en) | 2022-04-28 |
US20220131943A1 (en) | 2022-04-28 |
CN116348854A (en) | 2023-06-27 |
EP4232897A1 (en) | 2023-08-30 |
US20230336624A1 (en) | 2023-10-19 |
US20240269549A1 (en) | 2024-08-15 |
US20220126199A1 (en) | 2022-04-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20220129296A1 (en) | Service network approach for dynamic container network management | |
US10333889B2 (en) | Central namespace controller for multi-tenant cloud environments | |
US10320674B2 (en) | Independent network interfaces for virtual network environments | |
US10701139B2 (en) | Life cycle management method and apparatus | |
US8693485B2 (en) | Virtualization aware network switch | |
US11294735B2 (en) | Method and apparatus for accessing desktop cloud virtual machine, and desktop cloud controller | |
US20170024224A1 (en) | Dynamic snapshots for sharing network boot volumes | |
WO2017157156A1 (en) | Method and apparatus for processing user requests | |
US11882184B2 (en) | Reuse of execution environments while guaranteeing isolation in serverless computing | |
EP3276490B1 (en) | Extension of a private cloud end-point group to a public cloud | |
CN101924693A (en) | Be used for method and system in migrating processes between virtual machines | |
WO2016092386A1 (en) | Fast initiation of workloads using memory-resident post-boot snapshots | |
US9697144B1 (en) | Quality of service enforcement and data security for containers accessing storage | |
US11036535B2 (en) | Data storage method and apparatus | |
US12088430B2 (en) | Systems and methods for preserving system contextual information in an encapsulated packet | |
US9882873B2 (en) | MAC address allocation for virtual machines | |
WO2020247235A1 (en) | Managed computing resource placement as a service for dedicated hosts | |
US11785054B2 (en) | Deriving system architecture from security group relationships | |
US10949234B2 (en) | Device pass-through for virtualized environments | |
US10931581B2 (en) | MAC learning in a multiple virtual switch environment | |
US10853129B1 (en) | Accelerator based inference service | |
US20150081909A1 (en) | Secure public connectivity to virtual machines of a cloud computing environment | |
US10250696B2 (en) | Preserving stateful network connections between virtual machines | |
US20220012081A1 (en) | System and method to support port mapping for virtual machine based container |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
| AS | Assignment | Owner name: FACEBOOK, INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:OKAMOTO, JACOB MATTHEW;ZHANG, QUNSHU;OU, YANGPENG;AND OTHERS;SIGNING DATES FROM 20211028 TO 20211101;REEL/FRAME:058618/0902 |
| AS | Assignment | Owner name: META PLATFORMS, INC., CALIFORNIA Free format text: CHANGE OF NAME;ASSIGNOR:FACEBOOK, INC.;REEL/FRAME:058964/0049 Effective date: 20211028 |
| STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |