CN108604992B - System and method for software defined switching between lightweight virtual machines using host kernel resources - Google Patents


Info

Publication number
CN108604992B
CN108604992B (application CN201780009111.0A)
Authority
CN
China
Prior art keywords
message
node
data packet
destination
crossbar
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201780009111.0A
Other languages
Chinese (zh)
Other versions
CN108604992A (en)
Inventor
拉各韦卓·克沙瓦穆西
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Publication of CN108604992A publication Critical patent/CN108604992A/en
Application granted granted Critical
Publication of CN108604992B publication Critical patent/CN108604992B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 Data switching networks
    • H04L 12/02 Details
    • H04L 12/04 Switchboards
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G06F 2009/45595 Network integration; Enabling network access in virtual machine instances

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present invention provides an application-level crossbar that allows communication across containers/lightweight virtual machines. The application-level crossbar, or software-defined switch, provides a uniform communication interface for the application layer, abstracting the details of connection management and message sending and receiving. The software-defined switch/application-level crossbar utilizes domain sockets, which can connect network namespaces without compromising isolation, improving the message exchange performance of virtual machines within a host.

Description

System and method for software defined switching between lightweight virtual machines using host kernel resources
Technical Field
The subject matter described herein relates generally to communication data processing, and more particularly, to improving performance of network capabilities in lightweight virtual machines.
Background
In the computer field, a virtual machine (VM) is an emulation of a particular computer system. Virtual machines operate based on the computer architecture and functionality of real or hypothetical computers, and their implementation may involve dedicated hardware, software, or a combination of both. Running virtual machines has well-known benefits: they make better use of hardware, are easier to back up and migrate, and isolate workloads from one another. However, running virtual machines also has drawbacks. Virtual machine images are bulky. More importantly, virtual machines require significant resources because they emulate hardware and run a full operating system stack.
Linux containers provide a lightweight alternative to full virtual machines while retaining many of their advantages. A container is an operating-system-level virtual environment for running multiple independent Linux systems, sometimes referred to as lightweight virtualization or lightweight virtual machines, on a single Linux host. runC, Docker, and Warden are examples of lightweight virtual machine (container) runtimes that can be used to build a "Platform as a Service" (PaaS). runC/Docker/Warden are schemes based on Linux kernel namespaces and cgroups that hide complex kernel APIs behind easy-to-use consoles/APIs/image formats and provide on-demand abstractions for compute, storage, and network capabilities.
Container networking can be thought of roughly as creating a consistent network environment for a set of containers. This may be accomplished using overlay networks, which exist in a variety of implementations, such as Docker's default network mode, Weave, Flannel, and SocketPlane. The main advantage of all these overlay networks is that applications can be deployed as-is, without changing application code. How the network namespace connects to physical network devices is another part of container networking.
There are a number of Linux kernel mechanisms that allow a network namespace to communicate with network hardware, such as veth, Open vSwitch, and domain sockets. Namespaces are a feature of the Linux kernel that separates sets of processes so that they cannot "see" the resources of other sets. According to the Linux kernel documentation on network namespaces, network namespaces may communicate via a veth pair and/or a domain socket without compromising the level of network isolation. Fig. 1(a) shows network namespaces communicating using a veth pair. A veth pair is an Ethernet-like virtual device that can be used within a container; it captures Ethernet frames, which may be sent to a destination through a bridge or router. Domain sockets are an efficient, lightweight inter-process communication (IPC) mechanism. FIG. 1(b) shows network namespaces communicating using domain sockets. Domain sockets can be access-controlled through file permissions, making them more secure than TCP ports, which anyone can connect to and which therefore need further security protection.
Fig. 2 shows the network setup in Docker and illustrates the general approach used by virtual machines/containers to solve the networking problem: Ethernet packets are captured from a veth or other virtual device and then sent through a bridge/router to the required containers on the same host or on different hosts. Fig. 3 shows the internal arrangement in Docker. As shown in FIG. 3, each generated/instantiated container connects to the Linux bridge using a veth pair. The container end of the veth pair is called eth0, and the end on the Linux bridge side is called vethXX, such as vethab or vethabb. A similar veth pair connects the host to the Linux bridge.
As is generally known, bridges operate at Layer 2 (L2) and are protocol independent. The L2 network layer is responsible for physical addressing, error correction, and preparing information for the medium. A bridge connects two separate segments in a protocol-independent manner: data packets are forwarded based on the Ethernet address rather than the IP address (as a router does). Because forwarding is done at L2, all protocols can pass through the bridge. All network traffic goes through the Linux bridge or another configured bridge such as Open vSwitch (OVS). IP tables (as shown in fig. 3) are used so that each container port can be mapped to a host port. The IP address assigned to a generated/instantiated container changes each time the container is restarted, which is also a problem, because services in other containers must obtain the new IP address to reach its traffic.
FIG. 4 shows communications in an overlay network or L2/L3 scheme. An overlay network is a computer network built on top of another network. Nodes in an overlay network may be viewed as connected by virtual or logical links, each of which corresponds to a path in the underlying network that may traverse many physical links. An L2-based scheme implies that L2 frames are captured from a virtual Ethernet-like device and then sent over another transport mechanism such as TCP/UDP/secure TCP.
Fig. 4 shows the prior-art method of capturing a data packet at the Ethernet level (at a virtual Ethernet device) and then sending it over a transport such as TCP/UDP. This is called an overlay network or L2/L3 scheme. However, the method shown in fig. 4 has problems of network efficiency and operational complexity. The network is inefficient because data is captured at L2 (Layer 2 of the network stack) and then re-encapsulated to send it to the correct destination or routed through a bridge. L2 schemes in multi-host networks can also cause operational problems, such as configuration errors, that require debugging by network experts.
Fig. 5 shows details of the message exchange across two hosts. As shown in fig. 5(a), the sender application (App) creates a socket to the destination, formats the data, and sends the message using the socket interface; the kernel's TCP/IP stack processes it further and hands it to the network card. On the receiving side, the network card receives the data and forwards it to the TCP/IP stack, which delivers it to the App. When the default network mode of the lightweight virtual machine is used in the legacy manner with an overlay network, the steps in fig. 5(a) require further processing, and thus additional CPU and memory, as shown in fig. 5(b), which in turn degrades network performance.
Thus, in view of the above, it is apparent that when lightweight virtual machines are used, the performance of the computing and storage capabilities is not affected, but the performance of the network capabilities is severely limited.
The above-mentioned drawbacks of lightweight virtual machines as implemented in today's devices are noted only to summarize some of the problems of conventional systems/mechanisms/techniques, and are non-exhaustive. Other problems with conventional systems/mechanisms/techniques and the corresponding benefits of the various non-limiting embodiments described herein will become more apparent upon reading the following description.
Disclosure of Invention
This summary is provided to introduce concepts related to improving the performance of network capabilities in lightweight virtual machines, which are further described below in the detailed description. This summary is not intended to identify essential features of the claimed subject matter, nor is it intended as an aid in determining or limiting the scope of the claimed subject matter.
The main object of the present invention is to provide a mechanism to improve the performance of network capabilities, in particular container/lightweight virtual machines, thereby solving the above technical problem.
Thus, the present invention provides an application-level crossbar that allows communication across containers/lightweight virtual machines. The application-level crossbar, or software-defined switch, provides a uniform communication interface for the application layer, abstracting the details of connection management and message sending and receiving. The software-defined switch/application-level crossbar utilizes domain sockets, which can connect network namespaces without compromising isolation, improving the message exchange performance of virtual machines within a host.
In one embodiment, the software crossbar/application-level crossbar/software-defined switch is a software switch capable of transferring data for an application. Applications are abstracted from connection management.
In one embodiment, the present invention provides a node in a communication system. The node includes a processor and a memory coupled to the processor for executing a plurality of modules residing in the memory. The plurality of modules includes at least one interface module and at least one processing module. The interface module is adapted to send and/or receive at least one message/data packet, the interface being initialized by at least one application residing in the node. The processing module is configured to provide at least one connection management to one or more other nodes in the communication system to enable unified communications, the connection management being obtained based on at least one of: an IP address, a shared memory key, a common communication port, and any combination thereof.
In one embodiment, the present invention provides a node in a communication system. The node includes a processor and a memory coupled to the processor for executing a plurality of modules residing in the memory. The plurality of modules includes at least one crossbar processing module. The crossbar processing module is configured to: receiving at least one message/data packet initialized by at least one application residing in the node using at least one crossbar library interface; verifying a destination of the message/data packet, the destination being a same node, a different node, or any combination thereof; if the destination of the message/packet is the same node, creating/using an open domain socket connection in the same node; or transmitting the message/data packet to one or more other nodes based on the IP addresses of the other nodes received in the message/data packet, the other nodes including lightweight virtual machine processes.
In one embodiment, the present invention provides a communication system. The communication system includes: a plurality of host devices and a plurality of server devices; a processor; and at least one crossbar embedded in the processor. The crossbar is interconnected with the host device and the server device and is used for providing a uniform communication interface for communication between the host device and/or the server device. The crossbar is configured to: receiving at least one message/data packet initialized by at least one application residing in the host using at least one crossbar library interface; verifying a destination of the message/data packet, the destination being the host, the server, or any combination thereof; creating/using an open domain socket connection in the same host device and/or server device if the destinations of the messages/packets are the same node; or transmitting the message/data packet to at least one other host/server device based on the IP address of the other host/server device received in the message/data packet, the other host/server device comprising a lightweight virtual machine process.
In one embodiment, the present invention provides a method performed by a node in a communication system. The method comprises the following steps: at least one interface sends and/or receives at least one message/data packet initialized by at least one application residing in the node; at least one processing module provides at least one connection management to one or more other nodes in the communication system to enable unified communications, the connection management obtained based on at least one of: an IP address, a shared memory key, a common communication port, and any combination thereof.
In one embodiment, the present invention provides a method performed by a node in a communication system. The method comprises the following steps: receiving at least one message/data packet initialized by at least one application residing in the node using at least one crossbar library interface; verifying a destination of the message/data packet, the destination being a same node, a different node, or any combination thereof; if the destination of the message/packet is the same node, creating a connection using an open domain socket in the same node; or transmitting the message/data packet to one or more other nodes based on the IP addresses of the other nodes received in the message/data packet, the other nodes including lightweight virtual machine processes.
Compared with the prior art, the main advantage of the present invention is that it provides a software crossbar (also called an application crossbar or software-defined switch) that offers network abstraction to applications running in lightweight virtual machines, simplifying application development/deployment and improving network performance.
In addition, the present invention is applicable to a physical machine (PM), a physical machine running multiple virtual machines (PM-VM), a physical machine running multiple virtual machines each running multiple lightweight virtual machines (PM-VM-LVM), and a combined PM-LVM deployment.
Compared with the prior art, the network performance realized by the invention is better because the overhead of the communication of the lightweight virtual machine in the host is less.
Compared with the prior art, the invention simplifies the multi-host network, because the local crossbar uses the domain socket to forward the message to the gateway crossbar, and does not need additional message encapsulation.
Compared with the prior art, the invention needs no iptables rules, port mappings, or other network domain configuration, and still achieves better network performance with reduced operational overhead.
Furthermore, for message exchange across lightweight virtual machines within a host, the present invention provides nearly the same performance as local domain sockets, though this may be reduced by about 10% due to operation and maintenance requirements such as logging. For message exchange across lightweight virtual machines on different hosts, performance will be superior to the overlay network solutions available in the open-source market.
The various options and preferred embodiments mentioned above in relation to the first embodiment are also applicable to the other embodiments.
Drawings
The detailed description is given with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identify the figure in which the reference number first appears. The same numbers are used throughout the drawings to refer to like features and components.
FIG. 1 illustrates network namespace communications using (a) a veth pair and (b) a domain socket;
FIG. 2 shows the network setup in Docker;
FIG. 3 shows the internal arrangement in Docker;
FIG. 4 shows an overlay network or L2/L3 scheme;
FIG. 5 illustrates the message exchange across two hosts (a) sender side and receiver side, and (b) the processing cost of message reception;
FIG. 6 illustrates cross-container/lightweight virtual machine communication using an application level crossbar, according to embodiments of the present subject matter;
FIG. 7 illustrates a crossbar design and process using a crossbar according to embodiments of the present subject matter;
FIG. 8 illustrates a sequence flow of operations in the same host, different container scenarios, according to an embodiment of the present subject matter;
FIG. 9 illustrates a sequential flow of operations in different hosts, different containers, according to an embodiment of the present subject matter;
FIG. 10 illustrates a node in a communication system according to an embodiment of the present subject matter;
FIG. 11 illustrates a node in a communication system according to an embodiment of the present subject matter;
fig. 12 illustrates a method performed by a node in a communication system according to an embodiment of the present subject matter.
It is to be understood that the drawings are for purposes of illustrating the concepts of the invention and may not be to scale.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly described below with reference to the drawings in the embodiments of the present invention. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be obtained by a person skilled in the art without making creative efforts based on the embodiments of the present invention, belong to the protection scope of the present invention.
The invention can be implemented in numerous ways, including as a process, an apparatus, a system, a composition of matter, a computer readable medium having stored thereon computer program instructions, or a computer network wherein the program instructions are sent over optical or electronic communication links. In this specification, these embodiments, or any other form that the invention may take, may be referred to as techniques. In general, the order of the steps in disclosed processes may be altered within the scope of the invention.
The following provides a detailed description of one or more embodiments of the invention and the accompanying drawings that illustrate the principles of the invention. The invention is described in connection with these embodiments, but the invention is not limited to any embodiment. The scope of the invention is limited only by the claims and the invention encompasses numerous alternatives, modifications and equivalents. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. These details are provided for the purpose of example and the invention may be practiced according to the claims without some or all of these specific details. For the purpose of clarity, technical material that is known in the technical fields related to the invention has not been described in detail so that the invention is not unnecessarily obscured.
In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the invention. However, it will be understood by those skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, components, modules, units and/or circuits have not been described in detail so as not to obscure the invention.
Although embodiments of the invention are not limited in this respect, discussions utilizing terms such as, for example, "processing," "computing," "calculating," "determining," "establishing", "analyzing", "checking", or the like, may refer to operation(s) and/or process (es) of a computer, a computing platform, a computing system, or other electronic computing device, that manipulate and/or transform data represented as physical, such as electronic, quantities within computer registers and/or memories into other data similarly represented as physical quantities within the computer registers and/or memories or other non-transitory information storage medium that may store instructions to perform operations and/or processes.
Although embodiments of the present invention are not limited in this respect, the terms "plurality" and "a plurality" as used herein may include, for example, "several" or "two or more". The terms "plurality" or "a plurality" may be used throughout the specification to describe two or more components, devices, elements, units, parameters and the like. Unless explicitly stated, the method embodiments described herein are not limited to a particular order or sequence. Additionally, some of the described method embodiments or elements thereof may occur or be performed concurrently, or in parallel.
When lightweight virtual machines are used, the performance of the computing and storage capabilities is not affected, but the performance of the network capabilities is severely limited.
Thus, the present invention provides an application-level crossbar that allows communication across containers/lightweight virtual machines. The application-level crossbar, or software-defined switch, as shown in fig. 6, provides a unified communication interface for the application layer, abstracting the details of connection management and message sending and receiving. The software-defined switch/application-level crossbar utilizes domain sockets, which can connect network namespaces without compromising isolation, improving the message exchange performance of virtual machines within a host.
In one embodiment, the software crossbar/application-level crossbar/software-defined switch is a software switch capable of transferring data for an application. Applications are abstracted from connection management.
Systems, methods, and nodes for software defined switching between lightweight virtual machines for improving performance of message exchanges in the virtual machines are disclosed.
While aspects of the system and method for software-defined switching between lightweight virtual machines using host kernel resources can be implemented in any number of different computing systems, environments, and/or configurations, the embodiments are described in the context of the following exemplary systems, devices/nodes/apparatus, and methods.
The following explains embodiments of the present invention with the aid of an exemplary illustration and one or more examples. However, these exemplary illustrations and examples are for illustrative purposes in order to better understand the present invention and should not be construed as limiting the scope of the present invention.
Referring now to fig. 7, a crossbar design and process using a crossbar according to embodiments of the present subject matter is shown.
In one embodiment, as shown in fig. 7, the crossbar may be designed in two parts: a crossbar library and a crossbar process.
In one embodiment, the crossbar library provides an interface, namely an application programming interface (API), for sending and receiving messages. An application must initialize this library, and an initialization API is exposed for this purpose as well. The crossbar library internally uses shared memory to exchange messages quickly between the crossbar library and the crossbar process.
In one embodiment, the crossbar process provides the actual connection management with other virtual machines. Each crossbar process must be configured with the relevant details, such as its IP address, shared memory key, and common communication port. Generally, the common communication port is 9999.
In an embodiment, each lightweight virtual machine may deploy at least one crossbar, referred to as a local crossbar. All applications to run in the lightweight virtual machine must be linked to the crossbar library.
For example: g++ -g myApp.cpp -L/home/abc/lib -lCrossbar -o myApp, where myApp (the application) is linked against the crossbar library and uses its API to send/receive messages.
In an embodiment, at least one crossbar switch, referred to as a gateway crossbar switch, may be deployed at the host level. Each local crossbar is configured with the address of the gateway crossbar. Those skilled in the art will note and appreciate that each lightweight virtual machine launched must have a unique IP in the network setting.
In one embodiment, if a message is destined for a process in a different lightweight virtual machine, the crossbar library has the information (the destination address) needed to place the message into the local crossbar process queue. In one example, if the target IP differs from the current host IP, the destination is not this host; such messages must be placed in the network queue rather than the local crossbar process queue.
In one embodiment, the crossbar process will poll the messages and, in the event of a message event, will check whether the destination process is on the same host or a different host.
In one embodiment, if the destination is the same host, the crossbar process will create/use an open domain socket connection with the destination lightweight virtual machine process.
In one embodiment, if the destination is a different host, the crossbar process may forward the message to a gateway crossbar process running in that host.
In one embodiment, the gateway crossbar is configured with routes to the gateways of other hosts. The gateway crossbar finds the correct host gateway based on the destination IP and forwards the message using TCP transport. The gateway crossbar uses the host network mode for better performance.
In one embodiment, if any update occurs, such as the addition of a new gateway crossbar, this information is automatically synchronized across all gateway crossbars. There may be a time window during this synchronization in which an application receives a failure message indicating that the destination could not be reached. Adding a new gateway crossbar can be seen as a network management activity, so the application needs to be designed with some notification service for this purpose.
Referring again to FIG. 7, in one embodiment, the crossbar process may have a shared memory management process, a domain socket connection thread process, a domain socket listening thread process, a TCP socket connection thread process, a TCP socket listening thread process, a send receive thread process.
In an embodiment, a shared memory management process may create/attach to a given shared memory, create a queue from a shared memory region, and map an application to the queue.
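The create/attach behavior of the shared memory management process might be sketched as below. The segment name, size, and the use of Python's `multiprocessing.shared_memory` module are illustrative assumptions; the patent does not specify an implementation language or API.

```python
from multiprocessing import shared_memory

def create_or_attach(name, size):
    """Create the named shared memory segment, or attach if it exists."""
    try:
        return shared_memory.SharedMemory(name=name, create=True, size=size)
    except FileExistsError:
        return shared_memory.SharedMemory(name=name)

# First caller creates the segment and writes into the queue region.
seg = create_or_attach("xbar_demo_queue", 4096)
seg.buf[0:5] = b"hello"

# A second caller (e.g. an application mapped to the queue) attaches to the
# same segment and reads what the producer wrote.
peer = create_or_attach("xbar_demo_queue", 4096)
data = bytes(peer.buf[0:5])

peer.close()
seg.close()
seg.unlink()  # release the segment once all users are done
```

A real implementation would layer a ring buffer or message queue over the raw region; this sketch only shows the create-or-attach and shared-visibility behavior.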
In an embodiment, the domain socket connection thread process may establish a connection with other local lightweight virtual machine processes, and if the destination application is in a lightweight virtual machine of a different host, the domain socket connection thread process may forward the application message to the gateway crossbar.
In one embodiment, a domain socket listening thread process may receive a new connection request and process the new connection request for subsequent use.
In an embodiment, a TCP socket connection thread process may establish connections with other gateway crossbars in other hosts and update the routes (mapping of gateway IP, IP mask).
In one embodiment, a TCP socket listening thread process may receive a new connection request and process the new connection request, e.g., preparing a data structure for subsequent use.
In one embodiment, the send/receive thread process may scan the data structures created by the listening/connection threads, retrieve the appropriate socket handle, and send/receive messages.
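The send/receive loop might look like the sketch below: the thread walks a data structure of peers built by the listen/connect threads, retrieves each socket handle, and uses readiness polling to move messages. Here `socketpair()` stands in for the real domain/TCP connections, and the peer dictionary is an assumed shape for the data structure.

```python
import select
import socket

# Hypothetical data structure populated by the listening/connection threads:
# peer name -> socket handle.
peers = {}
a, b = socket.socketpair()   # stand-in for a connected domain/TCP socket
peers["app-1"] = b           # handle recorded for subsequent use

a.send(b"msg")               # a peer sends a message

# The send/receive thread polls all recorded handles and drains readable ones.
readable, _, _ = select.select(list(peers.values()), [], [], 1.0)
received = [sock.recv(16) for sock in readable]

a.close()
b.close()
```

A production loop would run continuously and also service the send direction; the sketch shows only one polling pass.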
Referring again to FIG. 7, in one embodiment, the crossbar library may have a shared memory management process and interface.
In an embodiment, a shared memory management process may create/attach to a given shared memory, create/attach a queue from a shared memory region, and map an application to the queue.
In an embodiment, the interface is used to send, receive, initiate, or discard data packets or messages.
Referring now to FIG. 8, a sequence flow of operations in the same-host, different-container scenario is shown, according to an embodiment of the present subject matter.
Referring now to FIG. 9, a sequence flow of operations in the different-host, different-container scenario is shown, according to an embodiment of the present subject matter.
Referring now to fig. 10 and 11, a node 1000 in a communication system is shown, according to an embodiment of the present subject matter. In one embodiment, a node 1000 is disclosed. While the present subject matter is illustrated by the implementation of the present invention in node 1000, it is to be appreciated that the present invention may also be implemented in a variety of computing systems, such as laptop computers, desktop computers, notebook computers, workstations, mainframe computers, servers, network servers, and the like. It will be appreciated that node 1000 may be accessed by multiple users or applications residing in a database system. Examples of node 1000 may include, but are not limited to, portable computers, personal digital assistants, handheld nodes, sensors, routers, gateways, and workstations. Node 1000 may be communicatively coupled to other nodes or to a node or device to form a network (not shown). Examples of other nodes or a node or device may include, but are not limited to, portable computers, personal digital assistants, handheld nodes, sensors, routers, gateways, and workstations.
In an embodiment, the network (not shown) may be a wireless network, a wired network, or a combination thereof. The network may be implemented as one of different types of networks, such as GSM, CDMA, LTE, UMTS, intranet, Local Area Network (LAN), Wide Area Network (WAN), the internet, etc. The network may be a private network or a shared network. The shared network represents a combination of different types of networks that perform mutual communication using various protocols such as a Hypertext Transfer Protocol (HTTP), a Transmission Control Protocol/Internet Protocol (TCP/IP), and a Wireless Application Protocol (WAP). Further, the network may include various network nodes, including routers, bridges, servers, computing nodes, storage nodes, and the like.
Node 1000 may include a processor 1002, an interface 1004, and a memory 1006. The processor 1002 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitry, and/or any nodes that manipulate signals based on operational instructions. Among other things, the at least one processor is configured to retrieve and execute computer-readable instructions or modules stored in the memory 1006.
The interfaces (I/O interfaces) 1004 may include a variety of software and hardware interfaces, such as network interfaces, graphical user interfaces, and so forth. The I/O interface may allow the database system, the first node, the second node, and the third node to interact directly with the user. Further, the I/O interfaces may enable node 1000 to communicate with other nodes, or computing nodes, such as network servers and external data servers (not shown). The I/O interface may facilitate a variety of communications within a variety of network and protocol types, including wired networks such as GSM, CDMA, LAN, cable, etc., and wireless networks such as WLAN, cellular, or satellite. The I/O interface may include one or more ports for connecting multiple nodes to each other or to another server. The I/O interface may provide for interaction between a user and the node 1000 through a screen provided to the interface.
The memory 1006 may include any computer-readable medium known in the art, including, for example, volatile memory, such as Static Random Access Memory (SRAM) and Dynamic Random Access Memory (DRAM), and/or non-volatile memory, such as Read Only Memory (ROM), erasable programmable ROM, flash memory, a hard disk, an optical disk, and magnetic tape. The memory 1006 may include a plurality of instructions, modules, or applications to perform various functions. The memory includes routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types.
In one embodiment, the present invention provides a node 1000 in a communication system. The node 1000 includes a processor 1002 and a memory 1006, the memory 1006 being coupled to the processor 1002, the processor 1002 being configured to execute a plurality of modules residing in the memory 1006. The plurality of modules includes at least one interface module 1008 and at least one processing module 1010. The interface module 1008 is used to send and/or receive at least one message/data packet, and the interface is initialized by at least one application residing in the node. The processing module 1010 is configured to provide at least one connection management to one or more other nodes in the communication system to enable unified communications; the connection management is obtained based on at least one of: an IP address, a shared memory key, a common communication port, and any combination thereof.
In one embodiment, the present invention provides a node 1000 in a communication system. The node 1000 includes a processor 1002 and a memory 1006, the memory 1006 being coupled to the processor 1002, the processor 1002 being configured to execute a plurality of modules residing in the memory 1006. The plurality of modules includes at least one crossbar processing module 1102. The crossbar processing module 1102 is configured to: receive at least one message/packet initialized by at least one application residing in the node using at least one crossbar library interface; verify the destination of the message/packet, the destination being the same node, a different node, or any combination thereof; create/use an open domain socket connection in the same node if the message/packet is destined for the same node; or transmit the message/packet to other nodes, including lightweight virtual machine processes, based on the IP addresses of one or more other nodes received in the message/packet.
In one embodiment, the present invention provides a communication system. A communication system includes: a plurality of host devices and a plurality of server devices; a processor; and at least one crossbar embedded in the processor. The crossbar is interconnected with the host device and the server device for providing a unified communication interface for communication between the host device and/or the server device. The crossbar is used for: receiving at least one message/data packet initialized by at least one application residing in a host using at least one crossbar library interface; verifying the destination of the message/data packet, the destination being a host, a server, or any combination thereof; if the message/packet is destined for the same node, then an open domain socket connection is created/used in the same host device and/or server device; or transmitting the message/data packet to at least one other host/server device based on the IP address of the other host/server device received in the message/data packet, the other host/server device including a lightweight virtual machine process.
In an embodiment, the interface module 1008 is further configured to queue messages/packets associated with the application into at least one queue within the at least one shared memory.
In one embodiment, the processing module 1010 is further configured to: extracting messages/packets associated with the application in the queue, and verifying the destination of the messages/packets, the destination being the same node, a different node, or any combination thereof.
In an embodiment, if the message/packet is destined for the same node, the processing module is further configured to create/use an open domain socket connection in the same node.
In one embodiment, the processing module is further configured to transmit the message/packet based on the IP address of the destination if the destination of the message/packet is a different node.
In an embodiment, the messages/packets are transmitted based on the IP address using at least one route that is predefined/preconfigured and pre-stored in the processing module; the route follows the TCP transmission mechanism.
In an embodiment, the processing module 1010 is configured to connect the network spaces using domain sockets without affecting isolation of message exchanges within the host.
In an embodiment, IP addresses, shared memory keys, common communication ports, or any combination thereof associated with other nodes are pre-stored/pre-configured in the memory of the node.
In one embodiment, received messages/packets are stored in at least one queue within at least one shared memory.
In one embodiment, crossbar processing module 1102 is also configured to extract the associated message/packet stored in the queue to thereby verify the destination of the message/packet.
In an embodiment, messages/packets are transmitted to other nodes using at least one route that is predefined/preconfigured and pre-stored in the crossbar processing module. The routing follows the TCP transport mechanism.
In an embodiment, the crossbar processing module 1102 is further configured to establish connections with other nodes in the communication system, the other nodes preferably comprising lightweight virtual machine processes, the connections being obtained based on at least one of: an IP address, a shared memory key, a common communication port, and any combination thereof.
In one embodiment, other nodes in the communication system, upon receiving the message/packet, are configured to: processing the new connection request by creating at least one data structure based on the received message/data packet; updating routes that are predefined/preconfigured and pre-stored in the crossbar processing module, preferably the routes are updated based on the received message/data packet by mapping of gateway IP, IP mask or any combination thereof; scanning the created data structure to extract an appropriate socket handle; thus, if the message/packet is destined for the same node, an open domain socket connection is created/used in that same node; or to transmit messages/packets to other nodes based on the IP addresses of one or more other nodes received in the messages/packets.
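The route-update step described above — merging (gateway IP, IP mask) information carried in a received message into the pre-stored route table — might be sketched as follows. The table layout and function name are assumptions for illustration.

```python
import ipaddress

# Hypothetical pre-configured route table: network prefix -> gateway IP.
routes = {"10.1.0.0/16": "10.1.0.254"}

def update_routes(routes, gateway_ip, ip_mask):
    """Merge a (gateway IP, IP mask) mapping from a received message."""
    net = str(ipaddress.ip_network(ip_mask, strict=False))  # normalize mask
    routes[net] = gateway_ip
    return routes

# A received message announces a new gateway crossbar for 10.2.0.0/16.
update_routes(routes, "10.2.0.254", "10.2.0.0/16")
```

Existing entries are kept, so after the update the table routes both networks to their respective gateways.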
Referring now to fig. 12, a method performed by a node in a communication system is shown, according to an embodiment of the present subject matter. The method may be described in the general context of computer-executable instructions. Generally, computer-executable instructions can include routines, programs, objects, components, data structures, procedures, modules, functions, and the like that perform particular functions or implement particular abstract data types. The method may also be practiced in distributed computing environments where functions are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, computer-executable instructions may be located in both local and remote computer storage media, including memory storage devices.
The order in which the method is described should not be construed as a limitation, and any number of the described method steps can be combined in any order to implement the method, or an alternate method. In addition, individual steps may be deleted from the method without departing from the scope of the subject matter described herein. Further, the method may be implemented in any suitable hardware, software, firmware, or combination thereof. However, for ease of explanation, in the embodiments described below, the method may be considered to be implemented in the node 1000 described above.
Referring now to fig. 12, a method performed by a node 1000 in a communication system is disclosed.
At step 1202, at least one message/packet initialized by at least one application residing in node 1000 is sent or received. The messages/data packets are sent or received by at least one interface of node 1000.
At step 1204, messages/packets associated with the application are queued/stored in at least one queue within at least one shared memory of node 1000.
At step 1206, messages/packets associated with the application stored in the queue are extracted.
At step 1208, node 1000 validates the destination of the message/packet associated with the application in the queue. The destinations are the same node, different nodes, or any combination thereof.
In an embodiment, if the message/packet is destined for the same node, the processing module is further configured to create/use an open domain socket connection in the same node.
In one embodiment, the processing module is further configured to transmit the message/packet based on the IP address of the destination if the destination of the message/packet is a different node.
At step 1210, at least one connection is established with one or more other nodes in the communication system to achieve unified communication; the connection management is obtained based on at least one of: an IP address, a shared memory key, a common communication port, and any combination thereof. In an embodiment, one or more domain sockets are used to connect the network spaces without affecting isolation of message exchanges within the host. IP addresses, shared memory keys, common communication ports, or any combination thereof associated with other nodes may be pre-stored/pre-configured in the memory of the node.
In an embodiment, the other nodes preferably comprise lightweight virtual machine processes, the connections being obtained based on at least one of: an IP address, a shared memory key, a common communication port, and any combination thereof. Other nodes in the communication system, upon receiving the message/data packet, process the new connection request by creating at least one data structure based on the received message/data packet; updating routes that are predefined/preconfigured and pre-stored in the crossbar processing module, preferably the routes are updated based on the received message/data packet by mapping of gateway IP, IP mask or any combination thereof; scanning the created data structure to retrieve the appropriate socket handle; if the message/packet is destined for the same node, then an open domain socket connection is created/used in the same node; or to transmit messages/packets to other nodes based on the IP addresses of one or more other nodes received in the messages/packets.
At step 1212, the message/data packet is transmitted based on the IP address of the destination. In an embodiment, the IP address based messages/packets are transmitted using at least one route predefined/preconfigured and pre-stored in the processing module, which route follows the TCP transmission mechanism.
Compared with the prior art, the method's main advantage is better network performance, because intra-host communication between lightweight virtual machines incurs low overhead. In addition, multi-host networking is simplified because the local crossbar forwards messages to the gateway crossbar using domain sockets, without additional message encapsulation. Furthermore, the invention does not require IPtable rules, port mapping, or other network domain configuration.
In one embodiment, the present invention achieves a technical advance, simplifies application development/deployment and improves network performance because a software crossbar or application crossbar or software-defined switch provides network abstraction to applications running in a lightweight virtual machine.
In one embodiment, the present invention may be used by any application that runs in a lightweight virtual machine and has high-performance/scalable networking requirements. For legacy applications, the transport API calls need to be redirected to the crossbar library. New applications may base their transport APIs on the crossbar library.
One skilled in the art will appreciate that the present invention may be implemented using any known or new algorithm. It should be noted, however, that regardless of which known or new algorithms are used, the present invention provides a method that can be used in a backup operation to achieve the benefits and technological advances mentioned above.
Those of ordinary skill in the art will appreciate that the elements and algorithm steps described in connection with the examples disclosed in the embodiments disclosed herein may be embodied in electronic hardware or in a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It can be clearly understood by those skilled in the art that for the sake of convenience and brevity of description, for the detailed working processes of the foregoing systems, apparatuses and units, reference may be made to the corresponding processes in the foregoing method embodiments, and further description is omitted here.
In the several embodiments provided by the present invention, it should be understood that the disclosed systems, devices, and methods may be implemented in other ways. For example, the described apparatus embodiments are merely exemplary. For example, the cell division is only a logical functional division, and may be other divisions in an actual implementation. For example, various elements or components may be combined or integrated in another system or portions of features may be omitted, or not implemented. Further, the shown or discussed mutual coupling or direct coupling or communicative connection may be achieved through some interfaces. An indirect coupling or communicative connection between devices or units may be achieved through electrical, mechanical, or other means.
When these functions are implemented in the form of software functional units and sold or used as separate products, they may be stored in a computer-readable storage medium. Based on this understanding, the solution of the invention can be implemented substantially as or as part of the state of the art or as part of a software product. A computer software product is stored on a storage medium and includes instructions for instructing a computer node (which may be a personal computer, a server, or a network node) to perform all or part of the steps of the method described in an embodiment of the present invention. The storage medium includes: any medium that can store program code, such as a USB disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Devices that are in communication with each other need not be in continuous communication with each other, unless expressly specified otherwise. In addition, devices that are in communication with each other may communicate directly or indirectly through one or more intermediaries.
When a single device or article is described herein, it will be readily apparent that more than one device/article, whether or not they cooperate, may be used in place of the single device/article. Similarly, where more than one device or article is described herein (whether or not they cooperate), it will be readily apparent that a single device/article may be used in place of the more than one device or article or a different number of devices/articles may be used in place of the shown number of devices or programs. Alternatively, the functionality and/or the features of a device may be embodied by one or more other devices which are not explicitly described as having such functionality/features. Thus, other embodiments of the present invention need not include the device itself.
Finally, the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, it is intended that the scope of the invention be limited not by this detailed description, but rather by any claims appended hereto as applicable. Accordingly, the disclosed embodiments of the invention are intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.
With respect to the use of any plural and/or singular terms herein, those having skill in the art may translate from the plural to the singular and/or from the singular to the plural as is appropriate to the context and/or application. For clarity, the singular/plural forms may be explicitly set forth herein.
Although embodiments of systems and methods for software defined switching between lightweight virtual machines using host kernel resources have been described in language specific to structural features and/or methods, it is to be understood that the appended claims are not necessarily limited to the specific features or methods described. These specific features and methods are disclosed as examples of embodiments of systems and methods for software-defined switching between lightweight virtual machines using host kernel resources.

Claims (25)

1. A node in a communication system, the node comprising:
a processor;
an interface initialized by at least one application residing in the node;
a memory coupled to the processor, the processor to execute a plurality of modules present in the memory, the plurality of modules comprising:
at least one interface module for transmitting and/or receiving at least one message/data packet; the interface module is further for queuing the messages/data packets associated with the application into at least one queue within at least one shared memory; and
at least one processing module for providing at least one connection management to one or more other nodes in the communication system for unified communications, the connection management being obtained based on at least one of: an IP address, a shared memory key, and a public communication port; the processing module is further configured to: extracting the message/data packet associated with the application in the queue and verifying a destination of the message/data packet, the destination being the same node or a different node; the processing module is further configured to create/use an open domain socket connection in the same node if the destination of the message/packet is the same node.
2. The node of claim 1, wherein if the destination of the message/packet is a different node, the processing module is further configured to transmit the message/packet based on an IP address of the destination.
3. A node according to claim 1 or 2, characterized in that at least one of an IP address, a shared memory key and a common communication port associated with the other node is pre-stored/pre-configured in the memory of the node; the other nodes include lightweight virtual machine processes.
4. The node according to claim 3, characterized in that said message/data packet based on said IP address is transmitted using at least one route pre-defined/pre-configured and pre-stored in said processing module, said route following TCP transmission mechanism.
5. A node in a communication system, the node comprising:
a processor;
a memory coupled to the processor, the processor to execute a plurality of modules present in the memory, the plurality of modules comprising:
at least one crossbar processing module to:
receiving at least one message/data packet initialized by at least one application residing in the node using at least one crossbar library interface;
verifying the destination of the message/data packet, the destination being the same node or a different node;
if the destination of the message/packet is the same node, creating/using an open domain socket connection in the same node; or
If the destination of the message/data packet is a different node, transmitting the message/data packet to one or more other nodes based on IP addresses of the other nodes received in the message/data packet; the other nodes include lightweight virtual machine processes.
6. The node according to claim 5, wherein the received messages/data packets are stored in at least one queue in at least one shared memory.
7. The node of claim 5 or 6, wherein the crossbar processing module is further configured to extract associated messages/packets stored in the queue and validate the destination of the messages/packets.
8. The node according to claim 7, characterized in that said messages/packets are transmitted to said other nodes using at least one route predefined/preconfigured and pre-stored in said crossbar processing module, said route following the TCP transmission mechanism.
9. The node of claim 8, wherein the crossbar processing module is further configured to establish a connection with the other nodes in the communication system, wherein the other nodes comprise lightweight virtual machine processes, and wherein the connection is obtained based on at least one of: an IP address, a shared memory key, and a public communication port.
10. The node of claim 5, wherein the other nodes in the communication system, upon receiving the message/packet, are configured to:
processing a new connection request based on the received message/data packet; or
Creating at least one data structure based on the received message/data packet;
updating routes that are predefined/preconfigured and pre-stored in the crossbar processing module;
scanning the received message/data packet to retrieve an appropriate socket handle;
if the destination of the message/packet is the same node, creating/using an open domain socket connection in the same node; or
If the destination of the message/data packet is a different node, transmitting the message/data packet to the other node based on the IP addresses of the one or more other nodes received in the message/data packet.
11. The node according to claim 10, wherein the routing is updated based on a mapping of the received message/data packet by a gateway IP and/or an IP mask.
12. A communication system, comprising:
a plurality of host devices and a plurality of server devices;
a processor; the processor includes:
at least one crossbar embedded in the processor, the at least one crossbar interconnected with the host device and the server device for providing a unified communication interface for communication between the host device and/or the server device, wherein the crossbar is configured to:
receiving at least one message/data packet initialized by at least one application residing in the host device using at least one crossbar library interface;
verifying a destination of the message/data packet, the destination being the host device or the server device;
creating/using an open domain socket connection in the same host device and/or the same server device if the destinations of the messages/packets are the same node; or
Transmitting the message/data packet to at least one other host device/server device based on the IP address of the other host device/server device received in the message/data packet, the other host device/server device comprising a lightweight virtual machine process.
13. A method performed by a node in a communication system comprising a plurality of host devices and a plurality of server devices, the method comprising:
at least one interface sends and/or receives at least one message/data packet initialized by at least one application residing in the node;
at least one processing module provides at least one connection management to one or more other nodes in the communication system to enable unified communications, the connection management obtained based on at least one of: an IP address, a shared memory key, and a public communication port;
queuing the message/data packets associated with the application into at least one queue within at least one shared memory of the node;
extracting the message/data packet associated with the application in the queue; and
verifying the destination of the message/data packet, the destination being the same node or a different node; the processing module is further configured to create/use an open domain socket connection in the same node if the destination of the message/packet is the same node.
14. The method of claim 13, wherein if the destination of the message/packet is a different node, the processing module is further configured to transmit the message/packet based on an IP address of the destination.
15. The method according to claim 13 or 14, characterized in that the method further comprises: the processing module uses one or more domain sockets to connect the network spaces without affecting isolation of message exchanges within the host device.
16. The method according to claim 15, characterized in that at least one of an IP address, a shared memory key and a common communication port associated with the other node is pre-stored/pre-configured in a memory of the node.
17. The method according to claim 16, characterized in that said message/data packet based on said IP address is transmitted using at least one route pre-defined/pre-configured and pre-stored in said processing module, said route following TCP transmission mechanism.
18. A method performed by a node in a communication system, the method comprising:
receiving at least one message/data packet initialized by at least one application residing in the node using at least one crossbar library interface;
verifying the destination of the message/data packet, the destination being the same node or a different node;
if the destination of the message/packet is the same node, creating/using an open domain socket connection in the same node; or
If the destination of the message/data packet is a different node, transmitting the message/data packet to one or more other nodes based on IP addresses of the other nodes received in the message/data packet; the other nodes include lightweight virtual machine processes.
19. The method of claim 18, further comprising: storing the received message/data packet in at least one queue within at least one shared memory.
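Claim 19's queue "within at least one shared memory" can be sketched with a fixed-slot layout over a raw shared-memory segment. The slot size, slot count, and length-prefix framing are all assumptions for illustration, not the patent's layout:

```python
from multiprocessing import shared_memory

SLOT = 32   # bytes per message slot (assumed)
SLOTS = 4   # queue depth (assumed)

# Zero-initialized segment; a zero first byte marks an empty slot.
shm = shared_memory.SharedMemory(create=True, size=SLOT * SLOTS)
try:
    def enqueue(msg):
        for i in range(SLOTS):
            off = i * SLOT
            if shm.buf[off] == 0:            # empty slot
                record = bytes([len(msg)]) + msg
                shm.buf[off:off + len(record)] = record
                return True
        return False                         # queue full

    def dequeue():
        for i in range(SLOTS):
            off = i * SLOT
            n = shm.buf[off]
            if n:
                msg = bytes(shm.buf[off + 1:off + 1 + n])
                shm.buf[off] = 0             # free the slot
                return msg
        return None

    enqueue(b"pkt-1")
    enqueue(b"pkt-2")
    first = dequeue()
    print(first)  # b'pkt-1'
finally:
    shm.close()
    shm.unlink()
```

A real implementation would additionally need locking between producer and consumer processes; that is omitted here for brevity.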
20. The method of claim 19, further comprising: extracting the associated message/data packet stored in the queue, and verifying the destination of the message/data packet.
21. The method according to any one of claims 18-20, further comprising: the message/data packet is transmitted to the other node using at least one route that is predefined/preconfigured and pre-stored in the crossbar processing module, the route following the TCP transmission mechanism.
22. The method of claim 21, further comprising: establishing a connection with the other node in the communication system, the other node preferably comprising a lightweight virtual machine process, the connection being obtained based on at least one of: an IP address, a shared memory key, and a public communication port.
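Claim 22's connection establishment from a pre-stored record (IP address, shared-memory key, public communication port) can be sketched as follows. The peer record's field names are invented, and a loopback listener stands in for the remote lightweight-virtual-machine node:

```python
import socket
import threading

# Hypothetical pre-stored peer record; field names are illustrative.
PEER = {"ip": "127.0.0.1", "shm_key": "xbar-shm-0", "port": None}

# Stand-in for the remote node: a loopback listener on an ephemeral port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
PEER["port"] = server.getsockname()[1]

def serve():
    conn, _ = server.accept()
    conn.sendall(b"ack")
    conn.close()

t = threading.Thread(target=serve)
t.start()

# Establish the connection using the pre-stored IP and port.
client = socket.create_connection((PEER["ip"], PEER["port"]))
reply = client.recv(3)
client.close()
t.join()
server.close()
print(reply)  # b'ack'
```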
23. The method of claim 18, further comprising: after the other node in the communication system receives the message/data packet, processing a new connection request based on the received message/data packet, or creating at least one data structure based on the received message/data packet;
updating routes that are predefined/preconfigured and pre-stored in the crossbar processing module;
scanning the received message/data packet to retrieve an appropriate socket handle;
if the destination of the message/data packet is the same node, creating/using an open domain socket connection in the same node; or
if the destination of the message/data packet is a different node, transmitting the message/data packet to the other node based on the IP addresses of the one or more other nodes received in the message/data packet.
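The "scan the received message/data packet to retrieve an appropriate socket handle" step of claim 23 can be sketched as a registry lookup keyed by an application identifier carried in the packet header. The key name and handle values are invented for illustration:

```python
handles = {}  # app id -> socket handle (stand-in for real descriptors)

def register_handle(app_id, handle):
    handles[app_id] = handle

def scan_for_handle(packet):
    # "Scanning" here means reading the target application id from the
    # packet header (an assumed field) and looking up its open handle.
    return handles.get(packet["app_id"])

register_handle("app-7", 12)
found = scan_for_handle({"app_id": "app-7", "payload": b"..."})
missing = scan_for_handle({"app_id": "app-9", "payload": b"..."})
print(found)    # 12
print(missing)  # None
```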
24. The method according to claim 23, wherein said routing is updated based on a mapping of said received message/data packet through a gateway IP and/or an IP mask.
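Claim 24's route update "through a gateway IP and/or an IP mask" corresponds to an ordinary prefix-based routing table. A minimal longest-prefix-match sketch with invented table entries:

```python
import ipaddress

routes = []  # list of (network, gateway) pairs, most specific first

def add_route(cidr, gateway):
    routes.append((ipaddress.ip_network(cidr), gateway))
    # Keep longest prefixes first so lookup finds the most specific match.
    routes.sort(key=lambda r: r[0].prefixlen, reverse=True)

def lookup(dest_ip):
    addr = ipaddress.ip_address(dest_ip)
    for net, gw in routes:
        if addr in net:
            return gw
    return None

add_route("10.0.0.0/8", "10.0.0.1")
add_route("10.1.0.0/16", "10.1.0.1")
gw = lookup("10.1.2.3")
print(gw)  # 10.1.0.1 (longest-prefix match wins)
```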
25. A computer-readable storage medium comprising a computer program which, when run on a node in a communication system, causes the node to perform the method of any of claims 13-24.
CN201780009111.0A 2016-05-26 2017-05-22 System and method for software defined switching between lightweight virtual machines using host kernel resources Active CN108604992B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
IN201641018137 2016-05-26
PCT/CN2017/085416 WO2017202272A1 (en) 2016-05-26 2017-05-22 System and method of software defined switches between light weight virtual machines using host kernel resources

Publications (2)

Publication Number Publication Date
CN108604992A CN108604992A (en) 2018-09-28
CN108604992B true CN108604992B (en) 2020-09-29

Family

ID=60412053

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201780009111.0A Active CN108604992B (en) 2016-05-26 2017-05-22 System and method for software defined switching between lightweight virtual machines using host kernel resources

Country Status (2)

Country Link
CN (1) CN108604992B (en)
WO (1) WO2017202272A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110990052B (en) * 2019-11-29 2023-09-26 杭州迪普科技股份有限公司 Configuration preservation method and device

Citations (5)

Publication number Priority date Publication date Assignee Title
CN101436966A (en) * 2008-12-23 2009-05-20 北京航空航天大学 Network monitoring and analysis system under virtual machine circumstance
CN102520944A (en) * 2011-12-06 2012-06-27 北京航空航天大学 Method for realizing virtualization of Windows application program
KR20130122326A (en) * 2012-04-30 2013-11-07 주식회사 케이티 Lightweight virtual machine image system and method for input/output and generating virtual storage image thereof
CN103503386A (en) * 2012-12-31 2014-01-08 华为技术有限公司 Network device and method for processing message
CN105407164A (en) * 2011-06-15 2016-03-16 瞻博网络公司 Routing proxy for resource requests and resources

Family Cites Families (11)

Publication number Priority date Publication date Assignee Title
JP2006127461A (en) * 2004-09-29 2006-05-18 Sony Corp Information processing device, communication processing method, and computer program
US7620953B1 (en) * 2004-10-05 2009-11-17 Azul Systems, Inc. System and method for allocating resources of a core space among a plurality of core virtual machines
US20100070552A1 (en) * 2008-09-12 2010-03-18 Charles Austin Parker Providing a Socket Connection between a Java Server and a Host Environment
US8780923B2 (en) * 2010-01-15 2014-07-15 Dell Products L.P. Information handling system data center bridging features with defined application environments
CN102103518B (en) * 2011-02-23 2013-11-13 运软网络科技(上海)有限公司 System for managing resources in virtual environment and implementation method thereof
US9329843B2 (en) * 2011-08-02 2016-05-03 International Business Machines Corporation Communication stack for software-hardware co-execution on heterogeneous computing systems with processors and reconfigurable logic (FPGAs)
US9246741B2 (en) * 2012-04-11 2016-01-26 Google Inc. Scalable, live transcoding with support for adaptive streaming and failover
US9710357B2 (en) * 2012-08-04 2017-07-18 Microsoft Technology Licensing, Llc Function evaluation using lightweight process snapshots
CN104883302B (en) * 2015-03-18 2018-11-09 华为技术有限公司 A kind of method, apparatus and system of data packet forwarding
CN105591815A (en) * 2015-12-10 2016-05-18 北京匡恩网络科技有限责任公司 Network control method for power supply relay device of cloud testing platform
CN105550576B (en) * 2015-12-11 2018-09-11 华为技术服务有限公司 The method and apparatus communicated between container

Also Published As

Publication number Publication date
WO2017202272A1 (en) 2017-11-30
CN108604992A (en) 2018-09-28

Similar Documents

Publication Publication Date Title
US10812378B2 (en) System and method for improved service chaining
US10541836B2 (en) Virtual gateways and implicit routing in distributed overlay virtual environments
US11128494B2 (en) Distributed virtual gateway appliance
US9825900B2 (en) Overlay tunnel information exchange protocol
JP6487979B2 (en) Framework and interface for offload device-based packet processing
CN110999265B (en) Managing network connectivity between cloud computing service endpoints and virtual machines
JP2022539497A (en) Plug and play on site with TLOC extension
US9258272B1 (en) Stateless deterministic network address translation
EP1325591A1 (en) Wireless provisioning device
CN113302884A (en) Service insertion in a public cloud environment
US11595303B2 Packet handling in software-defined networking (SDN) environments
US10103995B1 (en) System and method for automated policy-based routing
CN111756565A (en) Managing satellite devices within a branch network
CN108604992B (en) System and method for software defined switching between lightweight virtual machines using host kernel resources
US11743365B2 (en) Supporting any protocol over network virtualization
US20170149663A1 (en) Control device, communication system, control method, and non-transitory recording medium
Tarasiuk et al. The IPv6 QoS system implementation in virtual infrastructure
CN113596192A (en) Communication method, device, equipment and medium based on network gate networking
Berisha 5G SA and NSA solutions
US20240039832A1 (en) Hitless migration of interconnected data center networks for network virtualization overlay using gateways
US20230087159A1 (en) Apparatus and method for implementing user plane function
CN117478446A (en) Cloud network access method, cloud network access equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant