WO2017202272A1 - System and method of software defined switches between light weight virtual machines using host kernel resources - Google Patents


Info

Publication number
WO2017202272A1
Authority
WO
WIPO (PCT)
Prior art keywords
message
node
packet
destination
crossbar
Prior art date
Application number
PCT/CN2017/085416
Other languages
French (fr)
Inventor
Raghavendra Keshavamurthy
Original Assignee
Huawei Technologies Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. filed Critical Huawei Technologies Co., Ltd.
Priority to CN201780009111.0A priority Critical patent/CN108604992B/en
Publication of WO2017202272A1 publication Critical patent/WO2017202272A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533 Hypervisors; Virtual machine monitors
    • G06F9/45558 Hypervisor-specific management and integration aspects
    • G06F2009/45595 Network integration; Enabling network access in virtual machine instances
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00 Data switching networks
    • H04L12/02 Details
    • H04L12/04 Switchboards

Definitions

  • The present subject matter described herein relates, in general, to communication data processing, and more particularly to improving the performance of networking capability in lightweight virtual machines.
  • In computing, a virtual machine (VM) is an emulation of a particular computer system.
  • Virtual machines operate based on the computer architecture and functions of a real or hypothetical computer and their implementations may involve specialized hardware, software, or a combination of both.
  • Running virtual machines has many benefits: they utilize hardware much better, are easy to back up and exchange, and isolate services from each other.
  • Running virtual machines also has downsides. Virtual machine images are clunky. More importantly, virtual machines require a fair amount of resources, as they emulate hardware and run a full-stack operating system.
  • A container is an operating-system-level virtualization environment for running multiple isolated Linux systems on a single Linux host. This is also sometimes referred to as lightweight virtualization or a lightweight virtual machine.
  • runC, Docker and Warden are some examples of lightweight virtual machines (containers) which can be used in building "Platform as a Service" (PaaS) offerings.
  • runC/Docker/Warden are solutions based on Linux kernel namespaces and cgroups. They abstract complex kernel APIs behind easy-to-use consoles/APIs/image formats, and provide on-demand abstraction of compute, storage and network capability.
  • Container networking is partly about creating a consistent network environment for a group of containers. This can be achieved using an overlay network, of which multiple implementations exist, such as Docker's default networking mode, Weave, Flannel, and SocketPlane. The key advantage of all these overlay networks is that application code need not change and applications can be deployed as is. The other part of container networking is the way a networking namespace connects to the physical network device.
  • A namespace is a feature of the Linux kernel that allows groups of processes to be separated such that they cannot "see" resources in other groups.
  • Network namespaces can communicate via veth pairs and/or domain sockets.
  • Network namespaces communicating using a veth pair are shown in figure 1 (a).
  • A veth pair is an Ethernet-like virtual device that can be used inside a container. The veth pair captures Ethernet frames, and a captured frame can be directed to the destination via a bridge or router.
  • A domain socket is an IPC (inter-process communication) mechanism which is lightweight and efficient.
  • Network namespaces communicating using domain sockets are shown in figure 1 (b).
  • A domain socket can be controlled by file permissions and is thus more secure than a TCP port, to which anyone can connect and which therefore needs a further security shield.
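The permission-controlled domain socket exchange above can be sketched in a few lines: the socket's endpoint is an ordinary filesystem path, so access is restricted with standard file modes. The socket path and message below are made up for the example.

```python
import os
import socket
import tempfile
import threading

# Endpoint of the domain socket is a filesystem path; file permissions
# control who may connect (unlike a TCP port, open to anyone on the host).
sock_path = os.path.join(tempfile.mkdtemp(), "crossbar.sock")

server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(sock_path)
os.chmod(sock_path, 0o600)      # only the owning user may connect
server.listen(1)

def echo_once():
    # Accept one connection and echo the message back.
    conn, _ = server.accept()
    conn.sendall(conn.recv(1024))
    conn.close()

t = threading.Thread(target=echo_once)
t.start()

client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
client.connect(sock_path)
client.sendall(b"hello")
reply = client.recv(1024)
t.join()
client.close()
server.close()
```

Because the exchange never leaves the host kernel, there is no Ethernet framing or TCP/IP encapsulation on this path.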
  • Figure 2 illustrates a networking setup in Docker.
  • Figure 2 shows the general approach to the networking problem employed by virtual machines/containers: capturing Ethernet packets from a virtual device such as veth and then tunneling them via a bridge/router to the required container on the same host or a different host.
  • Figure 3 shows the internal arrangement in Docker. As shown in figure 3, each container spawned/instantiated is connected to the Linux bridge using a veth pair. The container end of the veth pair is called eth0 and the Linux-bridge side is called vethxx, e.g. vethaa or vethbb. A similar veth pair is connected between the host and the Linux bridge.
  • The bridge operates at Layer 2 (L2) and is protocol independent.
  • The L2 network layer is responsible for physical addressing, error correction, and preparing the information for the media.
  • A bridge is a way to connect two separate network segments together in a protocol-independent way. Packets are forwarded based on Ethernet address, rather than IP address (as in a router). Since forwarding is done at Layer 2 (L2), all protocols can pass transparently through a bridge. All networking traffic passes through the Linux bridge or another configured bridge such as Open vSwitch (OVS).
  • IP tables (as shown in figure 3) are used so that a mapping can be implemented from each container port to a host port. The IP address of each spawned/instantiated container changes on each reboot of the container; this also poses a problem, as services in one container have to obtain the new IP address to access services in another.
  • Figure 4 illustrates a communication in overlay network or L2/L3 Solution.
  • An overlay network is a computer network that is built on top of another network. Nodes in the overlay network can be thought of as being connected by virtual or logical links, each of which corresponds to a path, perhaps through many physical links, in the underlying network.
  • An L2-based solution means capturing L2 frames from virtual-Ethernet-like devices and then tunneling those frames over another transport mechanism such as TCP/UDP/secure TCP.
  • Figure 4 shows an approach, available in the prior art, that captures the packets at the Ethernet level (virtual Ethernet device) and then tunnels the packets over a transport such as TCP/UDP. This is called an overlay network or L2/L3 solution.
  • The approach shown in figure 4 has issues related to network efficiency and operational complexity. Network efficiency is very low, as data gets captured at L2 (Layer 2 of the network stack) and then re-encapsulated for sending to the right destination or routed through a bridge. L2 solutions in multi-host networking can pose operational issues such as configuration errors and debugging difficulty, and would need a network expert.
  • Figure 5 shows details of the message exchange across 2 hosts.
  • A sender application (App) creates a socket to the destination; the sender application formats data and sends the message using the socket interface; the kernel's TCP/IP stack processes it further and sends it to the network card; on the receiving App's side, the network card receives the data and forwards it to the TCP/IP stack, which gives it to the App.
  • App creates a socket to the destination
  • sender application formats data and using socket interface sends message
  • the kernel's TCP/IP stack processes further and sends to the network card
  • the network card receives data, forwards it to the TCP/IP stack, and the stack gives it to the App.
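The steps above can be sketched with a loopback TCP exchange on a single machine; the port is ephemeral and the message is made up for the example, but the send/receive path through the kernel's TCP/IP stack is the one the figure describes.

```python
import socket
import threading

# Receiving side: a listening socket standing in for the destination App.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # ephemeral port for the example
server.listen(1)
port = server.getsockname()[1]

received = []

def receiver():
    # "network card receives data, forwards to TCP/IP stack, stack gives to App"
    conn, _ = server.accept()
    chunks = []
    while (data := conn.recv(1024)):
        chunks.append(data)
    received.append(b"".join(chunks))
    conn.close()

t = threading.Thread(target=receiver)
t.start()

# Sending side: "App creates a socket to the destination" and
# "sender application formats data and using socket interface sends message".
sender = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sender.connect(("127.0.0.1", port))
sender.sendall(b"formatted message")
sender.close()
t.join()
server.close()
```

Each message on this path is encapsulated by the TCP/IP stack; the overlay networks discussed next add a second round of such processing on top.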
  • The default networking mode of lightweight virtual machines, used as the conventional mode, relies on overlay networks; the steps performed for figure 5 (a) need further processing, incurring additional CPU and memory as shown in figure 5 (b), thus hurting network performance.
  • The main objective of the present invention is to solve the technical problem recited above by providing a mechanism to improve the performance of networking capability, specifically for containers/lightweight virtual machines.
  • the present invention provides an application level crossbar for allowing communication across containers/lightweight virtual machines
  • the application level crossbar or software crossbar or software defined switch provides a unified communication interface for the application layer. It abstracts the details of connection management and message sending and receiving.
  • The software defined switch /application level crossbar utilizes the fact that domain sockets can be used to connect network namespaces without affecting isolation for intra-host message exchange, thus improving the performance of message exchange for intra-host virtual machines.
  • software crossbar /application level crossbar/software defined switch is a software switch capable of channeling data for applications. Applications are abstracted from connection management.
  • the present invention provides a node in a communication system.
  • the node comprises a processor; and a memory coupled to the processor for executing a plurality of modules present in the memory.
  • the plurality of modules includes at least one interface module and at least one processing module.
  • The interface module is configured to send and/or receive at least one message/packet; the interface is initialized by at least one application residing in the node.
  • The processing module is configured to provide at least connection management with one or more other nodes in the communication system to achieve unified communication; the connection management is attained based on at least one of an IP address, a shared memory key, or a common communication port, or any combination thereof.
  • the present invention provides a node in a communication system.
  • the node comprises a processor; and a memory coupled to the processor for executing a plurality of modules present in the memory.
  • the plurality of modules includes at least a crossbar processing module.
  • The crossbar processing module is configured to: receive at least one message/packet initialized by at least one application residing in the node using at least a crossbar lib interface; verify the destination of the message/packet, the destination being either within the same node or a different node or any combination thereof; create/use an open domain socket connection in the same node if the destination of the message/packet is the same node; or communicate the message/packet to one or more other nodes based on the IP address of the other node, received in the message/packet, the other node comprising lightweight virtual machine processes.
  • the present invention provides a communication system.
  • the communication system comprises a plurality of host and server devices, a processor, and at least one crossbar embedded on the processor.
  • The crossbar is interconnected to the host and server devices, and adapted to provide a unified communication interface for communication between the host and/or server devices.
  • The crossbar is configured to: receive at least one message/packet initialized by at least one application residing in the host using at least a crossbar lib interface; verify the destination of the message/packet, the destination being either within the host or the server or any combination thereof; create/use an open domain socket connection in the same host device and/or server device if the destination of the message/packet is the same node; or communicate the message/packet to at least one other host/server device based on the IP address of the other host/server device, received in the message/packet, the other host/server device comprising lightweight virtual machine processes.
  • the present invention provides a method performed by a node in a communication system.
  • The method comprises: sending and/or receiving, by at least one interface, at least one message/packet initialized by at least one application residing in the node; and providing, by at least one processing module, at least connection management with one or more other nodes in the communication system to achieve unified communication, the connection management being attained based on at least one of an IP address, a shared memory key, or a common communication port, or any combination thereof.
  • the present invention provides a method performed by a node in a communication system.
  • The method comprises: receiving at least one message/packet initialized by at least one application residing in the node using at least a crossbar lib interface; verifying the destination of the message/packet, the destination being either within the same node or a different node or any combination thereof; creating/using an open domain socket connection in the same node if the destination of the message/packet is the same node; or communicating the message/packet to one or more other nodes based on the IP address of the other node, received in the message/packet, the other node comprising lightweight virtual machine processes.
  • The main benefit of the present invention is that it provides a software crossbar or application crossbar or software defined switch whose network abstraction for applications running in a lightweight virtual machine simplifies application development/deployment and improves network performance.
  • PMs Physical machines
  • PM-VM Physical machine running many Virtual Machines
  • PM-VM-LVM Physical machine running many virtual machines, each virtual machine running many lightweight virtual machines
  • The network performance achieved by the present invention is better, as there is little overhead for intra-host lightweight virtual machine communication.
  • Multi-host networking by means of the present invention is simplified, as local crossbars use domain sockets to forward messages to the gateway crossbar. No additional message encapsulations are needed.
  • Figure 1 illustrates a network namespaces communication using (a) veth pairs, and (b) domain sockets.
  • Figure 2 illustrates a networking setup in Docker.
  • Figure 3 illustrates internal arrangement in docker.
  • Figure 4 illustrates overlay network or L2/L3 Solution.
  • Figure 5 illustrates message exchange across 2 hosts (a) sender side and receiver side and (b) processing cost on receipt of message.
  • Figure 6 illustrates a communication across containers/lightweight virtual machines using application level crossbar, in accordance with an embodiment of the present subject matter.
  • Figure 7 illustrates a crossbar design and the processing using the crossbar, in accordance with an embodiment of the present subject matter.
  • Figure 8 illustrates a sequence flow of the operation in same host, different container scenario, in accordance with an embodiment of the present subject matter.
  • Figure 9 illustrates a sequence flow of the operation in the different host, different container scenario, in accordance with an embodiment of the present subject matter.
  • Figure 10 illustrates a node in a communication system, in accordance with an embodiment of the present subject matter.
  • Figure 11 illustrates a node in a communication system, in accordance with an embodiment of the present subject matter.
  • Figure 12 illustrates a method performed by a node in a communication system, in accordance with an embodiment of the present subject matter.
  • the invention can be implemented in numerous ways, as a process, an apparatus, a system, a composition of matter, a computer readable medium such as a computer readable storage medium or a computer network wherein program instructions are sent over optical or electronic communication links.
  • these implementations, or any other form that the invention may take, may be referred to as techniques.
  • the order of the steps of disclosed processes may be altered within the scope of the invention.
  • The terms "plurality" and "a plurality" as used herein may include, for example, "multiple" or "two or more".
  • The terms "plurality" or "a plurality" may be used throughout the specification to describe two or more components, devices, elements, units, parameters, or the like. Unless explicitly stated, the method embodiments described herein are not constrained to a particular order or sequence. Additionally, some of the described method embodiments or elements thereof can occur or be performed simultaneously, at the same point in time, or concurrently.
  • the present invention provides an application level crossbar for allowing communication across containers/lightweight virtual machines.
  • the application level crossbar or software crossbar or software defined switch as shown in figure 6 provides a unified communication interface for the application layer. It abstracts the details of connection management and message sending and receiving.
  • The software defined switch /application level crossbar utilizes the fact that domain sockets can be used to connect network namespaces without affecting isolation for intra-host message exchange, thus improving the performance of message exchange for intra-host virtual machines.
  • software crossbar /application level crossbar/software defined switch is a software switch capable of channeling data for applications. Applications are abstracted from connection management.
  • The crossbar may be designed in two parts: the crossbar lib and the crossbar process.
  • The crossbar lib provides an interface, an application program interface (API), for sending and receiving messages.
  • The application has to initialize this library, and an initialization API is exposed for doing so.
  • The crossbar lib internally uses shared memory for faster message exchange between the crossbar lib and the crossbar process.
  • The crossbar process provides the actual connection management with other virtual machines. Every crossbar process must be configured with relevant details such as its IP address, shared memory key and common communication port. Generally the common communication port is set to 9999.
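The configuration described above can be sketched as a small record; the class and field names below are illustrative assumptions, not structures from the patent, and the sample values are made up.

```python
from dataclasses import dataclass

# Hypothetical per-crossbar-process configuration: each crossbar process is
# configured with an IP address, a shared memory key for the lib <-> process
# region, and the common communication port (generally 9999).
@dataclass
class CrossbarConfig:
    ip_address: str           # unique IP of the lightweight VM (or host) served
    shared_memory_key: int    # key identifying the shared-memory region
    common_port: int = 9999   # common communication port, generally set to 9999

# Example: configuring a local crossbar for one lightweight virtual machine.
cfg = CrossbarConfig(ip_address="10.0.0.5", shared_memory_key=0x5EED)
```

A gateway crossbar would carry the same fields plus the routes of other hosts' gateways, described further below.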
  • Each lightweight virtual machine may have at least one crossbar deployed, called a local crossbar, and all the applications to be run in that lightweight virtual machine must link with the crossbar lib.
  • There may be at least one crossbar deployed at host level, called the gateway crossbar. Every local crossbar is configured with the address of the gateway crossbar. It may be noted and understood by the person skilled in the art that every lightweight virtual machine started must have a unique IP in the network setup.
  • The crossbar lib may have the intelligence (destination address) to put messages on the local crossbar process queue if the message is destined for a process in a different lightweight virtual machine.
  • If the destination IP is not the same as the current host IP, then the destination is outside the host, and the messages have to be put on a network queue instead of the local crossbar process queue.
  • The crossbar process polls for messages, and on a message event it checks whether the destination process is within the same host or a different host.
  • If within the same host, the crossbar process creates or uses an already open domain socket connection with the destination lightweight virtual machine's process.
  • Otherwise, the crossbar process may forward the message to a gateway crossbar process running in that host.
  • The gateway crossbar is configured with the routes of other hosts' gateways. Based on the destination IP, the gateway crossbar finds the other host's gateway and forwards the message using TCP transport. The gateway crossbar uses the host mode of networking for better performance.
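The forwarding decision above can be sketched as follows; the route table contents, addresses, and function name are assumptions made for the example, not the patent's actual data structures.

```python
import ipaddress

# Illustrative gateway route table: gateway crossbar IP -> subnet it serves.
ROUTES = {
    "192.168.10.1": ipaddress.ip_network("192.168.10.0/24"),
    "192.168.20.1": ipaddress.ip_network("192.168.20.0/24"),
}

def choose_path(dest_ip: str, host_ip: str) -> str:
    """Decide where a message should be channeled next."""
    if dest_ip == host_ip:
        # Destination is within this host: create/use a domain socket
        # connection with the destination lightweight VM's process.
        return "local-domain-socket"
    # Destination is outside the host: find the gateway serving the
    # destination subnet and forward the message over TCP transport.
    for gateway_ip, network in ROUTES.items():
        if ipaddress.ip_address(dest_ip) in network:
            return f"forward-tcp:{gateway_ip}"
    return "forward-tcp:default-gateway"
```

For example, a message whose destination IP matches the host IP stays on the fast domain-socket path, while one addressed into 192.168.20.0/24 would be handed to the gateway at 192.168.20.1.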
  • This information shall be auto-synced to all gateway crossbars. There may be a window of time for this sync to happen, and during this period applications may get failure messages when trying to reach the destination. Addition of a new gateway crossbar can be seen as a network management activity, and applications should be designed for it via some notification service.
  • The crossbar process may have a shared memory management process, a domain socket connection thread, a domain socket listen thread, a TCP socket connection thread, a TCP socket listen thread, and a SendRecv thread.
  • The shared memory management process may create/attach to a given shared memory, create a queue from the shared memory region, and map the application to the queue.
  • The domain socket connection thread may establish connections with other local lightweight virtual machine processes, and forward an application message to the gateway crossbar if the destination application is in a different host's lightweight virtual machine.
  • The domain socket listen thread may receive a new connection request and process it for further use.
  • The TCP socket connection thread may establish connections with gateway crossbars in other hosts, and update routes (a map of gateway IP to IP mask).
  • The TCP socket listen thread may receive a new connection request and process it, for example by preparing the data structures for further use.
  • The SendRecv thread may browse through the data structures created by the listen/connect threads, retrieve the appropriate socket handles, and send/receive messages.
  • The crossbar lib may have a shared memory management process and an interface.
  • The shared memory management process may create/attach to the given shared memory, create/attach a queue from the shared memory region, and map the application to the queue.
  • The interface is configured to send, receive, initiate, or tear down the packet or message received.
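The create/attach pattern above can be sketched with POSIX shared memory; a real crossbar would layer lock-protected queues on the region, and the region name and one-byte length-prefix framing here are illustrative assumptions only.

```python
import os
from multiprocessing import shared_memory

# A per-run name stands in for the configured shared memory key.
SHM_NAME = f"crossbar_demo_{os.getpid()}"

# Crossbar lib side: create the shared region and enqueue one message.
creator = shared_memory.SharedMemory(name=SHM_NAME, create=True, size=1024)
payload = b"msg-for-app-queue"
creator.buf[0] = len(payload)                  # length prefix
creator.buf[1:1 + len(payload)] = payload      # message body

# Crossbar process side: attach to the same region by name and dequeue.
attached = shared_memory.SharedMemory(name=SHM_NAME)
length = attached.buf[0]
message = bytes(attached.buf[1:1 + length])

attached.close()
creator.close()
creator.unlink()    # remove the region once both sides are done
```

Because both sides map the same memory, handing a message from the crossbar lib to the crossbar process involves no copy through the kernel's network stack, which is what makes this path faster than a socket for intra-host exchange.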
  • a node 1000 in a communication system is illustrated, in accordance with an embodiment of the present subject matter.
  • the node 1000 is disclosed.
  • While the present subject matter is explained considering that the present invention is implemented in the node 1000, it may be understood that the present invention may also be implemented in a variety of computing systems, such as a laptop computer, a desktop computer, a notebook, a workstation, a mainframe computer, a server, a network server, and the like. It will be understood that the node 1000 may be accessed by multiple users, or by applications residing on the database system.
  • Examples of the node 1000 may include, but are not limited to, a portable computer, a personal digital assistant, a handheld node, sensors, routers, gateways and a workstation.
  • The node 1000 may be communicatively coupled to other nodes or apparatuses to form a network (not shown).
  • Examples of the other nodes or apparatuses may include, but are not limited to, a portable computer, a personal digital assistant, a handheld node, sensors, routers, gateways and a workstation.
  • the network may be a wireless network, a wired network or a combination thereof.
  • the network can be implemented as one of the different types of networks, such as GSM, CDMA, LTE, UMTS, intranet, local area network (LAN) , wide area network (WAN) , the internet, and the like.
  • the network may either be a dedicated network or a shared network.
  • the shared network represents an association of the different types of networks that use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP) , Transmission Control Protocol/Internet Protocol (TCP/IP) , Wireless Application Protocol (WAP) , and the like, to communicate with one another.
  • the network may include a variety of network nodes, including routers, bridges, servers, computing nodes, storage nodes, and the like.
  • the node 1000 may include a processor 1002, an interface 1004, and a memory 1006.
  • the processor 1002 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any nodes that manipulate signals based on operational instructions.
  • the at least one processor is configured to fetch and execute computer-readable instructions or modules stored in the memory 1006.
  • the interface (I/O interface) 1004 may include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like.
  • The I/O interface may allow the node 1000 to interact with a user directly. Further, the I/O interface may enable the node 1000 to communicate with other nodes and computing nodes, such as web servers and external data servers (not shown).
  • the I/O interface can facilitate multiple communications within a wide variety of networks and protocol types, including wired networks, for example, GSM, CDMA, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite.
  • the I/O interface may include one or more ports for connecting a number of nodes to one another or to another server.
  • the I/O interface may provide interaction between the user and the node 1000 via, a screen provided for the interface.
  • the memory 1006 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM) , and/or non-volatile memory, such as read only memory (ROM) , erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes.
  • the memory 1006 may include plurality of instructions or modules or applications to perform various functionalities.
  • the memory includes routines, programs, objects, components, data structures, etc., which perform particular tasks
  • the present invention provides a node 1000 in a communication system.
  • The node 1000 comprises a processor 1002; and a memory 1006 coupled to the processor 1002 for executing a plurality of modules present in the memory 1006.
  • the plurality of modules includes at least one interface module 1008 and at least one processing module 1010.
  • The interface module 1008 is configured to send and/or receive at least one message/packet; the interface is initialized by at least one application residing in the node.
  • the processing module 1010 is configured to provide at least a connection management with one or more other nodes in the communication system to achieve a unified communication, the connection management is attained based on at least one of an IP address or a shared memory key, or a common communication port or any combination thereof.
  • the present invention provides a node 1000 in a communication system.
  • the node 1000 comprises a processor 1002; and a memory 1006 coupled to the processor 1002 for executing a plurality of modules present in the memory 1006.
  • the plurality of modules includes at least a crossbar processing module 1102.
  • The crossbar processing module 1102 is configured to: receive at least one message/packet initialized by at least one application residing in the node using at least a crossbar lib interface; verify the destination of the message/packet, the destination being either within the same node or a different node or any combination thereof; create/use an open domain socket connection in the same node if the destination of the message/packet is the same node; or communicate the message/packet to one or more other nodes based on the IP address of the other node, received in the message/packet, the other node comprising lightweight virtual machine processes.
  • the present invention provides a communication system.
  • the communication system comprises a plurality of host and server devices, a processor, and at least one crossbar embedded on the processor.
  • The crossbar is interconnected to the host and server devices, and adapted to provide a unified communication interface for communication between the host and/or server devices.
  • The crossbar is configured to: receive at least one message/packet initialized by at least one application residing in the host using at least a crossbar lib interface; verify the destination of the message/packet, the destination being either within the host or the server or any combination thereof; create/use an open domain socket connection in the same host device and/or server device if the destination of the message/packet is the same node; or communicate the message/packet to at least one other host/server device based on the IP address of the other host/server device, received in the message/packet, the other host/server device comprising lightweight virtual machine processes.
  • the interface module 1008 is further configured to queue the message/packet associated with the application in at least one queue in at least a shared memory.
  • the processing module 1010 is further configured to fetch the message/packet associated with the application in the queue and verify the destination of the message/packet; the destination is either within the same node or different node or any combination thereof.
  • the processing module is further configured to create/use an open domain socket connection in the same node.
  • the processing module is further configured to communicate the message/packet based on the IP address of the destination.
  • the message/packet based on the IP address is communicated using at least one route pre-defined/pre-configured and pre-stored in the processing module; the route follows a TCP transport mechanism.
  • the processing module 1010 is adapted to utilize domain sockets to connect network spaces without affecting isolation for intra host message exchange.
  • the IP address or the shared memory key, or the common communication port or any combination thereof associated with the other nodes is pre-stored/pre-configured in the memory of the node.
  • the message/packet received is stored in at least one queue in at least a shared memory.
  • the crossbar processing module 1102 is further configured to fetch the associated message/packet stored in the queue, and thereby verify the destination of the message/packet.
  • the message/packet is communicated to the other node using at least one route pre-defined/pre-configured and pre-stored in the crossbar processing module.
  • the route follows a TCP transport mechanism.
  • the crossbar processing module 1102 is further configured to establish a connection with the other nodes in the communication system; the other nodes preferably comprise lightweight virtual machine processes, and the connection is attained based on at least one of an IP address, a shared memory key, or a common communication port, or any combination thereof.
  • the other nodes in the communication system, on receipt of the message/packet, are adapted to process the new connection request by: creating at least one data structure based on the message/packet received; updating the routes pre-defined/pre-configured and pre-stored in the crossbar processing module, the routes preferably being updated by mapping the Gateway IP, the IP Mask, or any combination thereof based on the message/packet received; scanning through the data structure created to retrieve the appropriate socket handles; and thereby creating/using an open domain socket connection in the same node if the destination of the message/packet is the same node, or communicating the message/packet to one or more other nodes based on the IP address of the other node received in the message/packet.
  • a method performed by a node in a communication system is illustrated, in accordance with an embodiment of the present subject matter.
  • the method may be described in the general context of computer executable instructions.
  • computer executable instructions can include routines, programs, objects, components, data structures, procedures, modules, functions, etc., that perform particular functions or implement particular abstract data types.
  • the method may also be practiced in a distributed computing environment where functions are performed by remote processing devices that are linked through a communications network.
  • computer executable instructions may be located in both local and remote computer storage media, including memory storage devices.
  • the order in which the method is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method or alternate methods. Additionally, individual blocks may be deleted from the method without departing from the protection scope of the subject matter described herein. Furthermore, the method can be implemented in any suitable hardware, software, firmware, or combination thereof. However, for ease of explanation, in the embodiments described below, the method may be considered to be implemented in the above described node 1000.
  • At block 1202 at least one message/packet initialized by at least one application residing in the node 1000 is sent or received.
  • the message/packet is sent or received by at least interface of the node 1000.
  • the message/packet associated with the application is queued/stored in at least one queue, in at least a shared memory of the node 1000.
  • the message/packet associated with the application in the queue is verified for destination, by the node 1000.
  • the destination is either within the same node or a different node, or any combination thereof.
  • the processing module is further configured to create/use an open domain socket connection in the same node.
  • the processing module is further configured to communicate the message/packet based on the IP address of the destination.
  • connection management is attained based on at least one of an IP address or a shared memory key, or a common communication port or any combination thereof.
  • one or more domain sockets are utilized to connect network spaces without affecting isolation for intra host message exchange.
  • the IP address or the shared memory key, or the common communication port or any combination thereof associated with the other nodes may be pre-stored/pre-configured in the memory of the node.
  • the other nodes preferably comprise lightweight virtual machine processes, and the connection is attained based on at least one of an IP address, a shared memory key, or a common communication port, or any combination thereof.
  • the other nodes in the communication system, on receipt of the message/packet, process the new connection request by: creating at least one data structure based on the message/packet received; updating the routes pre-defined/pre-configured and pre-stored in the crossbar processing module, the routes preferably being updated by mapping the Gateway IP, the IP Mask, or any combination thereof based on the message/packet received; scanning through the data structure created to retrieve the appropriate socket handles; and creating/using an open domain socket connection in the same node if the destination of the message/packet is the same node, or communicating the message/packet to one or more other nodes based on the IP address of the other node received in the message/packet.
  • the message/packet is communicated based on the IP address of the destination.
  • the message/packet based on the IP address is communicated using at least one route pre-defined/pre-configured and pre-stored in the processing module; the route follows a TCP transport mechanism.
  • the main benefit according to the present invention is that network performance is better, as there is little overhead for intra-host lightweight virtual machine communication.
  • multi-host networking is simplified, as local crossbars use domain sockets to forward messages to the Gateway Crossbar. No additional message encapsulation is needed.
  • networking domain configurations such as IPTable rules and port mapping are not required for the implementation of the present invention.
  • the present invention achieves a technical advancement: the software crossbar, or application crossbar, or software defined switch, by providing network abstraction to applications running in a lightweight virtual machine, simplifies application development/deployment and improves network performance.
  • the present invention may be implemented in any application which runs in a lightweight virtual machine; any application needing a high-performance/scalable network can use this method.
  • existing transport APIs need to be ported to the Crossbar lib.
  • the new applications can base their transport APIs on the Crossbar lib.
  • the disclosed system, apparatus, and method may be implemented in other manners.
  • the described apparatus embodiment is merely exemplary.
  • the unit division is merely logical function division and may be another division in actual implementation.
  • a plurality of units or components may be combined or integrated into another system, or some features may be ignored or not performed.
  • the displayed or discussed mutual couplings or direct couplings or communication connections may be implemented through some interfaces.
  • the indirect couplings or communication connections between the apparatuses or units may be implemented in electronic, mechanical, or other forms.
  • When the functions are implemented in a form of a software functional unit and sold or used as an independent product, the functions may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the present invention essentially, or the part contributing to the prior art, or a part of the technical solutions, may be implemented in a form of a software product.
  • the computer software product is stored in a storage medium, and includes several instructions for instructing a computer node (which may be a personal computer, a server, or a network node) to perform all or a part of the steps of the methods described in the embodiment of the present invention.
  • the foregoing storage medium includes: any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disc.
  • Devices that are in communication with each other need not be in continuous communication with each other, unless expressly specified otherwise.
  • devices that are in communication with each other may communicate directly or indirectly through one or more intermediaries.
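The destination-verification and routing flow recited in the items above can be sketched in Python (a hedged illustration only: the function name, the route table keyed by CIDR network/mask, and the return values are assumptions for exposition, not the patented implementation):

```python
import ipaddress

def resolve_destination(dest_ip: str, local_ips: set, routes: dict):
    """Decide how to forward a message, per the flow described above:
    use a local open domain socket when the destination is this node,
    otherwise look up a pre-configured route (Gateway IP / IP Mask map)
    and forward over TCP to the gateway."""
    if dest_ip in local_ips:
        return ("domain_socket", None)       # same node: domain socket
    addr = ipaddress.ip_address(dest_ip)
    for network, gateway in routes.items():  # pre-stored routes
        if addr in ipaddress.ip_network(network):
            return ("tcp", gateway)          # route follows TCP transport
    raise LookupError(f"no route to {dest_ip}")
```

For example, a message addressed to an IP in a pre-configured 10.1.0.0/16 route would be forwarded over TCP to that route's gateway, while a message addressed to the node's own IP would use a domain socket.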

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present invention provides an application level crossbar for allowing communication across containers/lightweight virtual machines. The application level crossbar, or software crossbar, or software defined switch, provides a unified communication interface for the application layer. It abstracts the details of connection management and of message sending and receiving. The software defined switch/application level crossbar utilizes the fact that domain sockets can be used to connect network spaces without affecting isolation for intra-host message exchange, thus improving the performance of message exchange for intra-host virtual machines.

Description

SYSTEM AND METHOD OF SOFTWARE DEFINED SWITCHES BETWEEN LIGHT WEIGHT VIRTUAL MACHINES USING HOST KERNEL RESOURCES

TECHNICAL FIELD
The present subject matter described herein, in general, relates to communication data processing, and more particularly to improving the performance of networking capability in lightweight virtual machines.
BACKGROUND
In computing, a virtual machine (VM) is an emulation of a particular computer system. Virtual machines operate based on the computer architecture and functions of a real or hypothetical computer, and their implementations may involve specialized hardware, software, or a combination of both. As is well known, running virtual machines has many benefits: they utilize the hardware much better, are easy to back up and exchange, and isolate services from each other. However, running virtual machines also has downsides. Virtual machine images are clunky. Also, and more importantly, virtual machines require a fair amount of resources as they emulate hardware and run a full-stack operating system.
With Linux Containers there exists a lightweight alternative to full-blown virtual machines while retaining their benefits. A container is an operating-system-level virtualization environment for running multiple isolated Linux systems on a single Linux host. This is also sometimes referred to as lightweight virtualization or a Lightweight Virtual Machine. runC, Docker, and Warden are some examples of lightweight virtual machines (containers) which can be used in building a "Platform as a Service" (PaaS). runC/Docker/Warden are solutions based on Linux kernel namespaces and CGroups. They abstract complex kernel APIs with easy-to-use consoles/APIs/image formats, and provide on-demand abstraction of compute, storage, and network capability.
Container networking could be about creating a consistent network environment for a group of containers. This could be achieved using an overlay network, of which multiple implementations exist, such as Docker's default networking mode, Weave, Flannel, and SocketPlane. The key advantage of all these overlay networks is that application code need not change and applications can be deployed as is. There is also another part of container networking: the way a networking namespace connects to the physical network device.
There are multiple Linux kernel modules that allow a networking namespace to communicate with the networking hardware, such as veth, OpenVSwitch, and Domain Sockets. A namespace is a feature of the Linux kernel allowing groups of processes to be separated such that they cannot "see" resources in other groups. As per the Linux kernel documentation on network namespaces, without compromising the network isolation level, network namespaces can communicate via veth pairs and/or Domain Sockets. Network namespaces communicating using a veth pair are shown in figure 1 (a). The veth pair is an Ethernet-like virtual device that can be used inside a container. The veth pair captures the Ethernet frame, and the captured frame can be directed to the destination via a bridge or router. A Domain socket is an IPC (inter-process communication) mechanism which is lightweight and efficient. Network namespaces communicating using a domain socket are shown in figure 1 (b). A domain socket can be controlled by file permissions and is thus more secure than a TCP port, to which anyone can connect and which hence needs a further security shield.
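As a minimal illustration of the domain-socket IPC mentioned above (a generic Linux/Python sketch within a single process; real cross-namespace use would bind an AF_UNIX socket to a filesystem path reachable from both namespaces, where file permissions enforce access control):

```python
import socket

# An AF_UNIX socket pair: the lightweight, efficient IPC referred to
# above. Unlike a TCP port, a bound domain socket is a filesystem
# object, so access to it can be restricted with file permissions.
parent, child = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
parent.sendall(b"hello across namespaces")
data = child.recv(1024)   # received without traversing the TCP/IP stack
parent.close()
child.close()
```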
Figure 2 illustrates a networking setup in Docker. Figure 2 shows the general approach to solving the networking problem employed by virtual machines/containers: capturing Ethernet packets from a virtual device like veth and then tunneling them using a bridge/router to the required container in the same host or a different host. A veth pair is an Ethernet-like virtual device that can be used inside a container. Figure 3 shows the internal arrangement in Docker. As shown in figure 3, each container spawned/instantiated is connected to the Linux Bridge using a veth pair. The container end of the veth pair is called eth0, and the Linux Bridge side of the veth pair is called vethxx, like vethaa or vethbb. A similar veth pair is connected between the host and the Linux Bridge.
As conventionally known, the bridge operates at Layer 2 (L2) and is protocol independent. The L2 network layer is responsible for physical addressing, error correction, and preparing the information for the media. A bridge is a way to connect two separate network segments together in a protocol-independent way. Packets are forwarded based on Ethernet address, rather than IP address (like a router). Since forwarding is done at Layer 2 (L2), all protocols can go transparently through a bridge. All the networking traffic passes through the Linux Bridge or another configured bridge like OpenVSwitch (OVS). IP Tables (as shown in figure 3) are used so that a mapping can be implemented from each container port to a host port. The IP address of each spawned/instantiated container changes on each reboot of the container; this also poses a problem, as services in one container have to obtain the new IP address to access the services.
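The container-port-to-host-port mapping that the IPTables rules implement can be pictured as a simple lookup table (a hypothetical illustration only; the addresses and ports below are made up, and real IPTables NAT rules carry more state):

```python
# Hypothetical container (ip, port) -> host (ip, port) mapping, standing
# in for the IPTables port-mapping rules described above.
port_map = {
    ("172.17.0.2", 80): ("0.0.0.0", 8080),     # container web -> host 8080
    ("172.17.0.3", 5432): ("0.0.0.0", 15432),  # container db  -> host 15432
}

def host_endpoint(container_ip: str, container_port: int):
    """Return the host-side endpoint for a container service, or None.
    Because the container IP changes on each reboot, every such entry
    must be refreshed - the operational problem noted above."""
    return port_map.get((container_ip, container_port))
```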
Figure 4 illustrates communication in an overlay network, or L2/L3 Solution. An overlay network is a computer network that is built on top of another network. Nodes in the overlay network can be thought of as being connected by virtual or logical links, each of which corresponds to a path, perhaps through many physical links, in the underlying network. The L2-based solution means capturing L2 frames from virtual Ethernet-like devices and then tunneling those frames over another transport mechanism like TCP/UDP/Secure TCP.
Figure 4 shows an approach, as available in the prior art, to capture the packets at the Ethernet level (Virtual Ethernet Device) and then tunnel the packets over a transport like TCP/UDP. This is called an overlay network or L2/L3 Solution. However, the approach shown in figure 4 has network-efficiency and operational-complexity related issues. Network efficiency is very low, as data gets captured at L2 (Layer 2 of the network stack) and then re-encapsulated for sending to the right destination or routed through a bridge. L2 solutions in multi-host networking can pose operational issues like configuration errors and debugging, and would need a network expert.
Figure 5 shows details of the message exchange across 2 hosts. As shown in figure 5 (a): a sender application (App) creates a socket to the destination; the sender application formats data and sends the message using the socket interface; the kernel's TCP/IP stack further processes it and sends it to the network card; and on the receiving App side, the network card receives the data, forwards it to the TCP/IP stack, and the stack gives it to the App. When the default networking mode of lightweight virtual machines is used as the conventional mode, which uses overlay networks, the steps performed for figure 5 (a) need further processing, incurring additional CPU and memory cost as shown in figure 5 (b), thus hitting network performance.
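The sender/receiver steps of figure 5 (a) can be sketched with ordinary sockets (a loopback Python sketch in which the loopback interface stands in for the two hosts; the names are illustrative, not from the patent):

```python
import socket
import threading

def receive_all(srv: socket.socket, out: list):
    # The "receiving App": accept the connection handed up by the
    # kernel's TCP/IP stack and read until the sender closes.
    conn, _ = srv.accept()
    buf = b""
    while True:
        chunk = conn.recv(1024)
        if not chunk:
            break
        buf += chunk
    out.append(buf)
    conn.close()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))      # loopback stands in for the remote host
srv.listen(1)
received = []
t = threading.Thread(target=receive_all, args=(srv, received))
t.start()

# The "sender App": create a socket to the destination, format the
# data, and send it via the socket interface / kernel TCP/IP stack.
snd = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
snd.connect(srv.getsockname())
snd.sendall(b"formatted message")
snd.close()
t.join()
srv.close()
```

With an overlay network, each such message would additionally be captured at L2 and re-encapsulated, which is the extra processing cost figure 5 (b) depicts.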
Thus, in view of the above, it is evident that when using lightweight virtual machines, compute and storage capability does not suffer a performance hit but networking capability is severely constrained in performance.
The above-described deficiencies of today's Lightweight Virtual machines that are implemented in the end devices are merely intended to provide an overview of some of the problems of conventional systems /mechanism /techniques, and are not intended to be exhaustive. Other problems with conventional systems/mechanism/techniques and corresponding benefits of the various non-limiting embodiments described herein may become further apparent upon review of the following description.
SUMMARY
This summary is provided to introduce concepts related to improving the performance of networking capability in lightweight virtual machines, and the same are further described below in the detailed description. This summary is not intended to identify essential features of the claimed subject matter nor is it intended for use in determining or limiting the scope of the claimed subject matter.
A main objective of the present invention is to solve the technical problem as recited above by providing a mechanism to improve the performance of networking capability, specifically, the container /lightweight virtual machines.
Accordingly, the present invention provides an application level crossbar for allowing communication across containers/lightweight virtual machines. The application level crossbar, or software crossbar, or software defined switch, provides a unified communication interface for the application layer. It abstracts the details of connection management and of message sending and receiving. The software defined switch/application level crossbar utilizes the fact that domain sockets can be used to connect network spaces without affecting isolation for intra-host message exchange, thus improving the performance of message exchange for intra-host virtual machines.
In one implementation, software crossbar /application level crossbar/software defined switch is a software switch capable of channeling data for applications. Applications are abstracted from connection management.
In one implementation, the present invention provides a node in a communication system. The node comprises a processor; and a memory coupled to the processor for executing a plurality of modules present in the memory. The plurality of modules includes at least one interface module and at least one processing module. The interface module is configured to send and/or receive at least one message/packet; the interface is initialized by at least one application residing in the node. The processing module is configured to provide at least connection management with one or more other nodes in the communication system to achieve unified communication; the connection management is attained based on at least one of an IP address, a shared memory key, or a common communication port, or any combination thereof.
In one implementation, the present invention provides a node in a communication system. The node comprises a processor; and a memory coupled to the processor for executing a plurality of modules present in the memory. The plurality of modules includes at least a crossbar processing module. The crossbar processing module is configured to: receive at least one message/packet initialized by at least one application residing in the node, using at least a crossbar lib interface; verify the destination of the message/packet, where the destination is either within the same node or a different node, or any combination thereof; create/use an open domain socket connection in the same node if the destination of the message/packet is the same node; or communicate the message/packet to one or more other nodes based on the IP address of the other node received in the message/packet, where the other node comprises lightweight virtual machine processes.
In one implementation, the present invention provides a communication system. The communication system comprises a plurality of host and server devices, a processor, and at least one crossbar embedded on the processor. The crossbar is interconnected with the host and server devices, and adapted to provide a unified communication interface for communication between the host and/or server devices. The crossbar is configured to: receive at least one message/packet initialized by at least one application residing in the host using at least a crossbar lib interface; verify the destination of the message/packet, where the destination is either within the host or the server, or any combination thereof; create/use an open domain socket connection in the same host device and/or server device if the destination of the message/packet is the same node; or communicate the message/packet to at least one other host/server device based on the IP address of the other host/server device received in the message/packet, where the other host/server device comprises lightweight virtual machine processes.
In one implementation, the present invention provides a method performed by a node in a communication system. The method comprises sending and/or receiving, by at least interface, at least one message/packet initialized by at least one application residing in the node; and providing, by at least one processing module, at least a connection management with one or more other nodes in the communication system to achieve a unified communication, the connection management is attained based on at least one of an IP address or a shared memory key, or a common communication port or any combination thereof.
In one implementation, the present invention provides a method performed by a node in a communication system. The method comprises receiving at least one message/packet initialized by at least one application residing in the node using at least a crossbar lib interface; verifying the destination of the message/packet, where the destination is either within the same node or a different node, or any combination thereof; creating/using an open domain socket connection in the same node if the destination of the message/packet is the same node; or communicating the message/packet to one or more other nodes based on the IP address of the other node received in the message/packet, where the other node comprises lightweight virtual machine processes.
In contrast to the prior art, the main benefit of the present invention is that it provides a software crossbar, or application crossbar, or software defined switch, which, by providing network abstraction to applications running in a lightweight virtual machine, simplifies application development/deployment and improves network performance.
Further, the present invention works in PMs (Physical Machines), can work in PM-VM (a physical machine running many virtual machines), can work in PM-VM-LVM (a physical machine running many virtual machines, each virtual machine running many lightweight virtual machines), and works in the PM-LVM deployment combination as well.
In contrast to the prior art, the network performance achieved by the present invention is better, as there is not much overhead for intra-host lightweight virtual machine communication.
In contrast to the prior art, multi-host networking, by means of the present invention, is simplified, as local crossbars use domain sockets to forward messages to the gateway crossbar. No additional message encapsulations are needed.
In contrast to the prior art, by means of the present invention no networking domain configurations like IPTable rules or port mapping are required, yet better network performance is still achieved by reducing operational overhead.
Furthermore, in contrast to the prior art, by means of the present invention, for message exchange within a host across lightweight virtual machines, the performance provided would be equal to that of native domain sockets and may degrade by 10% due to operation and maintenance requirements like logging. Furthermore, for messages exchanged across hosts' lightweight virtual machines, performance would be better than the overlay network solutions available in the open source market.
The various options and preferred embodiments referred to above in relation to the first implementation are also applicable in relation to the other implementations.
BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWINGS
The detailed description is described with reference to the accompanying figures. In the figures, the left-most digit (s) of a reference number identifies the figure in which the reference number first appears. The same numbers are used throughout the drawings to refer to like features and components.
Figure 1 illustrates a network namespaces communication using (a) veth pairs, and (b) domain sockets.
Figure 2 illustrates a networking setup in Docker.
Figure 3 illustrates internal arrangement in docker.
Figure 4 illustrates overlay network or L2/L3 Solution.
Figure 5 illustrates message exchange across 2 hosts (a) sender side and receiver side and (b) processing cost on receipt of message.
Figure 6 illustrates a communication across containers/lightweight virtual machines using application level crossbar, in accordance with an embodiment of the present subject matter.
Figure 7 illustrates a crossbar design and the processing using the crossbar, in accordance with an embodiment of the present subject matter.
Figure 8 illustrates a sequence flow of the operation in same host, different container scenario, in accordance with an embodiment of the present subject matter.
Figure 9 illustrates a sequence flow of the operation in different host, different container, in accordance with an embodiment of the present subject matter.
Figure 10 illustrates a node in a communication system, in accordance with an embodiment of the present subject matter.
Figure 11 illustrates a node in a communication system, in accordance with an embodiment of the present subject matter.
Figure 12 illustrates a method performed by a node in a communication system, in accordance with an embodiment of the present subject matter.
It is to be understood that the attached drawings are for purposes of illustrating the concepts of the invention and may not be to scale.
DETAILED DESCRIPTION OF THE PRESENT INVENTION
The following clearly describes the technical solutions in the embodiments of the present invention with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the described  embodiments are merely a part rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative efforts shall fall within the protection scope of the present invention.
The invention can be implemented in numerous ways, as a process, an apparatus, a system, a composition of matter, a computer readable medium such as a computer readable storage medium or a computer network wherein program instructions are sent over optical or electronic communication links. In this specification, these implementations, or any other form that the invention may take, may be referred to as techniques. In general, the order of the steps of disclosed processes may be altered within the scope of the invention.
A detailed description of one or more embodiments of the invention is provided below along with accompanying figures that illustrate the principles of the invention. The invention is described in connection with such embodiments, but the invention is not limited to any embodiment. The scope of the invention is limited only by the claims and the invention encompasses numerous alternatives, modifications and equivalents. Numerous specific details are set forth in the following description in order to provide a thorough understanding of the invention. These details are provided for the purpose of example and the invention may be practiced according to the claims without some or all of these specific details. For the purpose of clarity, technical material that is known in the technical fields related to the invention has not been described in detail so that the invention is not unnecessarily obscured.
In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the invention. However, it will be understood by those skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, and components, modules, units and/or circuits have not been described in detail so as not to obscure the invention.
Although embodiments of the invention are not limited in this regard, discussions utilizing terms such as, for example, “processing,” “computing,” “calculating,” “determining,” “establishing,” “analyzing,” “checking,” or the like, may refer to operation (s) and/or process (es) of a computer, a computing platform, a computing system, or other electronic computing device, that manipulates and/or transforms data represented as physical (e.g., electronic) quantities within the computer's registers and/or memories into other data similarly represented as physical quantities within the computer's registers and/or memories, or other information in a non-transitory storage medium that may store instructions to perform operations and/or processes.
Although embodiments of the invention are not limited in this regard, the terms “plurality” and “a plurality” as used herein may include, for example, “multiple” or “two or more”. The terms “plurality” or “a plurality” may be used throughout the specification to describe two or more components, devices, elements, units, parameters, or the like. Unless explicitly stated, the method embodiments described herein are not constrained to a particular order or sequence. Additionally, some of the described method embodiments or elements thereof can occur or be performed simultaneously, at the same point in time, or concurrently.
When using lightweight virtual machines, compute and storage capability does not suffer a performance hit but networking capability is severely constrained in performance.
Accordingly, the present invention provides an application level crossbar for allowing communication across containers/lightweight virtual machines. The application level crossbar, or software crossbar, or software defined switch, as shown in figure 6, provides a unified communication interface for the application layer. It abstracts the details of connection management and of message sending and receiving. The software defined switch/application level crossbar utilizes the fact that domain sockets can be used to connect network spaces without affecting isolation for intra-host message exchange, thus improving the performance of message exchange for intra-host virtual machines.
In one implementation, the software crossbar/application level crossbar/software defined switch is a software switch capable of channeling data for applications. Applications are abstracted from connection management.
System, method and nodes of software defined switches between light weight virtual machines for improving performance of message exchange in virtual machines are disclosed.
While aspects are described for a system and method of software defined switches between lightweight virtual machines using host kernel resources, and while the present invention may be implemented in any number of different computing systems, environments, and/or configurations, the embodiments are described in the context of the following exemplary systems, devices/nodes/apparatus, and methods.
Henceforth, embodiments of the present disclosure are explained with the help of exemplary diagrams and one or more examples. However, such exemplary diagrams and examples are provided for illustration purposes, for a better understanding of the present disclosure, and should not be construed as a limitation on the scope of the present disclosure.
Referring now to figure 7, a crossbar design and the processing using the crossbar is illustrated, in accordance with an embodiment of the present subject matter.
In one implementation, as shown in figure 7, the crossbar may be designed in two parts: the Crossbar lib and the Crossbar process.
In one implementation, the Crossbar lib provides an interface, i.e., an application program interface (API), for sending and receiving messages. The application has to initialize this library, and an initialization API is exposed for doing so. The Crossbar lib would internally use shared memory for faster message exchange between the Crossbar lib and the Crossbar process.
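By way of a non-limiting illustration (all identifiers below are assumptions, not the actual Crossbar lib API), the initialization and send/receive surface described above could look as follows, with an in-process queue standing in for the shared memory region:

```cpp
#include <deque>
#include <string>

// Illustrative message as placed on the queue between the Crossbar lib
// and the Crossbar process (field names are assumptions).
struct Message {
    std::string dest_ip;   // destination lightweight virtual machine / host
    int dest_port;         // destination application's port
    std::string payload;
};

// Minimal stand-in for the Crossbar lib surface described above: an
// initialization API plus send/receive. A real implementation would back
// the queue with shared memory rather than an in-process std::deque.
class CrossbarLib {
public:
    bool init(const std::string& shm_key) { shm_key_ = shm_key; return true; }
    void send(const Message& m) { queue_.push_back(m); }
    bool recv(Message* out) {
        if (queue_.empty()) return false;
        *out = queue_.front();
        queue_.pop_front();
        return true;
    }
private:
    std::string shm_key_;
    std::deque<Message> queue_;
};
```

An application would call init once with the configured shared memory key and then use send/recv, leaving connection management to the Crossbar process.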
In one implementation, the Crossbar process provides the actual connection management with other virtual machines. Every Crossbar process must be configured with relevant details such as the IP address, the shared memory key and the common communication port. Generally, the common communication port is set to 9999.
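The configuration described above can be sketched as a simple structure (the field names are illustrative assumptions; only the default port of 9999 is taken from the description):

```cpp
#include <string>

// Illustrative configuration each Crossbar process is started with.
// Field names are assumptions; the default of 9999 for the common
// communication port follows the description above.
struct CrossbarConfig {
    std::string host_ip;           // unique IP of this lightweight VM
    std::string shared_memory_key; // key for the lib/process message queue
    int common_port = 9999;        // common communication port
};
```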
In one implementation, each lightweight virtual machine may have at least one crossbar deployed, called a local crossbar, and all the applications to be run in that lightweight virtual machine must link with the Crossbar lib.

For example: g++ -g myapp.cpp -L /home/abc/lib -lCrossbar -o myApp, wherein myApp (the application) links to the Crossbar library and utilizes its APIs for sending/receiving messages.
In one implementation, there may be at least one crossbar deployed at the host level, called the gateway crossbar. Every local crossbar is configured with the address of the gateway crossbar. It may be noted and understood by the person skilled in the art that every lightweight virtual machine started must have a unique IP in the network setup.
In one implementation, the Crossbar lib may have the intelligence (based on the destination address) to put messages on the local crossbar process queue if the message is destined for a process in a different lightweight virtual machine. In one example, if the destination IP is not the same as the current host IP, then the destination is outside the host, and the messages have to be put on a network queue instead of the local crossbar process queue.
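The dispatch rule above reduces to a comparison of the destination IP against the current host IP; a minimal sketch (function and enum names are assumptions) is:

```cpp
#include <string>

enum class QueueChoice { LocalCrossbarQueue, NetworkQueue };

// The dispatch rule described above: when the destination IP differs from
// the current host IP, the message must leave the host and is placed on a
// network queue; otherwise it goes to the local crossbar process queue.
QueueChoice choose_queue(const std::string& dest_ip, const std::string& host_ip) {
    return dest_ip == host_ip ? QueueChoice::LocalCrossbarQueue
                              : QueueChoice::NetworkQueue;
}
```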
In one implementation, the crossbar process would poll for messages and, on a message event, check whether the destination process is within the same host or a different host.
In one implementation, if the same host is the destination, then the crossbar process would create or reuse an already open domain socket connection with the destination lightweight virtual machine’s process.
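As a self-contained stand-in for this intra-host path (not the actual crossbar code), the following sketch carries one message over a connected AF_UNIX (domain) socket pair; a real deployment would connect to a named socket path belonging to the destination process instead:

```cpp
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>
#include <string>

// Stand-in for the intra-host path: a connected AF_UNIX (domain) socket
// pair carries one message between two endpoints, much like the crossbar
// process exchanging a message with a co-hosted container's process.
std::string domain_socket_roundtrip(const std::string& msg) {
    int fds[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) != 0) return "";
    ssize_t sent = write(fds[0], msg.data(), msg.size());
    char buf[256] = {0};
    ssize_t got = (sent > 0) ? read(fds[1], buf, sizeof(buf) - 1) : -1;
    close(fds[0]);
    close(fds[1]);
    return got > 0 ? std::string(buf, got) : "";
}
```

Because both endpoints live in the same kernel, no network stack traversal or encapsulation is involved, which is the source of the performance benefit claimed for intra-host exchange.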
In one implementation, if the destination is a different host, then the crossbar process may forward the message to the gateway crossbar process running in that host.
In one implementation, the gateway crossbar would be configured with the routes of the other hosts’ gateways. Based on the destination IP, the gateway crossbar would find the other host’s gateway and forward the message using TCP transport. The gateway crossbar will use the host mode of networking for better performance.
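The route lookup described above (a map of gateway IP and IP mask, as also referenced later for the TCP socket connection thread) can be sketched as follows; the structure and function names are assumptions:

```cpp
#include <arpa/inet.h>
#include <cstdint>
#include <string>
#include <vector>

// One illustrative routing entry held by the gateway crossbar: the other
// host's gateway IP plus the network/mask that gateway serves.
struct Route {
    uint32_t gateway;
    uint32_t network;
    uint32_t mask;
};

// Convert dotted-quad notation to a host-order 32-bit integer.
uint32_t ip_to_u32(const std::string& dotted) {
    in_addr a{};
    inet_pton(AF_INET, dotted.c_str(), &a);
    return ntohl(a.s_addr);
}

// Route lookup as described above: based on the destination IP, find the
// gateway whose (network, mask) covers it. Returns 0 when no route matches.
uint32_t find_gateway(const std::vector<Route>& routes, uint32_t dest) {
    for (const Route& r : routes)
        if ((dest & r.mask) == r.network)
            return r.gateway;
    return 0;
}
```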
In one implementation, in case of any updates, such as the addition of a new gateway crossbar, this information shall be auto-synced to all the gateway crossbars. There may be a window of time for this sync to happen, and during this period applications may get failure messages when trying to reach the destination. The addition of a new gateway crossbar can be seen as a network management activity, and hence applications may be designed for the same via some notification services.
Referring again to figure 7, in one implementation, the crossbar process may have a shared memory management process, a domain socket connection thread process, a domain socket listen thread process, a TCP socket connection thread process, a TCP socket listen thread process, and a SendRecv thread process.
In one implementation, the shared memory management process may create/attach to a given shared memory, create a queue from the shared memory region, and map the application to the queue.
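A minimal sketch of the create/attach step, using System V shared memory as one possible host kernel facility (IPC_PRIVATE keeps the example self-contained; a deployment would use the configured shared memory key instead):

```cpp
#include <sys/ipc.h>
#include <sys/shm.h>
#include <cstddef>

// Sketch of the shared memory management step: create (or attach to) a
// segment, which the Crossbar lib and process then carve into message
// queues. Returns nullptr on failure and reports the segment id via
// shmid_out so the caller can later detach and remove the segment.
void* attach_shared_segment(std::size_t bytes, int* shmid_out) {
    int shmid = shmget(IPC_PRIVATE, bytes, IPC_CREAT | 0600);
    if (shmid < 0) return nullptr;
    if (shmid_out) *shmid_out = shmid;
    void* addr = shmat(shmid, nullptr, 0);
    return addr == reinterpret_cast<void*>(-1) ? nullptr : addr;
}
```

Both the Crossbar lib and the Crossbar process would attach to the same segment (via the shared memory key) so messages cross the boundary without copying through the network stack.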
In one implementation, the domain socket connection thread process may establish connections with other local lightweight virtual machine processes, and forward the application message to the gateway crossbar if the destination application is in a different host’s lightweight virtual machine.
In one implementation, the domain socket listen thread process may receive a new connection request and process the new connection request for further use.
In one implementation, the TCP socket connection thread process may establish connection with other gateway crossbar in other host, and update routes (map of GatewayIP, IPMask) .
In one implementation, the TCP socket listen thread process may receive a new connection request and process the new connection request, for example, by preparing the data structure for further use.
In one implementation, the SendRecv thread process may browse through the data structures created by the listen/connect threads, retrieve the appropriate socket handles, and send/receive messages.
Referring again to figure 7, in one implementation, the Crossbar lib may have shared memory management process and an interface.
In one implementation, the shared memory management process may create/attach to the given shared memory, create/attach a queue from the shared memory region, and map the application to the queue.
In one implementation, the interface is configured to send, receive, initiate, or tear down the packet or message received.
Referring now to figure 8, a sequence flow of the operation in same host, different container scenario is illustrated, in accordance with an embodiment of the present subject matter.
Referring now to figure 9, a sequence flow of the operation in different host, different container is illustrated, in accordance with an embodiment of the present subject matter.
Referring now to figures 10 and 11, a node 1000 in a communication system is illustrated, in accordance with an embodiment of the present subject matter. In one implementation, the node 1000 is disclosed. Although the present subject matter is explained considering that the present invention is implemented in the node 1000, it may be understood that the present invention may also be implemented in a variety of computing systems, such as a laptop computer, a desktop computer, a notebook, a workstation, a mainframe computer, a server, a network server, and the like. It will be understood that the node 1000 may be accessed by multiple users, or applications residing on the database system. Examples of the node 1000 may include, but are not limited to, a portable computer, a personal digital assistant, a handheld node, sensors, routers, gateways and a workstation. The node 1000 may be communicatively coupled to other nodes or apparatuses to form a network (not shown). Examples of the other nodes or apparatuses may include, but are not limited to, a portable computer, a personal digital assistant, a handheld node, sensors, routers, gateways and a workstation.
In one implementation, the network (not shown) may be a wireless network, a wired network or a combination thereof. The network can be implemented as one of the different types of networks, such as GSM, CDMA, LTE, UMTS, intranet, local area network (LAN) , wide area network (WAN) , the internet, and the like. The network may either be a dedicated network or a shared network. The shared network represents an association of the different types of networks that use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP) , Transmission Control Protocol/Internet Protocol (TCP/IP) , Wireless Application Protocol (WAP) , and the like, to communicate with one another. Further the network may include a variety of network nodes, including routers, bridges, servers, computing nodes, storage nodes, and the like.
The node 1000 may include a processor 1002, an interface 1004, and a memory 1006. The processor 1002 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any nodes that manipulate signals based on operational instructions. Among other capabilities, the at least one processor is configured to fetch and execute computer-readable instructions or modules stored in the memory 1006.
The interface (I/O interface) 1004, may include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like. The I/O interface may allow the database system, the first node, the second node, and the third node to interact with a user directly. Further, the I/O interface may enable the node 1000 to communicate with other nodes, computing nodes, such as web servers and external data servers (not shown). The I/O interface can facilitate multiple communications within a wide variety of networks and protocol types, including wired networks, for example, GSM, CDMA, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. The I/O interface may include one or more ports for connecting a number of nodes to one another or to another server. The I/O interface may provide interaction between the user and the node 1000 via a screen provided for the interface.
The memory 1006 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM) , and/or non-volatile memory, such as read only memory (ROM) , erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. The memory 1006 may include plurality of instructions or modules or applications to perform various functionalities. The memory includes routines, programs, objects, components, data structures, etc., which perform particular tasks or implement particular abstract data types.
In one implementation, the present invention provides a node 1000 in a communication system. The node 1000 comprises a processor 1002; and a memory 1006 coupled to the processor 1002 for executing a plurality of modules present in the memory 1006. The plurality of modules includes at least one interface module 1008 and at least one processing module 1010. The interface module 1008 is configured to send and/or receive at least one message/packet; the interface is initialized by at least one application residing in the node. The processing module 1010 is configured to provide at least a connection management with one or more other nodes in the communication system to achieve a unified communication, the connection management is attained based on at least one of an IP address or a shared memory key, or a common communication port or any combination thereof.
In one implementation, the present invention provides a node 1000 in a communication system. The node 1000 comprises a processor 1002; and a memory 1006 coupled to the processor 1002 for executing a plurality of modules present in the memory 1006. The plurality of modules includes at least a crossbar processing module 1102. The crossbar processing module 1102 is configured to receive at least one message/packet initialized by at least one application residing in the node using at least a crossbar lib interface; verify the destination of the message/packet, the destination is either within the same node or a different node or any combination thereof; create/use an open domain socket connection in the same node if the destination of the message/packet is the same node; or communicate the message/packet to one or more other nodes based on the IP address of the other node, received in the message/packet, the other node comprising lightweight virtual machine processes.
In one implementation, the present invention provides a communication system. The communication system comprises a plurality of host and server devices, a processor, and at least one crossbar embedded on the processor. The crossbar is interconnected with the host and server devices, and adapted to provide a unified communication interface for communication between the host and/or server devices. The crossbar is configured to receive at least one message/packet initialized by at least one application residing in the host using at least a crossbar lib interface; verify the destination of the message/packet, the destination is either within the host or the server or any combination thereof; create/use an open domain socket connection in the same host device and/or server device if the destination of the message/packet is the same node; or communicate the message/packet to at least one other host/server device based on the IP address of the other host/server device, received in the message/packet, the other host/server device comprising lightweight virtual machine processes.
In one implementation, the interface module 1008 is further configured to queue the message/packet associated with the application in at least one queue in at least a shared memory.
In one implementation, the processing module 1010 is further configured to fetch the message/packet associated with the application in the queue and verify the destination of the message/packet; the destination is either within the same node or a different node or any combination thereof.
In one implementation, if the destination of the message/packet is the same node, the processing module is further configured to create/use an open domain socket connection in the same node.
In one implementation, if the destination of the message/packet is a different node, the processing module is further configured to communicate the message/packet based on the IP address of the destination.
In one implementation, the message/packet based on the IP address is communicated using at least one route pre-defined/pre-configured and pre-stored in the processing module; the route follows a TCP transport mechanism.
In one implementation, the processing module 1010 is adapted to utilize domain sockets to connect network spaces without affecting isolation for intra host message exchange.
In one implementation, the IP address or the shared memory key, or the common communication port or any combination thereof associated with the other nodes is pre-stored/pre-configured in the memory of the node.
In one implementation, the message/packet received is stored in at least one queue in at least a shared memory.
In one implementation, the crossbar processing module 1102 is further configured to fetch the message/packet stored in the queue, and thereby verify the destination of the message/packet.
In one implementation, the message/packet is communicated to the other node using at least one route pre-defined/pre-configured and pre-stored in the crossbar processing module. The route follows a TCP transport mechanism.
In one implementation, the crossbar processing module 1102 is further configured to establish a connection with the other nodes in the communication system; the other nodes preferably comprise lightweight virtual machine processes, and the connection is attained based on at least one of an IP address or a shared memory key, or a common communication port or any combination thereof.
In one implementation, the other nodes in the communication system, on receipt of the message/packet, are adapted to process the new connection request, by creating at least one data structure based on the message/packet received; update the routes pre-defined/pre-configured and pre-stored in the crossbar processing module, the routes being updated preferably by mapping of the Gateway IP, the IP Mask, or any combination thereof based on the message/packet received; scan through the data structure created to retrieve the appropriate socket handles; and thereby create/use an open domain socket connection in the same node if the destination of the message/packet is the same node; or communicate the message/packet to one or more other nodes based on the IP address of the other node, received in the message/packet.
Referring now to figure 12, a method performed by a node in a communication system is illustrated, in accordance with an embodiment of the present subject matter. The method may be described in the general context of computer executable instructions. Generally, computer executable instructions can include routines, programs, objects, components, data structures, procedures, modules, functions, etc., that perform particular functions or implement particular abstract data types. The method may also be practiced in a distributed computing environment where functions are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, computer executable instructions may be located in both local and remote computer storage media, including memory storage devices.
The order in which the method is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method or alternate methods. Additionally, individual blocks may be deleted from the method without departing from the protection scope of the subject matter described herein. Furthermore, the method can be implemented in any suitable hardware, software, firmware, or combination thereof. However, for ease of explanation, in the embodiments described below, the method may be considered to be implemented in the above described node 1000.
Referring now to figure 12, a method performed by a node 1000 in a communication system is disclosed.
At block 1202, at least one message/packet initialized by at least one application residing in the node 1000 is sent or received. The message/packet is sent or received by at least interface of the node 1000.
At block 1204, the message/packet associated with the application, is queued/stored in at least one queue, in at least a shared memory of the node 1000.
At block 1206, the message/packet associated with the application is fetched from the queue.
At block 1208, the message/packet associated with the application in the queue is verified for destination, by the node 1000. The destination is either within the same node or a different node or any combination thereof.
In one implementation, if the destination of the message/packet is the same node, the processing module is further configured to create/use an open domain socket connection in the same node.
In one implementation, if the destination of the message/packet is a different node, the processing module is further configured to communicate the message/packet based on the IP address of the destination.
At block 1210, at least a connection is established with one or more other nodes in the communication system to achieve a unified communication, the connection management is attained based on at least one of an IP address or a shared memory key, or a common communication port or any combination thereof. In one implementation, one or more domain sockets are utilized to connect network spaces without affecting isolation for intra host message exchange. The IP address or the shared memory key, or the common communication port or any combination thereof associated with the other nodes may be pre-stored/pre-configured in the memory of the node.
In one implementation, the other nodes preferably comprise lightweight virtual machine processes, and the connection is attained based on at least one of an IP address or a shared memory key, or a common communication port or any combination thereof. The other nodes in the communication system, on receipt of the message/packet, process the new connection request by creating at least one data structure based on the message/packet received; update the routes pre-defined/pre-configured and pre-stored in the crossbar processing module, the routes being updated preferably by mapping of the Gateway IP, the IP Mask, or any combination thereof based on the message/packet received; scan through the data structure created to retrieve the appropriate socket handles; and create/use an open domain socket connection in the same node if the destination of the message/packet is the same node, or communicate the message/packet to one or more other nodes based on the IP address of the other node, received in the message/packet.
At block 1212, the message/packet is communicated based on the IP address of the destination. In one implementation, the message/packet based on the IP address is communicated using at least one route pre-defined/pre-configured and pre-stored in the processing module; the route follows a TCP transport mechanism.
In contrast to the prior art, the main benefit according to the present invention is that network performance is better, as there is little overhead for intra-host lightweight virtual machine communication. Further, multi-host networking is simplified, as local crossbars use domain sockets to forward messages to the gateway crossbar. No additional message encapsulations are needed. Furthermore, networking domain configurations such as IPTable rules and port mapping are not required for the implementation of the present invention.
In one implementation, the present invention achieves a technical advancement, as the software crossbar or application crossbar or software defined switch provides network abstraction to applications running in a lightweight virtual machine, simplifies application development/deployment, and improves network performance.
In one implementation, the present invention may be implemented in any application which runs in a lightweight virtual machine; any such application with high performance/scalable network needs can use this method. For legacy applications, the transport APIs need to be ported to the Crossbar lib. New applications can base their transport APIs on the Crossbar lib.
A person skilled in the art may understand that any known or new algorithms may be used for the implementation of the present invention. However, it is to be noted that the present invention provides a method to achieve the above-mentioned benefits and technical advancement irrespective of using any known or new algorithms.
A person of ordinary skill in the art may be aware that in combination with the examples described in the embodiments disclosed in this specification, units and algorithm steps may be implemented by electronic hardware, or a combination of computer software and electronic hardware. Whether the functions are performed by hardware or software depends on the particular applications and design constraint conditions of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but it should not be considered that the implementation goes beyond the scope of the present invention.
It may be clearly understood by a person skilled in the art that for the purpose of convenient and brief description, for a detailed working process of the foregoing system, apparatus, and unit, reference may be made to a corresponding process in the foregoing method embodiments, and details are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the described apparatus embodiment is merely exemplary. For example, the unit division is merely logical function division and may be other division in actual implementation. For example, a plurality of units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the displayed or discussed mutual couplings or direct couplings or communication connections may be implemented through some interfaces. The indirect couplings or communication  connections between the apparatuses or units may be implemented in electronic, mechanical, or other forms.
When the functions are implemented in a form of a software functional unit and sold or used as an independent product, the functions may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the present invention essentially, or the part contributing to the prior art, or a part of the technical solutions may be implemented in a form of a software product. The computer software product is stored in a storage medium, and includes several instructions for instructing a computer node (which may be a personal computer, a server, or a network node) to perform all or a part of the steps of the methods described in the embodiment of the present invention. The foregoing storage medium includes: any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM) , a random access memory (Random Access Memory, RAM) , a magnetic disk, or an optical disc.
Devices that are in communication with each other need not be in continuous communication with each other, unless expressly specified otherwise. In addition, devices that are in communication with each other may communicate directly or indirectly through one or more intermediaries.
When a single device or article is described herein, it will be readily apparent that more than one device/article (whether or not they cooperate) may be used in place of a single device/article. Similarly, where more than one device or article is described herein (whether or not they cooperate) , it will be readily apparent that a single device/article may be used in place of the more than one device or article or a different number of devices/articles may be used instead of the shown number of devices or programs. The functionality and/or the features of a device may be alternatively embodied by one or more other devices which are not explicitly described as having such functionality/features. Thus, other embodiments of the invention need not include the device itself.
Finally, the language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the invention be limited not by this detailed description, but rather by any claims that issue on an application based hereon. Accordingly, the disclosure of the embodiments of the invention is intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.
With respect to the use of substantially any plural and/or singular terms herein, those having skill in the art can translate from the plural to the singular and/or from the singular to the plural as is appropriate to the context and/or application. The various singular/plural permutations may be expressly set forth herein for sake of clarity.
Although implementations for system and method of software defined switches between light weight virtual machines using host kernel resources have been described in language specific to structural features and/or methods, it is to be understood that the appended claims are not necessarily limited to the specific features or methods described. Rather, the specific features and methods are disclosed as examples of implementations of the system and method of software defined switches between light weight virtual machines using host kernel resources.

Claims (35)

  1. A node in a communication system, the node comprising:
    a processor;
    a memory coupled to the processor for executing a plurality of modules present in the memory, the plurality of modules comprising:
    at least one interface module configured to send and/or receive at least one message/packet, the interface is initialized by at least one application residing in the node; and
    at least one processing module configured to provide at least a connection management with one or more other nodes in the communication system to achieve a unified communication, the connection management is attained based on at least one of an IP address or a shared memory key, or a common communication port or any combination thereof.
  2. The node as claimed in claim 1, wherein the interface module is further configured to queue the message/packet associated with the application in at least one queue in at least a shared memory.
  3. The node as claimed in claims 1 and 2, wherein the processing module is further configured to fetch the message/packet associated with the application in the queue and verify the destination of the message/packet, the destination is either within the same node or a different node or any combination thereof.
  4. The node as claimed in claim 3, wherein if the destination of the message/packet is the same node, the processing module is further configured to create/use an open domain socket connection in the same node.
  5. The node as claimed in claim 3, wherein if the destination of the message/packet is a different node, the processing module is further configured to communicate the message/packet based on the IP address of the destination.
  6. The node as claimed in claim 5, wherein the message/packet based on the IP address is communicated using at least one route pre-defined/pre-configured and pre-stored in the processing module, the route follows a TCP transport mechanism.
  7. The node as claimed in claim 1 is characterized by improving network performance of a container/lightweight virtual machines.
  8. The node as claimed in claim 1 provides a unified communication interface.
  9. The node as claimed in claim 1 improves the performance of networking capability in communication system.
  10. The node as claimed in claim 1, wherein the processing module is adapted to utilize domain sockets to connect network spaces without affecting isolation for intra host message exchange.
  11. The node as claimed in claim 1, wherein the IP address or the shared memory key, or the common communication port or any combination thereof associated with the other nodes is pre-stored/pre-configured in the memory of the node.
  12. The node as claimed in claim 1 is preferably a lightweight virtual machine.
  13. A node in communication system, the node comprising:
    a processor;
    a memory coupled to the processor for executing a plurality of modules present in the memory, the plurality of modules comprising:
    at least a crossbar processing module configured to:
    receive at least one message/packet initialized by at least one application residing in the node using at least a crossbar lib interface;
    verify the destination of the message/packet, the destination is either within the same node or a different node or any combination thereof;
    create/use an open domain socket connection in the same node if the destination of the message/packet is the same node; or
    communicate the message/packet to one or more other nodes based on the IP address of the other node, received in the message/packet, the other node comprise a lightweight virtual machine processes.
  14. The node as claimed in claim 13, wherein the message/packet received is stored in at least one queue in at least a shared memory.
  15. The node as claimed in claims 13 and 14, wherein the crossbar processing module is further configured to fetch the message/packet stored in the queue, and thereby verify the destination of the message/packet.
  16. The node as claimed in claim 13, wherein the message/packet is communicated to the other node using at least one route pre-defined/pre-configured and pre-stored in the crossbar processing module, the route follows a TCP transport mechanism.
  17. The node as claimed in claim 13, wherein the crossbar processing module is further configured to establish a connection with the other nodes in the communication system, the other nodes preferably comprise lightweight virtual machine processes and the connection is attained based on at least one of an IP address or a shared memory key, or a common communication port or any combination thereof.
  18. The node as claimed in claim 13 is characterized by improving network performance of containers/lightweight virtual machines.
  19. The node as claimed in claim 13, wherein the other nodes in the communication system, on receipt of the message/packet, are adapted to:
    process a new connection request based on the message/packet received; or
    create at least one data structure based on the message/packet received;
    update the routes pre-defined/pre-configured and pre-stored in the crossbar processing module, the routes are updated preferably by mapping of Gateway IP, IP Mask, or any combination thereof based on the message/packet received;
    scan through the message/packet received to retrieve the appropriate socket handles; and thereby
    create/use an open domain socket connection in the same node if the destination of the message/packet is the same node; or
    communicate the message/packet to one or more other nodes based on the IP address of the other node, received in the message/packet.
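For illustration only, a minimal sketch of the route update recited in claim 19, in which pre-stored routes are refreshed by mapping a Gateway IP and an IP Mask carried in a received message/packet. The function names, the (network, gateway) representation, and the longest-prefix lookup are assumptions for the sketch, not part of the claimed subject matter.

```python
import ipaddress

# Hypothetical sketch of the claim-19 route update: each route is kept as
# a (network, gateway) pair and refreshed from a received Gateway IP / IP Mask.

routes = []  # list of (ip_network, gateway_ip)

def update_route(gateway_ip: str, ip_mask: str):
    """Map a Gateway IP and IP Mask into the pre-stored route table."""
    net = ipaddress.ip_network(ip_mask, strict=False)
    routes.append((net, gateway_ip))

def lookup_gateway(dest_ip: str):
    """Longest-prefix match over the updated routes; None if no route."""
    addr = ipaddress.ip_address(dest_ip)
    best = None
    for net, gw in routes:
        if addr in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, gw)
    return best[1] if best else None
```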
  20. A communication system, comprising:
    a plurality of host and server devices;
    a processor;
    at least one crossbar embedded on the processor, interconnected to the host and server devices, and adapted to provide a unified communication interface for communication between the host and/or server devices, wherein the crossbar is configured to:
    receive at least one message/packet initialized by at least one application residing in the host using at least a crossbar lib interface;
    verify the destination of the message/packet, the destination is either within the host or the server or any combination thereof;
    create/use an open domain socket connection in the same host device and/or server device if the destination of the message/packet is the same node; or
    communicate the message/packet to at least one other host/server device based on the IP address of the other host/server device, received in the message/packet, the other host/server device comprising lightweight virtual machine processes.
  21. The communication system as claimed in claim 20 is characterized by improving network performance of containers/lightweight virtual machines.
  22. A method performed by a node in a communication system, the method comprising:
    sending and/or receiving, by at least one interface, at least one message/packet initialized by at least one application residing in the node;
    providing, by at least one processing module, at least a connection management with one or more other nodes in the communication system to achieve a unified communication, the connection management is attained based on at least one of an IP address or a shared memory key, or a common communication port or any combination thereof.
  23. The method as claimed in claim 22 further comprises, queuing the message/packet associated with the application, in at least one queue, in at least a shared memory of the node.
  24. The method as claimed in claims 22 and 23, further comprises:
    fetching the message/packet associated with the application in the queue; and
    verifying the destination of the message/packet, the destination is either within the same node or a different node or any combination thereof.
  25. The method as claimed in claim 24, wherein: if the destination of the message/packet is the same node, the processing module is further configured to create/use an open domain socket connection in the same node.
  26. The method as claimed in claim 24, wherein: if the destination of the message/packet is a different node, the processing module is further configured to communicate the message/packet based on the IP address of the destination.
  27. The method as claimed in claim 26, wherein the message/packet based on the IP address is communicated using at least one route pre-defined/pre-configured and pre-stored in the processing module, the route follows a TCP transport mechanism.
  28. The method as claimed in claim 22 further comprises: utilizing, by the processing module, one or more domain sockets to connect network spaces without affecting isolation for intra host message exchange.
  29. The method as claimed in claim 22, wherein the IP address or the shared memory key, or the common communication port or any combination thereof associated with the other nodes are pre-stored/pre-configured in the memory of the node.
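For illustration only, a minimal sketch of the connection management of claims 22 and 29, where each peer node is reachable by any of a pre-configured IP address, shared memory key, or common communication port. The class, its method names, and the "first available identifier" policy are hypothetical; the claims only require that the connection be attained based on at least one of these identifiers.

```python
# Hypothetical sketch of claim-22 connection management: peers are
# registered with pre-configured identifiers (claim 29), and a connection
# is attained from whichever identifier is available.

class ConnectionManager:
    def __init__(self):
        self.peers = {}  # peer name -> identifier descriptor

    def register(self, name, ip=None, shm_key=None, port=None):
        """Pre-store the identifiers used to reach a peer node."""
        self.peers[name] = {"ip": ip, "shm_key": shm_key, "port": port}

    def connect(self, name):
        """Pick the first pre-configured identifier to attain the connection."""
        peer = self.peers[name]
        for kind in ("ip", "shm_key", "port"):
            if peer[kind] is not None:
                return (kind, peer[kind])
        raise LookupError("no identifier pre-configured for " + name)
```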
  30. A method performed by a node in a communication system, the method comprising:
    receiving at least one message/packet initialized by at least one application residing in the node using at least a crossbar lib interface;
    verifying the destination of the message/packet, the destination is either within the same node or a different node or any combination thereof;
    creating/using an open domain socket connection in the same node if the destination of the message/packet is the same node; or
    communicating the message/packet to one or more other nodes based on the IP address of the other node, received in the message/packet, the other node comprising lightweight virtual machine processes.
  31. The method as claimed in claim 30 further comprises: storing the message/packet received in at least one queue in at least a shared memory.
  32. The method as claimed in claim 31 further comprises:
    fetching the message/packet stored in the queue; and thereby verifying the destination of the message/packet.
  33. The method as claimed in claim 30 further comprises: communicating the message/packet to the other node using at least one route pre-defined/pre-configured and pre-stored in the crossbar processing module, the route follows a TCP transport mechanism.
  34. The method as claimed in claim 30 further comprises: establishing a connection with the other nodes in the communication system, the other nodes preferably comprise lightweight virtual machine processes and the connection is attained based on at least one of an IP address or a shared memory key, or a common communication port or any combination thereof.
  35. The method as claimed in claim 30, wherein the other nodes in the communication system, on receipt of the message/packet, perform:
    processing a new connection request based on the message/packet received; or
    creating at least one data structure based on the message/packet received;
    updating the routes pre-defined/pre-configured and pre-stored in the crossbar processing module, the routes are updated preferably by mapping of Gateway IP, IP Mask, or any combination thereof based on the message/packet received;
    scanning through the message/packet received to retrieve the appropriate socket handles; and thereby
    creating/using an open domain socket connection in the same node if the destination of the message/packet is the same node; or
    communicating the message/packet to one or more other nodes based on the IP address of the other node, received in the message/packet.
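For illustration only, the fetch-verify-dispatch flow of claims 30 to 33 can be sketched end to end. A standard in-process queue stands in for the shared-memory queue of claim 31, and all names, addresses, and routes are hypothetical.

```python
import queue

# Hypothetical sketch of claims 30-33: messages queued by applications are
# fetched (claim 32), their destination verified (claim 30), and delivered
# over a domain socket (same node) or a pre-stored TCP route (claim 33).
# queue.Queue stands in for the shared-memory queue of claim 31.

LOCAL_IP = "10.0.0.1"
TCP_ROUTES = {"10.0.0.2": ("10.0.0.2", 9000)}

def dispatch(msg_queue):
    delivered = []
    while not msg_queue.empty():
        msg = msg_queue.get()            # claim 32: fetch from the queue
        if msg["dest"] == LOCAL_IP:      # claim 30: verify destination
            delivered.append(("unix_socket", msg["payload"]))
        else:                            # claim 33: pre-stored TCP route
            delivered.append(("tcp", TCP_ROUTES[msg["dest"]], msg["payload"]))
    return delivered
```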
PCT/CN2017/085416 2016-05-26 2017-05-22 System and method of software defined switches between light weight virtual machines using host kernel resources WO2017202272A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201780009111.0A CN108604992B (en) 2016-05-26 2017-05-22 System and method for software defined switching between lightweight virtual machines using host kernel resources

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN201641018137 2016-05-26

Publications (1)

Publication Number Publication Date
WO2017202272A1 2017-11-30

Family

ID=60412053

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/085416 WO2017202272A1 (en) 2016-05-26 2017-05-22 System and method of software defined switches between light weight virtual machines using host kernel resources

Country Status (2)

Country Link
CN (1) CN108604992B (en)
WO (1) WO2017202272A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110990052B (en) * 2019-11-29 2023-09-26 杭州迪普科技股份有限公司 Configuration preservation method and device

Citations (2)

Publication number Priority date Publication date Assignee Title
CN101436966A (en) * 2008-12-23 2009-05-20 北京航空航天大学 Network monitoring and analysis system under virtual machine circumstance
US20140208299A1 (en) * 2011-08-02 2014-07-24 International Business Machines Corporation COMMUNICATION STACK FOR SOFTWARE-HARDWARE CO-EXECUTION ON HETEROGENEOUS COMPUTING SYSTEMS WITH PROCESSORS AND RECONFIGURABLE LOGIC (FPGAs)

Family Cites Families (14)

Publication number Priority date Publication date Assignee Title
JP2006127461A (en) * 2004-09-29 2006-05-18 Sony Corp Information processing device, communication processing method, and computer program
US7620953B1 (en) * 2004-10-05 2009-11-17 Azul Systems, Inc. System and method for allocating resources of a core space among a plurality of core virtual machines
US20100070552A1 (en) * 2008-09-12 2010-03-18 Charles Austin Parker Providing a Socket Connection between a Java Server and a Host Environment
US8780923B2 (en) * 2010-01-15 2014-07-15 Dell Products L.P. Information handling system data center bridging features with defined application environments
CN102103518B (en) * 2011-02-23 2013-11-13 运软网络科技(上海)有限公司 System for managing resources in virtual environment and implementation method thereof
US8504723B2 (en) * 2011-06-15 2013-08-06 Juniper Networks, Inc. Routing proxy for resource requests and resources
CN102520944B (en) * 2011-12-06 2014-07-02 北京航空航天大学 Method for realizing virtualization of Windows application program
US9246741B2 (en) * 2012-04-11 2016-01-26 Google Inc. Scalable, live transcoding with support for adaptive streaming and failover
KR101512716B1 (en) * 2012-04-30 2015-04-17 주식회사 케이티 Lightweight virtual machine image system and method for input/output and generating virtual storage image thereof
US9710357B2 (en) * 2012-08-04 2017-07-18 Microsoft Technology Licensing, Llc Function evaluation using lightweight process snapshots
CN103503386B (en) * 2012-12-31 2016-05-25 华为技术有限公司 The network equipment and processing message method
CN104883302B (en) * 2015-03-18 2018-11-09 华为技术有限公司 A kind of method, apparatus and system of data packet forwarding
CN105591815A (en) * 2015-12-10 2016-05-18 北京匡恩网络科技有限责任公司 Network control method for power supply relay device of cloud testing platform
CN105550576B (en) * 2015-12-11 2018-09-11 华为技术服务有限公司 The method and apparatus communicated between container


Also Published As

Publication number Publication date
CN108604992A (en) 2018-09-28
CN108604992B (en) 2020-09-29

Similar Documents

Publication Publication Date Title
US10812378B2 (en) System and method for improved service chaining
US20220061059A1 (en) Distributed network connectivity monitoring of provider network edge location resources from cellular networks
US11812362B2 (en) Containerized router with a disjoint data plane
US11190424B2 (en) Container-based connectivity check in software-defined networking (SDN) environments
US11558255B2 (en) Logical network health check in software-defined networking (SDN) environments
US10938681B2 (en) Context-aware network introspection in software-defined networking (SDN) environments
EP3671452A1 (en) System and method for user customization and automation of operations on a software-defined network
US20170310611A1 (en) System and method for automated rendering of service chaining
US10536362B2 (en) Configuring traffic flow monitoring in virtualized computing environments
US11627080B2 (en) Service insertion in public cloud environments
US11219034B1 (en) Distributed network connectivity monitoring of provider network edge location resources from cellular networks
US11356362B2 (en) Adaptive packet flow monitoring in software-defined networking environments
US11470071B2 (en) Authentication for logical overlay network traffic
US11652717B2 (en) Simulation-based cross-cloud connectivity checks
EP4307639A1 (en) Containerized router with virtual networking
US11695665B2 (en) Cross-cloud connectivity checks
CN116506329A (en) Packet loss monitoring in virtual router
US11546242B2 (en) Logical overlay tunnel monitoring
WO2017202272A1 (en) System and method of software defined switches between light weight virtual machines using host kernel resources
US10931523B2 (en) Configuration change monitoring in software-defined networking environments
US10911338B1 (en) Packet event tracking
US20210226869A1 (en) Offline connectivity checks
EP4304152A2 (en) Edge services using network interface cards having processing units
WO2022046364A1 (en) Distributed network connectivity monitoring of provider network edge location resources from cellular networks
CN117255019A (en) System, method, and storage medium for virtualizing computing infrastructure

Legal Events

Date Code Title Description
NENP Non-entry into the national phase (Ref country code: DE)
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 17802118; Country of ref document: EP; Kind code of ref document: A1)
122 Ep: pct application non-entry in european phase (Ref document number: 17802118; Country of ref document: EP; Kind code of ref document: A1)