WO2017053441A1 - Fast and scalable database cluster communication path - Google Patents


Info

Publication number
WO2017053441A1
WO2017053441A1 (PCT/US2016/052902)
Authority
WO
WIPO (PCT)
Prior art keywords
application
client
virtual
links
network
Prior art date
Application number
PCT/US2016/052902
Other languages
French (fr)
Inventor
Jun Xu
Yu Dong
Rangaraju Iyengar
Ravi Shanker CHUPPALA
Yunxia CHEN
Original Assignee
Huawei Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. filed Critical Huawei Technologies Co., Ltd.
Priority to JP2018515086A priority Critical patent/JP6511194B2/en
Priority to CN201680051225.7A priority patent/CN108370280B/en
Priority to EP16849515.8A priority patent/EP3338386A4/en
Publication of WO2017053441A1 publication Critical patent/WO2017053441A1/en


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 63/00 Network architectures or network communication protocols for network security
    • H04L 63/02 Network architectures or network communication protocols for network security for separating internal from external traffic, e.g. firewalls
    • H04L 63/0281 Proxies
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/27 Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 63/00 Network architectures or network communication protocols for network security
    • H04L 63/02 Network architectures or network communication protocols for network security for separating internal from external traffic, e.g. firewalls
    • H04L 63/0272 Virtual private networks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 63/00 Network architectures or network communication protocols for network security
    • H04L 63/04 Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks
    • H04L 63/0428 Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks wherein the data content is protected, e.g. by encrypting or encapsulating the payload
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 Network services
    • H04L 67/56 Provisioning of proxy services
    • H04L 67/566 Grouping or aggregating service requests, e.g. for unified processing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 69/00 Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L 69/16 Implementation or adaptation of Internet protocol [IP], of transmission control protocol [TCP] or of user datagram protocol [UDP]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 63/00 Network architectures or network communication protocols for network security
    • H04L 63/16 Implementing security features at a particular protocol layer
    • H04L 63/164 Implementing security features at a particular protocol layer at the network layer
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 63/00 Network architectures or network communication protocols for network security
    • H04L 63/16 Implementing security features at a particular protocol layer
    • H04L 63/168 Implementing security features at a particular protocol layer above the transport layer

Definitions

  • NAT network address translation
  • Enterprise NAT/firewalls require opening multiple ports, one for each of the sessions. As the number of sessions increases, the number of ports also increases.
  • there is a method of transmitting an application payload in a network comprising receiving one or more application payloads corresponding to one or more applications residing on a client, the application payload formed from a client request comprising a transport layer protocol; terminating the transport layer protocol and reading the application payload associated with a current session; preparing header information including application specific information for each of the received applications for insertion into a corresponding one of the application payloads; and encrypting the application payloads, including the header information, for transmission in the network via a single virtual communication link.
  • non-transitory computer-readable medium storing computer instructions for transmitting application payloads in a network, that when executed by one or more processors, causes the one or more processors to perform the steps of receiving one or more application payloads corresponding to one or more applications residing on a client, the application payload formed from a client request comprising a transport layer protocol; terminating the transport layer protocol and reading the application payload associated with a current session; preparing header information including application specific information for each of the received applications for insertion into a corresponding one of the application payloads; and encrypting the application payloads, including the header information, for transmission in the network via a single virtual communication link.
  • a method for application containers to communicate via a direct communication link comprising constructing one or more first dedicated virtual links for direct application container level communication between one or more first application containers; and communicating data between the one or more first application containers via the corresponding one or more first dedicated virtual links, where each of the one or more first dedicated virtual links is connected to a respective one of the one or more first application containers at a first end and connected to a respective virtual input/output (VIO) at a second end.
  • VIO virtual input/output
  • there is a method for providing direct database to application level communication via a virtual input/output comprising constructing one or more first dedicated virtual links for direct application level communication between one or more first database instances; and communicating data between the one or more first database instances via the corresponding one or more first dedicated virtual links, where each of the one or more first dedicated virtual links is connected to a respective one of the one or more first database instances at a first end and connected to a respective virtual input/output (VIO) at a second end.
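The dedicated virtual links described above can be pictured, as a rough sketch, with an in-host byte channel standing in for a VIO-backed link between two endpoints (containers or database instances). The sketch below uses a Unix socketpair as that stand-in; the names and the use of socketpair are illustrative assumptions, not the patent's implementation.

```python
import socket

def make_virtual_link():
    """Hypothetical stand-in for one dedicated virtual link: an AF_UNIX
    socketpair gives a direct, in-host byte channel between two ends,
    with no TCP/IP stack in the data path."""
    return socket.socketpair()

# Each end of the link is attached to one endpoint (e.g., a container
# or a database instance in the patent's terms).
container_a_end, container_b_end = make_virtual_link()

# Endpoint A writes directly onto its end of the link...
container_a_end.sendall(b"row:42")

# ...and endpoint B reads it from the other end.
data = container_b_end.recv(1024)
print(data)  # b'row:42'
```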
  • FIG. 1 illustrates an example network environment in which various embodiments of the disclosure may be implemented.
  • FIG. 2 illustrates a virtual communication link environment in which application payloads may be multiplexed.
  • FIG. 3 illustrates a client and server proxy in an application crypto multiplexing (ACM) environment in accordance with FIG. 2.
  • ACM application crypto multiplexing
  • FIGS. 4A and 4B illustrate flow diagrams for sending and receiving payloads across the virtual communication link.
  • FIG. 5 illustrates a shim layer added to a payload of the application client or application server.
  • FIG. 6 illustrates an example of an ACM header of FIG. 5.
  • FIG. 7 illustrates a state diagram of an ACM data session state machine.
  • FIGS. 8A and 8B illustrate example flow diagrams of transmitting an application payload in a network.
  • FIG. 9 illustrates an example network in which the disclosure may be implemented.
  • FIG. 10 illustrates an example container packet communication using a virtual input/output interface for intra-host communication.
  • FIG. 11 illustrates an example container packet communication using a virtual input/output interface for inter-host communication.
  • FIGS. 12 and 13 illustrate various embodiments of direct database/application level communication using VIO without the need to consume the TCP and related sockets.
  • FIGS. 14A and 14B illustrate example flow diagrams of constructing virtual links for container and database instances.
  • FIG. 15 illustrates an embodiment of a node in accordance with embodiments of the disclosure.
  • FIG. 16 is a block diagram of a network system that can be used to implement various embodiments.
  • FIG. 17 illustrates a block diagram in accordance with the disclosed technology.
  • the disclosure relates to technology for transmitting an application payload in a network.
  • One or more application payloads corresponding to one or more applications residing on a client are received, where the application payload is formed from a client request comprising a transport layer protocol.
  • the transport layer protocol is terminated and the application payload associated with a current session is read.
  • Header information is prepared for each of the received applications for insertion into a corresponding one of the application payloads.
  • the application payloads are encrypted, including the header information, for transmission in the network via a single virtual communication link.
  • the disclosed technology generally provides a 'many-to-one' integrated proxy and tunnel solution that adds a shim layer between the application payload and transmission control protocol (TCP)/ secure socket layer (SSL) headers, where a state machine may be configured to control sessions.
  • TCP transmission control protocol
  • SSL secure socket layer
  • Such an integrated proxy and tunnel solution may be implemented using a client or a server based on the location of the network node/device.
  • the network node/devices include, but are not limited to, a router, a switch, a WIFI device, an Internet-of-things (IOT) device, or any physical and virtual devices as would be appreciated by one of ordinary skill in the art upon viewing this disclosure.
  • IOT Internet-of-things
  • the disclosed technology may provide a single point for security services.
  • applications may delegate crypto responsibilities to a device, the device may setup channels and exchange the crypto data between the device and a controller to thereby provide a secure connection between devices.
  • Implementing the disclosed technology on the server-side provides a crypto server and implementing the disclosed technology on the client-side provides crypto client functionality.
  • Communication channels may authenticate and provide message integrity, user authentication, and confidentiality.
  • the communication channels may also support standard symmetric/asymmetric cryptographic functionality and may be able to setup secure channels behind a NAT/firewall.
  • FIG. 1 illustrates an example network environment in which various embodiments of the disclosure may be implemented.
  • the network environment 100 includes, for example, client(s) 102, server(s) 104, SDN controller 112 and administrator 114.
  • SDNs involve the use of a standalone controller that performs the control functionality for a set of network devices.
  • In software defined networking, in the case of routing, rather than routers performing individual analyses to determine routes through the network, the controller can determine the routes and program other devices in the network to behave according to the determinations made by the controller.
  • Different protocols may be used to implement software defined networking, including open protocols like OpenFlow, and proprietary protocols from network vendors.
  • the SDN 106 includes network nodes 108 and 110 and service devices 116.
  • Network nodes 108 and 110 may comprise switches and other devices (not shown). These network nodes 108 and 110 can be physical instantiations or virtual instantiations that generally serve to forward network traffic.
  • SDN 106 may also include other types of devices, such as routers, load balancers and various L4-L7 network devices, among other network devices.
  • SDN 106 may connect various endpoint devices, such as client 102 and server 104.
  • SDN 106 may provide services to network traffic flowing between client device 102 and server device 104.
  • administrator 114 may use SDN controller 112 to program network devices of SDN 106 to direct network traffic for client 102 to one or more of service devices 116.
  • Service devices 116 may include, for example, intrusion detection service (IDS) devices, intrusion prevention system (IPS) devices, web proxies, web servers, web-application firewalls and the like.
  • service devices 116 may, additionally or alternatively, include devices for providing services such as, for example, denial of service (DoS) protection, distributed denial of service (DDoS) protection, traffic filtering, wide area network (WAN) acceleration, or other such services.
  • DoS denial of service
  • DDoS distributed denial of service
  • WAN wide area network
  • service devices 116 may be physical devices, multi-tenant devices, or virtual services (e.g., cloud-based services) and may be readily applied to virtual devices and cloud-based applications, in addition or in the alternative to physical devices.
  • FIG. 2 illustrates a virtual communication link environment in which application payloads may be multiplexed.
  • the environment 200, herein referred to as an application crypto multiplexing (ACM) environment 200, includes, for example, a virtual communication link 202, a client proxy 204, application client 206, server proxy 208 and application server 210.
  • a virtual communication link (e.g., a virtual tunnel) allows two computer programs (e.g., client and server applications) that are not otherwise able to address each other directly to communicate, for example, when a client application of application client 206 needs to connect to a server application of application server 210 at a remote site.
  • the server application 210 may be on a computer on a customer's or partner's non-addressable local network (e.g., behind a firewall). As such, the application client 206 will not be able to address the application server 210 directly.
  • the virtual communication link therefore provides application client 206 access to the application server 210, and vice versa.
  • the virtual communication link 202 allows one or more application(s) residing on application client 206 and/or application server 210 to share a single communication channel (e.g., virtual communication link or tunnel) by multiplexing and/or demultiplexing the payload of the application(s) residing on application client 206 and/or application server 210 for services from the same device.
  • the client proxy 204 and server proxy 208 may be integrated or combined at each end of the channel to form a single socket interface by multiplexing and/or demultiplexing the payload of the application client 206 and application server 210 for communication via virtual communication link 202, such as a crypto virtual tunnel (VT).
  • VT crypto virtual tunnel
  • the multiplexing and/or demultiplexing may be implemented by virtue of adding an ACM header into the payload of a particular application that carries application specific information. Headers are explained in more detail below with reference to FIGS. 5 and 6.
  • the application client 206 may comprise a first client application 206A, such as network configuration protocol (NETCONF) plugin, a second client application 206B, such as a simple network management protocol (SNMP) plugin, and/or a third client application 206C, such as a control and provisioning of wireless access points (CAPWAP) plugin.
  • NETCONF network configuration protocol
  • SNMP simple network management protocol
  • CAPWAP control and provisioning of wireless access points
  • These plugins may be used for remote configuration of devices and allow traffic patterns to be seamlessly injected into the existing network devices that form the network. That is, rather than deploying traffic generators to strategically introduce traffic patterns into various points of the network, the desired traffic patterns are encapsulated and communicated to the existing network devices of the network via the plugins that would otherwise be used to manipulate configuration data of the network devices.
  • NETCONF provides mechanisms for configuring network devices and uses an Extensible Markup Language (XML)-based data encoding for configuration data, which may include policy data;
  • SNMP allows device management systems to traverse and modify management information bases (MIBs) that store configuration data within managed elements;
  • CAPWAP is a protocol used to exchange messages between any mesh node and the controller via the virtual communication link, and was originally designed for so-called lightweight access points.
  • the client proxy 204 includes a crypto client 204A that is operably coupled to the first, second and third client applications 206A, 206B and 206C, respectively, via sockets (as explained with reference to FIG. 3).
  • the application server 210 may comprise one or more of a first server application 210A, such as a NETCONF plugin, a second server application 210B, such as an SNMP plugin, and/or a third server application 210C, such as a CAPWAP plugin, operably coupled to the application crypto server via sockets.
  • the server proxy 208 includes a crypto server 208A that is operably coupled to the first, second and third server applications 210A, 210B and 210C, respectively, via sockets (as explained with reference to FIG. 3).
  • the ACM environment 200 allows an SDN controller, such as SDN controller 112, to communicate with and to manage network devices, such as network nodes 108 and 110, using a network, such as a public cloud or the Internet.
  • By employing the disclosed ACM environment 200, a firewall may not need to open multiple ports to support multiple applications. Rather, the ACM environment 200 allows SDN controller 112 to easily manage multiple applications.
  • the ACM environment 200 may also reduce TCP proxy session overhead and tunnel payload overhead, and may be used for control plane traffic with different applications running on the same device.
  • the ACM environment 200 is different from other crypto technologies.
  • other technologies, such as Internet key exchange (IKE)/Internet protocol security (IPSEC), use tunneling at layer 3, and SSL technologies use layer 4. These technologies use one session for each application.
  • IKE Internet key exchange
  • IPSEC Internet protocol security
  • FIG. 3 illustrates a client and server proxy in an ACM environment in accordance with FIG. 2.
  • the ACM environment 300 includes, for example, a virtual communication link 202, such as a crypto virtual tunnel, that communicatively couples a client proxy 204 and a server proxy 208.
  • the client proxy 204 includes a session manager 302A, a MUX/DEMUX 306A, a transport layer security (TLS)/datagram TLS (DTLS) client 304A, a NETCONF client (session 1) 310A, and an SNMP client (session 2) 312A.
  • the server proxy 208 includes a session manager 302B, a MUX/DEMUX 306B, a TLS/DTLS client 304B, a NETCONF server (session 1) 310B and an SNMP server (session 2) 312B.
  • the session manager 302A enables transparent secure and open communication between the application client 206 (FIG. 2) and the client proxy 204.
  • session manager 302A may perform encrypted session processing, including managing an encrypted session handshake, managing keys, certificates, authentication, authorization, or similar.
  • session manager 302A may in one embodiment establish encrypted sessions and/or connections, terminate encrypted sessions and/or connections, establish itself as a man-in-the-middle of an encrypted session and/or connection, or similar.
  • the NETCONF client 1 (session 1) 310A and the SNMP client 2 (session 2) 312A are communicatively coupled to the session manager 302A via the sockets.
  • the NETCONF server 1 (session 1) 310B and the SNMP server 2 (session 2) 312B are communicatively coupled to the session manager 302B via the sockets.
  • the MUX/DEMUX 306A on client proxy 204 may be configured to route application payloads from multiple sockets on an application client 206 to a single socket on client proxy 204 by multiplexing the application payloads.
  • the multiplexed application payload may be transported across the virtual communication link 202 to the server proxy 208 and then delivered to the application server 210.
  • the MUX/DEMUX 306A is configured to transmit application payloads over a single secure connection (e.g., virtual communication link 202) from the client proxy 204 to corresponding multiple sockets on the application server 210 (after demultiplexing at the server proxy 208).
  • MUX/DEMUX 306A on client proxy 204 may be configured to receive application payloads from the virtual communication link 202 using a single socket on client proxy 204.
  • the application payload, received from application server 210, may be demultiplexed by the MUX/DEMUX 306A into discrete application payloads and each discrete application payload may be transported to one or more corresponding sockets on the application client 206.
  • the MUX/DEMUX 306A is configured to receive application payloads over a single secure connection (e.g., virtual communication link 202) from the server proxy 208 for delivery to corresponding multiple sockets on the application client 206.
  • the MUX/DEMUX 306A is also responsible for preparing and adding ACM header information into the application payload.
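The fan-out role of the MUX/DEMUX can be sketched as a routine that sorts framed payloads back into per-session buffers before they are handed to the per-application sockets. The (session ID, payload) frame shape below is an illustrative assumption; in the patent the demultiplexing keys off fields carried in the ACM header.

```python
from collections import defaultdict

def demultiplex(frames):
    """Route (session_id, payload) frames arriving on the single tunnel
    socket into per-session buffers -- a sketch of how a MUX/DEMUX fans
    payloads back out to per-application sockets."""
    per_session = defaultdict(list)
    for session_id, payload in frames:
        per_session[session_id].append(payload)
    return per_session

# Two application sessions interleaved on one link (e.g., NETCONF and SNMP).
frames = [(1, b"netconf-req"), (2, b"snmp-get"), (1, b"netconf-rpc")]
buffers = demultiplex(frames)
print(buffers[1])  # [b'netconf-req', b'netconf-rpc']
```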
  • socket refers to a port, buffer, logical node or object configured to receive data in any format, such as HTTP format, from a remote device via a network connection.
  • the MUX/DEMUX 306B may be configured in a similar manner.
  • the TLS/DTLS clients 304A and 304B are responsible for encrypting/decrypting the multiplexed/demultiplexed application payloads.
  • the TLS protocol aims primarily to provide privacy and data integrity between two communicating computer applications.
  • TLS was designed to operate on top of a transport protocol, such as TCP, and below the application layer, such as HTTP.
  • In order to establish a cryptographically secure data channel, the connection peers must agree on which ciphersuites will be used and on the keys used to encrypt the data.
  • TLS has also been adapted to run over datagram protocols, such as the user datagram protocol (UDP).
  • Datagram TLS (DTLS) is a protocol based on TLS that is capable of securing datagram transport, such as UDP, and is well suited for tunneling applications, e.g., the CAPWAP tunnel to the controller in a mesh network.
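As a rough illustration of the TLS client role, Python's standard ssl module can wrap a TCP socket in a TLS channel of the kind described above. The helper name and host/port parameters are hypothetical, and a DTLS analog over UDP would require a third-party library, since the standard library supports TLS only.

```python
import socket
import ssl

# A default context verifies the peer certificate and hostname,
# matching the authenticated, confidential channel described above.
context = ssl.create_default_context()

def open_secure_link(host, port):
    """Hypothetical sketch: wrap a TCP socket in TLS, yielding one
    encrypted channel over which multiplexed application payloads
    could be carried."""
    raw = socket.create_connection((host, port))
    return context.wrap_socket(raw, server_hostname=host)

# Certificate checking is on by default in this context.
print(context.verify_mode == ssl.CERT_REQUIRED)  # True
```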
  • FIGS. 4A and 4B illustrate flow diagrams for sending and receiving payloads across the virtual communication link.
  • the processes described herein are implemented using the client proxy 204 and server proxy 208 depicted, for example, in FIG. 3.
  • any network component or element may be responsible for such implementation and the disclosed embodiment is a non-limiting example.
  • FIG. 4A illustrates a flow chart for sending client data to a server.
  • Client 310A and/or client 312A creates a TCP/UDP client request and sends application data (e.g., payload) to client proxy 204.
  • the application data is received by the client proxy 204 via session manager 302A at 402A.
  • the session manager 302A terminates the TCP connection and reads the application data from the local session and acquires the session information.
  • the state (FIG. 7) and sessions are managed and session details are sent to MUX/DEMUX 306A.
  • the MUX/DEMUX 306A prepares ACM headers and adds the ACM header to the application data (payload).
  • the ACM header and application payload is described in detail with reference to FIGS. 5 and 6 below.
  • the TLS/DTLS client 304A is responsible for encrypting/decrypting the data (application data + ACM header) and sending the application data to the application server 210 via virtual communication link 202.
  • the encrypted/decrypted data (application + ACM header) is received by the server proxy 208 via the virtual communication link 202 at 412A, and the MUX/DEMUX 306B adds/removes the ACM header to/from the payload at 410A.
  • the session manager 302B at the server proxy 208 then reads the application data from the MUX/DEMUX 306B and creates/manages a local session with the application server 210 at 408A.
  • the application data is then sent to the application server 210 via the secure sockets of the session, and clients 310B and/or 312B read the request and prepare an application response.
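The client-side path above (terminate the local session, frame the payload with a header, encrypt, send over the single link) can be condensed into a sketch. The 6-byte header layout and the XOR "cipher" are stand-ins chosen only to keep the example self-contained and runnable, not the patent's actual ACM framing or cryptography.

```python
def send_client_data(payload, session_id, app_id, encrypt):
    """Sketch of the FIG. 4A client-side path: frame the application
    payload with a small header (session, app ID, length), then
    encrypt the whole frame for the single virtual link."""
    header = (session_id.to_bytes(2, "big")
              + app_id.to_bytes(2, "big")
              + len(payload).to_bytes(2, "big"))
    return encrypt(header + payload)

# A trivial XOR stand-in for the TLS/DTLS encryption step, just to
# make the sketch executable end to end.
toy_cipher = lambda b: bytes(x ^ 0x5A for x in b)

frame = send_client_data(b"<rpc/>", session_id=1, app_id=830,
                         encrypt=toy_cipher)
print(len(frame))  # 12  (6-byte header + 6-byte payload)
```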
  • FIG. 4B illustrates a flow chart for sending server data to a client.
  • the server 1 310B and/or server 2 312B prepares an application response in response to the client request of FIG. 4A.
  • the session manager 302B reads application data (e.g. payload) from application server 210 and creates/manages records for the session information.
  • the session manager 302B then sends the session information and application data to MUX/DEMUX 306B.
  • MUX/DEMUX 306B reads the session information and application data and prepares an ACM header to be added to the application data (payload).
  • TLS/DTLS service 304B encrypts the application data (application + ACM header) and sends the encrypted application data to the application client 206 via client proxy 204 and virtual tunnel link 202.
  • the application data (application + ACM header) is decrypted at 408B
  • the MUX/DEMUX 306A removes the ACM header from the application data (payload) and sends the decrypted application data to session manager 302A at 410B.
  • the session manager 302A then reads the application data from MUX/DEMUX 306A and sends the application data to respective sockets for client 1 310A and/or client 2 312A, where the response is received from the application server 210.
  • FIG. 5 illustrates a shim layer added to a payload of the application client or application server.
  • the shim layer is shown as being added between layers 4 and 7 of the open systems interconnection (OSI) layers 502.
  • the figure also illustrates three payloads including the NETCONF payload 502A, the SNMP payload 502B and the CAPWAP payload 502C.
  • Layer 3 is the network layer that structures and manages a multi-node network, including addressing, routing and traffic control.
  • Layer 4 (TCP/UDP) is the transport layer that is responsible for transmission of data segments between points on the network, including segmentation, acknowledgement and multiplexing.
  • Layers 5 and/or 6 may include the added shim layer as part of the presentation (layer 6) and session (layer 5) layers that manage communication sessions, such as the continuous exchange of information in the form of multiple back-and-forth transmissions between two nodes.
  • Layer 7 (NETCONF/SNMP/CAPWAP) is the application layer that includes high-level APIs, including resource sharing and remote file access.
  • FIG. 6 illustrates an example of an ACM header of FIG. 5.
  • the ACM header includes, for example, an ACM version field, an ACM operation (Op) type field, a security session control field, an application session/source port field, an application identifier (ID) field, and a payload length field. It is appreciated that the illustrated header is a non-limiting example of a header configuration, and that any number of variations may be implemented.
  • the fields defined in the ACM header may vary in size and type of information.
  • the ACM version field may be 4 bits and may indicate an initial version
  • the ACM Op type field may be 4 bits and may indicate an operation type
  • the security session control field may be 2 bytes and indicate a security session control type
  • the application session/source port field may be 2 bytes and may indicate a session ID or source port
  • the application ID field may be 2 bytes and may indicate an application TCP, a UDP ID, or a destination port
  • the payload length may be 2 bytes and may indicate the size of a payload.
  • a payload may be configured to be any one or more of the following: an ACM hello request, an ACM hello response, an ACM acknowledgement, an ACM data or data transfer, an ACM service update, an ACM service update acknowledgement, an ACM health statistics request, an ACM health statistics response, an ACM control/alert/error, an ACM heartbeat request and/or an ACM heartbeat response.
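Under the field sizes listed above, the ACM header occupies 9 bytes: a 4-bit version and 4-bit op type sharing one byte, followed by four 2-byte fields. A minimal packing sketch, assuming network byte order and an arbitrary op-type code (the text fixes neither):

```python
import struct

ACM_OP_DATA = 0x3  # illustrative op-type code; the patent does not fix values

def pack_acm_header(version, op_type, sec_ctrl, session, app_id, payload_len):
    """Pack the FIG. 6 fields: 4-bit version + 4-bit op type in one
    byte, then 2-byte security session control, 2-byte application
    session/source port, 2-byte application ID, and 2-byte payload
    length -- 9 bytes in total."""
    first = (version & 0xF) << 4 | (op_type & 0xF)
    return struct.pack("!BHHHH", first, sec_ctrl, session, app_id, payload_len)

hdr = pack_acm_header(1, ACM_OP_DATA, 0, 4096, 161, 512)
print(len(hdr))  # 9
```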
  • FIG. 7 illustrates a state diagram of an ACM data session state machine.
  • the ACM data session state machine includes an initial state 702, a data write state 704/710, a data read state 706/712 and a session close state 708.
  • the data operational states may be defined according to the following events: session-start event (CE) - 11; data-read/data-write event (DRWS) - 01; data-read/data-write-end event (DRWE) - 10; and session-close event (CT) - 00, as illustrated in FIG. 7.
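A toy version of the ACM data-session state machine, driven by the 2-bit event codes listed above. The transition table itself is an illustrative assumption, since FIG. 7 is not reproduced here.

```python
# 2-bit event codes from the text: session-start (CE), data-read/write
# (DRWS), data-read/write-end (DRWE), session-close (CT).
CE, DRWS, DRWE, CT = 0b11, 0b01, 0b10, 0b00

# Hypothetical transition table over the four states named in the text.
TRANSITIONS = {
    ("initial", CE): "data_write",
    ("data_write", DRWS): "data_read",
    ("data_read", DRWS): "data_write",
    ("data_write", DRWE): "session_close",
    ("data_read", DRWE): "session_close",
    ("session_close", CT): "initial",
}

def step(state, event):
    # Unknown (state, event) pairs leave the state unchanged.
    return TRANSITIONS.get((state, event), state)

# One full session: start, read/write, end, close -- back to initial.
s = "initial"
for ev in (CE, DRWS, DRWE, CT):
    s = step(s, ev)
print(s)  # initial
```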
  • FIGS. 8A and 8B illustrate example flow diagrams of transmitting an application payload in a network.
  • one or more application payloads 502A, 502B and 502C are received, for example by the client proxy 204, corresponding to one or more applications 206A, 206B and 206C, respectively, residing on an application client 206.
  • The application payload is formed from a client request comprising a transport layer protocol, such as TCP or UDP.
  • the transport layer protocol is terminated for the application payloads received at the client proxy 204, for example by session manager 302A, and the application payload associated with the current session is read.
  • MUX/DEMUX 306A prepares header information including application specific information for each of the received applications for insertion into a corresponding one of the application payloads at 806, and the TLS/DTLS client 304A encrypts the application payloads, including the header information, for transmission in the network via a single virtual communication link 202 at 808.
  • the one or more application payloads may be multiplexed, including the header information inserted into the application payload, to share across a single communication channel (i.e., virtual communication link).
  • the multiplexed application payloads are transmitted via the shared communication link.
  • Upon arrival at the end point (e.g., server proxy 208), the application payload is demultiplexed (and decrypted) such that application server 210 may respond to the request from the client at 814.
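The multiplex/demultiplex path of FIGS. 8A and 8B can be sketched as follows. This illustrative Python omits the TLS/DTLS encryption step and assumes a simple app-ID/length shim header; the function names are hypothetical.

```python
import struct

HEADER = struct.Struct("!HH")  # app ID, payload length (per the shim header)

def multiplex(payloads):
    """Concatenate (app_id, payload) pairs into one stream for the shared link."""
    return b"".join(HEADER.pack(app_id, len(p)) + p for app_id, p in payloads)

def demultiplex(stream):
    """Recover the per-application payloads at the far end (e.g., server proxy)."""
    out, offset = [], 0
    while offset < len(stream):
        app_id, length = HEADER.unpack_from(stream, offset)
        offset += HEADER.size
        out.append((app_id, stream[offset:offset + length]))
        offset += length
    return out

# Three applications share one virtual communication link.
apps = [(80, b"http req"), (161, b"snmp req"), (830, b"netconf rpc")]
stream = multiplex(apps)          # in the patent's flow, this stream would
assert demultiplex(stream) == apps  # be encrypted before transmission
```

In the described flow, the multiplexed stream would be encrypted by the TLS/DTLS client before it crosses the single virtual communication link, and decrypted before demultiplexing at the server proxy.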
  • FIG. 9 illustrates an example network in which the disclosure may be implemented.
  • the network 900 includes cloud 902, network 906, cloud provider 908 and clients 914A, 914B to 914N.
  • Cloud 902 includes one or more hosts 904 to 904N (collectively, 904N), where each host 904N includes one or more nodes 904N1.
  • the node 904N1 is a virtual machine (VM) that is hosted on a physical machine, such as host 904 through host 904N.
  • the host machine(s) 904N may be located in a data center.
  • the one or more nodes 904N1 are hosted on physical machine 904N in cloud 902 provided by cloud provider 908.
  • When hosted on host machines 904N, users can interact with one or more applications, such as applications 904N1-2 and 904N1-3, executing on the one or more nodes 904N1 using client computer systems, such as clients 914A, 914B to 914N.
  • the applications 904N1-2 and 904N1-3 may be hosted on hosts 904N without the use of VMs.
  • the one or more nodes 904N1 execute one or more applications 904N1-2 and 904N1-3 that may be owned or managed by different users and/or organizations.
  • a customer may deploy applications 904N1-2 and 904N1-3 that may co-exist with another customer's applications on the same or a different node 904N that is hosting the first customer's applications.
  • portions of or separate applications 904N1-2 and 904N1-3 execute on different nodes 904N.
  • the data used for execution of applications 904N1-2 and 904N1-3 includes application images built from pre-existing application components and source code of users managing the applications 904N1-2 and 904N1-3.
  • An image, within the context of SDNs and container networking, refers to data representing executables and files of the application used to deploy functionality for a runtime instance of the application.
  • the image is built using a Docker tool and is referred to as a Docker image.
  • a docker bridge will not be required in implementing the various embodiments of the invention, although such a docker bridge is not excluded from use.
  • the one or more nodes 904N1-2 and 904N1-3 may execute an application by launching an instance of an application image as a container 904N1-2A, 904N1-2B, 904N1-3A and 904N1-3B in one or more of nodes 904N1-2 and 904N1-3.
  • Containers 904N1-2A, 904N1-2B, 904N1-3A and 904N1-3B in one or more of nodes 904N1-2 and 904N1-3 may implement functionality of the applications 904N1-2 and 904N1-3.
  • Containers 904N1-2A, 904N1-2B, 904N1-3A and 904N1-3B may implement operating system-level virtualization, wherein an abstraction layer is provided on top of the kernel of an operating system on a host computer (not shown).
  • the abstraction layer supports, for example, multiple containers each including an application and its dependencies. Each container may be executed as an isolated process on the operating system and shares the kernel with other containers.
  • the container relies on the kernel's functionality to make use of resource isolation (CPU, memory, block I/O, network, etc.) and separate namespaces and to completely isolate the application's view of the operating environments.
  • By using containers, resources can be isolated, services restricted, and processes provisioned to have a private view of the operating system with their own process ID space, file system structure, and network interfaces. Multiple containers can share the same kernel, but each container can be constrained to only use a defined amount of resources such as CPU, memory and I/O.
  • Clients 914A, 914B to 914N may be connected to hosts 904N in cloud 902 by cloud provider 908 via a network 906, which may be a private network (e.g., a local area network (LAN), a wide area network (WAN), intranet, or other similar private networks) or a public network (e.g., the Internet).
  • Each client 914A, 914B to 914N may be a mobile device, a PDA, a laptop, a desktop computer, a tablet computing device, a server device, or any other computing device.
  • Each host 904N may be a server computer system, a desktop computer or any other computing device.
  • the network 900 is a non-limiting example and may be implemented in a variety of other configurations, including a single, monolithic computer system, as well as various other combinations of computer systems or similar devices connected in various ways.
  • FIG. 10 illustrates an example container packet communication using a virtual input/output interface for intra-host communication.
  • a container is produced from an application image gathered, for example, from a designated registry.
  • the container is assigned a unique network address that connects the container to a virtual Ethernet bridge, such as a docker bridge. All containers in the system communicate with each other by directing packets to the docker bridge, which then forwards those packets through the container network.
  • the containers communicate with each other over the bridge ports, which are heavyweight and utilize the Open vSwitch (OVS) and/or a Linux kernel bridge mechanism.
  • containers 1002, 1004 and 1006 may leverage server virtualization methods such as operating system-level virtualization, where the kernel of an operating system allows for multiple isolated user space instances. Some instances of this may include, but are not limited to, containers, virtualization engines (VEs), virtual private servers (VPS), jails, or zones, and/or any hybrid combination thereof.
  • containers 1002, 1004 and 1006 include chroot, Linux-VServer, lmctfy ("let me contain that for you"), LXC (Linux containers), OpenVZ (Open Virtuozzo), Parallels Virtuozzo Containers, Solaris Containers (and Solaris Zones), FreeBSD Jail, sysjail, WPARs (workload partitions), HP-UX Containers (SRP, secure resource partitions), iCore Virtual Accounts, and Sandboxie.
  • the host 1008 includes virtual input/outputs (VIOs) 1010A/B, which provide input/output virtualization (IOV) in which a single physical adapter card acts as multiple virtual network interface cards (NICs) and virtual host bus adapters (HBAs).
  • VIOs 1010A/B may be loaded onto the host 1008 (as depicted in FIG. 10) and comprise VIO software and/or hardware that can be used to control data packets input to and output from containers 1002, 1004 and 1006 via the communication links 1-6, such as a dedicated link.
  • Each VIO 1010A/B can multiplex and demultiplex the data packets to other containers 1002, 1004 and 1006 directly, thereby overcoming the intra-host container communication limitation.
  • Such a configuration may allow easy deployment, scalability, and less communication overhead.
  • Each container 1002, 1004 and 1006 may include, in one embodiment, a virtual network interface card (vNIC) that is connected to a respective one or more of the VIOs 1010A/B, without requiring a bridge to support the communication therebetween. It is also appreciated that the VIOs 1010A/B may send packets using a bridge or OVS.
  • containers 1002, 1004 and 1006 include dedicated links 1, 2, 3 and 6 (for purposes of discussion, we will assume links 4 and 5 do not exist).
  • Each of the containers 1002, 1004 and 1006 creates dedicated virtual links between a respective container and VIO.
  • container 1002 forms a link (1) between container 1002 and VIO 1010A and a link (6) between container 1002 and VIO 1010B
  • container 1004 forms a link (2) between container 1004 and VIO 1010A
  • container 1006 forms a link (3) between container 1006 and VIO 1010B.
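The dedicated-link arrangement above might be modeled as follows. The `VIO` class and its methods are hypothetical illustrations of a VIO forwarding packets between attached containers with no bridge in the path; the link assignments mirror the FIG. 10 example (links 1, 2, 3 and 6).

```python
# Sketch of a VIO that forwards packets over dedicated virtual links.
class VIO:
    def __init__(self, name):
        self.name = name
        self.links = {}  # container name -> inbox (list of received packets)

    def attach(self, container):
        """Create a dedicated virtual link between a container and this VIO."""
        self.links[container] = []

    def send(self, src, dst, packet):
        """Deliver a packet directly to the destination container's link."""
        if dst not in self.links:
            raise KeyError(f"{dst} has no link to {self.name}")
        self.links[dst].append((src, packet))

vio_a, vio_b = VIO("VIO-1010A"), VIO("VIO-1010B")
vio_a.attach("c1002"); vio_a.attach("c1004")   # links 1 and 2
vio_b.attach("c1002"); vio_b.attach("c1006")   # links 6 and 3

vio_a.send("c1004", "c1002", b"hello")         # container-to-container, no bridge
assert vio_a.links["c1002"] == [("c1004", b"hello")]
```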
  • FIG. 11 illustrates an example container packet communication using a virtual input/output interface for inter-host communication.
  • the inter-host instances comprise containers 1102, 1104, 1106, 1108, 1110 and 1112 that are communicatively coupled to VIOs 1122A/B in respective host 1114 and host 1116. Similar to the description in FIG. 10, the containers create dedicated virtual links for direct application level communication via the VIOs 1122A/B.
  • the dedicated virtual links may be multiplexed to one or multiple physical links 1120 directly using, for example, an input queue.
  • the physical link 1120 communicatively couples the hosts, such as host 1114 and host 1116, to each other.
  • the physical network interface 1120 may be a network I/O device that provides support in hardware, software, or a combination thereof for any form of I/O virtualization (IOV).
  • Examples of the IOV device include, but are not limited to, PCI-SIG-compliant SR-IOV devices and non-SR-IOV devices, PCI-SIG-compliant MR-IOV devices, multi-queue NICs, I/O adapters, converged NICs, and converged network adapters (CNA).
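The inter-host path described for FIG. 11 (dedicated virtual links multiplexed onto a physical link via an input queue) can be sketched as follows. The class and method names are assumptions for illustration.

```python
from collections import deque

# Sketch: each container's dedicated virtual link feeds an input queue that
# a single physical link drains toward the peer host, where frames are
# demultiplexed to the destination container's inbox.
class PhysicalLink:
    def __init__(self):
        self.queue = deque()  # input queue shared by the virtual links

    def enqueue(self, dst_container, frame):
        """A virtual link multiplexes a frame onto the physical link."""
        self.queue.append((dst_container, frame))

    def deliver(self, remote_inboxes):
        """Drain the queue and demultiplex frames on the remote host."""
        while self.queue:
            dst, frame = self.queue.popleft()
            remote_inboxes.setdefault(dst, []).append(frame)

link = PhysicalLink()
link.enqueue("c1108", b"frame-1")  # e.g., from container 1102 on host 1114
link.enqueue("c1110", b"frame-2")  # e.g., from container 1104 on host 1114
inboxes = {}                       # per-container inboxes on host 1116
link.deliver(inboxes)
assert inboxes == {"c1108": [b"frame-1"], "c1110": [b"frame-2"]}
```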
  • FIGS. 12 and 13 illustrate various embodiments of direct database/application level communication using VIO without the need to consume the TCP and related sockets.
  • One or more embodiments may be standardized for common application programming interfaces (APIs) for database (DB) use.
  • the direct DB/application level communication via VIO allows easy use, scalability, and less communication overhead.
  • the host 1208 includes virtual input/outputs (VIOs) 1210A/B, in which a single physical adapter card acts as multiple virtual network interface cards (NICs) and virtual host bus adapters (HBAs).
  • These VIOs 1210A/B may be loaded onto the host 1208 and comprise VIO software and/or hardware that can be used to control data packets input to and output from DB instances 1202, 1204 and 1206 via the communication links 1-6, such as a dedicated link.
  • Each VIO 1210A/B can multiplex and demultiplex the data packets to other DB instances 1202, 1204 and 1206 directly, thereby overcoming the intra-host instance communication limitation.
  • Such a configuration may allow easy deployment, scalability, and less communication overhead.
  • FIG. 12 illustrates an example database instance communicating using a virtual input/output interface for intra-host communication.
  • a database (DB) instance is a set of memory structures and background processes that access a set of database files. The processes can be shared by all of the users.
  • Direct DB instance level communication is implemented whereby the DB instances 1202, 1204 and 1206 are communicatively coupled to each other using communication links 1-6 via the VIOs 1210A/B, which interface with host 1208.
  • Each container 1202, 1204 and 1206 may include, in one embodiment, a virtual network interface card (vNIC) that is connected to a respective one or more of the VIOs 1210A/B, without requiring a bridge to support the communication there-between. It is also appreciated that the VIOs 1210A/B may send packets using a bridge or OVS.
  • DB instances 1202, 1204 and 1206 include dedicated links 1, 2, 3 and 6 (for purposes of discussion, we will assume links 4 and 5 do not exist).
  • Each of the DB instances 1202, 1204 and 1206 creates dedicated virtual links between a respective DB instance and VIO.
  • DB instance 1202 forms a link (1) between DB instance 1202 and VIO 1210A and a link (6) between DB instance 1202 and VIO 1210B
  • DB instance 1204 forms a link (2) between DB instance 1204 and VIO 1210A
  • DB instance 1206 forms a link (3) between DB instance 1206 and VIO 1210B.
  • FIG. 13 illustrates an example DB instance packet communication using a virtual input/output interface for inter-host communication.
  • the inter-host instances comprise DB instances 1302, 1304, 1306, 1308, 1310 and 1312 that are communicatively coupled to VIOs 1322A/B in respective host 1314 and host 1316.
  • the DB instances create dedicated virtual links for direct application level communication via the VIOs 1322A/B.
  • the dedicated virtual links may be multiplexed to one or multiple physical links 1320 directly using, for example, an input queue without relying on TCP sockets.
  • the physical link 1320 communicatively couples the hosts, such as host 1314 and host 1316, to each other.
  • the physical network interface 1320 may be a network I/O device that provides support in hardware, software, or a combination thereof for any form of I/O virtualization (IOV).
  • Examples of the IOV device include, but are not limited to, PCI-SIG-compliant SR-IOV devices and non-SR-IOV devices, PCI-SIG-compliant MR-IOV devices, multi-queue NICs, I/O adapters, converged NICs, and converged network adapters (CNA).
  • DB instance 1302 creates link 1
  • DB instance 1304 creates link 2
  • DB instance 1306 creates link 3.
  • DB instance 1308 creates link 4
  • DB instance 1310 creates link 5
  • DB instance 1312 creates link 6.
  • information from DB instances 1302, 1304 and 1306 to VIOs 1322A/B may be multiplexed to one or more physical links 1320 for transmission to host 1316. It is appreciated that any one or more of the DB instance information may be multiplexed and/or transmitted across one or more physical links.
  • the transmission may be demultiplexed and sent to a respective one or more of DB instances 1308, 1310 and 1312.
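The receive-side demultiplexing described above might be modeled as a simple link-to-instance table on host 1316, following the link assignments given for FIG. 13 (links 4, 5 and 6 terminating at DB instances 1308, 1310 and 1312). The table and function names are hypothetical.

```python
# Hypothetical demultiplexer on the receiving host: frames arriving on the
# shared physical link carry their virtual-link number, which maps back to
# a DB instance without any TCP socket per instance pair.
LINK_TO_INSTANCE = {4: "db1308", 5: "db1310", 6: "db1312"}

def demux(frames):
    """Route (link_id, message) frames to per-instance inboxes."""
    inboxes = {name: [] for name in LINK_TO_INSTANCE.values()}
    for link_id, message in frames:
        inboxes[LINK_TO_INSTANCE[link_id]].append(message)
    return inboxes

inboxes = demux([(4, b"query-result"), (6, b"heartbeat")])
assert inboxes["db1308"] == [b"query-result"]
assert inboxes["db1312"] == [b"heartbeat"]
```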
  • FIGS. 14A and 14B illustrate example flow diagrams of constructing virtual links for container and database instances.
  • the flow diagram relates to application containers communicating via a direct communication link.
  • one or more first dedicated virtual links 1-6 are constructed (for example, by the individual containers 1002, 1004 and 1006) for direct application container level communication between one or more first application containers.
  • data may then be communicated between the one or more first application containers 1002, 1004 and 1006 via the corresponding one or more first dedicated virtual links 1-6, where each of the one or more first dedicated virtual links 1-6 is connected to a respective one of the one or more first application containers 1002, 1004 and 1006 at a first end and connected to a respective virtual input/output (VIO) at a second end.
  • the flow diagram relates to providing direct database to application level communication via a virtual input/output (VIO).
  • one or more first dedicated virtual links 1-6 are constructed (for example, by the individual DB instances 1202, 1204 and 1206) for direct application level communication between one or more first DB instances 1202, 1204 and 1206.
  • data may then be communicated between the one or more first DB instances 1202, 1204 and 1206 via the corresponding one or more first dedicated virtual links 1-6, where each of the one or more first dedicated virtual links 1-6 is connected to a respective one of the one or more first DB instances 1202, 1204 and 1206 at a first end and connected to a respective virtual input/output (VIO) at a second end.
  • FIG. 15 illustrates an embodiment of a node in accordance with embodiments of the disclosure.
  • the node may be, for example, node 108 or 110 (FIG. 1) or any other node or router as described above in the network.
  • the node 1500 may comprise a plurality of input/output ports 1510/1530 and/or receivers (Rx) 1512 and transmitters (Tx) 1532 for receiving and transmitting data from other nodes, and a processing system or processor 1520 (or content aware unit), including a storage 1522 and a programmable content forwarding plane 1528, to process data and determine to which node to send the data.
  • the node 1500 may also receive application data (payload) as described above.
  • the processor 1520 is not so limited and may comprise multiple processors.
  • the processor 1520 may be implemented as one or more central processing unit (CPU) chips, cores (e.g., a multi-core processor), field-programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), and/or digital signal processors (DSPs), and/or may be part of one or more ASICs.
  • the processor 1520 may be configured to implement any of the schemes described herein, such as the processes illustrated in FIGS. 4A/B, 8 and 14, using any one or combination of steps described in the embodiments.
  • the processor 1520 may be implemented using hardware, software, or both.
  • the storage 1522 may include cache 1524, long-term storage 1526 and database cluster communication module 1528 and may be configured to store routing tables, forwarding tables, or other tables or information disclosed herein. Although illustrated as a single storage, storage 1522 may be implemented as a combination of read only memory (ROM), random access memory (RAM), or secondary storage (e.g., one or more disk drives or tape drives used for non-volatile storage of data).
  • the inclusion of the database cluster communication module 1528 provides an improvement to the functionality of node 1500.
  • the database cluster communication module 1528 also effects a transformation of node 1500 to a different state.
  • the database cluster communication module 1528 is implemented as instructions stored in the processor 1520.
  • FIG. 16 is a block diagram of a network system that can be used to implement various embodiments. Specific devices may utilize all of the components shown, or only a subset of the components, and levels of integration may vary from device to device. Furthermore, a device may contain multiple instances of a component, such as multiple processing units, processors, memories, transmitters, receivers, etc.
  • the network system may comprise a processing unit 1601 equipped with one or more input/output devices, such as network interfaces, storage interfaces, and the like.
  • the processing unit 1601 may include a central processing unit (CPU) 1610, a memory 1620, a mass storage device 1630, and an I/O interface 1660 connected to a bus.
  • the bus may be one or more of any type of several bus architectures including a memory bus or memory controller, a peripheral bus or the like.
  • the CPU 1610 may comprise any type of electronic data processor.
  • the memory 1620 may comprise any type of system memory such as static random access memory (SRAM), dynamic random access memory (DRAM), synchronous DRAM (SDRAM), read-only memory (ROM), a combination thereof, or the like.
  • the memory 1620 may include ROM for use at boot-up, and DRAM for program and data storage for use while executing programs.
  • the memory 1620 is non-transitory.
  • the mass storage device 1630 may comprise any type of storage device configured to store data, programs, and other information and to make the data, programs, and other information accessible via the bus.
  • the mass storage device 1630 may comprise, for example, one or more of a solid state drive, hard disk drive, a magnetic disk drive, an optical disk drive, or the like.
  • the processing unit 1601 also includes one or more network interfaces 1650, which may comprise wired links, such as an Ethernet cable or the like, and/or wireless links to access nodes or one or more networks 1680.
  • the network interface 1650 allows the processing unit 1601 to communicate with remote units via the networks 1680.
  • the network interface 1650 may provide wireless communication via one or more transmitters/transmit antennas and one or more receivers/receive antennas.
  • the processing unit 1601 is coupled to a local-area network or a wide-area network for data processing and communications with remote devices, such as other processing units, the Internet, remote storage facilities, or the like.
  • FIG. 17 illustrates a block diagram in accordance with the disclosed technology.
  • a receiver/transmitter module 1702 receives and transmits one or more application payloads corresponding to one or more applications residing on a client.
  • a terminating module 1704 terminates the transport layer protocol and reads the application payload associated with a current session.
  • a preparer module 1706 prepares header information including application specific information for each of the received applications for insertion into a corresponding one of the application payloads.
  • Encrypting/Decrypting module 1708 encrypts/decrypts the application payloads, including the header information, for transmission in the network via a single virtual communication link.
  • a Multiplexing/Demultiplexing module 1710 multiplexes/demultiplexes the application payloads such that they may be transmitted across a single communication channel (the virtual communication link).
  • a virtual I/O module 1712 allows application containers to directly communicate with each other using a virtual input/output (VIO) that resides on a host.
  • the disclosed technology allows multiple secure applications on the same device to have their application payloads multiplexed/demultiplexed and transported across a single crypto channel.
  • One or more advantages arise from this technology, including but not limited to: no session establishment or tunnel overhead; reduced overhead from TCP proxy sessions and tunnel payloads; use for control plane traffic with different applications running on the same device; a reduced number of secure session establishments (asymmetric and symmetric); SDN controller to network device communication through a public cloud or the Internet; multiplexing of multiple applications through a single crypto session; no need for the firewall to open multiple ports to support multiple applications; management of networking devices (routers/switches/WIFI/IOT) through the Internet/cloud; and easy management of multiple applications such as Netconf, SNMP and Capwap.
  • VIO reduces the communication latency as it shortens the end-to-end communication path, thereby improving the overall performance of the database.
  • VIO can reduce the total number of concurrent connections among database instances, where there is usually a hard limit on the number of TCP connections one server can set up and send data messages through. This would improve the scalability of the database cluster, which means more database instances can be put into the database cluster and more queries can be concurrently processed by the database cluster. This would also improve the overall database system performance.
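The scalability claim above can be illustrated with back-of-the-envelope arithmetic: a full mesh of direct TCP connections between n database instances grows quadratically, while one dedicated virtual link per instance to its host VIO grows linearly. The exact topology assumed here is an illustration, not the patented design.

```python
# Connection-count comparison behind the scalability argument.
def full_mesh_connections(n_instances: int) -> int:
    """Pairwise TCP connections needed for a full mesh of instances."""
    return n_instances * (n_instances - 1) // 2

def vio_links(n_instances: int) -> int:
    """Dedicated virtual links needed: one per instance to its VIO."""
    return n_instances

# With 100 instances, the mesh needs 4950 connections, pressing against
# per-server TCP connection limits; the VIO approach needs only 100 links.
assert full_mesh_connections(100) == 4950
assert vio_links(100) == 100
```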
  • the methods described herein may be implemented using a hardware computer system that executes software programs. Further, in a non-limited embodiment, implementations can include distributed processing, component/object distributed processing, and parallel processing. Virtual computer system processing can be constructed to implement one or more of the methods or functionalities as described herein, and a processor described herein may be used to support a virtual processing environment.
  • each process associated with the disclosed technology may be performed continuously and by one or more computing devices. Each step in a process may be performed by the same or different computing devices as those used in other steps, and each step need not necessarily be performed by a single computing device.

Abstract

The disclosure relates to technology for transmitting an application payload in a network. One or more application payloads corresponding to one or more applications residing on a client are received, where each application payload is formed from a client request comprising a transport layer protocol. The transport layer protocol is terminated and the application payload associated with a current session is read. Header information, including application specific information, is prepared for each of the received applications for insertion into a corresponding one of the application payloads. The application payloads, including the header information, are encrypted for transmission in the network via a single virtual communication link.

Description

FAST AND SCALABLE DATABASE CLUSTER COMMUNICATION PATH
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims priority to and benefit of U.S. provisional patent application Serial No. 62/221,458, filed on September 21, 2015, and entitled "Fast and Scalable Database Cluster Path," which application is hereby incorporated by reference.
BACKGROUND
[0002]To connect and manage a network node/device (e.g., router/switch/etc.) on an enterprise network through the Internet in a secure manner, multiple secure sessions must be created depending on the type of services the node/device is providing. When the node/device is located behind a network address translation (NAT)/Firewall, the issues surrounding connecting and managing the nodes/devices become increasingly problematic. Enterprise NAT/Firewalls require opening multiple ports to allow for each of the sessions. As the number of sessions increase, the number of ports also increases.
BRIEF SUMMARY
[0003] In one embodiment, there is a method of transmitting an application payload in a network, comprising receiving one or more application payloads corresponding to one or more applications residing on a client, the application payload formed from a client request comprising a transport layer protocol; terminating the transport layer protocol and reading the application payload associated with a current session; preparing header information including application specific information for each of the received applications for insertion into a corresponding one of the application payloads; and encrypting the application payloads, including the header information, for transmission in the network via a single virtual communication link.
[0004] In another embodiment, there is a non-transitory computer-readable medium storing computer instructions for transmitting application payloads in a network, that when executed by one or more processors, causes the one or more processors to perform the steps of receiving one or more application payloads corresponding to one or more applications residing on a client, the application payload formed from a client request comprising a transport layer protocol; terminating the transport layer protocol and reading the application payload associated with a current session; preparing header information including application specific information for each of the received applications for insertion into a corresponding one of the application payloads; and encrypting the application payloads, including the header information, for transmission in the network via a single virtual communication link.
[0005] In still another embodiment, there is a method for application containers to communicate via a direct communication link, comprising constructing one or more first dedicated virtual links for direct application container level communication between one or more first application containers; and communicating data between the one or more first application containers via the corresponding one or more first dedicated virtual links, where each of the one or more first dedicated virtual links is connected to a respective one of the one or more first application containers at a first end and connected to a respective virtual input/output (VIO) at a second end.
[0006] In yet another embodiment, there is a method for providing direct database to application level communication via a virtual input/output, comprising constructing one or more first dedicated virtual links for direct application level communication between one or more first database instances; and communicating data between the one or more first database instances via the corresponding one or more first dedicated virtual links, where each of the one or more first dedicated virtual links is connected to a respective one of the one or more first database instances at a first end and connected to a respective virtual input/output (VIO) at a second end.
[0007] This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter. The claimed subject matter is not limited to implementations that solve any or all disadvantages noted in the Background.
BRIEF DESCRIPTION OF THE DRAWINGS
[0008] Aspects of the present disclosure are illustrated by way of example and are not limited by the accompanying figures for which like references indicate like elements.
[0009] FIG. 1 illustrates an example network environment in which various embodiments of the disclosure may be implemented.
[0010] FIG. 2 illustrates a virtual communication link environment in which application payloads may be multiplexed.
[0011] FIG. 3 illustrates a client and server proxy in an application crypto multiplexing (ACM) environment in accordance with FIG. 2.
[0012] FIGS. 4A and 4B illustrate flow diagrams for sending and receiving payloads across the virtual communication link.
[0013] FIG. 5 illustrates a shim layer added to a payload of the application client or application server.
[0014] FIG. 6 illustrates an example of an ACM header of FIG. 5.
[0015] FIG. 7 illustrates a state diagram of an ACM data session state machine.
[0016] FIGS. 8A and 8B illustrate example flow diagrams of transmitting an application payload in a network.
[0017] FIG. 9 illustrates an example network in which the disclosure may be implemented.
[0018] FIG. 10 illustrates an example container packet communication using a virtual input/output interface for intra-host communication.
[0019] FIG. 11 illustrates an example container packet communication using a virtual input/output interface for inter-host communication.
[0020] FIGS. 12 and 13 illustrate various embodiments of direct database/application level communication using VIO without the need to consume the TCP and related sockets.
[0021] FIGS. 14A and 14B illustrate example flow diagrams of constructing virtual links for container and database instances.
[0022] FIG. 15 illustrates an embodiment of a node in accordance with embodiments of the disclosure.
[0023] FIG. 16 is a block diagram of a network system that can be used to implement various embodiments.
[0024] FIG. 17 illustrates a block diagram in accordance with the disclosed technology.
DETAILED DESCRIPTION
[0025] The disclosure relates to technology for transmitting an application payload in a network. One or more application payloads corresponding to one or more applications residing on a client are received, where each application payload is formed from a client request comprising a transport layer protocol. The transport layer protocol is terminated and the application payload associated with a current session is read. Header information, including application specific information, is prepared for each of the received applications for insertion into a corresponding one of the application payloads. The application payloads, including the header information, are encrypted for transmission in the network via a single virtual communication link.
[0026] It is understood that the present subject matter may be embodied in many different forms and should not be construed as being limited to the embodiments set forth herein. Rather, these embodiments are provided so that this subject matter will be thorough and complete and will fully convey the disclosure to those skilled in the art. Indeed, the subject matter is intended to cover alternatives, modifications and equivalents of these embodiments, which are included within the scope and spirit of the subject matter as defined by the appended claims. Furthermore, in the following detailed description of the present subject matter, numerous specific details are set forth in order to provide a thorough understanding of the present subject matter. However, it will be clear to those of ordinary skill in the art that the present subject matter may be practiced without such specific details.
[0027] The disclosed technology generally provides a 'many-to-one' integrated proxy and tunnel solution that adds a shim layer between the application payload and transmission control protocol (TCP)/ secure socket layer (SSL) headers, where a state machine may be configured to control sessions.
[0028] Such an integrated proxy and tunnel solution may be implemented using a client or a server based on the location of the network node/device. Examples of the network node/devices include, but are not limited to, a router, a switch, a WIFI device, an Internet-of-things (IOT) device, or any physical and virtual devices as would be appreciated by one of ordinary skill in the art upon viewing this disclosure.
[0029] As will become evident from the discussion below, the disclosed technology may provide a single point for security services. For example, applications may delegate crypto responsibilities to a device, the device may setup channels and exchange the crypto data between the device and a controller to thereby provide a secure connection between devices. Implementing the disclosed technology on the server-side provides a crypto server and implementing the disclosed technology on the client-side provides crypto client functionality. Communication channels may authenticate and provide message integrity, user authentication, and confidentiality. The communication channels may also support standard symmetric/asymmetric cryptographic functionality and may be able to setup secure channels behind a NAT/firewall.
[0030] FIG. 1 illustrates an example network environment in which various embodiments of the disclosure may be implemented. The network environment 100 includes, for example, client(s) 102, server(s) 104, SDN controller 112 and administrator 114. In general, SDNs involve the use of a standalone controller that performs the control functionality for a set of network devices. As an example of software defined networking, in the case of routing, rather than routers performing individual analyses to determine routes through the network, the controller can determine the routes and program other devices in the network to behave according to the determinations made by the controller. Different protocols may be used to implement software defined networking, including open protocols like OpenFlow, and proprietary protocols from network vendors.
[0031] In the depicted embodiment, the SDN 106 includes network nodes 108 and 110 and service devices 116. Network nodes 108 and 110 may comprise switches and other devices (not shown). These network nodes 108 and 110 can be physical instantiations or virtual instantiations that generally serve to forward network traffic. Although not depicted, SDN 106 may also include other types of devices, such as routers, load balancers and various L4-L7 network devices, among other network devices.
[0032] SDN 106 may connect various endpoint devices, such as client 102 and server 104. In addition, SDN 106 may provide services to network traffic flowing between client device 102 and server device 104. In one embodiment, administrator 114 may use SDN controller 112 to program network devices of SDN 106 to direct network traffic for client 102 to one or more of service devices 116.
[0033] Service devices 116 may include, for example, intrusion detection service (IDS) devices, intrusion prevention system (IPS) devices, web proxies, web servers, web-application firewalls and the like. In other examples, service devices 116 may, additionally or alternatively, include devices for providing services such as, for example, denial of service (DoS) protection, distributed denial of service (DDoS) protection, traffic filtering, wide area network (WAN) acceleration, or other such services.
[0034] Although shown as individual devices, it should be understood that service devices 116 may be physical devices, multi-tenant devices, or virtual services (e.g., cloud-based services), and the disclosed technology may be readily applied to virtual devices and cloud-based applications, in addition or in the alternative to physical devices.
[0035] FIG. 2 illustrates a virtual communication link environment in which application payloads may be multiplexed. The environment 200, herein referred to as an application crypto multiplexing (ACM) environment 200, includes, for example, a virtual communication link 202, a client proxy 204, application client 206, server proxy 208 and application server 210.
[0036] A virtual communication link (e.g., a virtual tunnel) allows two computer programs (e.g., client and server applications) that are not otherwise able to address each other directly to communicate. For example, a client application of application client 206 may need to connect to a server application of application server 210 at a remote site. The application server 210 may be on a computer on a customer's or partner's non-addressable local network (e.g., behind a firewall). As such, the application client 206 will not be able to address the application server 210 directly. The virtual communication link therefore provides application client 206 access to the application server 210, and vice versa.
[0037] In the embodiments of the disclosed technology, the virtual communication link 202 allows one or more application(s) residing on application client 206 and/or application server 210 to share a single communication channel (e.g., virtual communication link or tunnel) by multiplexing and/or demultiplexing the payload of the application(s) residing on application client 206 and/or application server 210 for services from the same device.
[0038] More specifically, the client proxy 204 and server proxy 208 (explained below with reference to FIG. 3) and virtual communication link 202 may be integrated or combined at each end of the channel to form a single socket interface by multiplexing and/or demultiplexing the payload of the application client 206 and application server 210 for communication via virtual communication link 202, such as a crypto virtual tunnel (VT).
[0039] In one embodiment, the multiplexing and/or demultiplexing may be implemented by virtue of adding an ACM header into the payload of a particular application that carries application specific information. Headers are explained in more detail below with reference to FIGS. 5 and 6.
[0040] The application client 206 may comprise a first client application 206A, such as network configuration protocol (NETCONF) plugin, a second client application 206B, such as a simple network management protocol (SNMP) plugin, and/or a third client application 206C, such as a control and provisioning of wireless access points (CAPWAP) plugin. These plugins may be used for remote configuration of devices and allow traffic patterns to be seamlessly injected into the existing network devices that form the network. That is, rather than deploying traffic generators to strategically introduce traffic patterns into various points of the network, the desired traffic patterns are encapsulated and communicated to the existing network devices of the network via the plugins that would otherwise be used to manipulate configuration data of the network devices.
[0041] For example, NETCONF provides mechanisms for configuring network devices and uses an Extensible Markup Language (XML)-based data encoding for configuration data, which may include policy data; SNMP allows device management systems to traverse and modify management information bases (MIBs) that store configuration data within managed elements; and CAPWAP is a protocol used to exchange messages between any mesh node and the controller via the virtual communication link, and was originally designed for so-called lightweight access points.
[0042] The client proxy 204 includes a crypto client 204A that is operably coupled to the first, second and third client applications 206A, 206B and 206C, respectively, via sockets (as explained with reference to FIG. 3).
[0043] The application server 210, similar to the application client 206, may comprise one or more of a first server application 210A, such as a NETCONF plugin, a second server application 210B, such as an SNMP plugin, and/or a third server application 210C, such as a CAPWAP plugin, operably coupled to the application crypto server via sockets.
[0044] The server proxy 208 includes a crypto server 208A that is operably coupled to the first, second and third server applications 210A, 210B and 210C, respectively, via sockets (as explained with reference to FIG. 3).

[0045] Accordingly, the ACM environment 200 allows an SDN controller, such as SDN controller 112, to communicate with and to manage network devices, such as network nodes 108 and 110, using a network, such as a public cloud or the Internet. By employing the disclosed ACM environment 200, a firewall may not need to open multiple ports to support multiple applications. Rather, the ACM environment 200 allows SDN controller 112 to easily manage multiple applications. The ACM environment 200 may also reduce TCP proxy session overhead and tunnel payload overhead, and may be used for control plane traffic with different applications running on the same device.
[0046] The ACM environment 200 is different from other crypto technologies. For example, other technologies such as Internet key exchange (IKE)/ Internet protocol security (IPSEC) use tunneling based at layer 3 and SSL technologies use layer 4. These technologies use one session for each application.
[0047] FIG. 3 illustrates a client and server proxy in an ACM environment in accordance with FIG. 2. The ACM environment 300 includes, for example, a virtual communication link 202, such as a crypto virtual tunnel, that communicatively couples a client proxy 204 and a server proxy 208.
[0048] The client proxy 204 includes a session manager 302A, a MUX/DEMUX 306A, a transport layer security (TLS)/datagram TLS (DTLS) client 304A, a NETCONF client (session 1) 310A, and an SNMP client (session 2) 312A. Similarly, the server proxy 208 includes a session manager 302B, a MUX/DEMUX 306B, a TLS/DTLS client 304B, a NETCONF server (session 1) 310B and an SNMP server (session 2) 312B.
[0049] The session manager 302A enables transparent secure and open communication between the application client 206 (FIG. 2) and the client proxy 204. In one embodiment, session manager 302A may perform encrypted session processing, including managing an encrypted session handshake, managing keys, certificates, authentication, authorization, or similar. Moreover, session manager 302A may in one embodiment establish encrypted sessions and/or connections, terminate encrypted sessions and/or connections, establish itself as a man-in-the-middle of an encrypted session and/or connection, or similar.
[0050] The NETCONF client 1 (session 1) 310A and the SNMP client 2 (session 2) 312A are communicatively coupled to the session manager 302A via the sockets. Similarly, the NETCONF server 1 (session 1) 310B and the SNMP server 2 (session 2) 312B are communicatively coupled to the session manager 302B via the sockets.
[0051] The MUX/DEMUX 306A on client proxy 204 may be configured to route application payloads from multiple sockets on an application client 206 to a single socket on client proxy 204 by multiplexing the application payloads. The multiplexed application payload may be transported across the virtual communication link 202 to the server proxy 208 and then delivered to the application server 210. In one embodiment, the MUX/DEMUX 306A is configured to transmit application payloads over a single secure connection (e.g., virtual communication link 202) from the client proxy 204 to corresponding multiple sockets on the application server 210 (after demultiplexing at the server proxy 208).
[0052] Similarly, MUX/DEMUX 306A on client proxy 204 may be configured to receive application payloads from the virtual communication link 202 using a single socket on client proxy 204. The application payload, received from application server 210, may be demultiplexed by the MUX/DEMUX 306A into discrete application payloads, and each discrete application payload may be transported to one or more corresponding sockets on the application client 206. In one embodiment, the MUX/DEMUX 306A is configured to receive application payloads over a single secure connection (e.g., virtual communication link 202) from the server proxy 208 at corresponding multiple sockets on the application client 206.
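As a rough illustration of this mux/demux step, the sketch below tags each application payload with a 2-byte application ID and a 2-byte length, combines them into one byte stream for the single shared socket, and splits them back into per-application payloads at the far end. This framing is an assumption for illustration only; the actual ACM header carried in the payload is described with reference to FIGS. 5 and 6.

```python
# Illustrative mux/demux over one shared channel. Each frame is
# (app_id, payload); the 2-byte ID + 2-byte length framing is a
# simplification, not the patented ACM header format.
def mux(frames):
    stream = b""
    for app_id, payload in frames:
        stream += app_id.to_bytes(2, "big") + len(payload).to_bytes(2, "big") + payload
    return stream

def demux(stream):
    frames, i = [], 0
    while i < len(stream):
        app_id = int.from_bytes(stream[i:i + 2], "big")
        length = int.from_bytes(stream[i + 2:i + 4], "big")
        frames.append((app_id, stream[i + 4:i + 4 + length]))
        i += 4 + length
    return frames
```

In this model, the client proxy would call `mux` before writing to the tunnel socket, and the server proxy would call `demux` before dispatching each payload to its application socket.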
[0053] The MUX/DEMUX 306A, as explained below, is also responsible for preparing and adding ACM header information into the application payload.
[0054] As used herein, the term socket refers to a port, buffer, logical node or object configured to receive data in any format, such as HTTP format, from a remote device via a network connection. The MUX/DEMUX 306B may be configured in a similar manner.
[0055] To secure communications across the virtual communication link 202, TLS/DTLS clients 304A (client proxy) and 304B (server proxy) may be employed. The TLS/DTLS clients 304A and 304B are responsible for encrypting/decrypting the multiplexed/demultiplexed application payloads. The TLS protocol aims primarily to provide privacy and data integrity between two communicating computer applications. TLS was designed to operate on top of a transport protocol, such as TCP, and below the application layer, such as HTTP. In order to establish a cryptographically secure data channel, the connection peers must agree on which ciphersuites will be used and the keys used to encrypt the data. TLS has also been adapted to run over datagram protocols, such as the user datagram protocol (UDP). Datagram TLS (DTLS) is a protocol based on TLS that is capable of securing datagram transport, such as UDP, and is well suited for tunneling applications, e.g., the CAPWAP tunnel to the controller in a mesh network.
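A minimal client-side TLS setup for such a tunnel endpoint can be sketched with Python's standard ssl module. The library choice is illustrative: the disclosure does not mandate a particular TLS implementation, and Python's ssl module does not provide DTLS (a separate DTLS library would be needed for the UDP case).

```python
import ssl

def make_tunnel_context(cafile=None):
    # Verify the peer like a normal TLS client and refuse legacy versions.
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile=cafile)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx
```

The multiplexed byte stream from the client proxy would then be written through a socket wrapped with this context (`ctx.wrap_socket(...)`), so that every application session shares the one encrypted connection.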
[0056] FIGS. 4A and 4B illustrate flow diagrams for sending and receiving payloads across the virtual communication link. The processes described herein are implemented using the client proxy 204 and server proxy 208 depicted, for example, in FIG. 3. However, it is appreciated that any network component or element may be responsible for such implementation and the disclosed embodiment is a non-limiting example.
[0057] In particular, FIG. 4A illustrates a flow chart for sending client data to a server. Client 310A and/or client 312A creates a TCP/UDP client request and sends application data (e.g., payload) to client proxy 204. The application data is received by the client proxy 204 via session manager 302A at 402A. Here, the session manager 302A terminates the TCP connection, reads the application data from the local session and acquires the session information. The state (FIG. 7) and sessions are managed and session details are sent to MUX/DEMUX 306A.
[0058] At 404A, the MUX/DEMUX 306A prepares ACM headers and adds the ACM header to the application data (payload). The ACM header and application payload are described in detail with reference to FIGS. 5 and 6 below.
[0059] At 406A, the TLS/DTLS client 304A is responsible for encrypting/decrypting the data (application data + ACM header) and sending the application data to the application server 210 via virtual communication link 202.
[0060] The encrypted/decrypted data (application + ACM header) is received by the server proxy 208 via the virtual communication link 202 at 412A, and the MUX/DEMUX 306B adds/removes the ACM header to the payload at 410A.
[0061] The session manager 302B at the server proxy 208 then reads the application data from the MUX/DEMUX 306B and creates/manages a local session with the application server 210 at 408A. The application data is then sent to the application server 210 via the secure sockets of the session, and servers 310B and/or 312B read the request and prepare an application response.

[0062] FIG. 4B illustrates a flow chart for sending server data to a client. The server 1 310B and/or server 2 312B prepares an application response in response to the client request of FIG. 4A. At 402B, the session manager 302B reads application data (e.g., payload) from application server 210 and creates/manages records for the session information. The session manager 302B then sends the session information and application data to MUX/DEMUX 306B.
[0063] At 404B, MUX/DEMUX 306B reads the session information and application data and prepares an ACM header to be added to the application data (payload). At 406B, TLS/DTLS service 304B encrypts the application data (application + ACM header) and sends the encrypted application data to the application client 206 via client proxy 204 and virtual communication link 202.
[0064] Once the encrypted application data is received by the TLS/DTLS client 304A of client proxy 204, the application data (application + ACM header) is decrypted at 408B, and the MUX/DEMUX 306A removes the ACM header from the application data (payload) and sends the decrypted application data to session manager 302A at 410B. The session manager 302A then reads the application data from MUX/DEMUX 306A and sends the application data to respective sockets for client 1 310A and/or client 2 312A, where the response from the application server 210 is received.
[0065] FIG. 5 illustrates a shim layer added to a payload of the application client or application server. The shim layer is shown as being added between layers 4 and 7 of the open systems interconnection (OSI) layers 502. The figure also illustrates three payloads including the NETCONF payload 502A, the SNMP payload 502B and the CAPWAP payload 502C.
[0066] As illustrated, layer 3 (IP) is the packet layer that structures and manages a multi-node network, including addressing, routing and traffic control. Layer 4 (TCP/UDP) is the transport layer that is responsible for transmission of data segments between points on the network, including segmentation, acknowledgement and multiplexing. Layers 5 and/or 6 may include the added shim layer as part of the presentation (layer 6) and session (layer 5) layers that manage communication sessions, such as the continuous exchange of information in the form of multiple back-and-forth transmissions between two nodes. Layer 7 (NETCONF/SNMP/CAPWAP) is the application layer that includes high-level APIs, including resource sharing and remote file access.

[0067] FIG. 6 illustrates an example of an ACM header of FIG. 5. The ACM header includes, for example, an ACM version field, an ACM operation (Op) type field, a security session control field, an application session/source port field, an application identifier (ID) field, and a payload length field. It is appreciated that the illustrated header is a non-limiting example of a header configuration, and that any number of variations may be implemented.
[0068] The fields defined in the ACM header may vary in size and type of information. As an example, in one non-limiting embodiment, the ACM version field may be 4 bits and may indicate an initial version, the ACM Op type field may be 4 bits and may indicate an operation type, the security session control field may be 2 bytes and may indicate a security session control type, the application session/source port field may be 2 bytes and may indicate a session ID or source port, the application ID field may be 2 bytes and may indicate an application TCP or UDP ID, or a destination port, and the payload length field may be 2 bytes and may indicate the size of the payload.
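Following the non-limiting sizes above, the 4-bit version and 4-bit Op type share one byte, followed by four 2-byte fields, giving a 9-byte header. The sketch below packs and unpacks such a header in network byte order; the field names are paraphrased from the text, and the layout is the illustrative example, not a normative format.

```python
import struct

# 9-byte example ACM header: 1 byte (version | op type), then four
# 2-byte fields (security session control, session/source port,
# application ID, payload length), big-endian.
ACM_HDR = struct.Struct(">BHHHH")

def pack_acm(version, op_type, sess_ctrl, session, app_id, payload):
    first = ((version & 0x0F) << 4) | (op_type & 0x0F)
    return ACM_HDR.pack(first, sess_ctrl, session, app_id, len(payload)) + payload

def unpack_acm(frame):
    first, sess_ctrl, session, app_id, length = ACM_HDR.unpack_from(frame)
    header = {"version": first >> 4, "op_type": first & 0x0F,
              "sess_ctrl": sess_ctrl, "session": session, "app_id": app_id}
    return header, frame[ACM_HDR.size:ACM_HDR.size + length]
```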
[0069] A payload may be configured to be any one or more of the following: an ACM hello request, an ACM hello response, an ACM acknowledgement, an ACM data or data transfer, an ACM service update, an ACM service update acknowledgement, an ACM health statistics request, an ACM health statistics response, an ACM control/alert/error, an ACM heartbeat request and/or an ACM heartbeat response.
[0070] FIG. 7 illustrates a state diagram of an ACM data session state machine. The ACM data session state machine includes an initial state 702, a data write state 704/710, a data read state 706/712 and a session close state 708. The data operational states may be defined according to the following event codes: session-start event (CE) - 11; data-read/data-write event (DRWS) - 01; data-read/data-write-end event (DRWE) - 10; and session-close event (CT) - 00, as illustrated in FIG. 7.
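Since FIG. 7 itself is not reproduced here, the sketch below is only a rough model of the transition table implied by the states and event codes above; the state names and transitions are assumptions, not the patented diagram.

```python
# Assumed 2-bit event codes from the text: CE=11, DRWS=01, DRWE=10, CT=00.
CE, DRWS, DRWE, CT = "11", "01", "10", "00"

# Hypothetical transition table: session-start opens the data path,
# reads/writes keep it open, read/write-end returns to idle, and a
# session-close event from idle closes the session.
TRANSITIONS = {
    ("initial", CE): "data_rw",
    ("data_rw", DRWS): "data_rw",
    ("data_rw", DRWE): "initial",
    ("initial", CT): "closed",
}

class AcmDataSession:
    def __init__(self):
        self.state = "initial"

    def handle(self, event):
        # Unknown (state, event) pairs leave the state unchanged.
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state
```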
[0071] FIGS. 8A and 8B illustrate example flow diagrams of transmitting an application payload in a network. Referring to FIG. 8A, at 802, one or more application payloads 502A, 502B and 502C are received, for example by a client proxy 204, corresponding to one or more applications 206A, 206B and 206C, respectively, residing on an application client 206. The application payload is formed from a client request comprising a transport layer protocol, such as TCP or UDP.
[0072] At 804, the transport connections for the application payloads received at the client proxy 204 are terminated, for example by session manager 302A, and the application payload is read for the current session.
[0073] MUX/DEMUX 306A prepares header information, including application specific information, for each of the received applications for insertion into a corresponding one of the application payloads at 806, and the TLS/DTLS client 304A encrypts the application payloads, including the header information, for transmission in the network via a single virtual communication link 202 at 808.
[0074] With reference to FIG. 8B, the one or more application payloads may be multiplexed, including the header information inserted into the application payload, to share across a single communication channel (i.e., virtual communication link).
[0075] At 812, the multiplexed application payloads are transmitted via the shared communication link. Upon arrival at the end point (e.g., server proxy 208), the application payload is demultiplexed (and decrypted) such that application server 210 may respond to the request from the client at 814.
[0076] FIG. 9 illustrates an example network in which the disclosure may be implemented. The network 900 includes cloud 902, network 906, cloud provider 908 and clients 914A, 914B to 914N.
[0077] Cloud 902 includes one or more hosts 904 to 904N (collectively, 904N), where each host 904N includes one or more nodes 904N1. In one embodiment, the node 904N1 is a virtual machine (VM) that is hosted on a physical machine, such as host 904 through host 904N. In another embodiment, the host machine(s) 904N may be located in a data center. For example, the one or more nodes 904N1 are hosted on physical machine 904N in cloud 902 provided by cloud provider 908. When hosted on host machines 904N, users can interact with one or more applications, such as applications 904N1-2 and 904N1-3, executing on the one or more nodes 904N1 using client computer systems, such as clients 914A, 914B to 914N. In one embodiment, the applications 904N1-2 and 904N1-3 may be hosted on hosts 904N without the use of VMs.
[0078] As indicated above, the one or more nodes 904N1 execute one or more applications 904N1-2 and 904N1-3 that may be owned or managed by different users and/or organizations. For example, a customer may deploy applications 904N1-2 and 904N1-3 that may co-exist with another customer's applications on the same or a different node 904N that is hosting the first customer's applications. In one embodiment, portions of or separate applications 904N1-2 and 904N1-3 execute on different nodes 904N.
[0079] In one embodiment, as appreciated by the skilled artisan, the data used for execution of applications 904N1-2 and 904N1-3 includes application images built from pre-existing application components and source code of users managing the applications 904N1-2 and 904N1-3. An image, within the context of SDNs and container networking, refers to data representing executables and files of the application used to deploy functionality for a runtime instance of the application. In one example, the image is built using a Docker tool and is referred to as a Docker image. As explained below, a docker bridge will not be required in implementing the various embodiments of the invention, although such a docker bridge is not excluded from use.
[0080] The one or more nodes 904N1-2 and 904N1-3 may execute an application by launching an instance of an application image as a container 904N1-2A, 904N1-2B, 904N1-3A and 904N1-3B in one or more of nodes 904N1-2 and 904N1-3. Containers 904N1-2A, 904N1-2B, 904N1-3A and 904N1-3B in one or more of nodes 904N1-2 and 904N1-3 may implement functionality of the applications 904N1-2 and 904N1-3.
[0081] Containers 904N1-2A, 904N1-2B, 904N1-3A and 904N1-3B may implement operating system-level virtualization, wherein an abstraction layer is provided on top of the kernel of an operating system on a host computer (not shown). The abstraction layer supports, for example, multiple containers, each including an application and its dependencies. Each container may be executed as an isolated process on the operating system and shares the kernel with other containers. The container relies on the kernel's functionality to make use of resource isolation (CPU, memory, block I/O, network, etc.) and separate namespaces, and to completely isolate the application's view of the operating environment. By using containers, resources can be isolated, services restricted, and processes provisioned to have a private view of the operating system with their own process ID space, file system structure, and network interfaces. Multiple containers can share the same kernel, but each container can be constrained to only use a defined amount of resources such as CPU, memory and I/O.
[0082] Clients 914A, 914B to 914N may be connected to hosts 904N in cloud 902 by cloud provider 908 via a network 906, which may be a private network (e.g., a local area network (LAN), a wide area network (WAN), intranet, or other similar private networks) or a public network (e.g., the Internet). Each client 914A, 914B to 914N may be a mobile device, a PDA, a laptop, a desktop computer, a tablet computing device, a server device, or any other computing device. Each host 904N may be a server computer system, a desktop computer or any other computing device.
[0083] While various embodiments are described in terms of the network described above, those skilled in the art will appreciate that the network is a non-limiting example and may be implemented in a variety of other configurations including a single, monolithic computer system, as well as various other combinations of computer systems or similar devices connected in various ways.
[0084] FIG. 10 illustrates an example container packet communication using a virtual input/output interface for intra-host communication. In conventional SDNs, a container is produced from an application image gathered, for example, from a designated registry. When a container is instantiated (typically by a daemon), the container is assigned a unique network address that connects the container to a virtual Ethernet bridge, such as a docker bridge. All containers in the system communicate with each other by directing packets to the docker bridge, which then forwards those packets through the container network. However, the containers communicate with each other over the bridge ports, which are heavyweight and utilize the open vSwitch (OVS) and/or a Linux kernel bridge mechanism.
[0085] In other embodiments, containers 1002, 1004 and 1006 may leverage server virtualization methods such as operating system-level virtualization, where the kernel of an operating system allows for multiple isolated user space instances. Some instances of this may include, but are not limited to, containers, virtualization engines (VEs), virtual private servers (VPS), jails, or zones, and/or any hybrid combination thereof. Some example available technologies for containers 1002, 1004 and 1006 include chroot, Linux-VServer, lmctfy ("let me contain that for you"), LXC (Linux containers), OpenVZ (Open Virtuozzo), Parallels Virtuozzo Containers, Solaris Containers (and Solaris Zones), FreeBSD Jail, sysjail, WPARs (workload partitions), HP-UX Containers (SRP, secure resource partitions), iCore Virtual Accounts, and Sandboxie.
[0086] By virtue of the disclosed technology, direct container level communication is enabled via virtual input/outputs (VIOs) 1010A/B (or input/output virtualization (IOV)). In VIO, a single physical adapter card acts as multiple virtual network interface cards (NICs) and virtual host bus adapters (HBAs). These VIOs 1010A/B may be loaded onto the host 1008 (as depicted in FIG. 10) and comprise VIO software and/or hardware that can be used to control data packets input to and output from containers 1002, 1004 and 1006 via the communication links 1-6, such as a dedicated link. Each VIO 1010A/B can multiplex and demultiplex the data packets to other containers 1002, 1004 and 1006 directly, to thereby solve the intra-host container communication limitation. Such a configuration may allow easy deployment, scalability, and less communication overhead.
[0087] With reference to FIG. 10, direct container level communication is implemented whereby the containers 1002, 1004 and 1006 are communicatively coupled to each other using communication links 1-6 via the VIOs 1010A/B, which interface with host 1008. Each container 1002, 1004 and 1006 may include, in one embodiment, a virtual network interface card (vNIC) that is connected to a respective one or more of the VIOs 1010A/B, without requiring a bridge to support the communication therebetween. It is also appreciated that the VIOs 1010A/B may send packets using a bridge or OVS.
[0088] As an example, assume containers 1002, 1004 and 1006 include dedicated links 1, 2, 3 and 6 (for purposes of discussion, assume links 4 and 5 do not exist). Each of the containers 1002, 1004 and 1006 creates dedicated virtual links between a respective container and VIO. Thus, container 1002 forms a link (1) between container 1002 and VIO 1010A and a link (6) between container 1002 and VIO 1010B, container 1004 forms a link (2) between container 1004 and VIO 1010A, and container 1006 forms a link (3) between container 1006 and VIO 1010B.
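A toy sketch of this direct, bridge-less delivery: each container attaches a dedicated virtual link (modeled here as a simple inbox) to a VIO object, and the VIO delivers frames straight to the destination container's link, with no docker bridge or OVS port in the path. The names and structure are illustrative only.

```python
# Hypothetical intra-host VIO dispatcher in the spirit of FIG. 10.
class Vio:
    def __init__(self):
        self.links = {}                    # container id -> inbound frames

    def attach(self, container_id):
        """Create the container's dedicated virtual link to this VIO."""
        inbox = []
        self.links[container_id] = inbox
        return inbox

    def send(self, dst_container_id, frame):
        """Deliver a frame directly to the destination container's link."""
        self.links[dst_container_id].append(frame)
```

In the link numbering of FIG. 10, container 1002 attaching to VIO 1010A would correspond to link (1), container 1004 attaching to VIO 1010A to link (2), and so on.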
[0089] FIG. 11 illustrates an example container packet communication using a virtual input/output interface for inter-host communication. The inter-host instances comprise containers 1102, 1104, 1106, 1108, 1110 and 1112 that are communicatively coupled to VIOs 1122A/B in respective host 1114 and host 1116. Similar to the description in FIG. 10, the containers create dedicated virtual links for direct application level communication via the VIOs 1122A/B. The dedicated virtual links may be multiplexed to one or multiple physical links 1120 directly using, for example, an input queue.
[0090] The physical link 1120 connects each of the hosts, such as host 1114 and host 1116. The physical network interface 1120 may be a network I/O device that provides support in hardware, software, or a combination thereof for any form of I/O virtualization (IOV). Examples of the IOV device include, but are not limited to, PCI-SIG-compliant SR-IOV devices and non-SR-IOV devices, PCI-SIG-compliant MR-IOV devices, multi-queue NICs, I/O adapters, converged NICs, and converged network adapters (CNA).
[0091] In one example embodiment, let us assume that three dedicated virtual links exist in each host 1114 and host 1116 between containers and VIOs. In host 1114, container 1102 creates link 1, container 1104 creates link 2 and container 1106 creates link 3. In host 1116, container 1108 creates link 4, container 1110 creates link 5 and container 1112 creates link 6. Within host 1114, information from containers 1102, 1104 and 1106 to VIOs 1122A/B may be multiplexed to one or more physical links 1120 for transmission to host 1116. It is appreciated that information from any one or more of the containers may be multiplexed and/or transmitted across one or more physical links. Upon receipt at host 1116, the transmission may be demultiplexed and sent to a respective one or more of containers 1108, 1110 and 1112.
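The inter-host multiplexing and demultiplexing described above can be sketched as follows, with an in-memory queue standing in for the physical link 1120. The function names and the (link id, payload) framing are illustrative assumptions, not part of the disclosure.

```python
from collections import deque

def multiplex(input_queue, frames):
    """Tag each (link_id, payload) frame and enqueue it on the physical link."""
    for link_id, payload in frames:
        input_queue.append((link_id, payload))

def demultiplex(input_queue, inboxes):
    """Drain the physical link queue and deliver each frame by its link id."""
    while input_queue:
        link_id, payload = input_queue.popleft()
        inboxes[link_id].append(payload)

# The deque stands in for an input queue feeding physical link 1120.
physical_link = deque()
multiplex(physical_link, [(4, b"to-1108"), (5, b"to-1110"), (6, b"to-1112")])

# On the receiving host 1116, link ids 4-6 map to containers 1108, 1110, 1112.
inboxes = {4: [], 5: [], 6: []}
demultiplex(physical_link, inboxes)
print(inboxes[5])   # [b'to-1110']
```

Because each dedicated virtual link carries its own identifier, any subset of container traffic can share one or several physical links and still be separated again on arrival.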
[0092] FIGS. 12 and 13 illustrate various embodiments of direct database/application level communication using VIO without the need to consume TCP and related sockets. One or more embodiments may be standardized as common application programming interfaces (APIs) for database (DB) use. The direct DB/application level communication via VIO allows easy use and scalability with less communication overhead.
[0093] By virtue of the disclosed technology, direct DB instance level communication is enabled via virtual input/outputs (VIOs) 1210A/B (or input/output virtualization). In VIO, a single physical adapter card acts as multiple virtual network interface cards (NICs) and virtual host bus adapters (HBAs). These VIOs 1210A/B may be loaded onto the host 1208 and comprise VIO software and/or hardware that can be used to control data packets input to and output from DB instances 1202, 1204 and 1206 via the communication links 1 -6, such as a dedicated link. Each VIO 1210A/B can multiplex and demultiplex the data packets to other DB instances 1202, 1204 and 1206 directly to thereby solve the intra-host instance communication limitation. Such a configuration may allow easy deployment, scalability, and less communication overhead.
[0094] In particular, FIG. 12 illustrates an example database instance communicating using a virtual input/output interface for intra-host communication. As appreciated, a database (DB) instance is a set of memory structures and background processes that access a set of database files. The process can be shared by all of the users.
[0095] Direct DB instance level communication is implemented whereby the DB instances 1202, 1204 and 1206 are communicatively coupled to each other using communication links 1-6 via the VIOs 1210A/B, which interface with host 1208. Each DB instance 1202, 1204 and 1206 may include, in one embodiment, a virtual network interface card (vNIC) that is connected to a respective one or more of the VIOs 1210A/B, without requiring a bridge to support the communication therebetween. It is also appreciated that the VIOs 1210A/B may send packets using a bridge or OVS.
[0096] As an example, let us assume DB instances 1202, 1204 and 1206 include dedicated links 1, 2, 3 and 6 (for purposes of discussion, we will assume links 4 and 5 do not exist). Each of the DB instances 1202, 1204 and 1206 creates dedicated virtual links between a respective DB instance and VIO. Thus, DB instance 1202 forms a link (1) between DB instance 1202 and VIO 1210A and a link (6) between DB instance 1202 and VIO 1210B, DB instance 1204 forms a link (2) between DB instance 1204 and VIO 1210A, and DB instance 1206 forms a link (3) between DB instance 1206 and VIO 1210B.
[0097] FIG. 13 illustrates an example DB instance packet communication using a virtual input/output interface for inter-host communication. The inter-host instances comprise DB instances 1302, 1304, 1306, 1308, 1310 and 1312 that are communicatively coupled to VIOs 1322A/B in respective host 1314 and host 1316. Similar to the description in FIG. 12, the DB instances create dedicated virtual links for direct application level communication via the VIOs 1322A/B. The dedicated virtual links may be multiplexed to one or multiple physical links 1320 directly using, for example, an input queue without relying on TCP sockets.
[0098] The physical link 1320 connects the hosts, such as host 1314 and host 1316, to each other. The physical network interface 1320 may be a network I/O device that provides support in hardware, software, or a combination thereof for any form of I/O virtualization (IOV). Examples of the IOV device include, but are not limited to, PCI-SIG-compliant SR-IOV devices and non-SR-IOV devices, PCI-SIG-compliant MR-IOV devices, multi-queue NICs, I/O adapters, converged NICs, and converged network adapters (CNAs).
[0099] In one example embodiment, let us assume that three dedicated virtual links exist in each host 1314 and host 1316 between DB instances and VIOs. In host 1314, DB instance 1302 creates link 1, DB instance 1304 creates link 2 and DB instance 1306 creates link 3. In host 1316, DB instance 1308 creates link 4, DB instance 1310 creates link 5 and DB instance 1312 creates link 6. Within host 1314, information from DB instances 1302, 1304 and 1306 to VIOs 1322A/B may be multiplexed to one or more physical links 1320 for transmission to host 1316. It is appreciated that information from any one or more of the DB instances may be multiplexed and/or transmitted across one or more physical links. Upon receipt at host 1316, the transmission may be demultiplexed and sent to a respective one or more of DB instances 1308, 1310 and 1312.
[00100] FIGS. 14A and 14B illustrate example flow diagrams of constructing virtual links for container and database instances. With reference to FIG. 14A, the flow diagram relates to application containers communicating via a direct communication link. At 1402, one or more first dedicated virtual links 1-6 are constructed (for example, by the individual containers 1002, 1004 and 1006) for direct application container level communication between one or more first application containers. At 1404, data may then be communicated between the one or more first application containers 1002, 1004 and 1006 via the corresponding one or more first dedicated virtual links 1-6, where each of the one or more first dedicated virtual links 1-6 is connected to a respective one of the one or more first application containers 1002, 1004 and 1006 at a first end and connected to a respective virtual input/output (VIO) at a second end.
[00101] With reference to FIG. 14B, the flow diagram relates to providing direct database to application level communication via a virtual input/output (VIO). At 1406, one or more first dedicated virtual links 1-6 are constructed (for example, by the individual DB instances 1202, 1204 and 1206) for direct application level communication between one or more first DB instances 1202, 1204 and 1206. At 1408, data may then be communicated between the one or more first DB instances 1202, 1204 and 1206 via the corresponding one or more first dedicated virtual links 1-6, where each of the one or more first dedicated virtual links 1-6 is connected to a respective one of the one or more first DB instances 1202, 1204 and 1206 at a first end and connected to a respective virtual input/output (VIO) at a second end.
[00102] FIG. 15 illustrates an embodiment of a node in accordance with embodiments of the disclosure. The node may be, for example, the nodes 108 and 110 (FIG. 1) or any other node or router as described above in the network. The node 1500 may comprise a plurality of input/output ports 1510/1530 and/or receivers (Rx) 1512 and transmitters (Tx) 1532 for receiving and transmitting data from other nodes, and a processing system or processor 1520 (or content aware unit), including a storage 1522 and programmable content forwarding plane 1528, to process data and determine which node to send the data to. The node 1500 may also receive application data (payload) as described above.
[00103] Although illustrated as a single processor, the processor 1520 is not so limited and may comprise multiple processors. The processor 1520 may be implemented as one or more central processing unit (CPU) chips, cores (e.g., a multi-core processor), field-programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), and/or digital signal processors (DSPs), and/or may be part of one or more ASICs. The processor 1520 may be configured to implement any of the schemes described herein, such as the processes illustrated in FIGS. 4A/B, 8 and 14, using any one or combination of steps described in the embodiments. Moreover, the processor 1520 may be implemented using hardware, software, or both.
[00104] The storage 1522 (or memory) may include cache 1524, long-term storage 1526 and database cluster communication module 1528 and may be configured to store routing tables, forwarding tables, or other tables or information disclosed herein. Although illustrated as a single storage, storage 1522 may be implemented as a combination of read only memory (ROM), random access memory (RAM), or secondary storage (e.g., one or more disk drives or tape drives used for non-volatile storage of data).
[00105] The inclusion of the database cluster communication module 1528 provides an improvement to the functionality of node 1500. The database cluster communication module 1528 also effects a transformation of node 1500 to a different state. Alternatively, the database cluster communication module 1528 is implemented as instructions stored in the processor 1520.
[00106] FIG. 16 is a block diagram of a network system that can be used to implement various embodiments. Specific devices may utilize all of the components shown, or only a subset of the components, and levels of integration may vary from device to device. Furthermore, a device may contain multiple instances of a component, such as multiple processing units, processors, memories, transmitters, receivers, etc. The network system may comprise a processing unit 1601 equipped with one or more input/output devices, such as network interfaces, storage interfaces, and the like. The processing unit 1601 may include a central processing unit (CPU) 1610, a memory 1620, a mass storage device 1630, and an I/O interface 1660 connected to a bus. The bus may be one or more of any type of several bus architectures including a memory bus or memory controller, a peripheral bus or the like.
[00107] The CPU 1610 may comprise any type of electronic data processor. The memory 1620 may comprise any type of system memory such as static random access memory (SRAM), dynamic random access memory (DRAM), synchronous DRAM (SDRAM), read-only memory (ROM), a combination thereof, or the like. In an embodiment, the memory 1620 may include ROM for use at boot-up, and DRAM for program and data storage for use while executing programs. In embodiments, the memory 1620 is non-transitory. The mass storage device 1630 may comprise any type of storage device configured to store data, programs, and other information and to make the data, programs, and other information accessible via the bus. The mass storage device 1630 may comprise, for example, one or more of a solid state drive, hard disk drive, a magnetic disk drive, an optical disk drive, or the like.
[00108] The processing unit 1601 also includes one or more network interfaces 1650, which may comprise wired links, such as an Ethernet cable or the like, and/or wireless links to access nodes or one or more networks 1680. The network interface 1650 allows the processing unit 1601 to communicate with remote units via the networks 1680. For example, the network interface 1650 may provide wireless communication via one or more transmitters/transmit antennas and one or more receivers/receive antennas. In an embodiment, the processing unit 1601 is coupled to a local-area network or a wide-area network for data processing and communications with remote devices, such as other processing units, the Internet, remote storage facilities, or the like.
[00109] FIG. 17 illustrates a block diagram in accordance with the disclosed technology. A receiver/transmitter module 1702 receives and transmits one or more application payloads corresponding to one or more applications residing on a client. A terminating module 1704 terminates the transport layer protocol and reads the application payload associated with a current session. A preparer module 1706 prepares header information, including application specific information for each of the received applications, for insertion into a corresponding one of the application payloads. An encrypting/decrypting module 1708 encrypts/decrypts the application payloads, including the header information, for transmission in the network via a single virtual communication link. A multiplexing/demultiplexing module 1710 multiplexes/demultiplexes the application payloads such that they may be transmitted across a single communication channel (the virtual communication link). Finally, a virtual I/O module 1712 allows application containers to directly communicate with each other using a virtual input/output (VIO) that resides on a host.
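The transmit path through modules 1704-1710 can be sketched end to end as follows. This is a minimal illustration under stated assumptions: the three-field shim header layout (version, application id, payload length), the function names, and the XOR placeholder standing in for the real crypto session are all inventions of the sketch, not details from the disclosure or its claims.

```python
import struct
from itertools import cycle

def prepare_header(app_id, payload):
    # Illustrative shim header: 1-byte version, 2-byte app id,
    # 4-byte payload length (network byte order), 7 bytes total.
    return struct.pack("!BHI", 1, app_id, len(payload))

def multiplex(payloads):
    # Frame each application payload with its shim header and
    # concatenate, so all applications share one channel.
    return b"".join(prepare_header(app_id, p) + p for app_id, p in payloads)

def encrypt(data, key):
    # Toy XOR "cipher" standing in for the single crypto session;
    # a real implementation would use an authenticated cipher.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

payloads = [(10, b"netconf-data"), (11, b"snmp-data")]
ciphertext = encrypt(multiplex(payloads), b"key")

# Receive side reverses the steps: decrypt, then demultiplex by header.
plaintext = encrypt(ciphertext, b"key")          # XOR is its own inverse
version, app_id, length = struct.unpack("!BHI", plaintext[:7])
print(app_id, plaintext[7:7 + length])           # 10 b'netconf-data'
```

The sketch shows why the receiver needs no per-application TCP session: the shim header alone identifies which application each payload belongs to inside the one encrypted stream.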
[00110] The disclosed technology allows multiple secure applications from the same device to have their application payloads multiplexed/demultiplexed and transported across a single crypto channel. One or more advantages arise from this technology, including but not limited to: no session establishment overhead or tunnel overhead; reduced overhead from TCP proxy sessions and tunnel payloads; use for control plane traffic with different applications running on the same device; a reduced number of secure session establishments (asymmetric and symmetric); SDN controller to network device communication through a public cloud or the Internet; multiplexing of multiple applications through a single crypto session; no need for a firewall to open multiple ports to support multiple applications; management of networking devices (routers/switches/WiFi/IoT) through the Internet/cloud; and easier management of multiple applications such as NETCONF, SNMP and CAPWAP.
[00111] Other embodiments of the disclosed technology advantageously provide the following non-limiting benefits: VIO reduces the communication latency as it shortens the end-to-end communication path, so it improves the overall performance of the database; and VIO can reduce the total number of concurrent connections among database instances, where there is usually a hard limit on the number of TCP connections one server can set up and send data messages through. This would improve the scalability of the database cluster, which means more database instances can be put into the database cluster and more queries can be concurrently processed by the database cluster. This would also improve the overall database system performance.
[00112] In accordance with various embodiments of the present disclosure, the methods described herein may be implemented using a hardware computer system that executes software programs. Further, in a non-limiting embodiment, implementations can include distributed processing, component/object distributed processing, and parallel processing. Virtual computer system processing can be constructed to implement one or more of the methods or functionalities as described herein, and a processor described herein may be used to support a virtual processing environment.
[00113] Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatuses (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable instruction execution apparatus, create a mechanism for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
[00114] The terminology used herein is for the purpose of describing particular aspects only and is not intended to be limiting of the disclosure. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
[00115] The description of the present disclosure has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the disclosure in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the disclosure. The aspects of the disclosure herein were chosen and described in order to best explain the principles of the disclosure and the practical application, and to enable others of ordinary skill in the art to understand the disclosure with various modifications as are suited to the particular use contemplated.

[00116] For purposes of this document, each process associated with the disclosed technology may be performed continuously and by one or more computing devices. Each step in a process may be performed by the same or different computing devices as those used in other steps, and each step need not necessarily be performed by a single computing device.
[00117] Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims

CLAIMS What is claimed is:
1. A method of transmitting an application payload in a network, comprising: receiving one or more application payloads corresponding to one or more applications residing on a client, the application payload formed from a client request comprising a transport layer protocol;
terminating the transport layer protocol and reading the application payload associated with a current session;
preparing header information including application specific information for each of the received applications for insertion into a corresponding one of the application payloads; and
encrypting the application payloads, including the header information, for transmission in the network via a single virtual communication link.
2. The method of claim 1, wherein the one or more applications comprise one of NETCONF, SNMP and CAPWAP.
3. The method of any of claims 1-2, wherein the header information includes at least one of a version field, operation type field, security session control field, application session field, application ID field and payload length.
4. The method of any of claims 1-3, wherein a shim layer is added between an application payload and a transmission control protocol (TCP)/secure socket layer (SSL) header as the header information.
5. The method of any of claims 1-4, wherein the virtual communication link allows one or more of the applications residing on the client to share a single communication channel by multiplexing the application payloads.
6. The method of any of claims 1-5, wherein the virtual communication link is an encrypted virtual tunnel.
7. A non-transitory computer-readable medium storing computer instructions for transmitting application payloads in a network, that when executed by one or more processors, causes the one or more processors to perform the steps of:
receiving one or more application payloads corresponding to one or more applications residing on a client, the application payload formed from a client request comprising a transport layer protocol;
terminating the transport layer protocol and reading the application payload associated with a current session;
preparing header information including application specific information for each of the received applications for insertion into a corresponding one of the application payloads; and
encrypting the application payloads, including the header information, for transmission in the network via a single virtual communication link.
8. A method for application containers to communicate via a direct communication link, comprising:
constructing one or more first dedicated virtual links for direct application container level communication between one or more first application containers; and communicating data between the one or more first application containers via the corresponding one or more first dedicated virtual links, where each of the one or more first dedicated virtual links is connected to a respective one of the one or more first application containers at a first end and connected to a respective virtual input/output (VIO) at a second end.
9. The method of claim 8, further comprising:
constructing one or more second dedicated virtual links for direct application container level communication between one or more second application containers; and
communicating data between the one or more second application containers via the corresponding one or more second dedicated virtual links, where each of the one or more second dedicated virtual links is connected to a respective one of the one or more second application containers at a first end and connected to a respective virtual input/output (VIO) at a second end.
10. The method of any of claims 8-9, wherein one or more first dedicated virtual links is multiplexed to one or multiple physical links and one or more second dedicated virtual links is multiplexed to the one or more multiple physical links.
11. The method of any of claims 8-10, wherein the VIO interfaces with a host.
12. The method of any of claims 8-11, wherein the data is communicated between the one or more first and second application containers without a bridge.
13. A method for providing direct database to application level communication via a virtual input/output, comprising:
constructing one or more first dedicated virtual links for direct application level communication between one or more first database instances; and
communicating data between the one or more first database instances via the corresponding one or more first dedicated virtual links, where each of the one or more first dedicated virtual links is connected to a respective one of the one or more first database instances at a first end and connected to a respective virtual input/output (VIO) at a second end.
14. The method of claim 13, further comprising:
constructing one or more second dedicated virtual links for direct application level communication between one or more second database instances; and
communicating data between the one or more second database instances via the corresponding one or more second dedicated virtual links, where each of the one or more second dedicated virtual links is connected to a respective one of the one or more second database instances at a first end and connected to a respective virtual input/output (VIO) at a second end.
15. The method of any of claims 13-14, wherein one or more first dedicated virtual links is multiplexed to one or multiple physical links using an input queue and one or more second dedicated virtual links is multiplexed to the one or more multiple physical links using the input queue.
16. The method of any of claims 13-15, wherein the VIO interfaces with a host.
17. The method of any of claims 13-16, wherein the data is communicated between the one or more first and second database instances without relying on transmission control protocol (TCP) sockets.
PCT/US2016/052902 2015-09-21 2016-09-21 Fast and scalable database cluster communication path WO2017053441A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2018515086A JP6511194B2 (en) 2015-09-21 2016-09-21 Fast and scalable database cluster communication path
CN201680051225.7A CN108370280B (en) 2015-09-21 2016-09-21 Fast and extensible database cluster communication path
EP16849515.8A EP3338386A4 (en) 2015-09-21 2016-09-21 Fast and scalable database cluster communication path

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201562221458P 2015-09-21 2015-09-21
US62/221,458 2015-09-21

Publications (1)

Publication Number Publication Date
WO2017053441A1 true WO2017053441A1 (en) 2017-03-30

Family

ID=58387279

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2016/052902 WO2017053441A1 (en) 2015-09-21 2016-09-21 Fast and scalable database cluster communication path

Country Status (4)

Country Link
EP (1) EP3338386A4 (en)
JP (1) JP6511194B2 (en)
CN (2) CN111930832A (en)
WO (1) WO2017053441A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2020533665A (en) * 2017-08-31 2020-11-19 ネットフリックス・インコーポレイテッドNetflix, Inc. An extensible method for executing custom algorithms on media works
US10904342B2 (en) 2018-07-30 2021-01-26 Cisco Technology, Inc. Container networking using communication tunnels
CN113301004A (en) * 2020-06-17 2021-08-24 阿里巴巴集团控股有限公司 Data processing method and device, communication method and single-network-card virtual machine
CN114584621A (en) * 2022-04-18 2022-06-03 中国农业银行股份有限公司 Data sending method and device
WO2023159030A1 (en) * 2022-02-15 2023-08-24 Capital One Services, Llc Methods and systems for linking mobile applications to multi-access point providers using an intermediary database

Families Citing this family (5)

Publication number Priority date Publication date Assignee Title
CN111355601B (en) * 2018-12-21 2022-05-10 华为技术有限公司 Information transmission method and device
CN111953640A (en) * 2019-05-17 2020-11-17 阿里巴巴集团控股有限公司 Communication method, communication system, cloud node and readable storage medium
US11088952B2 (en) * 2019-06-12 2021-08-10 Juniper Networks, Inc. Network traffic control based on application path
CN110995561B (en) * 2019-12-06 2021-05-07 中国科学院信息工程研究所 Virtual network data communication interaction method and system based on container technology
CN114666806A (en) * 2020-12-22 2022-06-24 中国移动通信集团终端有限公司 Method, device, equipment and storage medium for wireless network virtualization

Citations (8)

Publication number Priority date Publication date Assignee Title
EP0923211A2 (en) 1997-12-10 1999-06-16 Radvision Ltd System and method for packet network trunking
US20060253605A1 (en) 2004-12-30 2006-11-09 Prabakar Sundarrajan Systems and methods for providing integrated client-side acceleration techniques to access remote applications
US20120054851A1 (en) 2010-09-01 2012-03-01 Canon Kabushiki Kaisha Systems and methods for multiplexing network channels
US20130018765A1 (en) 2011-07-15 2013-01-17 International Business Machines Corporation Securing applications on public facing systems
US20140047535A1 (en) 2012-08-09 2014-02-13 Vincent E. Parla Multiple application containerization in a single container
US20140136680A1 (en) * 2012-11-09 2014-05-15 Citrix Systems, Inc. Systems and methods for appflow for datastream
US20150074052A1 (en) 2012-10-30 2015-03-12 Vekatachary Srinivasan Method and system of stateless data replication in a distributed database system
US20150244767A1 (en) * 2010-08-12 2015-08-27 Citrix Systems, Inc. Systems and methods for quality of service of ica published applications

Family Cites Families (9)

Publication number Priority date Publication date Assignee Title
EP1790131B1 (en) * 2004-09-09 2012-12-05 Avaya Inc. Methods of and systems for network traffic security
CN101133623B (en) * 2004-12-30 2011-11-16 茨特里克斯系统公司 Systems and methods for providing client-side accelerating technology
CN101557386A (en) * 2008-04-10 2009-10-14 华为技术有限公司 Method and device for sending data and method and device for receiving data
CN101902489B (en) * 2009-06-01 2013-04-17 华为技术有限公司 Message sending method, processing method, client, router and system
US8584120B2 (en) * 2009-11-23 2013-11-12 Julian Michael Urbach Stream-based software application delivery and launching system
JP5428878B2 (en) * 2010-01-12 2014-02-26 日本電気株式会社 Communication path configuration system, system control method, and system control program
US8893010B1 (en) * 2011-07-20 2014-11-18 Google Inc. Experience sharing in location-based social networking
EP3364629B1 (en) * 2012-10-15 2020-01-29 Citrix Systems, Inc. Providing virtualized private network tunnels
CN104331659A (en) * 2014-10-30 2015-02-04 浪潮电子信息产业股份有限公司 Design method for system resource application isolation of critical application host


Non-Patent Citations (1)

Title
See also references of EP3338386A4

Cited By (7)

Publication number Priority date Publication date Assignee Title
JP2020533665A (en) * 2017-08-31 2020-11-19 ネットフリックス・インコーポレイテッドNetflix, Inc. An extensible method for executing custom algorithms on media works
JP7047068B2 (en) 2017-08-31 2022-04-04 ネットフリックス・インコーポレイテッド An extensible technique for executing custom algorithms on media works
US10904342B2 (en) 2018-07-30 2021-01-26 Cisco Technology, Inc. Container networking using communication tunnels
CN113301004A (en) * 2020-06-17 2021-08-24 阿里巴巴集团控股有限公司 Data processing method and device, communication method and single-network-card virtual machine
CN113301004B (en) * 2020-06-17 2023-05-09 阿里巴巴集团控股有限公司 Data processing method, device, communication method and single-network-card virtual machine
WO2023159030A1 (en) * 2022-02-15 2023-08-24 Capital One Services, Llc Methods and systems for linking mobile applications to multi-access point providers using an intermediary database
CN114584621A (en) * 2022-04-18 2022-06-03 中国农业银行股份有限公司 Data sending method and device

Also Published As

Publication number Publication date
JP6511194B2 (en) 2019-05-15
JP2018536316A (en) 2018-12-06
EP3338386A4 (en) 2018-10-24
EP3338386A1 (en) 2018-06-27
CN108370280B (en) 2020-09-11
CN108370280A (en) 2018-08-03
CN111930832A (en) 2020-11-13

Similar Documents

Publication Publication Date Title
CN108370280B (en) Fast and extensible database cluster communication path
US11411995B2 (en) Infrastructure level LAN security
CN110838975B (en) Secure forwarding of tenant workloads in virtual networks
US8713305B2 (en) Packet transmission method, apparatus, and network system
EP3675432A1 (en) Intelligent and dynamic overlay tunnel formation via automatic discovery of citrivity/sdwan peer in the datapath in a pure plug and play environment with zero networking configuration
US10250571B2 (en) Systems and methods for offloading IPSEC processing to an embedded networking device
US9596077B2 (en) Community of interest-based secured communications over IPsec
US11902264B2 (en) Path selection for data packets encrypted based on an IPSEC protocol
CN110838992B (en) System and method for transferring packets between kernel modules in different network stacks
US11316837B2 (en) Supporting unknown unicast traffic using policy-based encryption virtualized networks
EP3955530A1 (en) Managing network ports in a virtualization environment
CN113383528A (en) System and apparatus for enhanced QOS, bootstrapping, and policy enforcement for HTTPS traffic via intelligent inline path discovery of TLS termination nodes
US11936613B2 (en) Port and loopback IP addresses allocation scheme for full-mesh communications with transparent TLS tunnels
US20230143157A1 (en) Logical switch level load balancing of l2vpn traffic
US20220231993A1 (en) Security association bundling for an interface
Tsugawa On the design, performance, and management of virtual networks for grid computing

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 16849515
Country of ref document: EP
Kind code of ref document: A1

ENP Entry into the national phase
Ref document number: 2018515086
Country of ref document: JP
Kind code of ref document: A

NENP Non-entry into the national phase
Ref country code: DE

WWE Wipo information: entry into national phase
Ref document number: 2016849515
Country of ref document: EP