WO2017148512A1 - Data center managed connectivity - Google Patents

Data center managed connectivity

Info

Publication number
WO2017148512A1
Authority
WO
WIPO (PCT)
Prior art keywords
identifier
address
endpoint
correlation information
host
Prior art date
Application number
PCT/EP2016/054389
Other languages
French (fr)
Inventor
Istvan Nagy
Peter Hegyi
Daniel Urban
Peter SPANYI
Original Assignee
Nokia Solutions And Networks Oy
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nokia Solutions And Networks Oy
Priority to PCT/EP2016/054389
Publication of WO2017148512A1


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L61/00 Network arrangements, protocols or services for addressing or naming
    • H04L61/09 Mapping addresses
    • H04L61/25 Mapping addresses of the same type
    • H04L61/2503 Translation of Internet protocol [IP] addresses
    • H04L61/2521 Translation architectures other than single NAT servers
    • H04L61/2525 Translation at a client
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533 Hypervisors; Virtual machine monitors
    • G06F9/45558 Hypervisor-specific management and integration aspects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L61/00 Network arrangements, protocols or services for addressing or naming
    • H04L61/09 Mapping addresses
    • H04L61/10 Mapping addresses of different types
    • H04L61/103 Mapping addresses of different types across network layers, e.g. resolution of network layer into physical layer addresses or address resolution protocol [ARP]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L61/00 Network arrangements, protocols or services for addressing or naming
    • H04L61/45 Network directories; Name-to-address mapping
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/14 Session management
    • H04L67/146 Markers for unambiguous identification of a particular session, e.g. session cookie or URL-encoding
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/50 Network services
    • H04L67/60 Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • H04L67/63 Routing a service request depending on the request content or context
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533 Hypervisors; Virtual machine monitors
    • G06F9/45558 Hypervisor-specific management and integration aspects
    • G06F2009/45595 Network integration; Enabling network access in virtual machine instances

Definitions

  • the present invention relates to an apparatus, a method, and a computer program product related to connectivity. More particularly, the present invention relates to an apparatus, a method, and a computer program product related to connectivity in a data center.
  • requires communication between the orchestrator software and the application components (for sharing connectivity information),
  • One approach to distribute configuration information is to execute scripts/recipes after the application is deployed, and pass the required addresses to the application instance(s). This can be done by creating/patching configuration files: Nokia's cloud application manager's recipe executor is often used for this purpose when deploying cloud applications to an OpenStack IaaS cloud.
  • Another approach is to apply a publish/subscribe or service discovery function on which applications can rely to configure themselves.
  • the disadvantage of these solutions is that both the management layer and the managed application are involved if reconfiguration is needed due to application topology dependency changes (e.g. failover, scale in or out). This adds to overall system complexity on both the management and application sides.
  • For local communication within the same (physical or virtual) machine, UNIX systems use a special type of socket, the domain socket, which needs a path to a local file for the endpoints to find each other.
  • Some examples mimic the use of domain sockets for communicating with peers running on different machines by establishing an SSH tunnel underneath. (http://skife.org/go/2013/02/08/rpc_with_ssh_and_domain_sockets.html)
  • A disadvantage of this solution is that applications must use non-transparent paths, e.g. remote:/tmp/socket; also, this solution stays at a very low level and does not solve the problem of how to manage these sockets together with the lifecycle of the applications.
  • Another approach is to use a patched SSH server to be able to forward packets between the UNIX domain socket and a remote SSH session. (http://www.25thandclement.com/~william/projects/streamlocal.html)
  • DNS may also be used to map logical names to physical addresses, which in some cases can provide location transparency.
  • Relying on DNS records may have disadvantages: they may be cached too long, not following topology/address changes fast enough.
  • resolving port numbers would require special DNS requests (for SRV records) which is difficult to handle with standard socket libraries.
  • DNS is effectively just a tool for resolving names to host IPs.
  • Another limitation is that some types of applications expect IP addresses instead of hostnames. The same may apply for firewalls.
  • Network overlays are often used in data centers for providing transparent connectivity. The following statements are based on the documents referred below.
  • Network overlays are virtual networks of interconnected nodes that share an underlying physical network, allowing deployment of applications that require specific network topologies without the need to modify the underlying network.
  • VXLAN: Virtual Extensible LAN
  • GRE: Generic Routing Encapsulation
  • L3 routed networks may be run pervasively throughout the data center, or even L2 services can be extended across a routed topology.
  • Processing overhead - encapsulation requires more processing, thus decreasing the throughput of the data link, which may not be acceptable for some applications.
  • Encapsulation overhead - the additional headers required for encapsulation increase the frame sizes, thus reducing the Maximum Transmission Unit (MTU) size and causing more fragmentation.
  • an apparatus comprising intercepting means adapted to intercept an attempt to open a socket between an opening host and a peer, wherein the attempt comprises an identifier of the peer; determining means adapted to determine an address of an endpoint host based on the identifier and a correlation information correlating the identifier and the address; controlling means adapted to control a transmitting device to transmit the attempt to the address.
  • the identifier may be one of a first internet protocol address, a globally unique identifier, and a tree path.
  • the identifier may be the first internet protocol address and the address may be a second internet protocol address different from the first internet protocol address.
  • the identifier may be the globally unique identifier and the correlation information may comprise a domain identified by the identifier.
  • the apparatus may comprise selecting means adapted to select the address based on a communication pattern, wherein the correlation information may correlate plural mutually different addresses with the identifier, and one of the plural mutually different addresses may be the address.
  • the attempt may comprise an indication of the communication pattern.
  • the apparatus may further comprise the opening host, wherein the opening host may be adapted to run an invoking application; wherein the invoking application may be configured with the identifier of the peer and adapted to trigger the attempt comprising the identifier.
  • the apparatus may further comprise at least one of querying means adapted to query the correlation information from a first control device; and storing means adapted to store the correlation information received from a second control device.
  • each of the apparatuses belongs to a same data center; and each of the apparatuses stores a same correlation information.
  • an apparatus comprising providing means adapted to provide a correlation information correlating an identifier and an address of an endpoint host to an opening host, wherein the endpoint host and the opening host belong to a data center.
  • the identifier may be one of a first internet protocol address, a globally unique identifier, and a tree path.
  • the apparatus may further comprise monitoring means adapted to monitor if a query for the correlation information is received, wherein the query may comprise the identifier; and the providing means may be adapted to provide the correlation information in response to the query.
  • the providing means may be adapted to provide the correlation information to at least two opening hosts belonging to the data center.
  • an apparatus comprising reading means adapted to read a respective correlation information stored in each of plural hosts of a data center, wherein each of the correlation informations comprises a correlation of an identifier to a respective endpoint address; checking means adapted to check if the endpoint addresses of the correlation informations are the same; notifying means adapted to notify that the endpoint addresses are not the same if at least one of the endpoint addresses is different from the other endpoint addresses, wherein the other endpoint addresses are the same endpoint address.
  • the apparatus may further comprise replacing means adapted to replace the at least one of the endpoint addresses by the other endpoint address.
  • an apparatus comprising intercepting circuitry configured to intercept an attempt to open a socket between an opening host and a peer, wherein the attempt comprises an identifier of the peer; determining circuitry configured to determine an address of an endpoint host based on the identifier and a correlation information correlating the identifier and the address; controlling circuitry configured to control a transmitting device to transmit the attempt to the address.
  • the identifier may be one of a first internet protocol address, a globally unique identifier, and a tree path.
  • the identifier may be the first internet protocol address and the address may be a second internet protocol address different from the first internet protocol address.
  • the identifier may be the globally unique identifier and the correlation information may comprise a domain identified by the identifier.
  • the apparatus may comprise selecting circuitry configured to select the address based on a communication pattern, wherein the correlation information may correlate plural mutually different addresses with the identifier, and one of the plural mutually different addresses may be the address.
  • the attempt may comprise an indication of the communication pattern.
  • the apparatus may further comprise the opening host, wherein the opening host may be configured to run an invoking application; wherein the invoking application may be configured with the identifier of the peer and configured to trigger the attempt comprising the identifier.
  • the apparatus may further comprise at least one of querying circuitry configured to query the correlation information from a first control device; and storing circuitry configured to store the correlation information received from a second control device.
  • each of the apparatuses belongs to a same data center; and each of the apparatuses stores a same correlation information.
  • an apparatus comprising providing circuitry configured to provide a correlation information correlating an identifier and an address of an endpoint host to an opening host, wherein the endpoint host and the opening host belong to a data center.
  • the identifier may be one of a first internet protocol address, a globally unique identifier, and a tree path.
  • the apparatus may further comprise monitoring circuitry configured to monitor if a query for the correlation information is received, wherein the query may comprise the identifier; wherein the providing circuitry may be configured to provide the correlation information in response to the query.
  • the providing circuitry may be configured to provide the correlation information to at least two opening hosts belonging to the data center.
  • an apparatus comprising reading circuitry configured to read a respective correlation information stored in each of plural hosts of a data center, wherein each of the correlation informations comprises a correlation of an identifier to a respective endpoint address; checking circuitry configured to check if the endpoint addresses of the correlation informations are the same; notifying circuitry configured to notify that the endpoint addresses are not the same if at least one of the endpoint addresses is different from the other endpoint addresses, wherein the other endpoint addresses are the same endpoint address.
  • the apparatus may further comprise replacing circuitry configured to replace the at least one of the endpoint addresses by the other endpoint address.
  • according to a ninth aspect of the invention, there is provided a method comprising intercepting an attempt to open a socket between an opening host and a peer, wherein the attempt comprises an identifier of the peer; determining an address of an endpoint host based on the identifier and a correlation information correlating the identifier and the address; controlling a transmitting device to transmit the attempt to the address.
  • the identifier may be one of a first internet protocol address, a globally unique identifier, and a tree path.
  • the identifier may be the first internet protocol address and the address may be a second internet protocol address different from the first internet protocol address.
  • the identifier may be the globally unique identifier and the correlation information may comprise a domain identified by the identifier.
  • the method may further comprise selecting the address based on a communication pattern, wherein the correlation information may correlate plural mutually different addresses with the identifier, and one of the plural mutually different addresses is the address.
  • the attempt may comprise an indication of the communication pattern.
  • the method may further comprise running an invoking application configured with the identifier of the peer; and triggering the attempt comprising the identifier by the invoking application.
  • the method may further comprise at least one of querying the correlation information from a first control device; and storing the correlation information received from a second control device.
  • a method comprising providing a correlation information correlating an identifier and an address of an endpoint host to an opening host, wherein the endpoint host and the opening host belong to a data center.
  • the identifier may be one of a first internet protocol address, a globally unique identifier, and a tree path.
  • the method may further comprise monitoring if a query for the correlation information is received, wherein the query may comprise the identifier; and providing the correlation information in response to the query.
  • the providing may comprise providing the correlation information to at least two opening hosts belonging to the data center.
  • a method comprising reading a respective correlation information stored in each of plural hosts of a data center, wherein each of the correlation informations comprises a correlation of an identifier to a respective endpoint address; checking if the endpoint addresses of the correlation informations are the same; notifying that the endpoint addresses are not the same if at least one of the endpoint addresses is different from the other endpoint addresses, wherein the other endpoint addresses are the same endpoint address.
  • the method may further comprise replacing the at least one of the endpoint addresses by the other endpoint address.
  • Each of the methods of the ninth to eleventh aspects may be a method of datacenter managed connectivity.
  • a computer program product comprising a set of instructions which, when executed on an apparatus, is configured to cause the apparatus to carry out the method according to any of the ninth to eleventh aspects.
  • the computer program product may be embodied as a computer-readable medium or directly loadable into a computer.
  • an apparatus comprising at least one processor, at least one memory including computer program code, wherein the at least one processor, with the at least one memory and the computer program code, is arranged to cause the apparatus to at least perform at least one of the methods according to any of the ninth to eleventh aspects.
  • the data center may be easily segmented.
  • FIG. 1 illustrates an implementation according to an embodiment of the invention
  • Fig. 2 shows an apparatus according to an example embodiment of the invention
  • Fig. 3 shows a method according to an example embodiment of the invention
  • Fig. 4 shows an apparatus according to an example embodiment of the invention
  • Fig. 5 shows a method according to an example embodiment of the invention
  • Fig. 6 shows an apparatus according to an example embodiment of the invention
  • Fig. 7 shows a method according to an example embodiment of the invention.
  • Fig. 8 shows an apparatus according to an example embodiment of the invention.
  • the apparatus is configured to perform the corresponding method, although in some cases only the apparatus or only the method is described.
  • the tasks related to configuration and/or reconfiguration are simplified by hiding the dynamic aspects of the network topology from the application components. This extends the "single computer abstraction" concept to the application level.
  • Some embodiments of the invention provide a transparent socket based communication that works across data center nodes for distributed applications without them being aware of their peer(s)' exact location, for example, the IP address of the node and the port number where remote services are running.
  • a network socket (“socket”) is an endpoint of an inter-process communication, possibly across a computer network.
  • a socket API is an application programming interface (API), usually provided by the operating system, that allows application programs to control and use network sockets.
  • Internet socket APIs are usually (but not necessarily) based on the Berkeley sockets standard.
  • a socket address may be identified by a combination of an IP address and a port number. Based on the socket address, sockets deliver incoming data packets to the appropriate application process or thread.
  • Unix domain sockets provide a similar mechanism for communication within a single machine. Bringing this to the data center level completes the 'single computer' data center abstraction by adding the "data center socket".
  • application descriptors may define multiple communication patterns, such as one-to-many (e.g. load balanced) or one-to-one, which can be applied dynamically based on the number of service instances deployed; many-to-many is also possible (mesh);
  • the network stack in the data center will allow applications to transparently open a connection to any of their peers, without precise information of the remote service's location.
  • the transparency includes "location transparency” in the traditional meaning, according to which the applications may assume that they are communicating over a reliable network.
  • the transparency does not include the traditional "location transparency".
  • applications are still aware that they are communicating over a (potentially unreliable) network.
  • the applications do not need to be aware of how to find out where the remote service is running.
  • Some embodiments of the invention do not have an effect on layer 4 (transport layer) protocols (including UDP, TCP and SCTP) or higher layer protocols. Hence, all of these may still work without changes.
  • Some embodiments of the invention reduce complexity of distributed applications: instead of configuring and reconfiguring applications to network changes, the underlying network is automatically configured so that the connectivity needs of the applications are fulfilled. Connectivity is solved once on the data center level and not inside each and every application that is deployed.
  • some embodiments of the invention implement connectivity in a technology agnostic way, and in many cases without using tunnels or encapsulation. Some embodiments of the invention are implemented in a part of the application descriptors which forms the user-facing interface of the transparent connectivity. Some embodiments of the invention are part of the data center network stack, which provides the transparent connectivity to the applications deployed based on their descriptors.

Descriptors
  • the identifier is unique over the whole data center.
  • in some implementations, uniqueness is verified: it is checked whether the same identifier is correlated to different addresses in different network stacks. In that case, a notification (e.g. an alarm) may be issued. In some embodiments, deviating addresses may be replaced by a same address.
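The consistency check and repair described above can be sketched as follows. The per-host table layout, the function name `check_and_repair`, and the majority-vote repair policy are illustrative assumptions, not taken from the patent:

```python
from collections import Counter

def check_and_repair(per_host_tables):
    """Check correlation information for consistency across hosts.

    per_host_tables: dict mapping host name -> dict mapping peer
    identifier -> endpoint address. Returns a list of notifications
    (identifier, agreed_address) for every identifier whose address
    deviated on at least one host; deviating entries are replaced by
    the address the other hosts agree on (here: the majority value).
    """
    notifications = []
    all_ids = {i for table in per_host_tables.values() for i in table}
    for ident in sorted(all_ids):
        addresses = [t[ident] for t in per_host_tables.values() if ident in t]
        if len(set(addresses)) > 1:
            # At least one endpoint address differs: notify, then repair.
            majority = Counter(addresses).most_common(1)[0][0]
            notifications.append((ident, majority))
            for table in per_host_tables.values():
                if ident in table:
                    table[ident] = majority
    return notifications
```

A second run over the repaired tables returns no notifications, since all hosts then store the same correlation information.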
  • the identifier can be, for example, a fictitious IP address (and port), a GUID (globally unique ID), or a tree path.
  • the application descriptor specifies a fictitious IP address (and port, if applicable). There is no need for a server to actually be available at this IP address (and port), as the IP address (and port) only serve to identify the peer.
  • This solution has the advantage of not requiring change in the implementation of the application, if it already uses an IP address and port to connect to the peer.
  • the IP address (and port) may be static.
  • the IP address may be of IPv4 or IPv6.
  • the second solution, specifying a GUID, makes it easier to avoid collisions (non-uniqueness) between the peer identifiers.
  • however, it requires changes in the application implementation, since applications typically do not currently use GUIDs to open connections (sockets) (see also the next section about the network stack).
  • the third option is useful when services and applications are organized in a tree-like structure where additional edges may express service/application dependencies.
  • the descriptor can include the type of the connection: point to point, point to multipoint, multipoint to point, or multipoint to multipoint.
  • GUID - the descriptor of a reverse proxy specifies that it will connect to the application servers by using the GUID "e95bef78-a78b-4e96-a1f1-6feec79c4b41".
  • Tree/Path - the application server will connect to "../web-app/mysql-cluster/sql-node", which may be directed to any server in a MySQL Cluster with the SQL node role.
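The tree-path variant can be illustrated with a minimal resolver. The service tree, the role names, and the endpoint addresses below are hypothetical, and ".." is interpreted like a relative file-system path:

```python
# Hypothetical service tree: inner dicts are tree nodes, leaf lists hold
# the endpoints currently deployed for that role.
SERVICE_TREE = {
    "app-server": [("10.0.2.1", 8080)],
    "web-app": {
        "mysql-cluster": {
            "sql-node": [("10.0.1.5", 3306), ("10.0.1.6", 3306)],
        },
    },
}

def resolve_tree_path(path, current):
    """Resolve a tree path such as '../web-app/mysql-cluster/sql-node'.

    `current` is the invoking service's own position in the tree;
    '..' climbs one level, any other component descends one level.
    Returns one endpoint of the matching role (here simply the first;
    any server with this role would do).
    """
    location = list(current)
    for part in path.split("/"):
        if part == "..":
            location.pop()
        else:
            location.append(part)
    node = SERVICE_TREE
    for name in location:
        node = node[name]
    return node[0]
```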
  • software images used for running the services do not need to be changed before or after the deployment (e.g. when the network configuration is changed). Also, there is no need for runtime service discovery support in the application instances.
  • the network stack of the data center is modified to control or intercept communication attempts. For every such attempt, it determines the location of the other endpoint (by using stored information of the correlation of the peer identifier(s) in the descriptor(s) and the real location of the currently deployed application(s)/service(s)), and then establishes a route to the other endpoint (i.e., opens a socket). More in detail, the network stack determines the location by searching the peer identifier in the correlation information and retrieving the real location which is correlated to the peer identifier from the correlation information. Selecting the remote endpoint may include applying certain policies that are defined for the actual communication path, where load balancing is an example of such a policy.
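The lookup and policy step can be sketched as below; the class name, table layout, and round-robin policy are illustrative assumptions (the patent only names load balancing as one example policy):

```python
import itertools

class CorrelationInfo:
    """Sketch of the endpoint-determination step: look up a peer
    identifier in the correlation information and apply a policy
    when several endpoints qualify (one-to-many pattern)."""

    def __init__(self, table):
        # table: peer identifier -> list of real endpoint addresses
        self._table = table
        self._rr = {i: itertools.cycle(a) for i, a in table.items()}

    def resolve(self, identifier, policy="round-robin"):
        if identifier not in self._table:
            raise KeyError(f"unknown peer identifier: {identifier}")
        endpoints = self._table[identifier]
        if len(endpoints) == 1 or policy != "round-robin":
            return endpoints[0]
        # Load-balancing policy: hand out the qualifying endpoints in turn.
        return next(self._rr[identifier])
```

A one-to-one identifier always resolves to its single endpoint, while a load-balanced identifier cycles through the deployed instances on successive socket-open attempts.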
  • if the connection identifiers are IP addresses (and ports), the API of the socket() calls need not change. However, if they are GUIDs, a new domain comprising an address family is introduced, called for example AF_DATACENTER or AF_REMOTE.
  • the application implementations will then use this AF (address family) in their socket() calls.
  • the actual interception is implemented by modifying the socket() and related functions.
  • the modified implementations can be installed either by installing a patched libc, or by intercepting dynamic loading and providing the modified functions to the application (with the LD_PRELOAD technique).
  • the OS kernel and/or the libc library may also natively support such functionality, in which case no patching or wrapping is necessary.
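The interception idea, a wrapper that rewrites the destination of a socket call before it reaches the kernel, can be sketched in Python as an analogue of the libc-wrapper/LD_PRELOAD approach. This is not the patented implementation itself; the correlation table is an assumption and would be populated by the per-host agent at runtime:

```python
import socket

# Fictitious (ip, port) -> real (ip, port), filled in by the agent.
CORRELATION = {}

_real_connect = socket.socket.connect

def _intercepted_connect(self, address):
    # Rewrite the destination if it is a known fictitious identifier;
    # otherwise pass the call through unchanged (e.g. external traffic).
    return _real_connect(self, CORRELATION.get(address, address))

socket.socket.connect = _intercepted_connect
```

An application configured with a fictitious address such as 192.0.2.10:80 then connects through the wrapper without being aware of the real endpoint, mirroring how the libc wrapper consults the agent on each socket-open attempt.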
  • the implementation determines the real location of the desired endpoint. To achieve this, it uses the peer identifier from the communication attempt, the information about the peer in the descriptors, and its knowledge about the current state of the data center (i.e., it knows which applications are currently deployed, and how they can be reached (on which component (e.g. host, VM, container, etc.) they are deployed)).
  • Fig. 1 illustrates the main components and interfaces that play together according to some embodiments of the invention to enable distributed applications to communicate without concrete information of the location of their peers.
  • Fig. 1 shows hosts, but the same idea may be applied to virtualized environments, where applications run inside virtual machines or operating system containers. Interception may happen in any or all of the host kernel, host hypervisor, virtual machines or containers.
  • socket calls are intercepted by a socket library wrapper (libc wrapper), but there are other alternatives, too.
  • the descriptor of an application component specifies that it will try to reach a server by using the GUID "6a2a-92".
  • this information is paired (correlated) with the actual location of the deployed applications: this GUID will mean a connection from the respective "opening host" (e.g. Host 2 in Fig. 1) to Host 1 (i.e., to 1.2.3.4:8082).
  • the orchestrator sends this information to the agents on the hosts. This information is propagated to agents before an application component / service instance is started.
  • When the server of Host 1 binds to the GUID "6a2a-92", the libc wrapper of Host 1 (which is already installed on all application hosts) communicates with the agent, gets the information that currently the 1.2.3.4:8082 address is associated with this GUID, and binds to that address. Later, when the application component of Host 2 ("opening host") tries to open a socket to the server of Host 1 (by specifying the domain "AF_DC" and the GUID "6a2a-92"), the libc wrapper of Host 2 communicates with the agent of Host 2, gets the information about the actual endpoint (1.2.3.4:8082), and opens a socket to that address.
  • the application components may remain unchanged, regardless of whether the application component is moved from Host 1 to a different host.
  • Orchestration software may reside inside or outside of the datacenter, on dedicated machines or sharing the same compute nodes with applications.
  • external communications from and to the data center are not affected by this invention and handled the traditional way (socket calls pass through unchanged).
  • the simplicity helps packaging the applications as immutable images, since the configuration does not need to be adapted to different environments (application level configuration might still be needed). Moreover, the application does not need to collaborate with the orchestrator and/or the data center infrastructure to communicate with the correct peers, thus it can reduce application complexity.
  • Error handling and reconfiguration of the application can also be handled transparently: the data center OS simply closes the socket, and all the application has to do is to reopen it, and it will be opened to the correct (possibly new) peer. This can even be used to provide a fail-over mechanism or transparent local load balancing.
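The transparent failover behaviour described above, from the application's point of view, amounts to a simple reopen loop. The helper name and the `resolve` callable are illustrative; `resolve` stands in for the agent lookup, which may return a new endpoint after a failover:

```python
import socket

def open_to_peer(resolve, identifier, attempts=3):
    """Open a socket to the peer named by `identifier`.

    `resolve` maps the identifier to the currently correct real endpoint.
    If the connection fails (e.g. the data center OS closed the socket
    after a failover), the application simply tries again and is connected
    to the correct, possibly new, peer.
    """
    last_error = None
    for _ in range(attempts):
        address = resolve(identifier)  # may return a new endpoint each time
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            sock.connect(address)
            return sock
        except OSError as exc:
            last_error = exc
            sock.close()
    raise last_error
```

The application never handles addresses itself: reopening the socket is its entire part of the failover, which is what keeps the application logic unchanged across topology changes.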
  • Another advantage is that data classification happens at the socket level (OSI layer 4) (instead of the lower VNIC/NIC/IP address level, which is a mix of L2/L3), so there is more precise control over the data flow.
  • the transparent network connectivity service also allows avoiding the use of service discovery mechanisms to connect applications (thus making the applications simpler).
  • the correlation information correlates an identifier of a peer which is used by an application (i.e. an identifier provided in the descriptor) with an identifier of a component (host) on which the peer resides.
  • the correlation information may be a table comprising a column related to different identifiers used by the application for different peers and another column comprising the respective identifiers of the components.
  • the correlation information may be provided in another form, too, e.g. in the form of an ASN.1 string.
  • the form or the syntax of the correlation information is structured.
  • a control entity such as an orchestrator may provide the correlation information to the hosts. For example, it may provide the correlation information when implementing a new application and each time when relevant network configuration is modified.
  • the hosts query the correlation information from the control entity (e.g. orchestrator).
  • a host may query for the correlation information when it tries to open a session and/or a host may query for the correlation information regularly or at certain events.
  • a master control entity provides the correlation information to a number of intermediate control entities on its own motion and/or in response to a query from a respective intermediate control entity.
  • Each of the intermediate control entities provides the correlation information to a number of related hosts on its own motion and/or in response to a query from a respective host.
  • the correlation information may be distributed both on the control entity's own motion and in response to a query from a host (or a lower level intermediate control entity).
  • the control entity may distribute the correlation information to all its related hosts when a new application is implemented or a relevant network configuration is modified.
  • the host may consider that the correlation information distributed by the control entity on its own motion has a certain validity time. After expiry of the validity time (e.g. when a session is to be opened using the correlation information), the host will query the control entity for the correlation information in order to ensure that the host did not miss an update of the correlation information.
  • the correlation information received in response to the query may have another or the same validity period as that received on the control entity's own motion.
  • Fig. 2 shows an apparatus according to an example embodiment of the invention.
  • the apparatus may be a host such as an opening host, or an element thereof such as a wrapper.
  • Fig. 3 shows a method according to an example embodiment of the invention.
  • the apparatus according to Fig. 2 may perform the method of Fig. 3 but is not limited to this method.
  • the method of Fig. 3 may be performed by the apparatus of Fig. 2 but is not limited to being performed by this apparatus.
  • the apparatus comprises intercepting means 10, determining means 20, and controlling means 30.
  • the intercepting means 10, determining means 20, and controlling means 30 may be an intercepting circuitry, determining circuitry, and controlling circuitry, respectively.
  • the intercepting means 10 intercepts an attempt to open a socket between an opening host and a peer (S10).
  • the attempt comprises an identifier of the peer, such as an IP address, a GUID, or a tree path.
  • the determining means 20 determines an address of an endpoint host based on a stored or obtained correlation information and the identifier (S20).
  • the correlation information correlates the identifier and the address.
  • the controlling means 30 controls a transmitting device to transmit the attempt to the address determined by the determining means 20 (S30).
  • the transmitting device may comprise the controlling means 30 or may be separated from the controlling means 30.
  • the controlling means 30 may replace the identifier of the peer in the attempt by the address determined by the determining means 20 before the attempt is transmitted by the opening host.
  • the intercepting means 10 may store the attempt, and the controlling means 30 may replace the identifier of the peer in the stored attempt by the address determined by the determining means 20, and then forward the stored attempt comprising the replaced address.
  • Fig. 4 shows an apparatus according to an example embodiment of the invention.
  • the apparatus may be an orchestrator, or an element thereof.
  • Fig. 5 shows a method according to an example embodiment of the invention.
  • the apparatus according to Fig. 4 may perform the method of Fig. 5 but is not limited to this method.
  • the method of Fig. 5 may be performed by the apparatus of Fig. 4 but is not limited to being performed by this apparatus.
  • the apparatus comprises providing means 110.
  • the providing means 110 may be a providing circuitry.
  • the providing means 110 provides a correlation information correlating an identifier and an address of an endpoint host to an opening host (S110), wherein the endpoint host and the opening host belong to a data center.
  • the providing means 110 may provide the correlation information in response to a query from an opening host, and/or it may provide the correlation information to the opening hosts on its own motion, e.g. if the correlation information is modified.
  • Fig. 6 shows an apparatus according to an example embodiment of the invention.
  • the apparatus may be an orchestrator, or an element thereof.
  • Fig. 7 shows a method according to an example embodiment of the invention.
  • the apparatus according to Fig. 6 may perform the method of Fig. 7 but is not limited to this method.
  • the method of Fig. 7 may be performed by the apparatus of Fig. 6 but is not limited to being performed by this apparatus.
  • the apparatus comprises reading means 210, checking means 220, and notifying means 230.
  • the reading means 210, checking means 220, and notifying means 230 may be a reading circuitry, a checking circuitry, and a notifying circuitry, respectively.
  • the reading means 210 reads a respective correlation information stored in each of plural hosts of a data center (S210). Each of the correlation informations comprises a correlation of an identifier to a respective endpoint address.
  • the checking means 220 checks if the endpoint addresses of the correlation informations are the same (S220).
  • the notifying means 230 notifies that the endpoint addresses are not the same if at least one of the endpoint addresses is different from the other endpoint addresses (S230).
  • Fig. 8 shows an apparatus according to an example embodiment of the invention.
  • the apparatus comprises at least one processor 610 and at least one memory 620 including computer program code, wherein the at least one processor 610, with the at least one memory 620 and the computer program code, is arranged to cause the apparatus to perform at least one of the methods according to Figs. 3, 5, and 7 and the related description.
  • a data center is considered as a collection of one or more computers controlled or managed by a same control entity such as an orchestrator (i.e. a same software stack of an orchestrator) at least with respect to the correlation information.
  • Other properties of the computers of the data center may be managed or controlled by the same control entity or by different control entities.
  • the computers of the data center may be at the same location or at different locations and they may belong to the same LAN or to different LANs.
  • the data center may be realized fully or partly in a cloud, i.e. using shared processing resources such as computers, networks, etc.
  • the shared processing resources are managed by a same control entity at least with respect to the correlation information.
  • One piece of information may be transmitted in one or plural messages from one entity to another entity. Each of these messages may comprise further (different) pieces of information.
  • Names of network elements, protocols, and methods are based on current standards. In other versions or other technologies, the names of these network elements and/or protocols and/or methods may be different, as long as they provide a corresponding functionality.
  • each of the entities described in the present description may be based on different hardware, or some or all of the entities may be based on the same hardware. This does not determine the software: each of the entities described in the present description may be based on different software, or some or all of the entities may be based on the same software.
  • Some example embodiments of the invention may be applied to a data center comprising a number of collocated and interconnected computers. Some embodiments of the invention may be applied to computers which are at mutually remote locations or to a mixture of collocated and remote computers.
  • a data center comprises a same network stack in each component.
  • the data center comprises different network stacks in different components.
  • different network stacks may be used to segment the network. I.e. depending on the involved network stack, a request to open a socket comprising a same peer identifier may be routed to a different component running the same invoked service.
  • a host may be a component of a data center such as a computer (e.g. a personal computer, a server, a laptop, a desktop, a pizza-box, etc.). It may also be some other component such as a VM or a container. It may run any operating system such as UNIX, Windows, LINUX, etc.
  • the computers may be interconnected by any suitable network technology such as LAN, WAN, MAN etc. On the physical layer, the connection may be wired or wireless.
  • Information (such as “correlation information”) may mean one or more pieces of information (such as one or more pieces of correlation information).
  • “Informations” (such as “correlation informations”) may mean plural pieces of information (such as plural pieces of correlation information).
  • example embodiments of the present invention provide, for example, a host such as a computer, or a component thereof, an apparatus embodying the same, a method for controlling and/or operating the same, and computer program(s) controlling and/or operating the same, as well as mediums carrying such computer program(s) and forming computer program product(s).
  • Implementations of any of the above described blocks, apparatuses, systems, techniques, means, entities, units, devices, or methods include, as non-limiting examples, implementations as hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, a virtual machine, or some combination thereof.
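The correlation-information handling outlined in the bullets above (a table mapping the identifiers used by the application to the identifiers of the components on which the peers reside, distribution by the control entity, and a validity time after which a host re-queries) can be sketched as follows. This is a minimal illustration only; the class and callback names are hypothetical and not part of the described embodiments.

```python
import time

class CorrelationStore:
    """Host-side cache of the correlation information.

    The table maps the identifier used by the application (e.g. an IP
    address, a GUID, or a tree path) to the address of the component
    on which the peer resides. Entries distributed by the control
    entity carry a validity time; after expiry the host queries the
    control entity again so that it does not miss an update."""

    def __init__(self, query_control_entity, validity=60.0):
        self._query = query_control_entity  # callback to the orchestrator
        self._validity = validity           # validity time in seconds
        self._table = {}                    # identifier -> (address, expiry)

    def push(self, identifier, address):
        """Entry distributed by the control entity on its own motion."""
        self._table[identifier] = (address, time.time() + self._validity)

    def resolve(self, identifier):
        """Return the endpoint address; re-query on a miss or expiry."""
        entry = self._table.get(identifier)
        if entry is None or entry[1] < time.time():
            address = self._query(identifier)
            self.push(identifier, address)
            return address
        return entry[0]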


Abstract

It is provided a method, comprising intercepting an attempt to open a socket between an opening host and a peer, wherein the attempt comprises an identifier of the peer; determining an address of an endpoint host based on the identifier and a correlation information correlating the identifier and the address; controlling a transmitting device to transmit the attempt to the address.

Description

Data center managed connectivity
Field of the invention
The present invention relates to an apparatus, a method, and a computer program product related to connectivity. More particularly, the present invention relates to an apparatus, a method, and a computer program product related to connectivity in a data center.
Abbreviations
• AF - address family
• API - Application Programming Interface
• DNAT - destination network address translation
• DNS - Domain Name System
• GRE - generic routing encapsulation
• GUID - globally unique identifier
• IP - Internet Protocol
• L2 - Layer 2 (data link layer)
• L3 - Layer 3 (network layer)
• LAN - Local Area Network
• LIBC - C library
• MAN - Metropolitan Area Network
• MTU - Maximum Transmission Unit
• OS - Operating System
• POSIX - Portable Operating System Interface
• SCTP - Stream Control Transmission Protocol
• SQL - Structured Query Language
• SRV - DNS service record
• SSH - Secure Shell
• TCP - Transmission Control Protocol
• UDP - User Datagram Protocol
• VM - Virtual Machine
• VXLAN - Virtual Extensible LAN
• WAN - Wide Area Network

Background of the invention
Current web applications typically run on multiple interconnected hosts in data centers. The different hosts run specific services which other services rely on (e.g. web server, application server, database server, etc.). The scale of the applications can span from just a few nodes (e.g. classical multi-tier applications) to clusters of hundreds or thousands of nodes (e.g. big data). Quick automated provisioning and management of these applications is an essential requirement.
Virtualization technologies enable flexible placement of the application workloads/services across the data center. The granularity of service distribution varies from virtual machines (VM) that host a complete operating system and application stack to containers that may run only a single process (micro-services). In any case, the separate application components need to be aware of how to reach the other services they depend on, i.e. they must possess the peers' IP addresses and port numbers.
To help deploy and manage distributed applications, modern data centers provide a "single computer abstraction" in many contexts. Already available solutions are, for example
• resource allocation (Mesos)
• scheduling (Chronos)
• shared configuration (etcd)
• coordination (ZooKeeper, Consul)
• orchestration (Marathon, Kubernetes, Docker Swarm)
These services together form the software stack of the data center, which spans all of the hosts of the data center and treats them as if they were running in a single computer - from a management point of view.
On the other hand, inter-service or inter-process communication in the data center as a "single computer" is so far unsolved. Network connectivity in data centers still follows a static and rigid approach. To accommodate this situation, distribution of the connectivity information between application components is done mostly by traditional solutions, such as message queues, publish/subscribe mechanisms and service discovery methods.
Applying the obtained settings at host/container-level requires executing configuration scripts/recipes in each component. In case of static application topology, when the locations of the components do not change during runtime, this is an infrequent operation needed only at deployment time. However, in case of more dynamic applications the topology information may need to be updated more frequently: e.g. at service scaling, VM migration or VM respawning. Some types of services may also need to be restarted when their peer IP addresses change.
For deploying applications in a data center, some orchestration type of software is used. Some examples are Mesosphere's Marathon and Nokia Cloud Application Manager. The orchestration software uses application descriptors/templates that describe the structure and the dependencies between components; from this, the communication patterns may be deduced. However, configuring the peer information must be done at component (VM, container) level, once the connection details become available. This means that at deploy time or whenever there is a topology change, both the orchestration software and the involved application components need to take action, in a synchronized manner.
Unfortunately the information of communication patterns that could be inferred from the descriptors/templates is not used by the orchestration software to its full extent, requiring additional configuration steps to be performed.
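As an illustration of how communication patterns may be deduced from descriptors/templates, the sketch below uses a purely hypothetical descriptor format (the field names are invented and are not those of Marathon or the Nokia Cloud Application Manager):

```python
# Hypothetical application descriptor: each component declares the
# peers it depends on, from which the orchestrator can deduce the
# communication pattern (who opens sockets to whom).
DESCRIPTOR = {
    "web": {"image": "web:1.0", "depends_on": ["app"]},
    "app": {"image": "app:1.0", "depends_on": ["db"]},
    "db":  {"image": "db:1.0",  "depends_on": []},
}

def communication_pairs(descriptor):
    """Derive (client, server) pairs from the dependency declarations."""
    return [(name, dep)
            for name, spec in descriptor.items()
            for dep in spec["depends_on"]]
```

From such pairs an orchestrator could, for example, pre-compute the correlation information each opening host needs, instead of requiring extra configuration steps.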
The above described configuration/reconfiguration mechanism adds to overall system complexity, because:
• requires communication between the orchestrator software and the application components (for sharing connectivity information),
• requires application-level setting of the peer information and updating it when the topology is changed; taking into account the dependencies, executing this action needs to be coordinated - typically by the orchestration software.
One approach to distribute configuration information is to execute scripts/recipes after the application is deployed, and pass the required addresses to the application instance(s). This can be done by creating/patching configuration files: Nokia's cloud application manager's recipe executor is often used for this purpose when deploying cloud applications to an OpenStack IaaS cloud.
Another approach is to apply a publish/subscribe or service discovery function on which applications can rely to configure themselves. The disadvantage of these solutions is that both the management layer and the managed application are involved if reconfiguration is needed due to application topology dependency changes (e.g. failover, scale in or out). This adds to overall system complexity on both the management and application sides.
For local communication within the same (physical or virtual) machine, UNIX systems use a special type of socket, the domain socket, that needs a path to a local file for the endpoints to find each other. Some examples mimic the use of domain sockets for communicating with peers running on different machines by establishing an SSH tunnel underneath (http://skife.org/go/2013/02/08/rpc-with-ssh-and-domain-sockets.html). A disadvantage of this solution is that applications must use non-transparent paths, e.g. remote:/tmp/socket; also, this solution stays on a very low level and does not solve the problem of how to manage these sockets together with the lifecycle of the applications.
Another approach is to use a patched SSH server to be able to forward packets between the UNIX domain socket and a remote SSH session (http://www.25thandclement.com/~william/projects/streamlocal.html).
DNS may also be used to map logical names to physical addresses, which in some cases can provide location transparency. Relying on DNS records may have disadvantages: they may be cached too long, not following topology/address changes fast enough. In case of load balancers depending on a client-side DNS server, resolving port numbers would require special DNS requests (for SRV records), which are difficult to handle with standard socket libraries. (Either the software explicitly supports SRV records, or DNS is effectively just a tool for resolving names to host IPs.) Another limitation is that some types of applications expect IP addresses instead of hostnames. The same may apply for firewalls.
Network overlays are often used in data centers for providing transparent connectivity. The following statements are based on the documents referred below.
(http://www.cisco.com/c/en/us/products/collateral/switches/nexus-9000-series-switches/white-paper-c11-730116.html)
Network overlays are virtual networks of interconnected nodes that share an underlying physical network, allowing deployment of applications that require specific network topologies without the need to modify the underlying network. Several data encapsulation formats have been developed for the data center, including Virtual Extensible LAN (VXLAN) or Generic Routing Encapsulation (GRE). Overlays offer several benefits:
• Decoupling the network provided to the application components from the underlying data center network. L3 routed networks may be run pervasively throughout the data center, or even L2 services can be extended across a routed topology.
• Overlapping addressing: the same IP addresses can be used in separate instances of the same application - a single networking configuration is enough for all instances, which is independent from the underlying network.
• Scalability, flexibility: the overlay virtual network is not constrained to fixed location of its nodes.
However, network overlays also have some drawbacks:
(http://www.networkcomputing.com/networking/network-overlays-an-introduction/d/d-id/1234011?page_number=2)
• Decreased visibility of the network fabric, causing more complexity in troubleshooting.
• Processing overhead - encapsulation requires more processing, thus decreases the throughput of the data link, which may not be acceptable for some applications.
• Encapsulation overhead: The additional headers required for encapsulation increase the frame sizes, thus reducing the Maximum Transmission Unit (MTU) size and causing more fragmentation.
• Interoperability issues with load balancers and firewalls (due to the extra headers that make the packets opaque to these devices).
Summary of the invention
It is an object of the present invention to improve the prior art.
According to a first aspect of the invention, there is provided an apparatus, comprising intercepting means adapted to intercept an attempt to open a socket between an opening host and a peer, wherein the attempt comprises an identifier of the peer; determining means adapted to determine an address of an endpoint host based on the identifier and a correlation information correlating the identifier and the address; controlling means adapted to control a transmitting device to transmit the attempt to the address.
The identifier may be one of a first internet protocol address, a globally unique identifier, and a tree path. The identifier may be the first internet protocol address and the address may be a second internet protocol address different from the first internet protocol address.
The identifier may be the globally unique identifier and the correlation information may comprise a domain identified by the identifier.
The apparatus may comprise selecting means adapted to select the address based on a communication pattern, wherein the correlation information may correlate plural mutually different addresses with the identifier, and one of the plural mutually different addresses may be the address.
The attempt may comprise an indication of the communication pattern.
The apparatus may further comprise the opening host, wherein the opening host may be adapted to run an invoking application; wherein the invoking application may be configured with the identifier of the peer and adapted to trigger the attempt comprising the identifier.
The apparatus may further comprise at least one of querying means adapted to query the correlation information from a first control device; and storing means adapted to store the correlation information received from a second control device.
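A minimal sketch of the first aspect, assuming a POSIX-style socket API as exposed by Python's socket module: the connect attempt is intercepted, the peer identifier in the attempt is looked up in the correlation information, and the attempt is transmitted to the determined endpoint address. The CORRELATION table and the ManagedSocket name are illustrative only, not part of the claimed apparatus.

```python
import socket

# Hypothetical correlation information distributed by the control
# entity: the identifier used by the application (left) is correlated
# with the address of the endpoint host running the peer (right).
CORRELATION = {
    ("10.0.0.5", 5432): ("192.168.1.20", 5432),   # database peer
    ("10.0.0.6", 8080): ("192.168.1.21", 8080),   # app-server peer
}

class ManagedSocket(socket.socket):
    """Socket whose connect() is intercepted: the peer identifier in
    the attempt is replaced by the address found in the correlation
    information before the attempt is transmitted."""

    def connect(self, address):
        # Determine the endpoint address; fall through unchanged if
        # the identifier is not in the correlation information.
        endpoint = CORRELATION.get(address, address)
        return super().connect(endpoint)
```

The application keeps using its configured peer identifier; only the intercepting layer consults the correlation information, so no service discovery support is needed in the application itself.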
According to a second aspect of the invention, there are provided plural apparatuses according to the first aspect, wherein each of the apparatuses belongs to a same data center; and each of the apparatuses stores a same correlation information.
According to a third aspect of the invention, there is provided an apparatus, comprising providing means adapted to provide a correlation information correlating an identifier and an address of an endpoint host to an opening host, wherein the endpoint host and the opening host belong to a data center.
The identifier may be one of a first internet protocol address, a globally unique identifier, and a tree path.
The apparatus may further comprise monitoring means adapted to monitor if a query for the correlation information is received, wherein the query may comprise the identifier; and the providing means may be adapted to provide the correlation information in response to the query.
The providing means may be adapted to provide the correlation information to at least two opening hosts belonging to the data center.
According to a fourth aspect of the invention, there is provided an apparatus, comprising reading means adapted to read a respective correlation information stored in each of plural hosts of a data center, wherein each of the correlation informations comprises a correlation of an identifier to a respective endpoint address; checking means adapted to check if the endpoint addresses of the correlation informations are the same; notifying means adapted to notify that the endpoint addresses are not the same if at least one of the endpoint addresses is different from the other endpoint addresses, wherein the other endpoint addresses are the same endpoint address.
The apparatus may further comprise replacing means adapted to replace the at least one of the endpoint addresses by the other endpoint address.
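The reading, checking, notifying, and replacing of the fourth aspect can be sketched as follows, assuming each host's correlation information is available as a mapping; the function and variable names are hypothetical. The address shared by the agreeing hosts is taken here as the majority value:

```python
from collections import Counter

def check_correlations(host_tables, identifier):
    """Read the correlation information stored in each host, check
    whether all hosts correlate `identifier` with the same endpoint
    address, and repair deviating hosts.

    Returns the agreed endpoint address and the list of deviating
    hosts (the "notification")."""
    addresses = {h: t.get(identifier) for h, t in host_tables.items()}
    # The address held by the agreeing hosts (majority value).
    majority, _ = Counter(addresses.values()).most_common(1)[0]
    deviating = [h for h, a in addresses.items() if a != majority]
    # "Replacing means": overwrite the deviating entries.
    for h in deviating:
        host_tables[h][identifier] = majority
    return majority, deviating
```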
According to a fifth aspect of the invention, there is provided an apparatus, comprising intercepting circuitry configured to intercept an attempt to open a socket between an opening host and a peer, wherein the attempt comprises an identifier of the peer; determining circuitry configured to determine an address of an endpoint host based on the identifier and a correlation information correlating the identifier and the address; controlling circuitry configured to control a transmitting device to transmit the attempt to the address.
The identifier may be one of a first internet protocol address, a globally unique identifier, and a tree path.
The identifier may be the first internet protocol address and the address may be a second internet protocol address different from the first internet protocol address.
The identifier may be the globally unique identifier and the correlation information may comprise a domain identified by the identifier.
The apparatus may comprise selecting circuitry configured to select the address based on a communication pattern, wherein the correlation information may correlate plural mutually different addresses with the identifier, and one of the plural mutually different addresses may be the address.
The attempt may comprise an indication of the communication pattern.
The apparatus may further comprise the opening host, wherein the opening host may be configured to run an invoking application; wherein the invoking application may be configured with the identifier of the peer and configured to trigger the attempt comprising the identifier.
The apparatus may further comprise at least one of querying circuitry configured to query the correlation information from a first control device; and storing circuitry configured to store the correlation information received from a second control device.
According to a sixth aspect of the invention, there are provided plural apparatuses according to the fifth aspect, wherein each of the apparatuses belongs to a same data center; and each of the apparatuses stores a same correlation information.
According to a seventh aspect of the invention, there is provided an apparatus, comprising providing circuitry configured to provide a correlation information correlating an identifier and an address of an endpoint host to an opening host, wherein the endpoint host and the opening host belong to a data center.
The identifier may be one of a first internet protocol address, a globally unique identifier, and a tree path.
The apparatus may further comprise monitoring circuitry configured to monitor if a query for the correlation information is received, wherein the query may comprise the identifier; wherein the providing circuitry may be configured to provide the correlation information in response to the query.
The providing circuitry may be configured to provide the correlation information to at least two opening hosts belonging to the data center.
According to an eighth aspect of the invention, there is provided an apparatus, comprising reading circuitry configured to read a respective correlation information stored in each of plural hosts of a data center, wherein each of the correlation informations comprises a correlation of an identifier to a respective endpoint address; checking circuitry configured to check if the endpoint addresses of the correlation informations are the same; notifying circuitry configured to notify that the endpoint addresses are not the same if at least one of the endpoint addresses is different from the other endpoint addresses, wherein the other endpoint addresses are the same endpoint address.
The apparatus may further comprise replacing circuitry configured to replace the at least one of the endpoint addresses by the other endpoint address.
According to a ninth aspect of the invention, there is provided a method, comprising intercepting an attempt to open a socket between an opening host and a peer, wherein the attempt comprises an identifier of the peer; determining an address of an endpoint host based on the identifier and a correlation information correlating the identifier and the address; controlling a transmitting device to transmit the attempt to the address.
The identifier may be one of a first internet protocol address, a globally unique identifier, and a tree path.
The identifier may be the first internet protocol address and the address may be a second internet protocol address different from the first internet protocol address.
The identifier may be the globally unique identifier and the correlation information may comprise a domain identified by the identifier.
The method may further comprise selecting the address based on a communication pattern, wherein the correlation information may correlate plural mutually different addresses with the identifier, and one of the plural mutually different addresses is the address.
The attempt may comprise an indication of the communication pattern.
The method may further comprise running an invoking application configured with the identifier of the peer; and triggering the attempt comprising the identifier by the invoking application.
The method may further comprise at least one of querying the correlation information from a first control device; and storing the correlation information received from a second control device.
According to a tenth aspect of the invention, there is provided a method, comprising providing a correlation information correlating an identifier and an address of an endpoint host to an opening host, wherein the endpoint host and the opening host belong to a data center.
The identifier may be one of a first internet protocol address, a globally unique identifier, and a tree path.
The method may further comprise monitoring if a query for the correlation information is received, wherein the query may comprise the identifier; and providing the correlation information in response to the query.
The providing may be adapted to provide the correlation information to at least two opening hosts belonging to the data center.
According to an eleventh aspect of the invention, there is provided a method, comprising reading a respective correlation information stored in each of plural hosts of a data center, wherein each of the correlation informations comprises a correlation of an identifier to a respective endpoint address; checking if the endpoint addresses of the correlation informations are the same; notifying that the endpoint addresses are not the same if at least one of the endpoint addresses is different from the other endpoint addresses, wherein the other endpoint addresses are the same endpoint address.
The method may further comprise replacing the at least one of the endpoint addresses by the other endpoint address.
Each of the methods of the ninth to eleventh aspects may be a method of data center managed connectivity.
According to a twelfth aspect of the invention, there is provided a computer program product comprising a set of instructions which, when executed on an apparatus, is configured to cause the apparatus to carry out the method according to any of the ninth to eleventh aspects. The computer program product may be embodied as a computer-readable medium or directly loadable into a computer.
According to a thirteenth aspect of the invention, there is provided an apparatus comprising at least one processor and at least one memory including computer program code, wherein the at least one processor, with the at least one memory and the computer program code, is arranged to cause the apparatus to perform at least one of the methods according to any of the ninth to eleventh aspects.
According to some example embodiments of the invention, at least one of the following technical effects may be provided:
• The network configuration of software images used for running the services does not need to be changed before or after the deployment.
• No need for runtime service discovery support in the application instances, resulting in reduced complexity of the application.
• No need for collaboration of the application with the orchestrator, resulting in reduced complexity of the application.
• Simpler management of the application.
• Application may be packed as immutable images.
• Simpler, more transparent error handling.
• Precise control over data flow from one application to another.
• Compatibility: In a data center, computers employing an embodiment of the invention and conventional computers may coexist, even if their applications access one or more of the same service(s).
• As a consequence of compatibility: fewer constraints to be considered when upgrading to an implementation of the invention.
• The data center may be easily segmented.
It is to be understood that any of the above modifications can be applied singly or in combination to the respective aspects to which they refer, unless they are explicitly stated as excluding alternatives.

Brief description of the drawings
Further details, features, objects, and advantages are apparent from the following detailed description of example embodiments of the present invention which is to be taken in conjunction with the appended drawings, wherein
Fig. 1 illustrates an implementation according to an embodiment of the invention;
Fig. 2 shows an apparatus according to an example embodiment of the invention;
Fig. 3 shows a method according to an example embodiment of the invention;
Fig. 4 shows an apparatus according to an example embodiment of the invention;
Fig. 5 shows a method according to an example embodiment of the invention;
Fig. 6 shows an apparatus according to an example embodiment of the invention;
Fig. 7 shows a method according to an example embodiment of the invention; and
Fig. 8 shows an apparatus according to an example embodiment of the invention.
Detailed description of certain example embodiments
Herein below, certain example embodiments of the present invention are described in detail with reference to the accompanying drawings, wherein the features of the example embodiments can be freely combined with each other unless otherwise described. However, it is to be expressly understood that the description of certain embodiments is given by way of example only, and that it is in no way intended to limit the invention to the disclosed details.
Moreover, it is to be understood that the apparatus is configured to perform the corresponding method, although in some cases only the apparatus or only the method is described.
According to some embodiments of the invention, the tasks related to configuration and/or reconfiguration are simplified by hiding the dynamic aspects of the network topology from the application components. This extends the "single computer abstraction" concept to the application level.
Some embodiments of the invention provide a transparent socket based communication that works across data center nodes for distributed applications without them being aware of their peer(s)' exact location, for example, the IP address of the node and the port number where remote services are running.
A network socket ("socket") is an endpoint of an inter-process communication, possibly across a computer network. Today, most communication between computers is based on the Internet Protocol. A socket API is an application programming interface (API), usually provided by the operating system, that allows application programs to control and use network sockets. Internet socket APIs are usually (but not necessarily) based on the Berkeley sockets standard. A socket address may be identified by a combination of an IP address and a port number. Based on the socket address, sockets deliver incoming data packets to the appropriate application process or thread.
Unix domain sockets provide a similar mechanism for communication within a single machine. Bringing this to the data center level completes the 'single computer' data center abstraction by adding the "data center socket".
Some example aspects of the invention provide
• enhanced application descriptors/templates that include precise details on communication patterns, i.e. which service will communicate with which other service or services;
◦ application descriptors may define multiple communication patterns, such as one-to-many (e.g. load balanced) or one-to-one, which can be applied dynamically based on the number of service instances deployed; many-to-many is also possible (mesh);
• an API towards the orchestrator/DCOS to manage the lifecycle of distributed sockets across the data center;
• a network stack of the data center that implements this API so that applications do not need to be aware of low level connectivity, especially in the case that the remote service(s) run(s) in the data center;
• optionally, a new address family in the standard POSIX socket API, e.g. AF_DATACENTER.
According to some embodiments of the invention, the network stack in the data center will allow applications to transparently open a connection to any of their peers, without precise information of the remote service's location.
In some embodiments, the transparency includes "location transparency" in the traditional meaning, according to which the applications may assume that they are communicating over a reliable network. In other embodiments, the transparency does not include the traditional "location transparency". In this case, applications are still aware that they are communicating over a (potentially unreliable) network. In each of these embodiments, the applications do not need to be aware of how to find out where the remote service is running.

Some embodiments of the invention do not have an effect on layer 4 (transport layer) protocols (including UDP, TCP and SCTP) or higher layer protocols. Hence, all of these may still work without changes. Some embodiments of the invention reduce the complexity of distributed applications: instead of configuring and reconfiguring applications to network changes, the underlying network is automatically configured so that the connectivity needs of the applications are fulfilled. Connectivity is solved once at the data center level and not inside each and every application that is deployed.
As communication is understood by the data center on the socket level, some embodiments of the invention implement connectivity in a technology agnostic way, and in many cases without using tunnels or encapsulation. Some embodiments of the invention are implemented in a part of the application descriptors which forms the user-facing interface of the transparent connectivity. Some embodiments of the invention are part of the data center network stack, which provides the transparent connectivity to the applications deployed based on their descriptors.

Descriptors
Application descriptors include information about the peers (if any) the application needs to reach. For every peer, a unique identifier is specified, which the application will use to connect to the peer or to listen on for incoming connections. "Unique" means at least that the identifier of the peer is unique over at least two predefined implementations of the network stack providing the transparent connectivity (see below). I.e., for the hosts of these implementations, if a descriptor comprises a particular identifier, it is related to the same peer, regardless of the host. On the other hand, potentially, different identifiers may relate to different peers or to a same peer. I.e., there is an n:1 relationship (n = 1, 2, 3, ...) between identifiers and peers. Preferably, the relationship is 1:1.
Preferably, in order to simplify administration, the identifier is unique over the whole data center. Preferably, uniqueness is verified in an implementation, i.e., it is checked whether a same identifier is correlated to different addresses in different network stacks. In this case, a notification (e.g. an alarm) may be issued. In some embodiments, deviating addresses may be replaced by a same address.
The identifier can be, for example,
• an IP address (with or without a port number);
• a globally unique ID (GUID); or
• a path pointing to a node in a data center level tree.
In the first case, the application descriptor specifies a fictitious IP address (and port, if applicable). There is no need for a server to actually be available at this IP address (and port), as the IP address (and port) only serve to identify the peer. This solution has the advantage of not requiring a change in the implementation of the application, if it already uses an IP address and port to connect to the peer. The IP address (and port) may be static. The IP address may be an IPv4 or an IPv6 address.
The second solution, specifying a GUID, makes it easier to avoid collisions (non-uniqueness) between the peer identifiers. On the other hand, it requires changes in the application implementation, since currently it typically does not use GUIDs to open the connections (sockets) (see also the next section about the network stack).
The third option (path) is useful when services and applications are organized in a tree-like structure where additional edges may express service/application dependencies.
The three options outlined hereinabove are non-exhaustive. I.e., other types of identifiers may be used instead of one of these options if they are understood accordingly by the involved entities.

For every peer, besides the unique identifier, the descriptor can include the type of the connection: point to point, point to multipoint, multipoint to point, or multipoint to multipoint.
The following example implementations of embodiments use a standard web application stack that includes reverse proxies that depend on application servers, which in turn depend on database services.

1. Fixed IP address - the descriptor of an application service specifies that it will connect to the database service at "1.2.3.4:1234", wherein 1.2.3.4 is the IP address and 1234 is the port number.
2. GUID - the descriptor of a reverse proxy specifies that it will connect to the application servers by using the GUID "e95bef78-a78b-4e96-a1f1-6feec79c4b41".
3. Tree/Path - the application server will connect to "../web-app/mysql-cluster/sql-node", which may be directed to any server in a MySQL Cluster with the SQL node role.
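To make the descriptor concept concrete, the following sketch shows how such an application descriptor might be represented as data. The field names (`service`, `peers`, `identifier`, `connection_type`) are illustrative assumptions, not mandated by the text; only the GUID is taken from example 2 above.

```python
# Hypothetical descriptor of the reverse proxy from example 2; all field
# names are illustrative assumptions.
descriptor = {
    "service": "reverse-proxy",
    "peers": [
        {
            # GUID-style identifier (second identifier option above)
            "identifier": "e95bef78-a78b-4e96-a1f1-6feec79c4b41",
            # one proxy connecting to many application servers
            "connection_type": "point-to-multipoint",
        },
    ],
}

def peer_identifiers(desc):
    """Collect the unique peer identifiers an application will use."""
    return [peer["identifier"] for peer in desc["peers"]]
```

A deployment tool could then iterate over `peer_identifiers(descriptor)` to build the correlation information described in the next section.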
In some of these example implementations, software images used for running the services do not need to be changed before or after the deployment (e.g. when the network configuration is changed). Also, there is no need for runtime service discovery support in the application instances.
Data center network stack
The network stack of the data center is modified to control or intercept communication attempts. For every such attempt, it determines the location of the other endpoint (by using stored information of the correlation of the peer identifier(s) in the descriptor(s) and the real location of the currently deployed application(s)/service(s)), and then establishes a route to the other endpoint (i.e., opens a socket). In more detail, the network stack determines the location by searching for the peer identifier in the correlation information and retrieving the real location which is correlated to the peer identifier from the correlation information. Selecting the remote endpoint may include applying certain policies that are defined for the actual communication path, where load balancing is an example of such a policy.
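The lookup-plus-policy step described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the correlation table contents are invented, and round-robin cycling stands in for whatever load-balancing policy a real deployment would define.

```python
import itertools

# Correlation information: peer identifier -> list of real endpoint
# addresses of currently deployed service instances (contents invented).
CORRELATION = {
    "6a2a-92...": [("1.2.3.4", 8082)],
    "db-guid": [("10.0.0.5", 3306), ("10.0.0.6", 3306)],
}

# One round-robin iterator per identifier, as a simple example policy.
_round_robin = {ident: itertools.cycle(addrs)
                for ident, addrs in CORRELATION.items()}

def resolve(identifier):
    """Return the real endpoint for a peer identifier, applying the policy."""
    if identifier not in CORRELATION:
        raise KeyError(f"unknown peer identifier: {identifier}")
    return next(_round_robin[identifier])
```

With a single deployed instance the lookup is deterministic; with several instances, successive `resolve()` calls spread connections over them.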
The control of the communication attempts is done by intercepting socket openings. If the connection identifiers are IP addresses (and ports), the API of the socket() calls need not change. However, if they are GUIDs, a new domain comprising an address family is introduced, called for example AF_DATACENTER or AF_REMOTE. The application implementations then will use this AF (address family) in their socket() calls. In any case, the actual interception is implemented by modifying the socket() and related functions. The modified implementations can be installed either by installing a patched libc, or by intercepting dynamic loading and providing the modified functions to the application (with the LD_PRELOAD technique). The OS kernel and/or libc library may also natively support such functionality, in which case no patching or wrapping is necessary.

When a communication attempt (an attempt of opening a socket) is intercepted, the implementation determines the real location of the desired endpoint. To achieve this, it uses the peer identifier from the communication attempt, the information about the peer in the descriptors, and its knowledge about the current state of the data center (i.e., it knows which applications are currently deployed, and how they can be reached (on which component (e.g. host, VM, container, etc.) they are deployed)).
After the location of the other endpoint is determined, all data is routed there. The actual implementation of this routing depends on the specifics of the data center. Some options are:
- using label switching (tagging all packets that leave the socket, and programming the data center (physical or virtual) switches accordingly);
- using IPv6 and regular routing;
- applying DNAT or port mapping; or
- using an overlay network.
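As an illustration of the DNAT option, a small helper can generate the netfilter rule that rewrites the fictitious endpoint address from the descriptor to the real endpoint. The rule follows standard iptables syntax, but the addresses and the helper itself are illustrative assumptions, not part of the source.

```python
def dnat_rule(fictitious_ip, fictitious_port, real_ip, real_port):
    """Build an iptables DNAT rule that rewrites the fictitious endpoint
    address (as used by the application) to the real endpoint address."""
    return (
        f"iptables -t nat -A OUTPUT -p tcp "
        f"-d {fictitious_ip} --dport {fictitious_port} "
        f"-j DNAT --to-destination {real_ip}:{real_port}"
    )
```

A host agent could emit one such rule per correlation entry, so that connections to the descriptor's fictitious address are transparently redirected.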
Fig. 1 illustrates the main components and interfaces that play together according to some embodiments of the invention to enable distributed applications to communicate without concrete information of the location of their peers.
The example in Fig. 1 shows hosts, but the same idea may be applied to virtualized environments, where applications run inside virtual machines or operating system containers. Interception may happen in any or all of the host kernel, the host hypervisor, the virtual machines, or the containers.
In this example, socket calls are intercepted by a socket library wrapper (libc wrapper), but there are other alternatives, too.
The descriptor of an application component specifies that it will try to reach a server by using the GUID "6a2a-92...". During deployment, this information is paired (correlated) with the actual location of the deployed applications: this GUID will mean a connection from the respective "opening host" (e.g. Host 2 in Fig. 1) to Host 1 (i.e., to 1.2.3.4:8082). The orchestrator sends this information to the agents on the hosts. This information is propagated to the agents before an application component / service instance is started.
When the server of Host 1 binds to the GUID "6a2a-92...", the libc wrapper of Host 1 (which is already installed on all application hosts) communicates with the agent, gets the information that currently the 1.2.3.4:8082 address is associated with this GUID, and so binds to that address. Later, when the application component of Host 2 (the "opening host") tries to open a socket to the server of Host 1 (by specifying the domain "AF_DC" and the GUID "6a2a-92..."), the libc wrapper of Host 2 communicates with the agent of Host 2, gets the information about the actual endpoint (1.2.3.4:8082), and opens a socket to that address.
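The bind-side and connect-side flow of Fig. 1 can be imitated in plain Python: both sides refer to the peer only by its GUID, and a small in-process dictionary stands in for the agent. This is a sketch under obvious simplifications — everything runs on the loopback interface with a kernel-chosen port instead of 1.2.3.4:8082, and no libc wrapping is involved.

```python
import socket
import threading

# Stand-in for the per-host agent: GUID -> real endpoint address.
AGENT = {}

def bind_by_guid(guid):
    """Server side: 'bind to the GUID'; the real address is recorded at
    deployment time (here: whatever free port the kernel assigns)."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))          # kernel picks a free port
    srv.listen(1)
    AGENT[guid] = srv.getsockname()     # orchestrator records the location
    return srv

def connect_by_guid(guid):
    """Client side: open a socket to a peer known only by its GUID."""
    return socket.create_connection(AGENT[guid])

srv = bind_by_guid("6a2a-92...")

def serve_once():
    conn, _ = srv.accept()
    conn.sendall(b"hello from Host 1")
    conn.close()

t = threading.Thread(target=serve_once)
t.start()
cli = connect_by_guid("6a2a-92...")     # no IP address in application code
data = cli.recv(1024)
t.join()
cli.close()
srv.close()
```

Note that neither `bind_by_guid` nor `connect_by_guid` exposes an IP address to the application code, mirroring the transparency the text describes.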
As may be seen from Fig. 1, the application components may remain unchanged, regardless of whether the application component is moved from Host 1 to a different host.
Orchestration software may reside inside or outside of the datacenter, on dedicated machines or sharing the same compute nodes with applications.
According to some embodiments of the invention, external communications from and to the data center are not affected by this invention and are handled in the traditional way (socket calls pass through unchanged).
With this transparent connectivity service, management of applications becomes simpler: their network configuration becomes constant. (It only has to include the constant identifiers which are in the descriptors, and since this information is already there, the application configuration could even be generated directly from the descriptors.)
The simplicity helps packaging the applications as immutable images, since the configuration does not need to be adapted to different environments (application level configuration might still be needed). Moreover, the application does not need to collaborate with the orchestrator and/or the data center infrastructure to communicate with the correct peers, thus it can reduce application complexity.
Error handling and reconfiguration of the application (whether due to errors or planned configuration changes) can also be handled transparently: the data center OS simply closes the socket, and all the application has to do is to reopen it, and it will be opened to the correct (possibly new) peer. This can even be used to provide a fail-over mechanism or transparent local load balancing.
Another advantage is that data classification happens at the socket level (OSI layer 4) (instead of the lower VNIC/NIC/IP address level, which is a mix of L2/L3), so there is more precise control over the data flow. The transparent network connectivity service also allows avoiding the use of service discovery mechanisms to connect applications (thus making the applications simpler).
The correlation information correlates an identifier of a peer which is used by an application (i.e. an identifier provided in the descriptor) with an identifier of a component (host) on which the peer resides. For example, the correlation information may be a table comprising a column related to the different identifiers used by the application for the different peers and another column comprising the respective identifiers of the components. However, the correlation information may be provided in another form, too, e.g. in the form of an ASN.1 string. Preferably, the form or the syntax of the correlation information is structured.
In general, there are several implementation options according to embodiments of the invention in order to provide the correlation information to the hosts: In some embodiments, a control entity such as an orchestrator may provide the correlation information to the hosts. For example, it may provide the correlation information when implementing a new application and each time relevant network configuration is modified. In some embodiments, the hosts query the correlation information from the control entity (e.g. orchestrator). E.g., a host may query for the correlation information when it tries to open a session, and/or a host may query for the correlation information regularly or at certain events.
In still other embodiments, there may be a hierarchy of control entities. A master control entity provides the correlation information to a number of intermediate control entities on its own motion and/or in response to a query from a respective intermediate control entity. Each of the intermediate control entities provides the correlation information to a number of related hosts on its own motion and/or in response to a query from a respective host. There may be one or more levels of intermediate control entities.
In some embodiments of the invention, the correlation information may be distributed both on the control entity's own motion and in response to a query from a host (or a lower level intermediate control entity). E.g., the control entity may distribute the correlation information to all its related hosts when a new application is implemented or a relevant network configuration is modified. The host may consider that the correlation information distributed by the control entity on its own motion has a certain validity time. After expiry of the validity time (e.g. when a session is to be opened using the correlation information), the host will query the control entity for the correlation information in order to ensure that the host did not miss an update of the correlation information. The correlation information received in response to the query may have another or the same validity period as that received on the control entity's own motion.
Fig. 2 shows an apparatus according to an example embodiment of the invention. The apparatus may be a host such as an opening host, or an element thereof such as a wrapper. Fig. 3 shows a method according to an example embodiment of the invention. The apparatus according to Fig. 2 may perform the method of Fig. 3 but is not limited to this method. The method of Fig. 3 may be performed by the apparatus of Fig. 2 but is not limited to being performed by this apparatus.
The apparatus comprises intercepting means 10, determining means 20, and controlling means 30. The intercepting means 10, determining means 20, and controlling means 30 may be an intercepting circuitry, determining circuitry, and controlling circuitry, respectively.
The intercepting means 10 intercepts an attempt to open a socket between an opening host and a peer (S10). The attempt comprises an identifier of the peer, such as an IP address, a GUID, or a tree path.
The determining means 20 determines an address of an endpoint host based on a stored or obtained correlation information and the identifier (S20). The correlation information correlates the identifier and the address. The controlling means 30 controls a transmitting device to transmit the attempt to the address determined by the determining means 20 (S30). The transmitting device may comprise the controlling means 30 or may be separated from the controlling means 30.
For example, the controlling means 30 may exchange the identifier of the peer in the attempt for the address determined by the determining means 20 before the attempt is transmitted by the opening host. In another example, the intercepting means 10 may store the attempt, and the controlling means 30 may replace the identifier of the peer in the stored attempt by the address determined by the determining means 20, and then forward the stored attempt comprising the replaced address.

Fig. 4 shows an apparatus according to an example embodiment of the invention. The apparatus may be an orchestrator, or an element thereof. Fig. 5 shows a method according to an example embodiment of the invention. The apparatus according to Fig. 4 may perform the method of Fig. 5 but is not limited to this method. The method of Fig. 5 may be performed by the apparatus of Fig. 4 but is not limited to being performed by this apparatus.
The apparatus comprises providing means 110. The providing means 110 may be a providing circuitry. The providing means 110 provides a correlation information correlating an identifier and an address of an endpoint host to an opening host (S110), wherein the endpoint host and the opening host belong to a data center. The providing means 110 may provide the correlation information in response to a query from an opening host, and/or it may provide the correlation information to the opening hosts on its own motion, e.g. if the correlation information is modified.
Fig. 6 shows an apparatus according to an example embodiment of the invention. The apparatus may be an orchestrator, or an element thereof. Fig. 7 shows a method according to an example embodiment of the invention. The apparatus according to Fig. 6 may perform the method of Fig. 7 but is not limited to this method. The method of Fig. 7 may be performed by the apparatus of Fig. 6 but is not limited to being performed by this apparatus.
The apparatus comprises reading means 210, checking means 220, and notifying means 230. The reading means 210, checking means 220, and notifying means 230 may be a reading circuitry, a checking circuitry, and a notifying circuitry, respectively.
The reading means 210 reads a respective correlation information stored in each of plural hosts of a data center (S210). Each of the correlation informations comprises a correlation of an identifier to a respective endpoint address.
The checking means 220 checks if the endpoint addresses of the correlation informations are the same (S220).
If at least one of the endpoint addresses is different from the other endpoint addresses (S220 = "no"), the notifying means 230 notifies that the endpoint addresses are not the same (S230), i.e., that there is a discrepancy. The other endpoint addresses are the same endpoint address.

Fig. 8 shows an apparatus according to an example embodiment of the invention. The apparatus comprises at least one processor 610, at least one memory 620 including computer program code, and the at least one processor 610, with the at least one memory 620 and the computer program code, being arranged to cause the apparatus to at least perform at least one of the methods according to Figs. 3, 5, and 7 and related description.
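The read/check/notify steps of Figs. 6 and 7 (S210-S230), together with the optional replacement step, can be sketched as follows. Treating the majority address as "the other endpoint address" and repairing deviating hosts in place are simplifying assumptions for illustration.

```python
from collections import Counter

def check_correlations(host_correlations, identifier, notify):
    """Check that every host correlates `identifier` to the same endpoint
    address; notify about deviating hosts and repair them (sketch).

    host_correlations: dict host -> {identifier: endpoint_address}
    notify: callable invoked with (host, deviating_address, majority_address)
    """
    # S210: read the correlation information stored in each host
    addresses = {h: corr[identifier] for h, corr in host_correlations.items()}
    # S220: check if the endpoint addresses are the same (majority wins here)
    majority, _ = Counter(addresses.values()).most_common(1)[0]
    for host, addr in addresses.items():
        if addr != majority:
            # S230: notify about the discrepancy
            notify(host, addr, majority)
            # optional replacement of the deviating address
            host_correlations[host][identifier] = majority
    return majority
```

After a run, every host's correlation information agrees on the majority address, which matches the replacement behavior of the eleventh aspect.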
In the context of the present application, a data center is considered as a collection of one or more computers controlled or managed by a same control entity such as an orchestrator (i.e. a same software stack of an orchestrator) at least with respect to the correlation information. Other properties of the computers of the data center may be managed or controlled by the same control entity or by different control entities. The computers of the data center may be at the same location or at different locations and they may belong to the same LAN or to different LANs.
The data center may be realized fully or partly in a cloud, i.e. using shared processing resources such as computers, networks, etc. In this case, the shared processing resources are managed by a same control entity at least with respect to the correlation information. The control entity (e.g. orchestrator) may be realized by dedicated hardware or may be fully or partly realized in a same or different cloud than the data center, too.
One piece of information may be transmitted in one or plural messages from one entity to another entity. Each of these messages may comprise further (different) pieces of information. Names of network elements, protocols, and methods are based on current standards. In other versions or other technologies, the names of these network elements and/or protocols and/or methods may be different, as long as they provide a corresponding functionality.
If not otherwise stated or otherwise made clear from the context, the statement that two entities are different means that they perform different functions. It does not necessarily mean that they are based on different hardware. That is, each of the entities described in the present description may be based on different hardware, or some or all of the entities may be based on the same hardware. It does not necessarily mean that they are based on different software. That is, each of the entities described in the present description may be based on different software, or some or all of the entities may be based on the same software.

Some example embodiments of the invention may be applied to a data center comprising a number of collocated and interconnected computers. Some embodiments of the invention may be applied to computers which are at mutually remote locations or to a mixture of collocated and remote computers.
In some embodiments, a data center comprises a same network stack in each component. In some embodiments, the data center comprises different network stacks in different components. For example, in some embodiments, different network stacks may be used to segment the network. I.e. depending on the involved network stack, a request to open a socket comprising a same peer identifier may be routed to a different component running the same invoked service.
A host may be a component of a data center such as a computer (e.g. a personal computer, a server, a laptop, a desktop, a pizza-box, etc.). It may also be some other component such as a VM or a container. It may run any operating system such as UNIX, Windows, Linux, etc.
The computers may be interconnected by any suitable network technology such as LAN, WAN, MAN, etc. On the physical layer, the connection may be wired or wireless.

"Information" (such as "correlation information") may mean one or more pieces of information (such as one or more pieces of correlation information). "Informations" (such as "correlation informations") may mean plural pieces of information (such as plural pieces of correlation information).

According to the above description, it should thus be apparent that example embodiments of the present invention provide, for example, a network stack, or a component thereof, an apparatus embodying the same, a method for controlling and/or operating the same, and computer program(s) controlling and/or operating the same, as well as mediums carrying such computer program(s) and forming computer program product(s). It should likewise be apparent that example embodiments of the present invention provide, for example, a host such as a computer, or a component thereof, an apparatus embodying the same, a method for controlling and/or operating the same, and computer program(s) controlling and/or operating the same, as well as mediums carrying such computer program(s) and forming computer program product(s).

Implementations of any of the above described blocks, apparatuses, systems, techniques, means, entities, units, devices, or methods include, as non-limiting examples, implementations as hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, a virtual machine, or some combination thereof.
It should be noted that the description of the embodiments is given by way of example only and that various modifications may be made without departing from the scope of the invention as defined by the appended claims.

Claims

1. Apparatus, comprising
intercepting means adapted to intercept an attempt to open a socket between an opening host and a peer, wherein the attempt comprises an identifier of the peer;
determining means adapted to determine an address of an endpoint host based on the identifier and a correlation information correlating the identifier and the address;
controlling means adapted to control a transmitting device to transmit the attempt to the address.
2. The apparatus according to claim 1, wherein the identifier is one of a first internet protocol address, a globally unique identifier, and a tree path.
3. The apparatus according to claim 2, wherein the identifier is the first internet protocol address and the address is a second internet protocol address different from the first internet protocol address.
4. The apparatus according to claim 2, wherein the identifier is the globally unique identifier and the correlation information comprises a domain identified by the identifier.
5. The apparatus according to any of claims 1 to 4, further comprising
selecting means adapted to select the address based on a communication pattern, wherein
the correlation information correlates plural mutually different addresses with the identifier, and
one of the plural mutually different addresses is the address.
6. The apparatus according to claim 5, wherein the attempt comprises an indication of the communication pattern.
7. The apparatus according to any of claims 1 to 6, further comprising
the opening host, wherein
the opening host is adapted to run an invoking application; wherein
the invoking application is configured with the identifier of the peer and adapted to trigger the attempt comprising the identifier.
8. The apparatus according to any of claims 1 to 7, further comprising at least one of querying means adapted to query the correlation information from a first control device; and
storing means adapted to store the correlation information received from a second control device.
9. Plural apparatuses according to any of claims 1 to 8, wherein
each of the apparatuses belongs to a same data center; and
each of the apparatuses stores a same correlation information.
10. Apparatus, comprising
providing means adapted to provide a correlation information correlating an identifier and an address of an endpoint host to an opening host, wherein the endpoint host and the opening host belong to a data center.
11. The apparatus according to claim 10, wherein the identifier is one of a first internet protocol address, a globally unique identifier, and a tree path.
12. The apparatus according to any of claims 10 and 11, further comprising
monitoring means adapted to monitor if a query for the correlation information is received, wherein the query comprises the identifier; wherein
the providing means is adapted to provide the correlation information in response to the query.
13. The apparatus according to any of claims 10 to 12, wherein the providing means is adapted to provide the correlation information to at least two opening hosts belonging to the data center.
14. Apparatus, comprising
reading means adapted to read a respective correlation information stored in each of plural hosts of a data center, wherein each of the correlation informations comprises a correlation of an identifier to a respective endpoint address;
checking means adapted to check if the endpoint addresses of the correlation informations are the same;
notifying means adapted to notify that the endpoint addresses are not the same if at least one of the endpoint addresses is different from the other endpoint addresses, wherein the other endpoint addresses are the same endpoint address.
15. The apparatus according to claim 14, further comprising
replacing means adapted to replace the at least one of the endpoint addresses by the other endpoint address.
16. Method, comprising
intercepting an attempt to open a socket between an opening host and a peer, wherein the attempt comprises an identifier of the peer;
determining an address of an endpoint host based on the identifier and a correlation information correlating the identifier and the address;
controlling a transmitting device to transmit the attempt to the address.
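As an illustrative sketch only (the application itself specifies no code), the interception and redirection of the method of claim 16 might look like the following; the correlation table contents and the names `CORRELATION`, `resolve_endpoint`, and `open_socket` are invented for illustration and do not appear in the application.

```python
# Hypothetical correlation information: identifier -> endpoint address.
CORRELATION = {
    "10.0.0.5": "192.168.1.20",      # first IP identifier -> second, different IP address
    "service-guid-01": "192.168.1.21",
}

def resolve_endpoint(identifier):
    """Determine the address of the endpoint host based on the identifier
    and the correlation information (claim 16)."""
    # If no correlation is stored, fall through to the original identifier.
    return CORRELATION.get(identifier, identifier)

def open_socket(identifier):
    """Intercept an attempt to open a socket: the attempt comprises the
    identifier of the peer; the attempt is then transmitted to the
    correlated endpoint address instead."""
    address = resolve_endpoint(identifier)
    return ("connect", address)
```

The point of the sketch is that the invoking application keeps using its configured identifier unchanged; only the intercepting layer substitutes the endpoint address before transmission.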
17. The method according to claim 16, wherein the identifier is one of a first internet protocol address, a globally unique identifier, and a tree path.
18. The method according to claim 17, wherein the identifier is the first internet protocol address and the address is a second internet protocol address different from the first internet protocol address.
19. The method according to claim 17, wherein the identifier is the globally unique identifier and the correlation information comprises a domain identified by the identifier.
20. The method according to any of claims 16 to 19, further comprising
selecting the address based on a communication pattern, wherein
the correlation information correlates plural mutually different addresses with the identifier, and
one of the plural mutually different addresses is the address.
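The selection of claims 20 and 21 can be sketched as follows; this is a minimal illustration assuming a correlation keyed by communication pattern, and the identifier, addresses, and pattern names are invented for the example.

```python
# Hypothetical correlation of one identifier with plural mutually different
# addresses, keyed by communication pattern (claims 20 and 21).
CORRELATION = {
    "db-service": {
        "bulk": "192.168.1.30",         # e.g. an instance suited to bulk transfer
        "low-latency": "192.168.1.31",  # e.g. a nearby instance
    }
}

def select_address(identifier, pattern):
    """Select the address based on the communication pattern indicated
    in the attempt; fall back to the first stored address if the pattern
    is not recognised."""
    addresses = CORRELATION[identifier]
    return addresses.get(pattern, next(iter(addresses.values())))
```

The indication of the communication pattern travels with the attempt itself, so the intercepting layer can choose among the plural addresses without consulting the invoking application.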
21. The method according to claim 20, wherein the attempt comprises an indication of the communication pattern.
22. The method according to any of claims 16 to 21, further comprising
running an invoking application configured with the identifier of the peer; and triggering the attempt comprising the identifier by the invoking application.
23. The method according to any of claims 16 to 22, further comprising at least one of querying the correlation information from a first control device; and storing the correlation information received from a second control device.
24. Method, comprising
providing a correlation information correlating an identifier and an address of an endpoint host to an opening host, wherein the endpoint host and the opening host belong to a data center.
25. The method according to claim 24, wherein the identifier is one of a first internet protocol address, a globally unique identifier, and a tree path.
26. The method according to any of claims 24 and 25, further comprising
monitoring if a query for the correlation information is received, wherein the query comprises the identifier; and
providing the correlation information in response to the query.
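A control device behaving as in claims 24 to 26 might be sketched as below; the class name `ControlDevice`, the method names, and the stored values are assumptions made for illustration only.

```python
class ControlDevice:
    """Sketch of a control device: it monitors whether a query for the
    correlation information is received, where the query comprises the
    identifier, and provides the correlation information in response."""

    def __init__(self, correlation):
        # identifier -> endpoint address, as held by the data center.
        self.correlation = correlation

    def handle_query(self, identifier):
        """Provide the correlation information for the queried identifier,
        or nothing if no correlation is known."""
        if identifier in self.correlation:
            return {identifier: self.correlation[identifier]}
        return None
```

The same device could also push the correlation information proactively to at least two opening hosts of the data center, which is the variant of claim 27.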
27. The method according to any of claims 24 to 26, wherein the providing comprises providing the correlation information to at least two opening hosts belonging to the data center.
28. Method, comprising
reading a respective correlation information stored in each of plural hosts of a data center, wherein each of the correlation informations comprises a correlation of an identifier to a respective endpoint address;
checking if the endpoint addresses of the correlation informations are the same;
notifying that the endpoint addresses are not the same if at least one of the endpoint addresses is different from the other endpoint addresses, wherein the other endpoint addresses are the same endpoint address.
29. The method according to claim 28, further comprising
replacing the at least one of the endpoint addresses by the other endpoint address.
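The consistency check and repair of claims 28 and 29 can be sketched as follows; the function name, the use of a majority vote to identify the "other" (agreeing) endpoint address, and the host and address values are illustrative assumptions.

```python
from collections import Counter

def check_and_repair(correlations):
    """Read the endpoint address stored for one identifier on each of
    plural hosts, notify if they are not all the same (claim 28), and
    replace the deviating entries by the other endpoint address, i.e.
    the one the remaining hosts agree on (claim 29)."""
    addresses = list(correlations.values())
    # The address held by the majority of hosts is taken as the
    # "other endpoint address" that the agreeing hosts share.
    majority, _ = Counter(addresses).most_common(1)[0]
    mismatched = [host for host, addr in correlations.items() if addr != majority]
    for host in mismatched:
        correlations[host] = majority  # claim 29: replace the deviating address
    # A non-empty list of deviating hosts serves as the notification.
    return mismatched
```

An empty return value means all hosts stored the same endpoint address, so no notification and no replacement are needed.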
30. A computer program product comprising a set of instructions which, when executed on an apparatus, is configured to cause the apparatus to carry out the method according to any of claims 16 to 29.
31. The computer program product according to claim 30, embodied as a computer-readable medium or directly loadable into a computer.
PCT/EP2016/054389 (filed 2016-03-02, priority date 2016-03-02): Data center managed connectivity, WO2017148512A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/EP2016/054389 WO2017148512A1 (en) 2016-03-02 2016-03-02 Data center managed connectivity


Publications (1)

Publication Number: WO2017148512A1 (en); Publication Date: 2017-09-08

Family ID: 55456777

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2016/054389 WO2017148512A1 (en) 2016-03-02 2016-03-02 Data center managed connectivity

Country Status (1)

Country Link
WO (1) WO2017148512A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108111496A (en) * 2017-12-13 2018-06-01 杭州安恒信息技术有限公司 Method, apparatus and system for exposing http services for dubbo distributed applications
CN108111496B (en) * 2017-12-13 2020-11-20 杭州安恒信息技术股份有限公司 Method, device and system for exposing http service for dubbo distributed application
US11500699B2 2019-01-24 2022-11-15 Hewlett Packard Enterprise Development Lp Communication of data between virtual processes

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110255537A1 (en) * 2010-04-16 2011-10-20 Cisco Technology, Inc. Controlling Directional Asymmetricity in Wide Area Networks
US20120284417A1 (en) * 2007-07-17 2012-11-08 Adobe Systems Incorporated Endpoint Discriminator in Network Transport Protocol Startup Packets
US20130227108A1 (en) * 2012-02-24 2013-08-29 Futurewei Technologies, Inc. Balancing of Forwarding and Address Resolution in Overlay Networks
EP2787693A1 (en) * 2013-04-05 2014-10-08 Telefonaktiebolaget LM Ericsson (PUBL) User plane traffic handling using network address translation and request redirection
US20150089499A1 (en) * 2013-09-25 2015-03-26 Delta Electronics, Inc. Topology management method and system of virtual machines




Legal Events

NENP: Non-entry into the national phase; ref country code: DE
121: EP: the EPO has been informed by WIPO that EP was designated in this application; ref document number: 16708120; country of ref document: EP; kind code of ref document: A1
122: EP: PCT application non-entry in European phase; ref document number: 16708120; country of ref document: EP; kind code of ref document: A1