WO2015123849A1 - Method and apparatus for extending the Internet into intranets to achieve a scalable cloud network - Google Patents

Method and apparatus for extending the Internet into intranets to achieve a scalable cloud network

Info

Publication number
WO2015123849A1
WO2015123849A1 PCT/CN2014/072339 CN2014072339W WO2015123849A1 WO 2015123849 A1 WO2015123849 A1 WO 2015123849A1 CN 2014072339 W CN2014072339 W CN 2014072339W WO 2015123849 A1 WO2015123849 A1 WO 2015123849A1
Authority
WO
WIPO (PCT)
Prior art keywords
network
communication
nic
intranet
vms
Prior art date
Application number
PCT/CN2014/072339
Other languages
English (en)
Inventor
Wenbo Mao
Original Assignee
Wenbo Mao
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wenbo Mao filed Critical Wenbo Mao
Priority to PCT/CN2014/072339
Publication of WO2015123849A1


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/08 Configuration management of networks or network elements
    • H04L 41/0803 Configuration setting
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/12 Discovery or management of network topologies
    • H04L 41/122 Discovery or management of network topologies of virtualised topologies, e.g. software-defined networks [SDN] or network function virtualisation [NFV]

Definitions

  • An intranet is a privately owned computer network that uses Internet Protocol technology to connect privately owned and controlled computing resources. This term is used in contrast to the Internet, a public network bridging intranets.
  • The really meaningful characteristic difference between an intranet and the Internet is scale: an intranet always has a limited scale, bounded by the economic means of its private owner, while the Internet has a scale that is not bounded by the economic means of any single organization in the world.
  • In this disclosure, cloud computing is considered as a service for constructing a network of unbound scale; it should provide a practically unbound scale of computing resources, including, for example, network resources. The so-called "private cloud", with a small, non-scalable size and nothing to do with service, is an unreasonable notion and is not regarded as cloud here.
  • A very large network can optionally provide disaster avoidance, elastic bursting, or even distribution of split user data to non-cooperating authorities spanning continental geographical regions, for protecting data against abuse of power by corrupted authorities.
  • Any single cloud computing service provider has limited economic power and therefore owns computing resources of only a bounded scale.
  • Network resources for cloud Infrastructure as a Service can include the OSI reference model Layer 2 (Link Layer) in order to provide, without loss of generality, all upper-layer services.
  • Layer 2 describes physical connections: copper, optical fiber, radio, etc.
  • patching algorithm protocol
  • MAC Layer-2
  • The physically separate intranets can see and operate with the MAC packets of the other side, and both operate as enlarged networks, each with the other side as its extension.
  • MAC encapsulation technologies, also known as large layer 2 networks, include VPN, GRE, VXLAN, NVGRE, STT, MPLS, and LISP, to provide examples. It would appear that MAC-encapsulation of layer 2 in layer 3 technology is available to patch physically separate intranets into a network of unbound scale.
  • Layer 2 is physical. Communication at layer 2 is done through data packets, called MAC packets, exchanged between physical network interface cards (NICs). Each NIC has a unique MAC address (id), and a MAC packet is in the form of the following triple: (Destination-MAC-id, Source-MAC-id, Payload).
  • A MAC id is similar to a person's fingerprint or some other uniquely identifiable physical attribute. MAC ids are unique, however they are not convenient to use as an everyday id, e.g., for daily communication purposes. Moreover, applications need to move around in a bigger environment than a single physical network. Hence:
  • Layer 3 describes a logical network in which an entity is identified by a unique IP address (id), and communication is in the form of IP packets in the following triple format: (Destination-IP-id, Source-IP-id, Payload).
  • An IP id can be constructed to be unique, and can even be changed if necessary. IP ids are convenient to use for communication purposes.
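  • As a minimal illustrative sketch (not part of the patent text), the two packet triples above can be modeled as follows in Python; the class and field names (MacPacket, IpPacket, etc.) are hypothetical and chosen only for illustration.

    # Illustrative sketch of the layer 2 and layer 3 packet triples described above.
    # Class and field names are hypothetical, not taken from the disclosure.
    from dataclasses import dataclass

    @dataclass
    class MacPacket:
        dst_mac: str    # Destination-MAC-id: physical id of the receiving NIC
        src_mac: str    # Source-MAC-id: physical id of the sending NIC
        payload: bytes

    @dataclass
    class IpPacket:
        dst_ip: str     # Destination-IP-id: logical, movable, changeable
        src_ip: str     # Source-IP-id
        payload: bytes

    # On the wire, an IP packet travels as the payload of a MAC packet:
    ip_pkt = IpPacket("10.0.0.2", "10.0.0.1", b"application data")
    mac_pkt = MacPacket("aa:bb:cc:00:00:02", "aa:bb:cc:00:00:01",
                        payload=repr(ip_pkt).encode())
    print(mac_pkt)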
  • A MAC id is physical, unique, fixed to a NIC, and cannot be changed; the NIC is the hardware wired for sending or receiving data.
  • An IP id is logical, movable, changeable, and convenient for applications to use. Plug-and-play standards for MAC/IP interplay:
  • When a computer with a NIC is wired to a network cable, the computer needs to be associated with an IP address in order to perform network operations.
  • The standard is that the computer initiates a DHCP (Dynamic Host Configuration Protocol) request; this broadcasts an IP id request message to the network environment, with the computer's MAC id included in the broadcast. Why broadcast? The computer has no idea to whom in the network it should send the message.
  • the network system has one or more DHCP server(s).
  • the first DHCP server which receives the IP id request will arrange for an available IP id and broadcast back to the requestor with the MAC id. Why broadcast the response?
  • the DHCP server also has no idea where the computer with this NIC of this MAC-id is in the network.
  • When an application in a machine (having a NIC) initiates a communication with a destination (application) machine (also having a NIC), the communication should conveniently use IP ids.
  • These machines, in fact their operating systems (OSes), can only communicate in a physical way by exchanging data packets between NICs, i.e., the OSes can only communicate by exchanging MAC packets. How then can the source OS know where the destination IP-addressed machine is?
  • the standard is: the source OS will initiate an ARP (Address Resolution Protocol) message by broadcasting: "Who having this destination IP, give me your MAC id!" This time it is easier to understand why the source OS broadcasts: no server's help is needed, no configuration is needed; the protocol is purely in plug-and-play manner. All OSes in the network will hear the ARP broadcast, but only the one with the wanted IP address will respond with the MAC id. Having received the response, now the source OS can send the data packet in MAC packets through the physical wire linking the two NICs.
  • ARP Address Resolution Protocol
  • the conventional broadcast messages e.g., ARP and DHCP
  • DHCP MAC/IP association
  • ARP IP/MAC resolution
  • Unlike a TCP link, which needs handshake establishment, a UDP message can simply be sent and received without requiring the sender and receiver to engage in any agreement for confirming a good connection; thus UDP is well suited for broadcasting.
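  • As a hedged illustration of the plug-and-play behavior above (an assumption-laden toy model, not the disclosure's implementation), the ARP exchange can be sketched in Python: the source broadcasts a query for an IP id, every host hears it, and only the owner of that IP answers with its MAC id.

    # Toy model of the ARP broadcast/response behavior described above.
    # Host, on_arp_broadcast and arp_resolve are hypothetical names.

    class Host:
        def __init__(self, ip_id, mac_id):
            self.ip_id = ip_id
            self.mac_id = mac_id

        def on_arp_broadcast(self, wanted_ip):
            # Every OS hears the broadcast; only the owner of the wanted IP answers.
            return self.mac_id if wanted_ip == self.ip_id else None

    def arp_resolve(hosts, wanted_ip):
        """Broadcast: "Who having this destination IP, give me your MAC id!" """
        for host in hosts:                       # broadcast reaches every host
            answer = host.on_arp_broadcast(wanted_ip)
            if answer is not None:
                return answer                    # the single responder's MAC id
        return None

    hosts = [Host("10.0.0.1", "aa:bb:cc:00:00:01"),
             Host("10.0.0.2", "aa:bb:cc:00:00:02")]
    print(arp_resolve(hosts, "10.0.0.2"))        # -> aa:bb:cc:00:00:02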
  • existing large layer 2 technologies suffer from scalability problems in trans-intranet patching networks. Broadcasting in trans-intranet scale requires very high bandwidth Internet connections in order to obtain reasonable response time. High bandwidth Internet connections are very costly. There are not any trans-datacenter clouds in successful commercial operation currently, in large part due to the costs of the high bandwidth that would be required by conventional approaches.
  • firewall for a trans-intranet tenant.
  • a tenant's firewall is distributed in trans-intranet manner so that VM-Internet communication packets are filtered locally and in distribution at each intranet.
  • A routing forwarding table must be updated to all intranets in which a tenant has VMs, which is essentially trying to reach agreement over UDP, a connectionless channel (all "good" large layer 2 patching protocols, e.g., STT and VXLAN, are UDP based in order to serve, without loss of generality, any applications, e.g., video and broadcast). This translates to the infamous Byzantine Generals problem.
  • VM-Internet communications suffer from a chokepointed firewall having a bandwidth bottleneck.
  • the size of the Internet is unbound. Any segment of network can join the Internet by interconnecting itself with the Internet provided the network segment is constructed, and the interconnection is implemented, in compliance with the OSI seven-layer reference model.
  • Network interconnection, i.e., scaling up the size of a network, when using the OSI reference model, follows the formulation that a network packet of one layer is the payload data of a packet of the layer immediately below.
  • Interconnection at layers 2 and 3 in this formulation is stateless and connection-less, i.e., the interconnection needs no prior protocol negotiation.
  • a web client accessing a search engine web server does not need any prior protocol negotiation.
  • Network interconnection using the conventional "large layer 2" patching technologies, such as the VPN, MPLS, STT, VXLAN, and NVGRE protocols, does not use the OSI layered formulation. These protocols encapsulate a layer 2 packet as the payload data of a layer 3 packet, the opposite of the OSI interconnection formulation.
  • network patching using these "large layer 2" protocols cannot be done in a stateless streamlined fashion; prior protocol negotiation is necessary or else the interconnection peers misinterpret each other, and the interconnection will fail.
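  • The contrast can be illustrated with a small hypothetical sketch (illustrative formats only): OSI-style interconnection carries the layer 3 packet inside a layer 2 frame, while the "large layer 2" protocols invert this and carry a layer 2 frame inside a layer 3 (typically UDP) packet, which is why the peers must negotiate the encapsulation format in advance.

    # Hypothetical sketch contrasting OSI-style encapsulation with "large layer 2"
    # patching. Header formats are invented for illustration.

    def osi_encapsulate(ip_packet: bytes, dst_mac: str, src_mac: str) -> bytes:
        # OSI formulation: the layer 3 packet is the payload of the layer 2 frame.
        return f"MAC[{dst_mac}|{src_mac}]".encode() + b"|" + ip_packet

    def large_layer2_encapsulate(mac_frame: bytes, dst_ip: str, src_ip: str) -> bytes:
        # Large layer 2 patching (VXLAN/STT-like): the layer 2 frame becomes the
        # payload of a layer 3 packet; both ends must agree on this format in
        # advance, otherwise the receiver misinterprets the bytes.
        return f"IP[{dst_ip}|{src_ip}]|TUNNEL-HDR".encode() + b"|" + mac_frame

    ip_pkt = b"IP[10.0.0.2|10.0.0.1]|payload"
    mac_frame = b"MAC[aa:01|aa:02]|" + ip_pkt
    print(osi_encapsulate(ip_pkt, "aa:01", "aa:02"))          # layer 3 inside layer 2
    print(large_layer2_encapsulate(mac_frame, "203.0.113.1", "198.51.100.2"))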
  • a novel intranet topology is managed by a communication controller that controls a software defined networking ("SDN") component.
  • SDN software defined networking
  • the SDN component executes on a plurality of servers within the intranet and coordinates the communication between virtual machines hosted on the plurality of servers and entities outside the intranet network, under the control of the communication controller.
  • the plurality of servers in the intranet can each be configured with at least two network interface cards ("NICs").
  • NICs network interface cards
  • a first external NIC can be connected to an external communication network (e.g., the Internet) and an internal second NIC can be connected to the other ones of the plurality of servers within the intranet.
  • each internal NIC can be connected to a switch and through the switch to the other servers.
  • the communication between each VM hosted on the plurality of servers can be dynamically programmed (e.g., by the SDN component operating under the control of the communication controller) to route through a respective external NIC or over external NICs of the plurality of servers connected by their respective internal NICs.
  • the distributed servers having the external connected NICs can perform a network gateway role for the hosted VMs.
  • the gateway role can include interfacing with entities outside the local network (e.g., entities connected via the Internet) on an external side of the network, and the VMs on the internal side of the network.
  • the SDN component can be configured to implement network isolation and firewall policies at the locality and deployment of the VM.
  • the SDN component can also define a region of the intranet (e.g., an "Internet within intranet") where the network isolation and the firewall policies are not executed.
  • the SDN component does not execute any control in terms of tenant network isolation and firewall policy within the Internet within intranet region of the intranet.
  • the network region is configured to provide through network routes between any VM on the distributed servers and any of the external NICs on respective distributed servers, for example, under the control of the communication controller. Under this topology, the SDN component executes full programmatic control on the packet routes between any VM and any of the external NICs.
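  • A minimal sketch under stated assumptions (hypothetical class names, not the disclosure's API) of the topology and route programming just described: each distributed server has a NIC-int wired to the intranet switch and a NIC-ext wired to the Internet, and within the "Internet within intranet" region the communication controller may program a through route from any VM to any server's NIC-ext.

    # Hypothetical model of the "Internet within intranet" topology described above.
    from dataclasses import dataclass, field

    @dataclass
    class Server:
        name: str
        nic_int: str                      # internal NIC, wired to the intranet switch
        nic_ext: str                      # external NIC, wired directly to the Internet
        vms: list = field(default_factory=list)

    class CommunicationController:
        """Programs VM-to-Internet routes through any server's NIC-ext."""
        def __init__(self, servers):
            self.servers = servers

        def program_route(self, vm_name, exit_server=None):
            host = next(s for s in self.servers if vm_name in s.vms)
            exit_server = exit_server or host     # default: the host's own NIC-ext
            if exit_server is host:
                return [vm_name, host.nic_ext, "Internet"]
            # Through route over the intranet switch to another server's NIC-ext.
            return [vm_name, host.nic_int, "switch", exit_server.nic_int,
                    exit_server.nic_ext, "Internet"]

    servers = [Server(f"server-{i}", f"nic-int-{i}", f"nic-ext-{i}") for i in (1, 2, 3)]
    servers[0].vms.append("vm-A")
    controller = CommunicationController(servers)
    print(controller.program_route("vm-A"))                          # exit via server-1
    print(controller.program_route("vm-A", exit_server=servers[2]))  # exit via server-3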
  • a local network system comprises at least one communication controller and a plurality of distributed servers, wherein the at least one communication controller controls the distributed servers and manages a SDN component deployed and executed on each of the distributed servers; the distributed servers hosting virtual machines (VMs) and managing communication for the VMs; wherein at least two of the distributed servers have at least two network interface cards (NICs): one NIC-ext, and one NIC-int; the NIC-ext is wired to an external network; the NIC-int is wired to a switch; wherein the distributed servers having the NIC-ext and NIC-int execute a network gateway role for the VMs, the gateway role including interfacing with entities outside the local network, and the VMs on an inner side of the network; the communication between each VM on a distributed server and the entities outside the local network can interface using the NIC-ext on the distributed server, or using the other NIC-exts on the other servers via the NIC-ints connected by the switch;
  • NICs network interface cards
  • a network communication system comprises at least one communication controller configured to manage communication within a logical network executing on resources of a plurality of distributed servers; the plurality of distributed servers hosting virtual machines (VMs) and handling the communication for the VMs; wherein at least two of the plurality of distributed servers are connected within an intranet segment, wherein the at least two of the distributed servers within the intranet segment include at least two respective network interface cards (NICs): at least one NIC-ext connected to an external network, and at least one NIC-int connected to a switch, wherein each server of the at least two of the plurality of distributed servers within the intranet segment execute communication gateway functions for interfacing with external entities on an external side of the network; and wherein the at least one communication controller dynamically programs communication pathways for the communication of the logical network to occur over any one or more of the at least two of the distributed servers within the intranet segment over respective NIC-exts by managing an SDN component executing on the at least two of the distributed servers.
  • NICs network interface cards
  • a local network system comprises at least one communication controller coordinating the execution of a SDN component; a plurality of distributed servers; wherein the at least one communication controller manages communication by the plurality of distributed servers and coordinates execution of the SDN component deployed and executing on the plurality of distributed servers; wherein the plurality distributed servers host virtual machines (VMs) and manage communication for the VMs; wherein at least two of the plurality of servers include at least two respective network interface cards (NICs) at least one NIC-ext connected to entities outside the local network, and at least one NIC-int connected to a switch, wherein the communication between a VM on a server and the entities outside the local network interfaces on the external NIC on the distributed server or interfaces on NIC-exts on other distributed servers connected to the server by the switch and respective NIC-ints; wherein the SDN component is configured to coordinate the communication between the VMs and entities outside the local network under the management of the at least one communication controller.
  • NICs network interface cards
  • the following embodiments are used in conjunction with the preceding network systems (e.g., local network and network communication systems).
  • the preceding network systems e.g., local network and network communication systems.
  • the SDN component is configured to execute network isolation and firewall policies for the VMs of a tenant at the locality of each VM software existence and deployment.
  • the at least one communication controller manages the SDN execution of the network isolation and the firewall policies.
  • the SDN component is configured to control pass or drop of network packets which are output from and input to the VM.
  • the SDN component is configured to intercept and examine the network packets to be received by and have been communicated from the VM to manage the pass or the drop of the network packets.
  • the SDN component further defines a network region, an "Internet within the intranet," in the local network, other than and away from the VMs' existence and deployment localities where the SDN component executes tenants' network isolation and firewall policies; in this region the SDN component does not execute any control in terms of tenant network isolation and firewall policy.
  • within the Internet within intranet region, the SDN component is configured to provide through network routes between any VM and any of the NIC-exts on respective distributed servers, and the SDN component, under management of the at least one communication controller, executes control on the dynamicity of the packet forwarding routes between VMs and any respective NIC-exts.
  • At least one other local network system including a respective Internet within intranet region is controlled by the at least one communication controller.
  • the local network and the at least one other local network are patch connected to one another through any pair of NIC-exts of the two local networks to form an enlarged trans- local-network system.
  • additional other local network systems having a respective Internet within intranet region are patch connected to join a trans- local-network system to form a further enlarged trans- local-network system including elements having the Internet within intranet topology.
  • trans- local-network communication traffic between a first and second VM in any two patch participating local networks are controlled by the SDN component running on the distributed servers in the respective local networks, and wherein the SDN component is programmed to generate dynamic and distributed routes between the first VM and respective external NICs in a first respective local network and the second VM and respective external NICs in a second respective local network.
  • the external communication traffic between a VM in a local network system and an external entity is programmed by the SDN component running on the distributed servers in the local network system to take dynamic routes between the VM and the distributed NIC-exts in the local network system, and to have the network packets delivered by the Internet linking an external NIC of the local network system and the external entity over the Internet.
  • the preceding systems can include or be further described by one or more of the following elements: wherein the SDN component is configured to execute network isolation and firewall policies for VMs of one or more tenants local to each VM; wherein the SDN component is configured to execute the network isolation and firewall policies where network packets are output from the VM or communicated to the VM; wherein the SDN component executes the network isolation and firewall policies for VMs of the one or more tenants at localities where network packets are output from the VM prior to them reaching any other software or hardware component in the local network, or input to the VM without routing through any other software or hardware component in the local network; wherein the at least one communication controller manages the SDN execution of the network isolation and the firewall policies; wherein the SDN component is configured to control pass or drop of network packets which are output from and input to the VM; wherein the SDN component is configured to intercept and examine the network packets for receipt by and outbound from the VM to manage the pass or the drop of the network packets;
  • communication traffic between a first and second VM in any two patch participating local networks are controlled by the SDN component running on the distributed servers in the respective local networks, and wherein the SDN component is programmed to generate programmed routes, the programmed routes including one or more of dynamic or distributed routes, between the first VM and respective external NICs in a first respective local network over at least one intermediate connection to the second VM and respective external NICs in a second respective local network; wherein the external communication traffic between a VM in a local network system and an external entity is programmed by the SDN component running on the distributed servers in the local network system to take programmed routes between the VM and the distributed NIC-exts in the local network system, and to have the network packets delivered by the Internet linking external NICs of the local network system and the external entity over the Internet; wherein the programmed routes include one or more of dynamic or distributed routes.
  • a computer implemented method for managing communications of virtual machines ("VMs") hosted on an intranet segment.
  • the method comprises managing, by at least one communication controller, network communication for at least one VM hosted on the intranet segment; programming, by the at least one communication controller, a route for the network communication, wherein the act of programming includes selecting for an external network
  • the method further comprises an act of patching a plurality of intranet segments, wherein each of the plurality of intranet segments include at least two distributed servers, each having at least one NIC-int and at least one NIC-ext.
  • the method further comprises programming, by the at least one communication controller communication routes between the plurality of intranet segments based on selection of or distribution between external connections to respective at least one NIC-exts within each intranet segment.
  • the method further comprises managing network configuration messages from VMs by the at least one communication controller such that broadcast configuration messages are captured at respective intranet segments.
  • the method further comprises an act of managing a plurality of VMs to provide distributed network isolation and firewall policies at the locality of each VM software existence and deployment.
  • programming, by the at least one communication controller includes managing SDN execution of network isolation and the firewall policies.
  • the method further comprises defining, by the at least one controller, a network region in the intranet segment, other than and away from VMs existence and deployment localities, in which the at least one controller does not execute any control in terms of tenant network isolation and firewall policy.
  • programming, by the at least one controller includes providing through network routes between any VM hosted on the intranet segment and any of the NIC-exts on respective distributed servers, and controlling dynamicity of packet forwarding routes between VMs and any respective NIC-exts.
  • FIG. 1 is a block diagram of a conventional network architecture including for example, a gateway chokepoint;
  • FIG. 2 is a block diagram of a proposed intranet topology, according to various embodiments.
  • FIG. 3 is a block diagram of an intra-inter-net interfacing topology, according to various embodiments.
  • FIG. 4 is a block diagram of an example NVI system, according to one embodiment
  • FIG. 5 is a block diagram of an example NVI system, according to one embodiment
  • FIG. 6 is a block diagram of an example distributed firewall, according to one embodiment
  • FIG. 7 is an example process for defining and/or maintaining a tenant network, according to one embodiment
  • FIG. 8 is an example certificate employed in various embodiments.
  • FIG. 9 is an example process for execution of a tenant defined communication policy, according to one embodiment.
  • FIG. 10 is an example process for execution of a tenant defined
  • FIG. 11 is an example user interface, according to one embodiment.
  • FIG. 12 is a block diagram of an example tenant programmable trusted network, according to one embodiment.
  • FIG. 13 is a block diagram of a general purpose computer system on which various aspects and embodiments may be practiced.
  • FIG. 14 is a block diagram of an example logical network, according to one embodiment.
  • FIG. 15 is a process flow for programming network communication, according to one embodiment.
  • At least some embodiments disclosed herein include apparatus and processes for an Internet within intranet topology.
  • the Internet within intranet topology enables SDN route programming.
  • SDN route programming can be executed for trans-datacenter virtual clouds, virtual machine to Internet routes, and further can enable scalable patching of intranets.
  • the Internet within intranet topology includes a plurality of distributed servers hosting VMs.
  • the plurality of distributed servers can perform a network gateway role, and include an external NIC having a connection to the Internet and an internal NIC connected to other ones of the distributed servers, for example, through a switch.
  • the distributed servers can each operate as programmable forwarding devices for VMs.
  • the configuration enables fully SDN controlled intranet networks that can fully leverage redundant Internet connections. These fully SDN controlled intranet networks can be patched into large scale cloud networks (including for example trans-datacenter cloud networks).
  • vNICs virtual NICs
  • isolation of a tenant's network of virtual machines can be executed by NVI (Network Virtualization Infrastructure) software and each VM hosted by a server in the tenant network can be controlled in a distributed manner at the point of each virtual NIC of the VM (discussed in greater detail below).
  • NVI Network Virtualization Infrastructure
  • the underlying servers can be open to communicate without restriction.
  • the underlying servers can operate like the Internet (e.g., open and accessible) but under the SDN programming control.
  • SDN control and/or software alone is insufficient to provide fully distributed routing.
  • SDN can do little without route redundancy.
  • Shown in Fig. 1 is a conventional network architecture 100. Even with SDN programming implemented, the Internet traffic from the plurality of servers (e.g., 102-108 each having their respective NICs 110-116) cannot be fully SDN distributed.
  • each server is connected to at least one switch (e.g., 118), which is connected to a gateway node 120.
  • the gateway node 120 can be a "neutron node" from the commercially available Openstack cloud software.
  • the gateway node 120 connects to Internet 122 via an external NIC 124 and routes the traffic to the servers via an internal NIC 126. However, based on the intranet to Internet topology shown, the intranet to Internet traffic cannot be SDN distributed. The gateway node 120 forms a chokepoint through which all intranet traffic must pass.
  • FIG. 2 is a block diagram of an example Internet-intranet topology 200 that can be used to support a scalable cloud computing network.
  • a plurality of servers can host a plurality of virtual machines as part of a distributed cloud.
  • the servers (e.g., 202-208) can be configured with at least two NICs.
  • Each server is configured with an internal NIC (e.g., 210-216) which connects the servers (e.g., 202-208) to each other through at least one switch (e.g., 218).
  • each of the servers can include an external NIC (e.g., 220-226) each of which provides a connection to the Internet 228 (or other external network).
  • each of the connections (e.g., 230-236) can be low bandwidth, low cost, Internet connections (including, for example, subscriber lines).
  • route programming can take full advantage of all the available Internet connections (e.g., 230-236), providing, in effect, a high bandwidth low cost connection.
  • the three dots shown in Fig. 2 illustrate the potential to add additional servers (e.g., at 238 with respective connections to switch 218 at 240 and respective Internet connections at 242).
  • each server in the Internet-intranet-interfacing topology can execute the functions of a SDN programmable gateway.
  • each server can include SDN components executing on the server to control communication of traffic from and to VMs.
  • the Internet traffic to and from any VM hosted on one or more of the plurality of servers can go via any external NIC of the connected servers.
  • fully distributed routing of network traffic is made available.
  • the SDN components executing on the servers can dynamically reprogram network routes to avoid bottlenecks, increase throughput, distribute large communication jobs, etc.
  • the SDN components are managed by a communication controller.
  • the communication controller can be configured to co-ordinate operation of the SDN components on the respective servers.
  • a variety of virtualization infrastructures can be used to provide virtual machines (VMs) to a tenant seeking computing resources.
  • a network virtualization infrastructure ("NVI") software is used to manage a tenant network of VMs (discussed in greater detail below).
  • the NVI system/software can be implemented to provide network isolation processing and/or system components.
  • the virtualization software e.g., NVI software
  • vNIC virtual NIC
  • the management at respective vNICs divides the inside and outside of the tenant's network at the vNIC of each virtual machine.
  • the VM which is "north" of the vNIC is inside the tenant network, and the software cable which is plugged to the vNIC on one end and connected to the underlying physical server hosting the VM the other end is "south.”
  • the vNIC likewise divides the tenant network between the VM on the "north" and any external connections of the physical server.
  • the tenant's network border is distributed to the point of the respective vNICs of each VM of the tenant. From this configuration it is realized that the tenant's logical layer 2 network is patched by a plural number of intranets, each having the minimum size of containing one VM.
  • the DHCP and ARP broadcasting protocol messages which are initiated by the OS in the VMs can be received and processed by the NVI software.
  • In response to DHCP and ARP messages from the VMs, the NVI generates IP/MAC associations in a global database.
  • the global database is accessible by NVI hypervisors hosting the VMs of the tenant.
  • the new large layer 2 patching method discussed does not involve any broadcasting message in the DHCP and ARP plug-and-play standard. From the perspective of the OS of the VMs, the two standard protocols continue to serve the needed interplay role for the layer 2 and layer 3 without change. However, the network configuration messages no longer need broadcastings as the addressing associations are handled by the NVI infrastructure (e.g., NVI hypervisors managing entries in a global database).
  • the logical layer 2 of the tenant can be implemented in trans-datacenter manner based on handling network broadcast within the virtualization infrastructure. For example, by limiting broadcasting to the range of the minimum intranet of one VM, the disclosed layer 2 patching is scalable to an arbitrary size. Communications between trans-datacenter VMs of the same tenant occur in logical layer 2 fashion.
  • the functions of the global database instructed NVI hypervisors permit the DHCP and ARP standards to remain constrained to their normal plug-and-play execution for the VM users.
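  • A hedged sketch (toy model with a dict standing in for the global database; names are assumptions) of the broadcast-free handling described above: an NVI hypervisor component intercepts DHCP and ARP messages at the vNIC and answers them from the shared IP/MAC association table, so no broadcast leaves the minimum one-VM intranet.

    # Hypothetical sketch of NVI handling of DHCP/ARP without broadcasting.

    GLOBAL_DB = {}   # MAC id -> IP id associations, shared by all NVI hypervisors

    class NviHypervisor:
        def __init__(self, ip_pool):
            self.ip_pool = list(ip_pool)

        def on_dhcp_request(self, mac_id):
            # Intercepted at the vNIC: allocate an IP id and record the association
            # in the global database instead of broadcasting on the physical network.
            ip_id = GLOBAL_DB.get(mac_id) or self.ip_pool.pop(0)
            GLOBAL_DB[mac_id] = ip_id
            return ip_id                       # unicast reply to the requesting VM

        def on_arp_request(self, wanted_ip):
            # Intercepted at the vNIC: resolve from the global database, no broadcast.
            for mac_id, ip_id in GLOBAL_DB.items():
                if ip_id == wanted_ip:
                    return mac_id
            return None

    hv = NviHypervisor(["10.0.0.11", "10.0.0.12"])
    print(hv.on_dhcp_request("aa:bb:cc:00:00:01"))   # the VM's OS still sees plain DHCP
    print(hv.on_arp_request("10.0.0.11"))            # resolved without any broadcast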
  • the combination of the SDN software and the network topology enables traffic engineering and/or enlarging the trans-datacenter bandwidth.
  • the underlying servers of the VMs of the tenant can become public just like the Internet.
  • the underlying servers e.g., NVI configured servers
  • the underlying servers can be configured as publically accessible resources similar to any Internet accessible resource, and at the same time the servers themselves are under route programming control.
  • the route programming control can be executed by SDN components executing on the underlying servers.
  • the SDN components can be managed by one or more communication controllers.
  • the underlying servers can be directly connected to the Internet in one of at least two respective NICs, denoted by NIC-external ("NIC-ext").
  • the servers are locally (e.g., within a building) connected by switches in the other of the NICs, denoted by NIC-internal ("NIC-int").
  • NIC-int NIC-internal
  • all of the Internet connected NICs can be used by any VM to provide redundant communication routes, either for in-cloud trans-datacenter traffic, or for off-cloud external communication traffic with the rest of the world.
  • the available redundancy greatly increases the utilization of the Internet, which is known to have been architected to contain high redundancy and to have been over-provisioned through many years of commercial deployment.
  • the Internet connected servers of the disclosed topology are the programmable forwarding devices, and can therefore be used to exploit the under-utilized Internet bandwidth potential.
  • Fig. 3 is a block diagram of an Internet-intranet interfacing topology 300.
  • VMs of different tenants (e.g., each shape can correspond to a different tenant network)
  • the VMs are provisioned and controlled via the virtualization infrastructure (e.g., 328 and 330), which is connected to the Internet over distributed Internet-intranet interfaces (e.g., 332-348 and 350-366).
  • communication controllers and/or SDN components can leverage the distributed Internet-intranet interfaces for fully dynamic and programmatic route control of traffic.
  • a conventional intranet network is connected via multiple Internet connections (at 374); however, interface 372 represents a chokepoint where traffic can still bottleneck. Even with SDN, the interface 372 cannot fully distribute traffic and cannot fully exploit available bandwidth.
  • the various intranet topologies discussed above can be implemented to provide for dynamic and distributed bandwidth exploitation.
  • the underlying hardware server for the VM (denoted Server-1 (e.g., 202 of Fig. 2)) is externally connected to the Internet on NIC-external-1 (e.g., 220), and is internally connected to many other servers in an intranet (e.g., a local intranet housed in a building) on NIC-internal-1 (e.g., 210) via switches (e.g., 218).
  • NIC-external-1 (e.g., 220)
  • switches (e.g., 218).
  • Server-i, i = 2, 3, ..., n (e.g., 204, 206, and 208).
  • Each Server-i has a NIC-external-i directly connected to the Internet.
  • typical intranet connections are over-provisioned; that is, with a copper switch, or an even faster optical-fiber switch, intranet connections in a datacenter have high bandwidth.
  • web requests for the VM web server can be distributed to the n low-bandwidth NIC-external-i's and redirected to Server-1 and on to the VM.
  • the web service provider only needs to rent low bandwidth Internet connections, which can be aggregated into a very high bandwidth. It is well-known that the price of an Internet connection rises faster than linearly as the desired bandwidth increases.
  • high bandwidth can be achieved at low cost making this a valid Internet traffic engineering technology.
  • traffic engineering embodiments can be implemented. For instance, upon detecting a NIC-external for a trans-datacenter connection is in congestion, real-time route programming to select another server's NIC-external can evade the congestion (e.g., detected by a communication controller and re-routed by SDN components).
  • congestion e.g., detected by a communication controller and re-routed by SDN components.
  • a very big file in one datacenter in need of being backed up (e.g., for disaster recovery purposes) to another datacenter can be divided into much smaller parts and transmitted via many low-cost Internet connections to the other end, and reassembled, to greatly increase the transfer efficiency.
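  • As an illustration of this traffic engineering idea (a toy sketch under assumed names, not the disclosure's protocol), the transfer can be chunked and spread round-robin over several low-bandwidth NIC-ext connections and reassembled at the far end:

    # Toy illustration of splitting a large transfer across several low-cost
    # Internet connections (NIC-exts) and reassembling it at the destination.

    def split_into_chunks(data: bytes, chunk_size: int):
        return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

    def schedule_round_robin(chunks, nic_exts):
        # Assign chunk i to connection i mod n; each entry keeps its sequence
        # number so the receiver can reassemble regardless of arrival order.
        plan = {nic: [] for nic in nic_exts}
        for seq, chunk in enumerate(chunks):
            plan[nic_exts[seq % len(nic_exts)]].append((seq, chunk))
        return plan

    def reassemble(plan):
        received = [item for per_nic in plan.values() for item in per_nic]
        return b"".join(chunk for _, chunk in sorted(received))

    big_file = b"x" * 1000
    plan = schedule_round_robin(split_into_chunks(big_file, 64),
                                ["nic-ext-1", "nic-ext-2", "nic-ext-3"])
    assert reassemble(plan) == big_file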
  • an NVI architecture achieves network virtualization and provides a decoupling between the logical network of VMs and the physical network of the servers.
  • the decoupling facilitates the implementation of a software-defined network ("SDN") in the cloud space.
  • SDN software-defined network
  • the functions of the SDN are extended to achieve programmable traffic control and to better utilize the potential of the underlying physical network. It is realized that SDN is not just using a software programming language to realize network function boxes such as switches, routers, firewalls, network slicing, etc., which are mostly provisioned in hardware boxes, as many understand at a superficial level.
  • A's packet: <Z-IP, A-IP, Payload>.
  • a network function box, e.g., a network gateway B.
  • B makes the following IP packet: <C-IP, B-IP, A's packet as the payload>.
  • C repeats: <D-IP, C-IP, A's packet as the payload>, ..., until Y (e.g., the gateway of Z) repeats: <Z-IP, Y-IP, A's packet as the payload>.
  • Y (e.g., the gateway of Z)
  • the route is an a-priori function of the packet which is received by a network function, and therefore is fixed, once sent out, and cannot be rerouted, e.g., upon traffic congestion, even though the Internet does have tremendous redundancy.
  • Fig. 15 is an example process flow 1500 for programming network communication.
  • Process 1500 begins at 1502 where network traffic is received or accepted.
  • the received message is evaluated to determine where the message is addressed. If the message is addressed internal to the intranet segment on which the message originated (1504 internal, e.g., between VMs on one intranet segment), the message is routed via NIC-ints of the respective servers. If the message is addressed external to the intranet segment (1504 external), then a route is programmed to traverse one or more NIC-exts of the servers within the intranet segment.
  • a communication controller manages the programming of the routes to be taken. The controller can be configured to evaluate available bandwidth on the one or more NIC-exts, determine congestion on one or more of the NIC-exts, and respond by programming a route accordingly.
  • the communication controller manages SDN components executing on the servers that make up the intranet segment to provide SDN programming of traffic.
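  • A minimal sketch of the decision in process 1500, under illustrative assumptions (function names, capacity figure, and load units are invented): internal traffic stays on the NIC-ints via the switch, while externally addressed traffic is programmed over whichever NIC-ext currently has the most spare bandwidth.

    # Hypothetical sketch of the routing decision of process flow 1500.

    def program_route(dst_is_internal, nic_ext_load_mbps, capacity_mbps=100):
        """Return a programmed route for one traffic flow.

        dst_is_internal   -- True if the destination is in the same intranet segment
        nic_ext_load_mbps -- dict mapping NIC-ext name to its current load in Mbps
        """
        if dst_is_internal:
            # 1504 internal: VM-to-VM traffic stays on the NIC-ints via the switch.
            return ("NIC-int", "switch", "NIC-int")
        # 1504 external: choose the external NIC with the most spare capacity,
        # letting the controller steer around congested uplinks.
        best_nic = min(nic_ext_load_mbps, key=nic_ext_load_mbps.get)
        if capacity_mbps - nic_ext_load_mbps[best_nic] <= 0:
            raise RuntimeError("all NIC-exts congested; retry or defer the flow")
        return ("NIC-int", "switch", best_nic, "Internet")

    print(program_route(True, {}))
    print(program_route(False, {"nic-ext-1": 90, "nic-ext-2": 35, "nic-ext-3": 60}))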
  • SDN, as implemented herein, enables such network traffic to be programmed en route, and therefore to utilize the unused potential of the Internet's redundancy.
  • the underlying physical network topology can be re-designed to add route redundancy. For example, let each server in the intranet act as a gateway, with one NIC directly wired to the Internet, and one NIC wired to other such servers in the intranet via a back-end switch. Once configured in this manner, VM-Internet communication routes can be SDN programmed.
  • Intranet lines have high bandwidth, easily at gigabits per second levels, like freeway traffic, while Internet bandwidth is typically low, easily orders of magnitude lower, and the rental fee for high bandwidth rises sharply as a convex function (like the x^2 or e^x functions), due to under-utilization and hence low return on the heavy investment in the infrastructure.
  • a convex function (like the x^2 or e^x functions)
  • This new intranet network wiring topology provides sufficient route redundancy between each VM and the Internet, and can employ SDN to program the Internet- VM traffic over the redundant routes.
  • many low-cost low-bandwidth Internet lines can be connected to many external facing NICs with intranet elements, and can be aggregated into a high bandwidth communication channel.
  • the servers of each intranet form distributed gateways interfacing the Internet.
  • the distributed gateways avoid traffic congestion, just like widened tollgates on a freeway, thus avoiding forming a traffic bottleneck, and/or avoiding the very high cost of renting high-bandwidth Internet services.
  • Fig. 2 is an example intranet network topology according to various embodiments. Under the topology illustrated, intranet to Internet traffic can be SDN distributed, permitting, for example, aggregation of many low bandwidth Internet communication channels, and further, permitting distributed network routing from the intranet to the Internet.
  • Fig. 3 is a diagram of an example novel intra-Internet interfacing topology, according to various embodiments that take advantage of the distributed Internet-intranet interfaces to provide programmatic traffic engineering.
  • the novel intranet topology can be implemented in conjunction with various virtualization infrastructures.
  • One example virtualization infrastructure includes an NVI infrastructure.
  • the intranet topology is configured to facilitate dynamic route programming for VMs through the underlying servers that make up the intranet.
  • Each server within such intranet segments can operate as a gateway for the VMs hosted on the intranet.
  • a minimum of two servers having the two NIC configuration (e.g., at least one NIC internal and at least one NIC external)
  • the VMs are provisioned and managed under an NVI infrastructure.
  • NVI infrastructure Various properties and benefits of the NVI infrastructure are discussed below with respect to examples and embodiments.
  • the functions, system elements, and operations discussed above, for example, with respect to intranet topology and/or patching can be implemented on or in conjunction with the systems, functions, and/or operations of the NVI systems below.
  • the NVI systems and/or functions provide distributed VM control on tenant networks, providing network isolation and/or distributed firewall services.
  • the intranet topology discussed above enables SDN route programming for trans-datacenter and VM-Internet routes, and scalable intranet patching.
  • the NVI infrastructure is configured to provide communication functions to a group of virtual machines (VMs), which in some examples, can be distributed across a plurality of dataclouds or cloud providers.
  • VMs virtual machines
  • the NVI implements a logical network between the VMs enabling intelligent virtualization and programmable configuration of the logical network.
  • the NVI can include software components (including, for example, hypervisors (i.e. VM managers)) and database management systems (DBMS) configured to manage network control functions.
  • hypervisors i.e. VM managers
  • DBMS database management systems
  • the NVI manages communication between a plurality of virtual machines by managing physical communication pathways between a plurality of physically associated network addresses which are mapped to respective globally unique logical identities of the respective plurality of virtual machines.
  • network control is implemented on vNICs of VMs within the logical network.
  • the NVI can direct communication on the logical network according to mappings between logical addresses (e.g., assigned at vNICs for VMs) of VMs and physically associated addresses assigned by respective clouds, with the mappings being stored by the DBMS.
  • the mappings can be updated, for example, as VMs change location.
  • a logical address can be remapped to a new physically associated address when a virtual machine changes physical location, with the new physically associated address being recorded in the DBMS to replace the physically associated address used before the VM changed physical location.
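  • A minimal sketch, assuming a simple key-value store in place of the DBMS and invented names, of the mapping maintained above: each VM's globally unique logical identity maps to the physically associated address assigned by the hosting cloud, and the entry is replaced when the VM migrates.

    # Hypothetical sketch of the logical-to-physical address mapping kept in the DBMS.

    class MappingStore:
        def __init__(self):
            self._map = {}   # globally unique logical id -> physically associated address

        def register(self, logical_id, physical_addr):
            self._map[logical_id] = physical_addr

        def remap_on_migration(self, logical_id, new_physical_addr):
            # The logical identity never changes; only the underlying physically
            # associated address is replaced when the VM changes physical location.
            previous = self._map.get(logical_id)
            self._map[logical_id] = new_physical_addr
            return previous

        def resolve(self, logical_id):
            return self._map[logical_id]

    store = MappingStore()
    store.register("uuid:vm-42", "dc1-host7:192.168.3.15")
    store.remap_on_migration("uuid:vm-42", "dc2-host2:10.20.0.8")   # VM moved datacenters
    print(store.resolve("uuid:vm-42"))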
  • the network control is fully logical enabling the network dataflow for the logical network to continue over the physical networking components (e.g., assigned by cloud providers) that are mapped to and underlie the logical network.
  • enabling the network control functions directly at vNICs of respective VMs provides for definition and/or management of arbitrarily scalable virtual or logical networks.
  • Such control functions can be: the action of "plugging"/"unplugging" logically defined unicast cables between vNICs of pairs of VMs to implement network isolation policy; transforming formats of network packets (e.g., between IPv6 and IPv4 packets); providing cryptographic services on application data in network packets to implement cryptographic protection on tenants' data; monitoring and/or managing traffic to implement advanced network QoS (e.g., balance load, divert traffic, etc.); providing intrusion detection and/or resolution to implement network security QoS; allocating expenses to tenants based on network utilization; among other options.
  • advanced network QoS e.g., balance load, divert traffic, etc.
  • such logical networks can target a variety of quality of service goals.
  • Some example goals include providing a cloud datacenter configured to operate in resource rental, multi-tenancy, and in some preferred embodiments, Trusted Multi-tenancy, and in further preferred embodiments, on-demand and self-serviceable manners.
  • resource rental refers to a tenant (e.g., an organization or a compute project) who rents a plural number of virtual machines (VMs) for its users (e.g., employees of the tenant) for computations the tenant wishes to execute.
  • VMs virtual machines
  • the users, applications, and/or processes of the tenant use the compute resources of a provider through the rental VMs, which can include operating systems, databases, web/mail services, applications, and other software resources installed on the VMs.
  • multi-tenancy refers to a cloud datacenter or cloud compute provider that is configured to serve a plural number of tenants.
  • the multi-tenancy model is conventional throughout compute providers, which typically allows the datacenter to operate with economy of scale.
  • multi-tenancy can be extended to trusted multi-tenancy, where VMs and associated network resources are isolated from accessing by the system operators of the cloud providers, and unless with explicitly instructed permission(s) from the tenants involved, any two VMs and associated network resources which are rented by different tenants respectively are configured to be isolated from one another. VMs and associated network resources which are rented by one tenant can be configured to communicate with one another according to any security policy set by the tenant.
  • on-demand and self-serviceability refers to the ability of a tenant to rent a dynamically changeable quantity/amount/volume of resources according to need, and in preferred embodiment, in a self-servicing manner (e.g., by editing a restaurant menu like webpage).
  • self-servicing can include instructing the datacenter using simple web-service-like interfaces for resource rental at a location outside the datacenter.
  • self-servicing resource rental can include a tenant renting resources from a plural number of cloud providers which have trans-datacenter physical and/or geographical distributions. Conventional approaches may fail to provide any one or more of: multi-tenancy, trusted multi-tenancy, or on-demand and self-serviceable resource rental.
  • LAN Local Area Network
  • IT security e.g., cloud security
  • isolation of LAN in cloud datacenters for tenants can be necessary.
  • LAN isolation turns out to be a very challenging task unresolved by conventional approaches.
  • the systems and methods provide logical de-coupling of a tenant network through globally uniquely identifiable identities assigned to VMs.
  • Virtualization infrastructure (VI) at each provider can be configured to manage communication over a logical virtual network created via the global identifiers for VMs rented by the tenant.
  • the logical virtual network can be configured to extend past cloud provider boundaries, and in some embodiments, allows a tenant to specify the VMs and associated logical virtual network (located at any provider) via whitelist definition.
  • Shown in Fig. 4 is an example embodiment of a network virtualization infrastructure (NVI) or NVI system 400.
  • NVI network virtualization infrastructure
  • system 400 can be implemented on and/or in conjunction with resources allocated by cloud resource providers.
  • system 400 can be hosted, at least in part, external to virtual machines and/or cloud resources rented from cloud service providers.
  • the system 400 can also serve as a front end for accessing pricing and rental information for cloud compute resources.
  • a tenant can access system 400 to allocate cloud resources from a variety of providers. Once the tenant has acquired specific resources, for example, in the form of virtual machines hosted at one or more cloud service providers, the tenant can identify those resources to define their network via the NVI system 400.
  • the logic and/or functions executed by system 400 can be executed on one or more NVI components (e.g., hypervisors (virtual machine managers)) within respective cloud service providers.
  • one or more NVI components can include proxy entities configured to operate in conjunction with hypervisors at respective cloud providers.
  • the proxy entities can be created as specialized virtual machines that facilitate the creation, definition and control function of a logical network (e.g., a tenant isolated network). Creation of the logical network can include, for example, assignment of globally unique logical addresses to VMs and mapping of the globally unique logical addresses to physically associated addresses of the resources executing the VMs.
  • the proxy entities can be configured to define logical communication channels (e.g., logically defined virtual unicast cables) between pairs of VMs based on the globally unique logical addresses. Communication between VMs can occur over the logical communication channels without regard to physically associated addressing which are mapped to the logical addresses/identities of the VMs.
  • the proxy entities can be configured to perform translations of hardware addressed communication into purely logical addressing and vice versa.
  • a proxy entity operates in conjunction with a respective hypervisor at a respective cloud provider to capture VM communication events, route VM communication between a vNIC of the VM and a software switch or bridge in the underlying hypervisor upon which the proxy entity is serving the VM.
  • a proxy entity is a specialized virtual machine at respective cloud providers or respective hypervisors configured for back end servicing.
  • a proxy entity manages internal or external communication according to communication policy defined on logical addresses of the tenants' isolated network (e.g., according to network edge policy).
  • the NVI system 400 can also include various other components
  • the NVI system 400 can be configured to map globally unique identities of respective virtual machines to the physically associated addresses of the respective resources.
  • the NVI system 400 can include an NVI engine 404 configured to assign globally unique identities of a set of virtual machines to resources allocated by hypervisors to a specific tenant. The set of virtual machines can then be configured to communicate with each other using the globally unique identities.
  • the NVI system and/or NVI engine is configured to provide network control functions over logically defined unicast channels between virtual machines within a tenant network. For example, the NVI system 400 can provide for network control at each VM in the logical network.
  • the NVI system 400 can be configured to provide network control at a vNIC of each VM, allowing direct control of network communication of the VMs in the logical network.
  • the NVI system 400 can be configured to define point-to-point connections, including for example, virtual cable connections between vNICs of the virtual machines of the logical network using their globally unique addresses.
  • Communication within the network can proceed over the virtual cable connections defined between a source VM and a destination VM.
  • the NVI system 400 and/or NVI engine 404 can be configured to open and close communication channels between a source and a destination (including, for example, internal and external network addresses).
  • the NVI system 400 and/or NVI engine 404 can be configured to establish virtual cables providing direct connections between virtual machines that can be connected and disconnected according to a communication policy defined on the system.
  • each tenant can define a communication policy according to their needs.
  • the communication policy can be defined on a connection by connection basis, both internally to the tenant network and by identifying external communication connections.
  • the tenant can specify, for an originating VM in the logical network, what destination VMs the originating VM is permitted to communicate with.
  • the tenant can define communication policy according to source and destination logical identities.
  • the NVI system 400 and/or NVI engine 404 can manage each VM of the logical network according to an infinite number of virtual cables defined at vNICs for the VMs.
  • virtual cables can be defined between pairs of VMs and their vNICs for every VM in the logical network.
  • the tenant can define communication policy for each cable, allowing or denying traffic according to programmatic if then else logic.
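  • The per-cable allow/deny control described above might be sketched as follows (hypothetical names; a whitelist of logical identity pairs stands in for the tenant's policy): the check runs at the vNIC of each VM, so the firewall is distributed rather than a chokepoint.

    # Hypothetical sketch of per-virtual-cable communication policy enforced at vNICs.

    class TenantPolicy:
        def __init__(self):
            self._allowed = set()   # pairs of (source logical id, destination logical id)

        def plug_cable(self, src_id, dst_id):
            # "Plugging" a logical unicast cable between a pair of vNICs.
            self._allowed.add((src_id, dst_id))

        def unplug_cable(self, src_id, dst_id):
            self._allowed.discard((src_id, dst_id))

        def filter_at_vnic(self, src_id, dst_id, packet):
            # Enforced locally at the source VM's vNIC: pass or drop the packet.
            if (src_id, dst_id) in self._allowed:
                return ("pass", packet)
            return ("drop", None)

    policy = TenantPolicy()
    policy.plug_cable("uuid:web-vm", "uuid:db-vm")
    print(policy.filter_at_vnic("uuid:web-vm", "uuid:db-vm", b"query"))    # pass
    print(policy.filter_at_vnic("uuid:web-vm", "uuid:mail-vm", b"data"))   # drop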
  • the NVI system and/or engine are configured to provide distributed firewall services.
  • distribution of connection control can eliminate the chokepoint limitations of conventional architectures, and in further embodiments, permit dynamic re-architecting of a tenant network topology (e.g., adding, eliminating, and/or moving cloud resources that underlie the logical network).
  • the NVI system 400 and/or engine 404 can be configured to allocate resources at various cloud compute providers.
  • system and/or engine can be executed by one or more hypervisors at the respective cloud providers.
  • the system and/or engine can be configured to request respective hypervisors create virtual machines and provide identifying information for the created virtual machines (e.g., to store mappings between logical addresses and physically associated address of the resources).
  • the functions of system 400 and/or engine 404 can be executed by a respective hypervisor within a cloud provider system.
  • the functions of system 400 and/or engine 404 can be executed by and/or include a specialized virtual machine or proxy entity configured to interact with a respective hypervisor.
  • the proxy entity can be configured to request resources and respective cloud provider identifying information (including physically associated addresses for resources assigned by hypervisors).
  • the system and/or engine can be configured to request, capture, and/or assign temporary addresses to any allocated resources.
  • the temporary addresses are "physically associated" addresses assigned to resources by respective cloud providers.
  • the temporary addresses are used in conventional networking technologies to provide communication between resources and to other, for example, Internet addresses.
  • the physically associated addresses are included in network packet metadata, either as a MAC address or an IP address or a context tag.
  • the NVI system 400 de-couples any physical association in its network topology by defining logical addresses for each VM in the logical network.
  • communication can occur over virtual cables that connect pairs of virtual machines using their respective logical addresses.
  • the system and/or engine 404 can be configured to manage creation/allocation of virtual machines and also manage communication between the VMs of the logical network at respective vNICs.
  • the system 400 and/or engine 404 can also be configured to identify communication events at the vNICs of the virtual machines when the virtual machines initiate or respond to a communication event.
  • Such direct control can provide advantages over conventional approaches.
  • the system and/or engine can include proxy entities at respective cloud providers.
  • the proxy entities can be configured to operate in conjunction with respective hypervisors to obtain hypervisor assigned addresses, identify communication events at vNICs, and enforce tenant communication policy.
  • a proxy entity can be created at each cloud provider involved in a tenant network, such that the proxy entity manages the virtualization/logical isolation of the tenant's network.
  • each proxy entity can be a back-end servicing VM configured to provide network control functions on the vNICs of front-end business VMs (between vNICs of business VM and hypervisor switch or hypervisor bridge), to avoid programming in the hypervisor directly.
  • the system 400 and/or engine 404 can also be configured to implement communication policies within the tenant network. For example, when a virtual machine begins a communication session with another virtual machine, the NVI system 400 and/or NVI engine 404 can identify the communication event and test the communication against tenant defined policy.
  • the NVI system 400 and/or NVI engine 404 component can be configured to reference physically associated addresses for VMs in the communication and lookup their associated globally unique addresses and/or connection certificates (e.g., stored in a DBMS). In some settings, encryption certificates can be employed to protect/validate network mappings.
  • a PKI certificate can be used to encode a VM's identity - Cert(UUID/IPv6) with a digital signature for its global identity (e.g., UUID/IPv6) and physically associated address (e.g., IP) - Sign(UUID/IPv6, IP).
  • the correctness of the mapping (UUID/IPv6, IP) can then be cryptographically verified by any entity using Cert(UUID/IPv6) and Sign(UUID/IPv6, IP).
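The following sketch shows one way such a signed (UUID/IPv6, IP) binding could be produced and checked, using the third-party Python "cryptography" package; the RSA key size, padding choice, and the byte encoding of the mapping are assumptions made only for illustration.

```python
# Illustrative sketch: signing and verifying a (UUID/IPv6, IP) mapping with RSA.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.exceptions import InvalidSignature

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()      # would be carried inside Cert(UUID/IPv6)

def sign_mapping(uuid_or_ipv6, ip):
    message = f"{uuid_or_ipv6}|{ip}".encode()   # assumed serialization of the mapping
    return private_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

def verify_mapping(uuid_or_ipv6, ip, signature):
    message = f"{uuid_or_ipv6}|{ip}".encode()
    try:
        public_key.verify(signature, message, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False

sig = sign_mapping("fd00::1", "10.0.0.5")
assert verify_mapping("fd00::1", "10.0.0.5", sig)      # mapping accepted
assert not verify_mapping("fd00::1", "10.0.0.6", sig)  # stale or forged mapping rejected
```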
  • the NVI system 400 and/or NVI engine 404 can verify each communication with a certificate lookup and handle each communication event according to a distributed communication policy defined on the logical connections.
  • the NVI system 400 provides a logically defined network 406 de-coupled from any underlying physical resources. Responsive to any network communication event 402 (including, for example, VM to VM, VM to external, and/or external to VM communication), the NVI system is configured to abstract the communication event into the logical architecture of the network 406. In one embodiment, the NVI system "plugs" or "unplugs" a virtual cable at respective vNICs of VMs to carry the communication between a source and a destination. The NVI system can control internal network and external network communication according to the logical addresses by "plugging" and/or "unplugging" virtual cables between the logical addresses at respective vNICs of VMs. As the logical addresses for any resources within a tenant network are globally unique, new resources can be readily added to the tenant network, and can be readily incorporated into the tenant's logical network topology and communication policies.
  • the NVI system 400 and/or NVI engine 404 can be configured to accept tenant identification of virtual resources to create a tenant network.
  • the tenant can specify VMs to include in their network, and in reaction to the tenant request, the NVI can provide physically associated addressing information, allocated by respective cloud providers for the resources executing the VMs, to map to the logical addresses of the tenant requested VMs and thereby define the tenant network.
  • the system can be configured to assign new globally unique identifiers to each resource.
  • the connection component 408 can also be configured to accept tenant defined communication policies for the new resources.
  • the tenant can define their network using a whitelist of included resources.
  • the tenant can access a user interface display provided by system 400 to input identifying information for the tenant resources.
  • the tenant can add, remove, and/or re-architect their network as desired. For example, the tenant can access the system 400 to dynamically add resources to their whitelist, remove resources, and/or create communication policies.
  • the NVI system 400 can also provide for encryption and decryption services to enable additional security within the tenant's network and/or communications.
  • the NVI system and/or NVI engine 404 can be configured to provide for encryption.
  • the NVI system 400 can also be configured to provision additional resources responsive to tenant requests.
  • the NVI system 400 can dynamically respond to requests for additional resources by creating global addresses for any new resources.
  • a tenant can define a list of resources to include in the tenant's network using system 400. For example, upon receipt of the tenant's resource request, the NVI can create resources for the tenant in the form of virtual machines and specify identity information for the virtual machines to execute as allocated by whatever cloud provider is used.
  • the system 400 can be configured to assign globally unique identifiers to each virtual machine identified by the NVI for the tenant and store associations between globally unique identifiers and resource addresses for use in communicating over the resulting NVI network.
  • the system can create encryption certificates for a tenant for each VM in the NVI logical network, which is rented by the tenant.
  • the NVI can specify encryption certificates for a tenant as part of providing identity information for virtual machines to use in the tenant's network. The NVI system can then provide for encryption and decryption services as discussed in greater detail herein.
  • At least some embodiments disclosed herein include apparatus and processes for creating and managing a globally distributed and intelligent NVI or NVI system.
  • the NVI is configured to provide a logical network implemented on cloud resources.
  • the logical network enables communication between VMs using logically defined unicast channels defined on logical addresses within the logical network.
  • Each logical address can be a globally unique identifier that is associated by the NVI with addresses assigned to the cloud resources (e.g., physical addresses or physically associated addresses) by respective cloud datacenters or providers.
  • the logical addresses remain unchanged even as physical network resources supporting the logical network change, for example, in response to migration of a VM of the logical network to a new location or a new cloud provider.
  • the NVI includes a database or other data storage element that records a logical address for each virtual machine of the logical network.
  • the database can also include a mapping between each logical address and a physically associated address for the resource(s) executing the VM.
  • a logical network ID (e.g., a UUID or IPv6 address) is assigned to a vNIC of a VM and mapped to a physical network address and/or context tag assigned by the cloud provider to the resources executing the VM.
  • the NVI can be associated with a database management system (DBMS) that stores and manages the associations between logical identities/addresses of VMs and underlying physical addresses of the resources.
  • the NVI is configured to update the mappings between the permanent logical addresses of the VMs and their physically associated addresses as the resources assigned to the logical network change.
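A minimal sketch of such a mapping store follows, using Python's built-in sqlite3 module; the table and column names are illustrative, and a production DBMS deployment would of course differ.

```python
# Sketch of the logical-to-physical mapping store: the logical ID of a vNIC is
# permanent, while the physically associated address is rewritten on migration.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE vm_mapping (
                  logical_id  TEXT PRIMARY KEY,   -- UUID or IPv6, never changes
                  physical_ip TEXT NOT NULL       -- assigned by the current hypervisor
              )""")

def register_vm(logical_id, physical_ip):
    db.execute("INSERT INTO vm_mapping VALUES (?, ?)", (logical_id, physical_ip))

def migrate_vm(logical_id, new_physical_ip):
    # only the physically associated address changes; the logical identity is stable
    db.execute("UPDATE vm_mapping SET physical_ip = ? WHERE logical_id = ?",
               (new_physical_ip, logical_id))

register_vm("uuid-vm1", "192.168.1.10")
migrate_vm("uuid-vm1", "10.20.0.7")      # VM moved to another datacenter
print(db.execute("SELECT * FROM vm_mapping").fetchall())
```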
  • Further embodiments include apparatus and processes for provisioning and isolating network resources in cloud environments.
  • the network resources can be rented from one or more providers hosting respective cloud datacenters.
  • the isolated network can be configured to provide various quality of service ("QoS") guarantees and/or levels of service.
  • QoS features can be performed according to software defined network principles.
  • the isolated network can be purely logical, relying on no information of the physical locations of the underlying hardware network devices.
  • implementation of purely logical network isolation can enable trans-datacenter implementations and facilitate distributed firewall policies.
  • the logical network is configured to pool underlying hardware network devices (e.g., those abstracted by the logical network topology) for network control into a network resource pool.
  • Some properties provided by the logical network include, for example: a tenant only sees and on-demand rents resources for its business logic; the tenant should never care where the underlying hardware resource pool is located; and/or how the underlying hardware operates.
  • the system provides a globally distributed and intelligent network virtualization infrastructure ("NVI").
  • the hardware basis of the NVI can consist of globally distributed and connected physical computer servers which can communicate with one another using any conventional computer networking technology.
  • the software basis of the NVI consists of hypervisors (i.e., virtual machine managers) and database management systems (DBMS) which can execute on the hardware basis of the NVI.
  • the NVI can include the following properties: first, any two hypervisors of a cloud provider or of different cloud providers in the NVI can be configured to communicate with one another from their respective physical locations. If necessary, the system can use dedicated cable connection technologies or well-known virtual private network (VPN) technology to connect any two or more hypervisors to form a globally connected NVI. Second, the system and/or virtualization infrastructure knows of any communication event which is initiated by a virtual machine (VM) more directly and earlier than a switch does when the latter sees a network packet.
  • the latter event (detection at a switch) is only observed as a result of the NVI sending the packet from a vNIC of the VM to the switch.
  • the prior event (e.g., detection at initiation) is a property of the NVI managing the VM's operation, for example at a vNIC of the VM, which can include identifying communication by the NVI at initiation of a communication event (e.g., prior to transmission, at receipt, etc.).
  • the NVI can control and manage communications for globally distributed VMs via its intelligently connected network of globally distributed hypervisors and DBMS.
  • these properties of the NVI enable the NVI to construct a purely logical network for globally distributed VMs.
  • control functions for the logical network of globally distributed VMs, which define the communication semantics of the logical network (i.e., govern how VMs in the logical network communicate), are implemented in, and executed by, software components which work with the hypervisors and DBMS of the NVI to cause the functions to take effect at vNICs of VMs; the network dataflow for the logical network of globally distributed VMs passes through the physical networking components which underlie the logical network and connect the globally distributed hypervisors of the NVI. It is realized that the separation of the network control functions in software (e.g., operating at vNICs of VMs) from the network dataflow through the physical networking components allows definition of the logical network without physical network attributes. In some implementations, the logical network definition can be completely de-coupled from the underlying physical network.
  • the separation of network control functions on vNICs of VMs from network dataflow through the underlying physical network of the NVI results in communication semantics of the logical network of globally distributed VMs that can be completely software defined, or in other words, results in a logical network of globally distributed VMs that, according to some embodiments, can be a software defined network (SDN): communication semantics can be provisioned automatically, changed quickly and dynamically, distributed across datacenters, and given a practically unlimited size and scalability for the logical network.
  • using software network control functions that take effect directly on vNICs enables construction of a logical network of VMs of global distribution and unlimited size and scalability. It is realized that network control methods/functions, whether in software or hardware in conventional systems (including, e.g., OpenFlow), take effect in switches, routers and/or other network devices. Thus, it is further realized that construction of a large scale logical network in conventional approaches at best requires step-by-step upgrading of switches, routers and/or other network devices, which is impractical for constructing a globally distributed, trans-datacenter, or unlimited scalability network.
  • control functions that take effect directly on vNICs of VMs in some embodiments include any one or more of: (i) plugging/unplugging logically defined unicast cables to implement network isolation policy, (ii) transforming packets between IPv6 and IPv4 versions, (iii) encrypting/decrypting or applying IPsec based protection to packets, (iv) monitoring and/or diverting traffic, (v) detecting intrusion and/or DDoS attacks, (vi) accounting fees for traffic volume usage, among other options.
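The following sketch illustrates how such control functions might be chained as hooks applied directly at a vNIC before a packet is handed to the underlying dataflow; the class and hook names are hypothetical, and only two of the listed functions (isolation and traffic accounting) are shown.

```python
# Illustrative dispatcher for control functions applied directly at a vNIC.

def unicast_isolation(packet, ctx):    # (i) plug/unplug logically defined cables
    return packet if ctx["policy"].is_allowed(packet["src"], packet["dst"]) else None

def account_traffic(packet, ctx):      # (vi) account fees for traffic volume usage
    ctx["bytes"] = ctx.get("bytes", 0) + len(packet["payload"])
    return packet

class VNic:
    def __init__(self, policy, hooks):
        self.ctx = {"policy": policy, "bytes": 0}
        self.hooks = hooks             # ordered list of control functions

    def transmit(self, packet):
        for hook in self.hooks:
            packet = hook(packet, self.ctx)
            if packet is None:         # a hook dropped the packet (e.g., unplugged cable)
                return False
        return True                    # hand the packet to the underlying dataflow

class AllowAll:                        # stand-in policy object for the demo
    def is_allowed(self, src, dst):
        return True

nic = VNic(AllowAll(), [unicast_isolation, account_traffic])
nic.transmit({"src": "uuid-vm1", "dst": "uuid-vm2", "payload": b"hello"})
```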
  • the system can distribute firewall packet filtering at the locality of each VM (e.g., at the vNIC). Any pair of VMs, or a VM and an external entity, can communicate in "out-in" fashion, provided isolation and firewall policies permit, whether these communication entities are in the same intranet or in trans-global locations separated by the Internet.
  • the region outside the distributed points of VM packet filtering can be configured outside the firewalls of any tenant, exactly like the Internet.
  • the OSI layers 1, 2, and 3 of this "Internet within intranet" region are fully under the centralized control and distributed SDN programmability on each server.
  • this topology can be used in conjunction with a variety of virtualization systems (in one example under the control node of Openstack), to achieve an Internet within intranet region that is under communication control and SDN programmability.
  • With the Internet within intranet topology, the distributed servers become SDN programmable forwarding devices that participate in traffic route dynamicity and bandwidth distribution, and in particular can act as a distributed gateway to enlarge the bandwidth for VM-Internet traffic.
  • the new SDN route dynamicity programmability in intranets with the Internet within intranet topology has thus successfully eliminated any chokepoint from the Internet-intranet interface, and in further embodiments, optimally widened routes for intranet patching and Internet traffic: by including Internet route redundancy into local intranets, the full potential of SDN can be achieved.
  • Every physical IT business processing box (below, IT box) includes a physical network interface card (NIC) which can be plugged to establish a connection between two ends (a wireless NIC has the same property of "being plugged as a cable"), and the other end of the cable is a network control device.
  • Any two IT boxes may or may not communicate with one another provided they are under the control of some network control devices in-between them.
  • the means of controlling communications between IT boxes occurs by the control devices inspecting and processing some metadata— addresses and possibly more refined contexts called tags— in the head part of network packets: permitting some packets to pass through, or dropping others, according to the properties of the metadata in the packets against some pre-specified communications policy.
  • This control through physically associated addressing (e.g., MAC addresses, IP addresses and/or context tags) has a number of drawbacks.
  • Openstack operation includes sending network packets of a VM to a centralized network device (of course, in Openstack the network device may be a software module in a hypervisor, called hypervisor switch or hypervisor bridge) via a network cable (which may also be software implemented in a hypervisor), for passing through or dropping packets at centralized control points.
  • This conventional network control technology of processing packets metadata at centralized control points has various limitations in spite of virtualization.
  • the centralized packet processing method, which processes network control in the meta-data or head part, and forwards dataflow in the main-body part, of a network packet at a centralized point (called a chokepoint), cannot make efficient use of the distributed computing model of the VI; centralized packet processing points can form a performance bottleneck at large scale.
  • the packet metadata inspection method examines a fraction of metadata (an address or a context tag) in the head of a whole network packet, and then may drop the whole packet (resulting in wasted network traffic).
  • the metadata (addresses and tags) used in the head of a network packet are still physically associated with (i.e., related to) the physical location of the hardware of the respective virtualized resources.
  • Physical associations are not an issue for on-site and peak-volume provisioned physical resources (the IT as an asset model), where changes in topology are infrequent.
  • the user or tenant may require an on-demand elastic way to rent IT resources, and may also rent from geographically different and scattered locations of distributed cloud datacenters (e.g., to increase availability and/or reliability). Cloud providers may also require the ability to move assigned resources to maximize utilization and/or minimize maintenance. These requirements in cloud computing translate to needs for resource provisioning with the following properties: automatic, fast and dynamically changing, and trans-datacenter scalable; and, for an IT resource that is a network per se, a tenant's network should support a tenant-definable arbitrary topology, which can also have a trans-datacenter distribution.
  • the network inside a cloud datacenter upon which various QoS can be performed in SDN should be a purely logical one.
  • the properties provided by various embodiments can include: logical addressing containing no information on the physical locations of the underlying physical network devices; and enabling pooling of hardware devices for network control into a network resource pool.
  • Various implementations can also take advantage of conventional approaches to allow hypervisors of respective cloud providers to connect with each other (e.g., VPN connections) underneath the logical topology.
  • various embodiments can leverage management of VMs by the hypervisors and/or proxy entities to capture and process communication events. Such control allows communication events to be captured more directly and earlier than, for example, switch based control (which must first receive the communication prior to action).
  • various embodiments can control and manage communications for globally distributed VMs without need of inspecting and processing any metadata in network packets.
  • Conventional firewall implementations focus on a "chokepoint" model: an organization first wires its owned, physically close-by IT boxes to some hardware network devices to form the organization's internal local area network (LAN); the organization then designates a "chokepoint” at a unique point where the LAN and wide area network (WAN) meet, and deploys the organization's internal and external communications policy only at that point to form the organization's network edge.
  • Conventional firewall technologies can use network packet metadata such as IP / MAC addresses to define LAN and configure firewall. Due to the seldom changing nature of network configurations, it suffices for the organization to hire specialized network personnel to configure the network and firewall, and suffices for them to use command-line-interface (CLI) configuration methods.
  • firewalls are based on the VLAN technology.
  • the physical hardware switches are "virtualized" into software counterparts in hypervisors (software modules connecting vNICs of VMs to the hardware NIC on the server), which are either called "hypervisor learning bridges" or "virtual switches" ("hypervisor switch" is a more meaningful name). They are referred to below interchangeably as a hypervisor switch.
  • Like a hardware switch, a hypervisor switch also participates in LAN construction by learning and processing network packet metadata such as addresses. Also like the hardware counterpart, a hypervisor switch can refine a LAN by adding more contexts to the packet metadata. The additional contexts which can be added to the packet metadata part by a switch (virtual or real) are called tags. The hypervisor switch can add different tags to the network packets of IT boxes which are rented by different tenants. These different tenants' tags divide a LAN into isolated virtual LANs, isolating tenants' networks in a multi-tenancy datacenter.
  • VLAN technology is for network cable virtualization: packets sharing some part of a network cable are labeled differently and thereby can be sent to different destinations, just like passengers in an airport sharing some common corridors before boarding at different gates, according to the labels (tags) on their boarding passes.
  • Various embodiments provide a network virtualization infrastructure leveraging direct communication control over VMs to establish a fully logical network architecture.
  • Direct control over each VM, for example through a hypervisor and/or proxy entity, is completely distributed and located where the VM with its vNICs is currently executing.
  • An advantage of the direct network control function on a vNIC is that the communication control can avoid complex processing of network packet metadata, which is tightly coupled with the physical locations of the network control devices, and instead use purely logical addresses of vNICs.
  • the resultant logical network eliminates any location specific attributes of the underlying physical network. SDN work over the NVI can be implemented simply and as straightforward high-level language programming.
  • each VM can be viewed by the NVI to have an infinite number of vNIC cards, where each can be plugged as a logically defined unicast cable for exclusive use with a single given communications partner.
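A minimal sketch of this per-partner cable view follows; the LogicalVM class is an illustrative stand-in for however an NVI implementation might track which unicast cables of a VM are currently plugged.

```python
# Sketch of the "one logical unicast cable per communication partner" view of a VM.

class LogicalVM:
    def __init__(self, global_id):
        self.global_id = global_id
        self.cables = {}                   # partner global ID -> plugged?

    def plug(self, partner_id):
        self.cables[partner_id] = True     # dedicate a unicast cable to this partner

    def unplug(self, partner_id):
        self.cables[partner_id] = False

    def can_send_to(self, partner_id):
        return self.cables.get(partner_id, False)

vm1 = LogicalVM("uuid-vm1")
vm1.plug("uuid-vm2")
assert vm1.can_send_to("uuid-vm2") and not vm1.can_send_to("uuid-vm3")
```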
  • Because a hypervisor in the NVI is responsible for passing network packets from/to the vNIC of a VM right at the spot of the VM, the NVI can be configured for direct quality of control, either by controlling communication directly with the hypervisor or by using a proxy entity coupled with the hypervisor.
  • a switch, even a software coded hypervisor switch, can only control a VM's communications via packet metadata received from a multicast network cable.
  • Fig. 5 illustrates an example implementation of network virtualization infrastructure (NVI) technology according to one embodiment.
  • the NVI system 500 and corresponding virtualization infrastructure (VI) which can be globally distributed over a physical network can be configured to plug/unplug a logically defined unicast network cable 502 for any given two globally distributed VMs (e.g., 501 and 503 hosted, for example, at different cloud datacenters 504 and 506).
  • the respective VMs e.g. 501 and 503 are managed throughout their lifecycle by respective virtual machine managers (VMMs) 508 and 510.
  • From the moment of a VM's (e.g., 501 and 503) inception and operation, the VM obtains a temporary IP address assigned by a respective hypervisor (e.g., VMM 508 and 510).
  • the temporary IP address can be stored and maintained in respective databases in the NVI (e.g., 512 and 514).
  • the temporary IP addresses can change; however, as the addresses change or resources are added and/or removed, any temporary IP addresses are maintained in the respective databases.
  • the databases (e.g., 512 and 514) are also configured to store globally identifiable identities in association with each virtual machines' assigned address.
  • the NVI can be configured to plug/unplug logically defined unicast cable between any two given network entities using unchanging unique IDs (so long as one of communicating entities is a VM within the NVI).
  • the NVI constructs the logical network by defining unicast cables to plug/unplug avoiding processing of packet metadata.
  • centrally positioned switches (software or hardware) can still be employed for connecting the underlying physical network, but the network control for VMs can be globally distributed, given that the VM ID is globally identifiable, and operates without location specific packet metadata.
  • the respective hypervisors and associated DBMS in the NVI have fixed locations, i.e., they typically do not move and/or change their physical locations.
  • globally distributed hypervisors and DBMS can use the conventional network technologies to establish connections underlying the logical network.
  • Such conventional network technologies for constructing the underlying architecture used by the NVI can be hardware based, for which command-line-interface (CLI) based configuration methods are sufficient and very suitable.
  • Universally Unique Identities (UUIDs) or IPv6 addresses can be assigned to provide globally unique addresses. Once assigned, the relationship between the UUID and the physically associated address for any virtual machine can be stored for later access (e.g., in response to a communication event). In other embodiments, other globally identifiable unique and unchanging identifiers can be used in place of UUIDs.
  • the UUID of a VM will not change throughout the VM's complete lifecycle.
  • each virtual cable between two VMs is then defined on the respective global identifiers.
  • the resulting logical network constructed by plugged unicast cables over the NVI is also completely defined by the UUIDs of the plugged VMs.
  • the NVI is configured to plug/unplug the unicast cables in real-time according to a given set of network control policy in the DBMS.
  • a tenant 516 can securely access (e.g., via SSL 518) the control hub of the logical network to define a firewall policy for each communication cable in the logical network.
  • any logical network defined on the never-changing UUIDs of the VMs can have network QoS (including, for example, scalability) addressed by programming purely in software.
  • such logical networks are easy to change, both in topology and in scale, by SDN methods, even across datacenters.
  • the tenant can implement a desired firewall using, for example, SDN programming.
  • the tenant can construct a firewall with a trans-datacenter distribution.
  • Shown in Fig. 6 is an example of a distributed firewall 600.
  • Virtual resources of the tenant A 602, 604, and 606 span a number of data centers (e.g., 608, 610, and 612) connected over a communication network (e.g., the Internet 620).
  • Each datacenter provides virtual resources to other tenants (e.g., at 614, 616, and 618), which are isolated from the tenant A's network.
  • the tenant A is able to define a communication policy that enables communication on a cable by cable basis. As communication events occur, the communication policy is checked to ensure that each communication event is permitted. For example, a cable can be plugged in real-time in response to VM 602 attempting to communicate with VM 604. For example, the communication policy defined by the tenant A can permit all communication between VM 602 and VM 604. Thus, a communication initiated at 602 with destination 604 passes the firewall at 622. Upon receipt, the communication policy can be checked again to ensure that a given communication is permitted, in essence passing the firewall at 624. VM 606 can likewise be protected from both internal VM communication and externally involved communication, shown for illustrative purposes at 626.
  • Fig. 7 illustrates an example process 700 for defining and/or maintaining a tenant network.
  • the process 700 can be executed by an NVI system to enable a tenant to acquire resources and define their own network across rented cloud resources.
  • the process 700 begins at 702 with a tenant requesting resources.
  • various processes or entities can also request resources to begin process 700 at 702.
  • a hypervisor or VMM having available resources can be selected.
  • hypervisors can be selected based on pricing criteria, availability, etc.
  • the hypervisor creates a VM assigned to the requestor with a globally uniquely identifiable ID (e.g., a UUID or IPv6 address).
  • the global ID can be added to a database for the tenant network.
  • Each global id is associated with a temporary physical address (e.g., an IP address available from the NVI) assigned to the VM by its hypervisor.
  • the global id and the temporary physical address for the VM are associated and stored at 706.
  • a hypervisor creates in the tenant's entry in the NVI DB a new record mapping the UUID/IPv6 of the newly created VM to the current network address of the VM (IP below denotes the current physical network address which is mapped to the UUID/IPv6).
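As a rough illustration of this step of process 700, the sketch below creates a fresh UUID for a new VM and records the (UUID, temporary IP) pair in a stand-in tenant entry; the dictionary store and function name are assumptions made only for this example.

```python
# Sketch of steps 704-706: assign a global ID to a new VM and record it with
# the temporary IP assigned by the creating hypervisor.
import uuid

tenant_db = {}                               # tenant's entry in the NVI DB: global ID -> temporary IP

def create_vm_record(temporary_ip):
    global_id = str(uuid.uuid4())            # permanent for the VM's whole lifecycle
    tenant_db[global_id] = temporary_ip      # mapping updated later if the VM migrates
    return global_id

vm_id = create_vm_record("192.168.10.21")    # temporary IP assigned by the hypervisor
print(vm_id, tenant_db[vm_id])
```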
  • the tenant and/or resource requestor can also implement cryptographic services.
  • the tenant may wish to provide integrity protection on VM IDs to provide additional protection.
  • if crypto protection is enabled (708 YES), optional cryptographic functions include applying public-key cryptography to create a PKI certificate Cert(UUID/IPv6) and a digital signature Sign(UUID/IPv6, IP) for each tenant VM, such that the correctness of the mapping (UUID/IPv6, IP) can be cryptographically verified by any entity using Cert(UUID/IPv6) and Sign(UUID/IPv6, IP).
  • a cryptographic certificate for the VM ID and signature for the mapping between the ID and the VM's current physical location in IP address are created at 710 and stored, for example, in the tenant database at 712.
  • Process 700 can continue at 714.
  • Responsive to re-allocation of VM resources (including, for example, movement of VM resources), a respective hypervisor, for example a destination hypervisor ("DH"), takes over the maintenance job for the tenant's entry in the NVI DB for the moved VM.
  • the moved VM is assigned a new address consistent with the destination hypervisor's network.
  • a new mapping between the VM's global ID and the new hypervisor address is created (let IP' denote the new network address for the VM over DH).
  • the DH updates the signature Sign(UUID/IPv6, IP') in the UUID/IPv6 entry to replace the prior and now invalid signature Sign(UUID/IPv6, IP).
  • VMs in the tenant network can be managed at 716, by the DH associating a new physical address with the global ID assigned to the VM.
  • the new association is stored in a tenant's entry in the NVI DB, defining the tenant network.
  • a tenant may already have allocated resources through cloud datacenter providers.
  • the tenant may access an NVI system to make known identifying information for those previously allocated resources.
  • the NVI can then assign global IDs of VMs to the physically associated addresses of the resources. As discussed above, the identities and mappings can be cryptographically protected to provide additional security.
  • Shown in Fig. 8 is an example PKI certificate that can be employed in various embodiments.
  • known security methodologies can be implemented to protect the cryptographic credential of a VM (the private key used for signing Sign(UUID/IPv6, IP) and to migrate credentials between hypervisors within a tenant network (e.g., at 714 of process 700).
  • known "Trusted Computing Group" (TCG) technology is implemented to protect and manage cryptographic credentials.
  • TPM module can be configured to protect and manage credentials within the NVI system and/or tenant network.
  • known protection methodologies can include hardware based implementation, and hence can prevent very strong attacks to the NVI, and for example, can protect against attacks launched by a datacenter system administrator.
  • TCG technology also supports credential migration (e.g., at 714).
  • the tenant can establish a communication policy within their network.
  • the tenant can define algorithms for plugging/unplugging unicast cables defined between VMs in the tenant networks, and unicast cables connecting external address to internal VMs for the tenant network.
  • the algorithms can be referred to as communication protocols.
  • the tenant can define such communication protocols for both senders and recipients of communications.
  • Shown in Fig. 9 is an example process flow 900 for execution of a tenant defined communication policy.
  • the process 900 illustrates an example flow for a sender defined protocol (i.e., initiated by a VM in the tenant network).
  • in the following discussion of processes 900 and 1000, SIP and DIP denote the physically associated source and destination addresses of a packet, SRC and DST denote the global IDs of the source and destination VMs, and SH and DH denote the sender's hypervisor and the destination hypervisor, respectively.
  • control components in the NVI system can include the respective hypervisors of respective cloud providers where the hypervisors are specially configured to perform at least some of the functions for generating, maintaining, and/or managing communication in an NVI network.
  • each hypervisor can be coupled with one or more proxy entities configured to work with respective hypervisors to provide the functions for generating, maintaining, and/or managing communication in the tenant network.
  • the processes for executing communication policies (e.g., 900 and 1000) are discussed in some examples with reference to hypervisors performing operations, however, one should appreciate that the operations discussed with respect to the hypervisors can be performed by a control component, the hypervisors, and/or respective hypervisors and respective proxy entities.
  • the process 900 begins at 902 with SH intercepting a network packet generated by VM1, wherein the network packet includes physically associated addressing (to DIP).
  • the hypervisor SH and/or the hypervisor in conjunction with a proxy entity can be configured to capture communication events at 902.
  • the communication event includes a communication initiated at VM1 and addressed to VM2.
  • the logical and/or physically associated addresses for each resource within the tenant's network can be retrieved, for example, by SH.
  • a tenant database entry defines the tenant's network based on globally unique identifiers for each tenant resource (e.g., VMs) and their respective physically associated addresses (e.g., addresses assigned by respective cloud providers to each VM).
  • the tenant database entry also includes certificates and signatures for confirming mappings between global ID and physical addresses for each VM.
  • the tenant database can be accessed to look up the logical addressing for VM2 based on the physically associated address (e.g., DIP) in the communication event. Additionally, the validity of the mapping can also be confirmed at 906 using Cert(DST), Sign(DST, DIP), for example, as stored in the tenant database. If the mapping is not found and/or the mapping is not validated against the digital certificate, the communication event is terminated (e.g., the virtual communication cable VM1 is attempting to use is unplugged by the SH). Once a mapping is found and/or validated at 906, a system communication policy is checked at 908. In some embodiments, the communication policy can be defined by the tenant as part of creation of their network. In some implementations, the NVI system can provide default communication policies. Additionally, tenants can update and/or modify existing communication policies as desired. Communication policies may be stored in the tenant's entry in the NVI database or may be referenced from other data locations within the tenant network.
  • Each communication policy can be defined based on the global IDs assigned to communication partners. If, for example, the communication policy specifies (SRC, DST: unplug), the communication policy prohibits communication between SRC and DST (910 NO). At 912, the communication event is terminated. If, for example, the communication policy permits communication between SRC and DST (SRC, DST: plug), SH can plug the unicast virtual cable between SRC and DST, permitting communication at 914.
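The sender-side checks of process 900 (steps 904 through 914) can be summarized in a short sketch like the following; lookup_dst, verify_mapping, and policy are illustrative stand-ins for the tenant database lookup, the certificate check, and the tenant-defined plug/unplug policy described above.

```python
# Condensed sketch of the sender-side (SH) checks of process 900.

def handle_outbound(packet, lookup_dst, verify_mapping, policy):
    dst = lookup_dst(packet["dip"])                     # 904: DIP -> DST global ID
    if dst is None or not verify_mapping(dst, packet["dip"]):
        return "terminate"                              # 906 NO: unknown or invalid mapping
    if not policy.is_allowed(packet["src"], dst):
        return "terminate"                              # 910 NO / 912: cable stays unplugged
    return "plug"                                       # 914: plug the unicast cable and forward

class AllowAll:                                         # stand-in tenant policy for the demo
    def is_allowed(self, src, dst):
        return True

result = handle_outbound({"src": "uuid-vm1", "dip": "10.0.0.5"},
                         lookup_dst=lambda ip: "uuid-vm2",
                         verify_mapping=lambda dst, ip: True,
                         policy=AllowAll())
assert result == "plug"
```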
  • the process 900 can also include additional but optional cryptographic steps. For example, once SH plugs the cable between SRC and DST, SH can initiate a cryptographic protocol (e.g., IPsec) with DH to provide cryptographic protection of application layer data in the network packets.
  • process 900 can be executed on all types of communication for the tenant network.
  • communication events can include VM to external address communication.
  • DST is a conventional network identity rather than a global ID assigned to the logical network (e.g., an IP address).
  • the communication policy defined for such communication can be defined based on a network edge policy for VM1.
  • the tenant can define a network edge policy for the entire network implemented through execution of, for example, process 900.
  • the tenant can define network edge policies for each VM in the tenant network.
  • Fig. 10 illustrates another example execution of a communication policy within a tenant network.
  • a communication event is captured.
  • the communication event is the receipt of a message of a communication from VM1.
  • the communication event can be captured by a control component in the NVI.
  • the communication event is captured by DH.
  • the logical addressing information for the communication can be retrieved.
  • the tenant's entry in the NVI database can be used to perform a lookup for a logical address for the source VM based on SIP within a communication packet of the communication event at 1004.
  • validity of the communication can be determined based on whether the mapping between the source VM and destination VM exist in the tenant's entry in the NVI DB, for example, as accessible by DH.
  • validity at 1006 can also be determined using certificates for logical mappings.
  • DH can retrieve a digital certificate and signature for VM1 (e.g., Cert(SRC), Sign(SRC,SIP)).
  • the certificate and signature can be used to verify the communication at 1006. If the mapping does not exist in the tenant database or the certificate/signature is not valid 1006 NO, then the communication event is terminated at 1008.
  • DH can operate according to any defined communication policy at 1010. If the communication policy prohibits communication between SRC and DST (e.g., the tenant database can include a policy record "SRC, DST: unplug") 1012 NO, then the communication event is terminated at 1008. If the communication is allowed 1012 YES (e.g., the tenant database can include a record "SRC, DST: plug"), then DH permits communication between VM1 and VM2 at 1014. In some examples, once DH determines a communication event is valid and allowed, DH can be configured to use a virtual cable between the source and destination VMs to deliver the communication.
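A mirror-image sketch of the receive-side checks of process 1000 follows, showing that the same verification and policy logic is applied at the destination hypervisor so the firewall is enforced at both ends of the logical cable; the helper names are again illustrative stand-ins rather than APIs of the disclosure.

```python
# Condensed sketch of the receive-side (DH) checks of process 1000.

def handle_inbound(packet, lookup_src, verify_mapping, policy, dst_id):
    src = lookup_src(packet["sip"])                     # 1004: SIP -> SRC global ID
    if src is None or not verify_mapping(src, packet["sip"]):
        return "terminate"                              # 1006 NO: mapping missing or certificate invalid
    if not policy.is_allowed(src, dst_id):
        return "terminate"                              # 1012 NO: policy says unplug
    return "deliver"                                    # 1014: use the cable, deliver to the VM
```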
  • DH can execute cryptographic protocols (e.g., IPsec) to create and/or respond to communications of SH to provide cryptographic protection of application layer data in the network packets.
  • process 1000 can be executed on all types of communication for the tenant network.
  • communication events can include external to VM address communication.
  • SRC is a conventional network identity rather than a global ID assigned to the logical network (e.g., an IP address).
  • the communication policy defined for such communication can be defined based on a network edge policy for the receiving VM.
  • the tenant can define a network edge policy for the entire network implemented through execution of, for example, process 1000.
  • the tenant can define network edge policies for each VM in the tenant network.
  • the tenant can define communication protocols for both senders and recipients, and firewall rules can be executed at each end of a communication over the logical tenant network.
  • Shown in Fig. 11 is a screen shot of an example user interface 1100.
  • the user interface (“UI") 1100 is configured to accept tenant definition of network topology.
  • the user interface is configured to enable a tenant to add virtual resources (e.g., VMs) to security groups (e.g., at 1110 and 1130).
  • the UI 1100 can be configured to allow the tenant to name such security groups.
  • Responsive to adding a VM to a security group, the system creates and plugs virtual cables between the members of the security group. For example, VMs windows1 (1112), mailserver (1114), webserver (1116), and windows3 (1118) are members of the HR-Group.
  • Each member has a unicast cable defining a connection between each other member of the group.
  • there is a respective connection for windows1 as a source to mailserver, webserver, and windows3 defined within HR-Group 1110.
  • virtual cables exist for R&D-Group 1130.
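The effect of adding VMs to a security group can be pictured as plugging a unicast cable for every ordered pair of members, as in the short sketch below; the member names mirror the Fig. 11 example and the cable table is an illustrative representation only.

```python
# Sketch: a security group implies a plugged unicast cable per ordered pair of members.
from itertools import permutations

hr_group = ["windows1", "mailserver", "webserver", "windows3"]

cables = {(src, dst): "plug" for src, dst in permutations(hr_group, 2)}
# e.g., windows1 gets a dedicated cable to each of mailserver, webserver, and windows3
print(sorted(dst for src, dst in cables if src == "windows1"))
```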
  • User interface 1100 can also be configured to provide other management functions.
  • a tenant can access UI 1100 to define communication policies, including network edge policies at 1140, manage security groups by selecting 1142, control passwords at 1144, manage VMs at 1146 (including, for example, adding VMs to the tenant network, requesting new VMs, etc.), and manage users at 1148.
  • the communications protocol suite operates on communication inputs or addressing that is logical. For example, execution of communication in processes 900 and 1000 can occur using global IDs in the tenant network. Thus communication does not require any network location information about the underlying physical network. All physically associated addresses (e.g., IP addresses) of the tenant's rental VMs (the tenant's internal nodes) are temporary IP addresses assigned by respective providers. These temporary IP addresses are maintained in a tenant database, which can be updated as the VMs move, replicate, terminate, etc. (e.g., through execution of process 700). Accordingly, these temporary IP addresses play no role in the definition of the tenant's distributed logical network and firewall/communication policy in the cloud. The temporary IP addresses are best envisioned as pooled network resources.
  • the pooled network resources are employed as commodities for use in the logical network, and may be consumed and even discarded depending on the tenant's needs.
  • the tenant's logical network is completely and thoroughly de-coupled from the underlying physical network.
  • software defined network functions can be executed to provide network QoS in a simplified "if-then-else" style of high-level language programming. This simplification allows a variety of QoS guarantees to be implemented in the tenants' logical network.
  • Network QoS functions which can be implemented as SDN programming at vNICs include: traffic diversion, load-balancing, intrusion detection, and DDoS scrubbing, among other options.
  • an SDN task that the NVI system can implement can include automatic network traffic diversion.
  • Various embodiments, of NVI systems/tenant logical networks distribute network traffic to the finest possible granularity: at the very spot of each VM making up the tenant network. If one uses such VMs to host web services, the network traffic generated by web services requests can be measured and monitored to the highest precision at each VM.
  • the system can be configured to execute automatic replication of the VM and balance requests between the pair of VMs (e.g., the NVI system can request a new resource, replicate the responding VM, and create a diversion policy to the new VM).
  • the system can automatically replicate an overburdened or over-threshold VM, and new network requests can be diverted to the newly created replica.
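The following sketch captures the replicate-and-divert idea at a high level; the threshold value and the replicate_vm/divert_to helpers are placeholders for this example, not APIs of the disclosure.

```python
# Sketch: when per-VM request load crosses a threshold, replicate the VM and
# divert new requests to the replica.

REQUESTS_PER_SECOND_LIMIT = 1000      # illustrative threshold

def balance(vm, metrics, replicate_vm, divert_to):
    """metrics: mapping vm_id -> measured requests/second observed at the vNIC."""
    if metrics[vm] > REQUESTS_PER_SECOND_LIMIT:
        replica = replicate_vm(vm)    # NVI requests a new resource and clones the VM
        divert_to(vm, replica)        # SDN policy steers new connections to the replica
        return replica
    return None

replica = balance("uuid-web1", {"uuid-web1": 2500},
                  replicate_vm=lambda vm: vm + "-replica",
                  divert_to=lambda old, new: None)
assert replica == "uuid-web1-replica"
```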
  • any one or more of the following advantages can be realized in various embodiments over conventional centralized deployment: (i) on-VM-spot unplug avoids sending/dropping packets at the central control points, reducing network bandwidth consumption; (ii) fine granularity distribution makes the execution of security policy less vulnerable to DDoS-like attacks; (iii) upon detection of DDoS-like attacks on a VM, moving the VM being attacked or even simply changing its temporary IP address can resolve the attack.
  • the resulting logical network provides an intelligent layer-2 network of practically unlimited size (e.g., at the 2^128 level if the logical network is defined over IPv6 addresses) on cloud based resources. It is further realized that various implementations of the logical network manage communication without broadcast, as every transmission is delivered over a unicast cable between source and destination (e.g., between two VMs in the network). Thus, the NVI system and/or logical network solve a long felt but unsolved need for a large layer-2 network.
  • the NVI-based new overlay technology in this disclosure is the world's first overlay technology which uses the global management and global mapping intelligence of an infrastructure formed by hypervisors and DBs to achieve, for the first time, a practically unlimited size, globally distributed logical network, without need of protocol negotiation among component networks.
  • the NVI-based overlay technology enables simple web-service controllable and manageable inter-operability for constructing a practically unlimited large scale and on-demand elastic cloud network.
  • Table 1 below provides network traffic measurements in three instances of comparisons, which are measured by the known tool NETPERF. The numbers shown in the table are in megabits (10^6 bits) per second.
  • the packet drop must take place behind the consolidated switch, which means the firewall edge point to drop packets can be quite distant from the message sending VM; this translates to a large amount of wasted network traffic in the system.
  • Various embodiments also provide: virtual machines that each have PKI certificates; thus, not only can the ID of the VM get crypto quality protection, but also the VM's IP packets and IO storage blocks can be encrypted by the VMM.
  • the crypto credential of a VM's certificate is protected and managed by the VMM and the crypto mechanisms, which manage VM credentials are in turn protected by a TPM of the physical server.
  • Further embodiments provide for a vNIC of a VM that never needs to change its identity (i.e., the global address in the logical network does not change, even when the VM changes location, and even when the location change is trans-datacenter). This results in network QoS programming at a vNIC that can avoid VM location changing complexities.
  • a global ID used in the tenant network can include an IPv6 address.
  • a cloud datacenter (1) runs a plural number of network virtualization infrastructure (NVI) hypervisors, and each NVI hypervisor hosts a plural number of virtual machines (VMs) which are rented by one or more tenants.
  • each NVI hypervisor also runs a mechanism for public-key based crypto key management and for the related crypto credential protection. This key-management and credential-protection mechanism cannot be affected or influenced by any entity in any non-prescribed manner.
  • credential-protection mechanism can be implemented using known approaches (e.g., in the US Patent Application 13/601,053, which claims priority to Provisional Application number 61530543), which application is incorporated herein by reference in its entirety. Additional known security approaches include the Trusted Computing Group technology and TXT technology of Intel. Thus, the protection on the crypto-credential management system can be implemented even against a potentially rogue system administrator of the NVI.
  • the NVI uses the key-management and credential-protection mechanism for the VMs it hosts: each VM has an individually and distinctly managed public key, and also has the related crypto credential so protected.
  • the NVI executes known cryptographic algorithms to protect the network traffic and the storage input/output data for a VM: Whenever the VM initiates a network sending event or a storage output event, the NVI operates an encryption service for the VM, and whenever the VM responds to a network receiving event or a storage input event, the NVI operates a decryption service for the VM.
  • the network encryption service in (3) uses the public key of the communication peer of the VM; and the storage output encryption service in (3) uses the public key of the VM; both decryption services in (3) use the protected crypto credential that the NVI-hypervisor protects for the VM.
  • if the communication peer of the VM in (4) does not possess a public key, the communication between the VM and the peer should route via a proxy entity (PE), which is a designated server in the datacenter.
  • the PE manages a public key and protects the related crypto credentials for each tenant of the datacenter.
  • the network encryption service in (3) shall use a public key of the tenant which has rented the VM.
  • Upon receipt of an encrypted communication packet from an NVI-hypervisor for a VM, the PE will provide a decryption service, and further forward the decrypted packet to the communication peer which does not possess a public key.
  • Upon receipt of an unencrypted communication packet from the no-public-key communication peer to the VM, the PE will provide an encryption service using the VM's public key.
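The proxy-entity path for a peer without a public key might be sketched as follows, using RSA-OAEP from the third-party Python "cryptography" package purely for illustration; real deployments would more likely use hybrid or IPsec-style protection, and all key and function names here are assumptions rather than the disclosure's mechanism.

```python
# Sketch of the PE path per (5): VM-to-peer traffic is encrypted to the tenant key
# held at the PE, decrypted there, and forwarded in the clear to the legacy peer;
# replies are encrypted by the PE with the VM's public key.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

tenant_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # held by the PE
vm_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)      # credential protected by the NVI-hypervisor

def vm_to_legacy_peer(payload: bytes) -> bytes:
    # NVI-hypervisor encrypts with the tenant's public key (the peer has no key of its own)
    ciphertext = tenant_key.public_key().encrypt(payload, OAEP)
    # PE decrypts and would forward the plaintext to the legacy peer
    return tenant_key.decrypt(ciphertext, OAEP)

def legacy_peer_to_vm(payload: bytes) -> bytes:
    # PE encrypts the unprotected packet with the VM's public key before forwarding
    ciphertext = vm_key.public_key().encrypt(payload, OAEP)
    # the VM's NVI-hypervisor decrypts with the VM's protected credential
    return vm_key.decrypt(ciphertext, OAEP)

assert vm_to_legacy_peer(b"request") == b"request"
assert legacy_peer_to_vm(b"reply") == b"reply"
```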
  • the NVI-hypervisor and PE provide the above encryption and decryption services according to a whitelist composed by the tenant.
  • the whitelist contains (i) public-key certificates of the VMs which are rented by the tenant, and (ii) the ids of some communication peers which are designated by the tenant.
  • the NVI-hypervisor and PE will perform these protection services only for the VMs and the communication peers identified in the whitelist.
  • a tenant uses the well-known web-service CRUD (create, retrieve, update, or delete) to compose the whitelist in (6).
  • a tenant may also compose the whitelist using any other appropriate interface or method.
  • Elements in the whitelist are the public-key certificates of the VMs which are rented by the tenant, and the ids of the communication peers which are designated by the tenant.
  • the tenant uses this typical web-service CRUD manner to compose its whitelist.
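A minimal sketch of the CRUD surface for composing such a whitelist follows; the in-memory dictionary and function names are illustrative only, standing in for whatever web-service backend a datacenter exposes.

```python
# Sketch of a CRUD interface for the tenant-composed whitelist of (6):
# entries are either VM public-key certificates or designated peer IDs.

whitelist = {}   # entry_id -> {"kind": "vm_cert" | "peer_id", "value": ...}

def create(entry_id, kind, value):
    whitelist[entry_id] = {"kind": kind, "value": value}

def retrieve(entry_id):
    return whitelist.get(entry_id)

def update(entry_id, value):
    if entry_id in whitelist:
        whitelist[entry_id]["value"] = value

def delete(entry_id):
    whitelist.pop(entry_id, None)

create("vm-42", "vm_cert", "-----BEGIN CERTIFICATE----- ...")   # rented VM's certificate
create("partner-db", "peer_id", "203.0.113.17")                 # tenant-designated peer
delete("partner-db")                                            # peer no longer trusted
```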
  • the NVI-hypervisor and PE use the tenant-composed whitelist to provide the protection services described above.
  • in this way, the tenant instructs the datacenter in a self-servicing manner to define, maintain and manage a virtual private network (VPN) for the VMs it rents and for the communication peers it designates for its rental VMs.
  • the PE can periodically create a symmetric conference key for T, and securely distribute the conference key to each NVI-hypervisor which hosts the VM(s) of T.
  • the cryptographically protected secure communications among the VMs, and those between the VMs and the PE in (3), (5) and (6), can use symmetric cryptography under the conference key; each NVI-hypervisor secures the conference key using its crypto-credential protection mechanism in (1) and (2).
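The conference-key idea can be sketched with a symmetric recipe such as Fernet from the third-party Python "cryptography" package; the rotation cadence, the distribution transport, and the helper names here are assumptions made only for illustration.

```python
# Sketch: the PE periodically creates a symmetric conference key for a tenant T
# and distributes it to each NVI-hypervisor hosting T's VMs.
from cryptography.fernet import Fernet

def rotate_conference_key(hypervisors):
    key = Fernet.generate_key()                  # new conference key for tenant T
    for hv in hypervisors:
        hv["conference_key"] = key               # stand-in for the secured distribution in (1)/(2)
    return key

hosts = [{"name": "nvi-hv-1"}, {"name": "nvi-hv-2"}]
key = rotate_conference_key(hosts)
cipher = Fernet(key)
packet = cipher.encrypt(b"vm-to-vm payload")             # sender-side hypervisor encrypts
assert cipher.decrypt(packet) == b"vm-to-vm payload"     # receiver-side hypervisor decrypts
```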
  • Shown in Fig. 12 is an example embodiment of a tenant programmable trusted network 1200.
  • Fig. 12 illustrates both cases of the tenant T's private communication channels (e.g. 1202-1218) among its rental VMs (e.g., 1220 - 1230) and the PE (e.g., 1232). These communication channels can be secured either by the public keys of the VMs involved, or by a group's conference key.
  • Shown in this example are 20 VMs rented by a tenant 1250.
  • the tenant 1250 can define their trusted network using the known CRUD service 1252.
  • the tenant uses the CRUD service to define a whitelist 1254.
  • the whitelist can include a listing for identifying information on each VM in the tenant network.
  • the whitelist can also include public-key certificates of the VMs in the tenant network, and the ids of the communication peers which are designated by the tenant.
  • the PE 1232 further provides functions of NAT (Network Address Translation) and firewall, as shown.
  • the PE can be the external communications facing interface 1234 for the virtual network.
  • a VM in the trusted tenant network can only communicate or input/output data necessarily and exclusively via the communication and storage services which are provided by its underlying NVI-hypervisor. Thus, there is no other channel or route for a VM to bypass its underlying NVI-hypervisor to communicate and/or exchange input/output data with any entity outside the VM.
  • the NVI-hypervisor therefore cannot be bypassed when performing encryption/decryption services for the VMs according to the instructions provided by the tenant.
  • the non-bypassable property can be implemented via known approaches (e.g., by using VMware's ESX, Citrix's Xen, Microsoft's Hyper- V, Oracle's VirtualBox, and open source community's KVM, etc, for the underlying NVI technology).
  • Various embodiments achieve a tenant defined, maintained, and managed virtual private network in a cloud datacenter.
  • the tenant defines their network by providing information on their rental VMs.
  • the tenant can maintain and manage the whitelist for its rental VMs through the system.
  • the tenant network is implemented such that network definition and maintenance can be done in a self-servicing and on-demand manner.
  • a large number of small tenants can now securely share network resources of the hosting cloud, e.g., share a large VLAN of the hosting cloud which is configured at low cost by the datacenter, and which in some examples can be executed and/or managed using SDN technology. Accordingly, the small tenant does not need to maintain any high-quality onsite IT infrastructure. The tenant now uses purely on-demand IT.
  • the Virtual Private Cloud (VPC) provisioning methods discussed are also globally provisioned, i.e., a tenant is not confined to renting IT resources from one datacenter. Therefore, the various aspects and embodiments enable breaking the traditional vendor-locked-in style of cloud computing and provide truly open-vendor global utilities.
  • a proxy entity 1402 is configured to operate in conjunction with a hypervisor 1404 of a respective cloud according to any QoS definitions for the logical network (e.g., as stored in database 1406).
  • the three dots indicate that respective proxy entities and hypervisors can be located throughout the logical network to handle mapping and control of communication.
  • proxy entities and/or hypervisors can manage mapping between logical addresses of vNICs (1410-1416) and underlying physical resources managed by the hypervisor (e.g., physical NIC 1418), mapping between logical addresses of VMs, and execute communication control at vNICs of the front-end VMs (e.g., 1410-1416).
  • mapping enables construction of an arbitrarily large, arbitrary topology, trans-datacenter layer-2 logical network, i.e., achieves the de-coupling of physical addressing.
  • control enables programmatic communication control, or in other words achieves a SDN.
  • the proxy entity 1402 is a specialized virtual machine (e.g. at respective cloud providers or respective hypervisors) configured for back end servicing.
  • a proxy entity manages internal or external communication according to communication policy defined on logical addresses of the tenants' isolated network (e.g., according to network edge policy).
  • the proxy entity executes the programming controls on vNICs of an arbitrary number of front end VMs (e.g., 1408).
  • the proxy entity can be configured to manage logical mappings in the network, and to update respective mappings when the hypervisor assigns new physical resources to front end VMs (e.g., 1408).
  • aspects and functions described herein may be implemented as specialized hardware or software components executing in one or more computer systems or cloud based computer resources.
  • there are many examples of computer systems currently in use on which aspects may be implemented. These examples include, among others, network appliances, personal computers, workstations, mainframes, networked clients, servers, media servers, application servers, database servers and web servers.
  • Other examples of computer systems may include mobile computing devices, such as cellular phones and personal digital assistants, and network equipment, such as load balancers, routers and switches.
  • aspects may be located on a single computer system, may be distributed among a plurality of computer systems connected to one or more communications networks, or may be virtualized over any number of computer systems.
  • aspects and functions may be distributed among one or more computer systems configured to provide a service to one or more client computers, or to perform an overall task as part of a distributed system or a cloud based system. Additionally, aspects may be performed on a client-server or multi-tier system that includes components distributed among one or more server systems that perform various functions, and may be distributed through a plurality of cloud providers and cloud resources. Consequently, examples are not limited to executing on any particular system or group of systems. Further, aspects and functions may be implemented in software, hardware or firmware, or any combination thereof. Thus, aspects and functions may be implemented within methods, acts, systems, system elements and components using a variety of hardware and software configurations, and examples are not limited to any particular distributed architecture, network, or communication protocol.
  • the distributed computer system 1300 includes one or more computer systems that exchange information. More specifically, the distributed computer system 1300 includes computer systems 1302, 1304 and 1306. As shown, the computer systems 1302, 1304 and 1306 are interconnected by, and may exchange data through, a communication network 1308. For example, components of an NVI-hypervisor system (e.g., an NVI engine) can be implemented on system 1302, which can communicate with the other systems (1304-1306); together they provide the functions and operations discussed herein.
  • system 1302 can provide functions for requesting and managing cloud resources to define a tenant network executing on a plurality of cloud providers.
  • Systems 1304 and 1306 can include systems and/or virtual machines made available through the plurality of cloud providers.
  • systems 1304 and 1306 can represent the cloud provider networks, including the respective hypervisors, proxy entities, and/or virtual machines the cloud providers assign to the tenant.
  • all systems 1302-1306 can represent cloud resources accessible to an end user via a communication network (e.g., the Internet) and the functions discussed herein can be executed on any one or more of systems 1302-1306.
  • system 1302 can be used by an end user or tenant to access resources of an NVI-hypervisor system (for example, implemented on at least computer systems 1304-1306).
  • the tenant may access the NVI system using network 1308.
  • the network 1308 may include any communication network through which computer systems may exchange data.
  • the computer systems 1302, 1304 and 1306 and the network 1308 may use various methods, protocols and standards, including, among others, Fibre Channel, Token Ring, Ethernet, Wireless Ethernet, Bluetooth, IP, IPv6, TCP/IP, UDP, DTN, HTTP, FTP, SNMP, SMS, MMS, SS7, JSON, SOAP, CORBA, REST and Web Services.
  • the computer systems 1302, 1304 and 1306 may transmit data via the network 1308 using a variety of security measures including, for example, TLS, SSL or VPN. While the distributed computer system 1300 illustrates three networked computer systems, the distributed computer system 1300 is not so limited and may include any number of computer systems and computing devices, networked using any medium and communication protocol.
  • the computer system 1302 includes a processor 1310, a memory 1312, a bus 1314, an interface 1316 and data storage 1318.
  • the processor 1310 performs a series of instructions that result in manipulated data.
  • the processor 1310 may be any type of processor, multiprocessor or controller.
  • Some exemplary processors include commercially available processors such as an Intel Xeon, Itanium, Core, Celeron, or Pentium processor, an AMD Opteron processor, a Sun UltraSPARC or IBM Power5+ processor and an IBM mainframe chip.
  • the processor 1310 is connected to other system components, including one or more memory devices 1312, by the bus 1314.
  • the memory 1312 stores programs and data during operation of the computer system 1302.
  • the memory 1312 may be a relatively high-performance, volatile, random access memory such as dynamic random access memory (DRAM) or static random access memory (SRAM).
  • the memory 1312 may include any device for storing data, such as a disk drive or other non-volatile storage device.
  • Various examples may organize the memory 1312 into particularized and, in some cases, unique structures to perform the functions disclosed herein. These data structures may be sized and organized to store values for particular data and types of data.
  • each tenant can be associated with a data structure for managing information on the respective tenant network.
  • the data structure can include information on the virtual machines assigned to the tenant network, certificates for network members, globally unique identifiers assigned to the network members, etc. (a possible shape for such a structure is sketched after this list).
  • the bus 1314 may include one or more physical busses, for example, busses between components that are integrated within the same machine, but may include any communication coupling between system elements, including specialized or standard computing bus technologies such as IDE, SCSI and PCI.
  • the bus 1314 enables communications, such as data and instructions, to be exchanged between system components of the computer system 1302.
  • the computer system 1302 also includes one or more interface devices 1316 such as input devices, output devices and combination input/output devices.
  • Interface devices may receive input or provide output. More particularly, output devices may render information for external presentation. Input devices may accept information from external sources. Examples of interface devices include keyboards, mouse devices, trackballs, microphones, touch screens, printing devices, display screens, speakers, network interface cards, etc. Interface devices allow the computer system 1302 to exchange information and to communicate with external entities, such as users and other systems.
  • the data storage 1318 includes a computer readable and writeable nonvolatile, or non-transitory, data storage medium in which instructions are stored that define a program or other object that is executed by the processor 1310.
  • the data storage 1318 also may include information that is recorded, on or in, the medium, and that is processed by the processor 1310 during execution of the program. More specifically, the information may be stored in one or more data structures specifically configured to conserve storage space or increase data exchange performance.
  • the instructions stored in the data storage may be persistently stored as encoded signals, and the instructions may cause the processor 1310 to perform any of the functions described herein.
  • the medium may be, for example, optical disk, magnetic disk or flash memory, among other options.
  • the processor 1310 or some other controller causes data to be read from the nonvolatile recording medium into another memory, such as the memory 1312, that allows for faster access to the information by the processor 1310 than does the storage medium included in the data storage 1318.
  • the memory may be located in the data storage 1318 or in the memory 1312, however, the processor 1310 manipulates the data within the memory, and then copies the data to the storage medium associated with the data storage 1318 after processing is completed.
  • a variety of components may manage data movement between the storage medium and other memory elements and examples are not limited to particular data management components. Further, examples are not limited to a particular memory system or data storage system.
  • although the computer system 1302 is shown by way of example as one type of computer system upon which various aspects and functions may be practiced, aspects and functions are not limited to being implemented on the computer system 1302 as shown in FIG. 13. Various aspects and functions may be practiced on one or more computers having architectures or components different from those shown in FIG. 13.
  • the computer system 1302 may include specially programmed, special-purpose hardware, such as an application-specific integrated circuit (ASIC) tailored to perform a particular operation disclosed herein.
  • another example may perform the same function using a grid of several general-purpose computing devices (e.g., running MAC OS System X with Motorola PowerPC processors) and several specialized computing devices running proprietary hardware and operating systems.
  • the computer system 1302 may be a computer system or virtual machine, which may include an operating system that manages at least a portion of the hardware elements included in the computer system 1302.
  • a processor or controller, such as the processor 1310, executes an operating system. Examples of a particular operating system that may be executed include a Windows-based operating system, such as Windows NT, Windows 2000 (Windows ME), Windows XP, Windows Vista, Windows 7 or 8, available from the Microsoft Corporation; a MAC OS System X operating system available from Apple Computer; one of many Linux-based operating system distributions, for example, the Enterprise Linux operating system available from Red Hat Inc.; a Solaris operating system available from Sun Microsystems; or a UNIX operating system available from various sources. Many other operating systems may be used, and examples are not limited to any particular operating system.
  • the processor 1310 and operating system together define a computer platform for which application programs in high-level programming languages are written.
  • These component applications may be executable, intermediate, bytecode or interpreted code which communicates over a communication network, for example, the Internet, using a communication protocol, for example, TCP/IP.
  • aspects may be implemented using an object-oriented programming language, such as .Net, SmallTalk, Java, C++, Ada, C# (C-Sharp), Objective C, or JavaScript. Other object-oriented programming languages may also be used.
  • functional, scripting, or logical programming languages may be used.
  • various aspects and functions may be implemented in a non-programmed environment, for example, documents created in HTML, XML or other format that, when viewed in a window of a browser program, can render aspects of a graphical-user interface or perform other functions.
  • various examples may be implemented as programmed or non-programmed elements, or any combination thereof.
  • a web page may be implemented using HTML while a data object called from within the web page may be written in C++.
  • the examples are not limited to a specific programming language and any suitable programming language could be used.
  • the functional components disclosed herein may include a wide variety of elements, e.g., specialized hardware, virtualized hardware, executable code, data structures or data objects, that are configured to perform the functions described herein.
  • the components disclosed herein may read parameters that affect the functions performed by the components. These parameters may be physically stored in any form of suitable memory including volatile memory (such as RAM) or nonvolatile memory (such as a magnetic hard drive). In addition, the parameters may be logically stored in a propriety data structure (such as a database or file defined by a user mode application) or in a commonly shared data structure (such as an application registry that is defined by an operating system). In addition, some examples provide for both system and user interfaces that allow external entities to modify the parameters and thereby configure the behavior of the components.
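As referenced above for the non-bypassable communication and encryption properties, the following is a minimal sketch of a hypervisor-mediated vNIC send path. It is illustrative only: the class and method names, and the use of the Python "cryptography" package's AESGCM primitive, are assumptions made for this example, not an implementation defined by this disclosure.

```python
# Illustrative sketch only: names and the AESGCM wrapper are assumptions,
# not the implementation described in this disclosure.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM


class NviHypervisorSketch:
    """Models the property that a VM's only I/O path is its vNIC, which hands
    every frame to the hypervisor; the hypervisor applies the tenant-supplied
    whitelist and encryption policy before anything reaches a physical NIC."""

    def __init__(self, tenant_key, whitelist):
        self._aead = AESGCM(tenant_key)   # key material supplied by the tenant
        self._whitelist = set(whitelist)  # logical peers the tenant allows

    def vnic_send(self, dst_logical_addr, payload):
        # Communication control: frames to non-whitelisted peers are dropped.
        if dst_logical_addr not in self._whitelist:
            return None
        # Mandatory encryption: the VM cannot skip this step because it has
        # no channel other than this hypervisor-mediated path.
        nonce = os.urandom(12)
        return nonce + self._aead.encrypt(nonce, payload, dst_logical_addr.encode())


# Usage: a frame to a whitelisted peer is encrypted and forwarded; any other
# destination is silently dropped at the network edge.
hv = NviHypervisorSketch(AESGCM.generate_key(bit_length=256), {"vm-b"})
assert hv.vnic_send("vm-x", b"data") is None
assert hv.vnic_send("vm-b", b"data") is not None
```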
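The proxy-entity sketch referenced above illustrates the two bookkeeping tasks described for the back-end servicing entity: maintaining the mapping between vNIC logical addresses and the physical resources currently assigned by the hypervisor, and enforcing edge policy defined purely on logical addresses. The names (ProxyEntity, bind, migrate, permit, route) are illustrative assumptions, not an API defined in this disclosure.

```python
# Hedged sketch of the proxy-entity bookkeeping; names are illustrative.
class ProxyEntity:
    def __init__(self):
        # vNIC logical address -> (physical NIC id, encapsulation port)
        self._logical_to_physical = {}
        # src logical address -> set of permitted destination logical addresses
        self._edge_policy = {}

    def bind(self, vnic_addr, pnic_id, port):
        """Record where the hypervisor currently backs a front-end vNIC."""
        self._logical_to_physical[vnic_addr] = (pnic_id, port)

    def migrate(self, vnic_addr, new_pnic_id, new_port):
        """Update the mapping when the hypervisor assigns new physical
        resources to a front-end VM; the logical address stays stable."""
        self._logical_to_physical[vnic_addr] = (new_pnic_id, new_port)

    def permit(self, src_vnic, dst_vnic):
        """Whitelist a one-way flow, expressed purely on logical addresses."""
        self._edge_policy.setdefault(src_vnic, set()).add(dst_vnic)

    def route(self, src_vnic, dst_vnic):
        """Programmatic communication control at the network edge: return the
        physical locator for a permitted flow, or None to drop it."""
        if dst_vnic not in self._edge_policy.get(src_vnic, set()):
            return None
        return self._logical_to_physical.get(dst_vnic)


# Usage: logical addresses stay valid across datacenters and live migrations;
# only the logical-to-physical mapping changes.
proxy = ProxyEntity()
proxy.bind("vm-a", pnic_id="dc1-nic7", port=4789)
proxy.bind("vm-b", pnic_id="dc2-nic3", port=4789)
proxy.permit("vm-a", "vm-b")
assert proxy.route("vm-a", "vm-b") == ("dc2-nic3", 4789)
assert proxy.route("vm-b", "vm-a") is None   # not whitelisted in this direction
proxy.migrate("vm-b", "dc3-nic1", 4789)      # physical resources change...
assert proxy.route("vm-a", "vm-b") == ("dc3-nic1", 4789)  # ...logical view does not
```

The design point this sketch highlights is the de-coupling described above: tenants and policy reason only about logical addresses, while the proxy entity absorbs every change in underlying physical resources.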
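The following is one possible shape, assumed for illustration, of the per-tenant data structure mentioned above (virtual machines assigned to the tenant network, member certificates, and globally unique identifiers). Field names and the use of Python dataclasses and uuid are assumptions, not structures defined by this disclosure.

```python
# One possible (assumed) shape for the per-tenant record; field names and the
# use of dataclasses/uuid are illustrative choices only.
from dataclasses import dataclass, field
from uuid import UUID, uuid4


@dataclass
class NetworkMember:
    guid: UUID             # globally unique identifier assigned to the member
    logical_addr: str      # address in the tenant's isolated logical network
    certificate_pem: str   # credential used to authenticate the member


@dataclass
class TenantNetwork:
    tenant_id: UUID
    members: dict = field(default_factory=dict)    # guid -> NetworkMember
    whitelist: set = field(default_factory=set)    # (src guid, dst guid) pairs

    def add_vm(self, logical_addr, certificate_pem):
        """Enroll a rental VM: assign a GUID and record its credentials."""
        member = NetworkMember(uuid4(), logical_addr, certificate_pem)
        self.members[member.guid] = member
        return member


# Usage: the tenant defines its network by enrolling its rental VMs and then
# maintains the whitelist for them in a self-service manner.
net = TenantNetwork(tenant_id=uuid4())
vm1 = net.add_vm("10.1.0.2", "-----BEGIN CERTIFICATE-----...")
vm2 = net.add_vm("10.1.0.3", "-----BEGIN CERTIFICATE-----...")
net.whitelist.add((vm1.guid, vm2.guid))
```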

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

Various implementations discussed resolve at least some of the problems associated with conventional patching of network resources, including, for example, patching together a multitude of local intranets. According to one example, a new intranet topology enables fully distributed and/or dynamic traffic routes through SDN programming, an Internet within an intranet topology. The intranet topology can be managed by a communication controller that controls a software-defined networking ("SDN") component. According to one embodiment, the SDN component executes on a plurality of servers in the intranet and coordinates communication between virtual machines hosted on the plurality of servers and entities outside the intranet network, under the control of the communication controller. The SDN defines an internet in the intranet region in which no network isolation or firewall control is enforced.
PCT/CN2014/072339 2014-02-20 2014-02-20 Procédé et appareil pour étendre l'internet dans des intranets afin d'obtenir un réseau en nuage éxtensible WO2015123849A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2014/072339 WO2015123849A1 (fr) 2014-02-20 2014-02-20 Procédé et appareil pour étendre l'internet dans des intranets afin d'obtenir un réseau en nuage éxtensible

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2014/072339 WO2015123849A1 (fr) 2014-02-20 2014-02-20 Procédé et appareil pour étendre l'internet dans des intranets afin d'obtenir un réseau en nuage éxtensible

Publications (1)

Publication Number Publication Date
WO2015123849A1 true WO2015123849A1 (fr) 2015-08-27

Family

ID=53877536

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/072339 WO2015123849A1 (fr) 2014-02-20 2014-02-20 Procédé et appareil pour étendre l'internet dans des intranets afin d'obtenir un réseau en nuage éxtensible

Country Status (1)

Country Link
WO (1) WO2015123849A1 (fr)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6108786A (en) * 1997-04-25 2000-08-22 Intel Corporation Monitor network bindings for computer security
US7369556B1 (en) * 1997-12-23 2008-05-06 Cisco Technology, Inc. Router for virtual private network employing tag switching
WO2008028270A1 (fr) * 2006-09-08 2008-03-13 Bce Inc. Procédé, système et appareil pour commander un dispositif d'interface réseau
WO2012092263A1 (fr) * 2010-12-28 2012-07-05 Citrix Systems, Inc. Systèmes et procédés permettant un routage basé sur une politique pour de multiples bonds suivants
CN103583022A (zh) * 2011-03-28 2014-02-12 思杰系统有限公司 用于经由nic感知应用处理nic拥塞的系统和方法

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105224385A (zh) * 2015-09-03 2016-01-06 成都中机盈科科技有限公司 一种基于云计算的虚拟化系统及方法
CN106571945A (zh) * 2015-10-13 2017-04-19 中兴通讯股份有限公司 控制面、业务面分离的方法和系统、服务器、云计算平台
CN106571945B (zh) * 2015-10-13 2020-07-10 中兴通讯股份有限公司 控制面、业务面分离的方法和系统、服务器、云计算平台
CN105530259A (zh) * 2015-12-22 2016-04-27 华为技术有限公司 报文过滤方法及设备
CN105530259B (zh) * 2015-12-22 2019-01-18 华为技术有限公司 报文过滤方法及设备
WO2017113300A1 (fr) * 2015-12-31 2017-07-06 华为技术有限公司 Procédé de détermination de route, procédé de configuration de réseau et dispositif associé
CN107113241A (zh) * 2015-12-31 2017-08-29 华为技术有限公司 路由确定方法、网络配置方法以及相关装置
CN107113241B (zh) * 2015-12-31 2020-09-04 华为技术有限公司 路由确定方法、网络配置方法以及相关装置
US10841274B2 (en) 2016-02-08 2020-11-17 Hewlett Packard Enterprise Development Lp Federated virtual datacenter apparatus
CN109495485B (zh) * 2018-11-29 2021-05-14 深圳市永达电子信息股份有限公司 支持强制访问控制的全双工防火墙防护方法
CN109495485A (zh) * 2018-11-29 2019-03-19 深圳市永达电子信息股份有限公司 支持强制访问控制的全双工防火墙防护方法
US20200402294A1 (en) 2019-06-18 2020-12-24 Tmrw Foundation Ip & Holding S. À R.L. 3d structure engine-based computation platform
EP3757788A1 (fr) * 2019-06-18 2020-12-30 TMRW Foundation IP & Holding S.A.R.L. Virtualisation de moteur logiciel et distribution dynamique de ressources et de tâches à la périphérie et dans le nuage
US12033271B2 (en) 2019-06-18 2024-07-09 The Calany Holding S. À R.L. 3D structure engine-based computation platform
US12039354B2 (en) 2019-06-18 2024-07-16 The Calany Holding S. À R.L. System and method to operate 3D applications through positional virtualization technology
US12040993B2 (en) 2019-06-18 2024-07-16 The Calany Holding S. À R.L. Software engine virtualization and dynamic resource and task distribution across edge and cloud
US12034785B2 (en) 2020-08-28 2024-07-09 Tmrw Foundation Ip S.Àr.L. System and method enabling interactions in virtual environments with virtual presence
CN112637342A (zh) * 2020-12-22 2021-04-09 唐旸 文件摆渡系统及方法、装置、摆渡服务器
CN112637342B (zh) * 2020-12-22 2021-12-24 唐旸 文件摆渡系统及方法、装置、摆渡服务器
US20230362245A1 (en) * 2020-12-31 2023-11-09 Nutanix, Inc. Orchestrating allocation of shared resources in a datacenter
CN113783765A (zh) * 2021-08-10 2021-12-10 济南浪潮数据技术有限公司 一种实现云内网和云外网互通的方法、系统、设备和介质

Similar Documents

Publication Publication Date Title
US20140052877A1 (en) Method and apparatus for tenant programmable logical network for multi-tenancy cloud datacenters
US11218483B2 (en) Hybrid cloud security groups
US10680946B2 (en) Adding multi-tenant awareness to a network packet processing device on a software defined network (SDN)
US20200252375A1 (en) Virtual private gateway for encrypted communication over dedicated physical link
WO2015123849A1 (fr) Procédé et appareil pour étendre l'internet dans des intranets afin d'obtenir un réseau en nuage éxtensible
EP2909780B1 (fr) Fourniture d'une architecture d'appareil de sécurité virtuel à une infrastructure en nuage virtuelle
CN116210204A (zh) 用于vlan交换和路由服务的系统和方法
JP5976942B2 (ja) ポリシーベースのデータセンタネットワーク自動化を提供するシステムおよび方法
US8683023B1 (en) Managing communications involving external nodes of provided computer networks
US8488446B1 (en) Managing failure behavior for computing nodes of provided computer networks
US11856097B2 (en) Mechanism to provide customer VCN network encryption using customer-managed keys in network virtualization device
US10116622B2 (en) Secure communication channel using a blade server
US11848918B2 (en) End-to-end network encryption from customer on-premise network to customer virtual cloud network using customer-managed keys
Benomar et al. Extending openstack for cloud-based networking at the edge
CN116982306A (zh) 扩展覆盖网络中的ip地址
CN117561705A (zh) 用于图形处理单元的路由策略
KR20240100378A (ko) 사설 네트워크들 사이의 외부 엔드포인트들의 투명 마운팅
CN118176697A (zh) 私有网络之间的安全双向网络连接性系统
US11218918B2 (en) Fast roaming and uniform policy for wireless clients with distributed hashing
Bakshi Network considerations for open source based clouds
Chang et al. Design and architecture of a software defined proximity cloud
WO2024138126A1 (fr) Système de connectivité bidirectionnelle de réseau sécurisé entre des réseaux privés
CN116746136A (zh) 同步通信信道状态信息以实现高流量可用性
CN117597894A (zh) 用于图形处理单元的路由策略

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 14883276
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 14883276
    Country of ref document: EP
    Kind code of ref document: A1