US11496606B2 - Sticky service sessions in a datacenter - Google Patents

Sticky service sessions in a datacenter

Info

Publication number
US11496606B2
US11496606B2 (application US14/841,654 / US201514841654A)
Authority
US
United States
Prior art keywords
service
data message
service node
session
node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US14/841,654
Other versions
US20160094661A1 (en)
Inventor
Jayant JAIN
Anirban Sengupta
Rick Lund
Raju Koganty
Xinhua Hong
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nicira Inc
Original Assignee
Nicira Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to US14/841,654
Application filed by Nicira Inc
Priority to PCT/US2015/053332 (WO2016054272A1)
Priority to EP15782148.9A (EP3202109B1)
Priority to CN202010711875.8A (CN112291294A)
Priority to CN201580057270.9A (CN107005584B)
Assigned to NICIRA, INC. Assignors: LUND, RICK; HONG, XINHUA; JAIN, JAYANT; KOGANTY, RAJU; SENGUPTA, ANIRBAN (assignment of assignors interest; see document for details)
Publication of US20160094661A1
Priority to US17/976,783 (US20230052818A1)
Application granted
Publication of US11496606B2
Legal status: Active (adjusted expiration)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00 Data switching networks
    • H04L12/54 Store-and-forward switching systems
    • H04L12/56 Packet switching systems
    • H04L49/00 Packet switching elements
    • H04L49/60 Software-defined switches
    • H04L49/70 Virtual switches
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/02 Network architectures or network communication protocols for network security for separating internal from external traffic, e.g. firewalls
    • H04L63/0227 Filtering policies
    • H04L63/0245 Filtering by information in the payload
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/14 Session management
    • H04L67/50 Network services
    • H04L67/60 Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • H04L67/63 Routing a service request depending on the request content or context
    • H04L69/00 Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/16 Implementation or adaptation of Internet protocol [IP], of transmission control protocol [TCP] or of user datagram protocol [UDP]
    • H04L69/22 Parsing or analysis of headers
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08 Configuration management of networks or network elements
    • H04L41/0803 Configuration setting
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H04L47/12 Avoiding congestion; Recovering from congestion
    • H04L47/125 Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
    • H04L47/70 Admission control; Resource allocation
    • H04L47/82 Miscellaneous aspects
    • H04L47/825 Involving tunnels, e.g. MPLS
    • H04L51/00 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/07 User-to-user messaging in packet-switching networks characterised by the inclusion of specific contents
    • H04L51/18 Commands or executable codes
    • H04L67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L67/51 Discovery or management thereof, e.g. service location protocol [SLP] or web services
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W76/00 Connection management
    • H04W76/10 Connection setup
    • H04W76/12 Setup of transport tunnels

Definitions

  • Datacenters today use a very static, configuration-intensive way to distribute data messages between different application layers and to different service layers.
  • a common approach today is to configure the virtual machines to send packets to virtual IP addresses, and then to configure the forwarding elements and load balancers in the datacenter with forwarding rules that direct them to forward VIP-addressed packets to appropriate application and/or service layers.
  • Another problem with existing message distribution schemes is that today's load balancers often are chokepoints for the distributed traffic. Accordingly, there is a need in the art for a new approach to seamlessly distribute data messages in the datacenter between different application and/or service layers. Ideally, this new approach would allow the distribution scheme to be easily modified without reconfiguring the servers that transmit the data messages.
  • Some embodiments provide novel inline switches that distribute data messages from source compute nodes (SCNs) to different groups of destination service compute nodes (DSCNs).
  • the inline switches are deployed in the source compute nodes' datapaths (e.g., egress datapaths).
  • the inline switches in some embodiments are service switches that (1) receive data messages from the SCNs, (2) identify service nodes in a service-node cluster for processing the data messages based on service policies that the switches implement, and (3) use tunnels to send the received data messages to their identified service nodes.
  • the inline service switches of some embodiments (1) identify service-node clusters for processing the data messages based on service policies that the switches implement, and (2) use tunnels to send the received data messages to the identified service-node clusters.
  • the service-node clusters can perform the same service or can perform different services in some embodiments.
  • This tunnel-based approach for distributing data messages to service nodes/clusters is advantageous for seamlessly implementing in a datacenter a cloud-based XaaS model (where XaaS stands for X as a service, and X stands for anything), in which any number of services are provided by service providers in the cloud.
  • an inline service switch performs load-balancing operations to distribute data messages among several service nodes or service-node clusters that perform the same service.
  • a service cluster in some embodiments can have one or more load balancers that distribute data messages received for the cluster among the service nodes of the service cluster.
  • At least one service cluster implements an elastic model in which one primary service node receives the cluster's data messages from the inline service switches. This service node then either performs the service on the data message itself or directs the data message (e.g., through L3 and/or L4 network address translation, through MAC redirect, etc.) to one of the other service nodes (called secondary service nodes) in the cluster to perform the service on the data message.
  • the primary service node in some embodiments elastically shrinks or grows the number of secondary service nodes in the cluster based on the received data message load.
  • Some embodiments provide an inline load-balancing switch that statefully distributes the service load to a number of service nodes based on one or more L4+ parameters, which are packet header parameters that are above L1-L4 parameters.
  • L4+ parameters include session keys, session cookies (e.g., SSL session identifiers), file names, database server attributes (e.g., user name), etc.
  • the inline load-balancing switch in some embodiments establishes layer 4 connection sessions (e.g., TCP/IP sessions) with the data-message SCNs and the service nodes, so that the switch (1) can monitor one or more of the initial payload packets that are exchanged for the session, and (2) can extract and store the L4+ session parameters for later use in its subsequent load balancing operation.
  • the inline switch establishes a layer 4 connection session with an SCN and another session with a service node by performing one three-way TCP handshake with the SCN and another with the service node.
  • the inline switch in some embodiments can adjust the sequence numbers of the relayed data messages to address differences in sequence numbers between the SCN and the service node, as in the sketch below.
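The sequence-number handling and L4+ stickiness described above can be pictured with a small, self-contained sketch. It is not the patent's implementation: the Segment, SpliceState, and extract_session_cookie names, the single forward-direction delta, and the HTTP cookie used as the sticky L4+ parameter are all illustrative assumptions.

```python
# Minimal sketch of splicing two independently established TCP legs -- one with
# the source compute node (SCN) and one with the chosen service node -- while
# extracting an L4+ parameter from the first payload bytes to keep the flow sticky.
import re
from dataclasses import dataclass

@dataclass
class Segment:
    seq: int          # sequence number as seen on the incoming (SCN) leg
    ack: int          # acknowledgment number as seen on the incoming leg
    payload: bytes

def extract_session_cookie(payload: bytes):
    # Toy extraction of an HTTP session cookie, one possible L4+ parameter.
    m = re.search(rb"Cookie:\s*SESSIONID=([^;\r\n]+)", payload)
    return m.group(1).decode() if m else None

class SpliceState:
    def __init__(self, scn_isn: int, svc_isn: int):
        # Each leg ran its own three-way handshake, so the initial sequence
        # numbers differ. This sketch tracks only the SCN-to-service direction;
        # a full splice would keep a second delta for the reverse direction
        # and rewrite ACK numbers with it.
        self.fwd_delta = svc_isn - scn_isn
        self.sticky_key = None        # e.g., an SSL session id or HTTP cookie

    def scn_to_service(self, seg: Segment) -> Segment:
        if self.sticky_key is None and seg.payload:
            self.sticky_key = extract_session_cookie(seg.payload)
        # Shift the sequence number so the service node sees values consistent
        # with the handshake it performed with the switch.
        return Segment(seq=seg.seq + self.fwd_delta, ack=seg.ack, payload=seg.payload)

if __name__ == "__main__":
    splice = SpliceState(scn_isn=1000, svc_isn=5000)
    first = Segment(seq=1001, ack=1,
                    payload=b"GET / HTTP/1.1\r\nCookie: SESSIONID=abc123\r\n\r\n")
    relayed = splice.scn_to_service(first)
    print(relayed.seq, splice.sticky_key)   # 5001 abc123
```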
  • Some embodiments provide a controller-driven method for reconfiguring the application or service layer deployment in a datacenter.
  • one or more controllers define data-message distribution policies for SCNs in the datacenter, and push these policies, or rules based on these policies, to the inline switches of the SCNs.
  • the inline switches then distribute the data messages to the data compute nodes (DCNs) that are identified by the distribution policies/rules as the DCNs for the data messages.
  • a distribution policy or rule is expressed in terms of a DCN group address (e.g., a virtual IP address (VIP)) that the SCNs use to address several DCNs that are in a DCN group.
  • This controller-driven method can seamlessly reconfigure the application or service layer deployment in the datacenter without having to configure the SCNs to use new DCN group addresses (e.g., new VIPs).
  • the controller set only needs to provide the inline switches with new distribution policies or rules that dictate new traffic distribution patterns based on previously configured DCN group addresses.
  • the seamless reconfiguration can be based on arbitrary packet header parameters (e.g., L2, L3, L4 or L7 parameters) that are used by the SCNs. In other words, these packet header parameters in some cases would not have to include DCN group addresses.
  • the inline switches can be configured to distribute data messages based on metadata tags that are associated with the packets, and injected into the packets (e.g., as L7 parameters) by application level gateways (ALGs).
  • the controller set in some embodiments is configured to push new distribution policies and/or rules to the inline switches that configure these switches to implement new application or service layer deployment in the network domain.
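As a rough illustration of this controller-driven model, the sketch below keeps the SCN-facing match key (here a VIP, though any previously configured header parameter or injected metadata tag could serve) fixed while the controller replaces the distribution rule behind it. The InlineSwitch class and its method names are assumptions made for illustration, not the patent's interfaces.

```python
# Hypothetical inline switch whose rules are swapped by a controller while the
# SCNs keep sending to the same VIP they were originally configured with.
class InlineSwitch:
    def __init__(self):
        self.rules = {}                  # match key (e.g., a VIP) -> list of DCN addresses

    def push_rule(self, match_key, dcn_group):
        # Called by the controller set; the SCNs are never reconfigured.
        self.rules[match_key] = list(dcn_group)

    def select_dcn(self, match_key, flow_hash):
        group = self.rules.get(match_key)
        if not group:
            return None                  # no rule: let the message pass untouched
        return group[flow_hash % len(group)]

switch = InlineSwitch()
switch.push_rule("10.0.0.100", ["app-1", "app-2"])           # initial app-layer deployment
print(switch.select_dcn("10.0.0.100", flow_hash=7))          # app-2
switch.push_rule("10.0.0.100", ["app-1", "app-2", "app-3"])  # controller redeploys the layer
print(switch.select_dcn("10.0.0.100", flow_hash=7))          # same VIP, new distribution pattern
```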
  • FIG. 1 illustrates an example of a multi-host system with the inline service switches.
  • FIG. 2 conceptually illustrates a process that an inline service switch performs in some embodiments.
  • FIG. 3 illustrates different examples of service rules.
  • FIG. 4 conceptually illustrates distributing data message flows to service nodes in one service node cluster.
  • FIG. 5 conceptually illustrates distributing data message flows to different service node clusters that perform the same service.
  • FIG. 6 illustrates an example of an ISS sequentially calling multiple different service nodes of different clusters.
  • FIG. 7 illustrates an example of an elastic service model that uses one primary service node and zero or more secondary service nodes.
  • FIG. 8 illustrates an example of sequentially forwarding a data message from a VM to different elastically adjustable service clusters.
  • FIG. 9 conceptually illustrates another process that the inline service switch performs in some embodiments.
  • FIG. 10 conceptually illustrates a process that a primary service node performs in some embodiments of the invention.
  • FIG. 11 illustrates an example of a multi-host system with inline service switches that statefully distribute the service load to service nodes.
  • FIG. 12 conceptually illustrates an example of extracting and re-using a session parameter.
  • FIG. 13 conceptually illustrates another example of extracting and re-using a session parameter.
  • FIG. 14 conceptually illustrates a process of some embodiments for processing a service request in a sticky manner from an associated VM.
  • FIG. 15 illustrates a more detailed architecture of a host computing device.
  • FIG. 16 illustrates an example of a controller re-configuring the application layer deployment.
  • FIG. 17 illustrates another example of a controller re-configuring the application layer deployment.
  • FIG. 18 conceptually illustrates a process of some embodiments for defining service policy rules for an inline switch.
  • FIG. 19 conceptually illustrates a process of some embodiments for modifying a service rule and reconfiguring inline service switches that implement this service rule.
  • FIG. 20 conceptually illustrates an electronic system with which some embodiments of the invention are implemented.
  • Some embodiments provide novel inline switches that distribute data messages from source compute nodes (SCNs) to different groups of destination service compute nodes (DSCNs).
  • the inline switches are deployed in the source compute nodes' datapaths (e.g., egress datapaths).
  • the inline switches in some embodiments are service switches that (1) receive data messages from the SCNs, (2) identify service nodes in a service-node cluster for processing the data messages based on service policies that the switches implement, and (3) use tunnels to send the received data messages to their identified service nodes.
  • the inline service switches of some embodiments (1) identify service-node clusters for processing the data messages based on service policies that the switches implement, and (2) use tunnels to send the received data messages to the identified service-node clusters.
  • the service-node clusters can perform the same service or can perform different services in some embodiments.
  • This tunnel-based approach for distributing data messages to service nodes/clusters is advantageous for seamlessly implementing in a datacenter a cloud-based XaaS model (where XaaS stands for X as a service, and X stands for anything), in which any number of services are provided by service providers in the cloud.
  • a tunnel uses a tunnel header to encapsulate the packets from one type of protocol in the datagram of a different protocol. Examples of such tunnels include VPN (virtual private network) tunnels, such as PPTP (point-to-point tunneling protocol) tunnels that carry IP (Internet Protocol) packets, and GRE (generic routing encapsulation) tunnels.
  • cloud refers to one or more sets of computers in one or more datacenters that are accessible through a network (e.g., through the Internet).
  • the XaaS model is implemented by one or more service providers that operate in the same datacenter or in different datacenters in different locations (e.g., different neighborhoods, cities, states, countries, etc.).
  • a data message refers to a collection of bits in a particular format sent across a network.
  • data message may be used herein to refer to various formatted collections of bits that may be sent across a network, such as Ethernet frames, IP packets, TCP segments, UDP datagrams, etc.
  • references to L2, L3, L4, and L7 layers are references respectively to the second data link layer, the third network layer, the fourth transport layer, and the seventh application layer of the OSI (Open System Interconnection) layer model.
  • an inline service switch performs load balancing operations to distribute data messages among several service nodes or service node clusters that perform the same service.
  • a service cluster in some embodiments can have one or more load balancers that distribute data messages received for the cluster among the service nodes of the service cluster.
  • At least one service cluster implements an elastic model in which one primary service node receives the cluster's data messages from the inline service switches. This service node then either performs the service on the data message itself or directs the data message (e.g., through L3 and/or L4 network address translation, through MAC redirect, etc.) to one of the other service nodes (called secondary service nodes) in the cluster to perform the service on the data message.
  • the primary service node in some embodiments elastically shrinks or grows the number of secondary service nodes in the cluster based on the received data message load.
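The following sketch, with hypothetical names and a deliberately crude scaling rule, illustrates the elastic model described above: the primary node keeps each flow sticky to one node, services some flows itself, and grows or shrinks its set of secondary nodes with the observed load. The actual redirection (L3/L4 NAT or MAC redirect) is elided.

```python
# Illustrative primary service node of an elastic service cluster.
class PrimaryServiceNode:
    def __init__(self, capacity_per_node=100):
        self.capacity = capacity_per_node
        self.secondaries = []        # addresses of secondary service nodes
        self.assignments = {}        # flow id -> node ("primary" or a secondary)

    def _scale(self, active_flows):
        # Crude elasticity: one secondary per full "capacity" worth of flows
        # (a made-up policy; real thresholds would come from measured load).
        needed = active_flows // self.capacity
        while len(self.secondaries) < needed:
            self.secondaries.append(f"secondary-{len(self.secondaries) + 1}")   # provision
        while len(self.secondaries) > needed:
            self.secondaries.pop()                                              # retire

    def handle(self, flow_id):
        self._scale(len(self.assignments) + 1)
        if flow_id not in self.assignments:
            # Sticky choice: pick a node once and reuse it for the whole flow.
            pool = ["primary"] + self.secondaries
            self.assignments[flow_id] = pool[hash(flow_id) % len(pool)]
        # "primary" -> service the message locally; otherwise the real node would
        # redirect via L3/L4 NAT or a MAC redirect, which is not shown here.
        return self.assignments[flow_id]

node = PrimaryServiceNode(capacity_per_node=2)
for i in range(5):
    print(f"flow-{i} ->", node.handle(f"flow-{i}"))
```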
  • an SCN can be a virtual machine (VM) or software container (such as a Docker container) that executes on a host along with other VMs or containers that serve as SCNs or destination compute nodes (DCNs).
  • DCNs in some embodiments include compute end nodes that generate or consume data messages, or middlebox service nodes that perform some type of data processing on the data messages as these messages are being relayed between the data compute end nodes.
  • data compute end nodes include webservers, application servers, database servers, etc.
  • middlebox service nodes include firewalls, intrusion detection systems, intrusion prevention systems, etc.
  • a service node is a standalone appliance or is a DCN (e.g., a VM, container, or module) that executes on a host computer.
  • the service nodes can be data compute end nodes (e.g., webservers, application servers, database servers, etc.), or can be middlebox service nodes (e.g., firewalls, intrusion detection systems, intrusion prevention systems, etc.).
  • the inline service switch is another software module that executes on the same host as the SCN. Two or more of the SCNs on the host use the same inline service switch in some embodiments, while in other embodiments, each SCN on the host has its own inline service switch that executes on the host.
  • the host also executes a software forwarding element (SFE) in some embodiments.
  • the SFE communicatively couples the SCNs of the host to each other and to other devices (e.g., other SCNs) outside of the host.
  • the inline switches are inserted in the egress path of the SCNs before the SFE.
  • one or more controllers configure the inline service switches by providing the service distribution policies or by providing distribution rules that are defined based on the service distribution policies.
  • Examples of these controllers are the ISS controllers 120 of FIG. 1 .
  • This figure illustrates an example of a multi-host system 100 with the inline service switches 105 of some embodiments.
  • This system includes multiple host computing devices 110 , a set of ISS controllers 120 , a set of one or more VM managing controllers 125 , and multiple service node clusters 150 . As shown in FIG. 1 , these components are communicatively coupled through a network 175 , which can include a local area network (LAN), a wide area network (WAN), or a network of networks (e.g., the Internet).
  • Each host computing device 110 executes one or more VMs 115 , one or more SFEs 130 (e.g., a software switch, a software router, etc.), an ISS agent 135 , and one or more inline service switches 105 .
  • the VMs include SCNs and DCNs in some embodiments.
  • an SFE 130 on a host communicatively couples the VMs of the host to each other and to devices outside of the host (e.g., to VMs of other hosts).
  • an SFE of a host implements one or more logical networks with the SFEs executing on other hosts.
  • the SFE 130 also communicatively couples an ISS 105 on the host to one or more service nodes or one or more service node clusters 150 .
  • each ISS 105 is associated with one VM on its host, while in other embodiments, one ISS 105 is associated with more than one VM on its host (e.g., is associated with all VMs on its host that are part of one logical network).
  • an ISS 105 enforces one or more service rules that implement one or more service policies. Based on the service rules, the ISS (1) determines whether a sent data message should be processed by one or more service nodes or clusters, and (2) if so, selects a service node or cluster for processing the data message and forwards the data message to the selected node or cluster through a tunnel.
  • Each ISS 105 has a load balancer 160 that it uses to determine how to distribute the load for performing a service to one or more service nodes or one or more service node clusters that perform this service.
  • an ISS 105 connects to a service node or cluster through a tunnel.
  • the inline switches connect to some service nodes/clusters through tunnels, while not using tunnels to connect to other service nodes/clusters.
  • the service nodes are in different datacenters than the hosts 110 and controllers 120 and 125 , while in other embodiments one or more of the service nodes are in the same datacenter as the hosts 110 and controllers 120 and 125 .
  • some of the service nodes are service VMs that execute on hosts 110 .
  • different service node clusters can provide the same service or can provide different services.
  • the service node clusters 150 a and 150 b provide the same service (e.g., firewall service), while the service node cluster 150 c provides a different service (e.g., intrusion detection).
  • the tunnel-based approach for distributing data messages to service nodes/clusters in the same datacenter or different datacenters is advantageous for seamlessly implementing a cloud-based XaaS model, in which any number of services are provided by service providers in the cloud.
  • This tunnel-based, XaaS model architecture allows hosts 110 and VMs 115 in a private datacenter (e.g., in an enterprise datacenter) to seamlessly use one or more service clusters that are in one or more public multi-tenant datacenters in one or more locations.
  • the private datacenter typically connects to a public datacenter through a public network, such as the Internet.
  • cloud service providers include: firewall-service providers, email spam service providers, intrusion detection service providers, data compression service providers, etc.
  • One provider can provide multiple cloud services (e.g., firewall, intrusion detection, etc.), while another provider can provide only one service (e.g., data compression).
  • the ISS for a VM is deployed in the VM's egress datapath.
  • each VM has a virtual network interface card (VNIC) that connects to a port of the SFE.
  • the inline switch for a VM is called by the VM's VNIC or by the SFE port to which the VM's VNIC connects.
  • the VMs execute on top of a hypervisor, which is a software layer that enables the virtualization of the shared hardware resources of the host.
  • the hypervisor provides the inline switches that provide the inline switching and load balancing service to its VMs.
  • Multiple inline service switches that execute on multiple hosts can implement a distributed service switch.
  • the data messages from one group of related VMs on multiple different hosts get distributed to one or more service nodes or clusters according to the same service distribution policies. These data messages are distributed according to the same service distribution policies because the individual inline service switches for the SCN group are configured with the same policies or rules.
  • the VM managing controllers 125 provide control and management functionality for defining (e.g., allocating or instantiating) and managing one or more VMs on each host.
  • the ISS controller set 120 configures the inline switches 105 and their associated load balancers 160 through the ISS agent 135 .
  • one of these two controller sets 120 and 125 provides control and management functionality for defining and managing multiple logical networks that are defined on the common SFE physical infrastructure of the hosts.
  • the controllers 120 and 125 communicate with their agents that execute on the hosts through out-of-band control channel communication in some embodiments.
  • controllers 120 and 125 are standalone servers or are servers executing on host machines along with other servers.
  • the ISS controller set 120 provides the ISS agent with high level service policies that the ISS agent converts into service rules for the inline switches to implement. These service policies and rules include load balancing policies and rules that the load balancers of the inline switches implement.
  • the ISS controller set provides the ISS agent with service rules that the agent passes along to the inline switches and load balancers.
  • the ISS controller set provides both service policies and service rules to the ISS agent.
  • the ISS agent converts the service policies to service rules, and then it provides the received and converted service rules to the inline switches and load balancers.
  • the ISS controller set directly configures the inline switches and load balancers without going through an ISS agent.
  • the ISS controller set also provides to the ISS agents 135 , service switches 105 or their load balancers 160 , load balancing criteria that the load balancers use to perform their load balancing operations.
  • the load balancing criteria include a set of weight values that specify how the load balancers should distribute the data message load among a set of service nodes in a weighted round robin approach.
  • the ISS controller set 120 distributes data-message load statistics, and the service agents 135 , ISS 105 , or the load balancers 160 generate load balancing criteria based on these statistics.
  • the ISS controller set 120 gathers statistics from inline switches and based on the gathered statistics, dynamically adjusts the service policies, service rules and/or load balancing criteria that it distributes directly or indirectly (through the ISS agent) to the inline switches and load balancers.
  • each inline switch stores statistics regarding its data message distribution in a data storage (called STAT storage below) that it updates on its host.
  • the ISS agent 135 periodically gathers the collected statistics from the STAT data storage (not shown in FIG. 1 ), and relays these statistics to the ISS controller set 120 .
  • the agent 135 aggregate and/or analyze some of the statistics before relaying processed statistics to the ISS controller set 120 , while in other embodiments the agents relay collected raw statistics to the ISS controller set 120 .
  • the ISS controller set 120 of some embodiments aggregates the statistics that it receives from the agents of the hosts. In some embodiments, the ISS controller set 120 then distributes the aggregated statistics to the agents that execute on the hosts. These agents then analyze the aggregated statistics to generate and/or to adjust rules or criteria that their associated inline switches or their load balancers enforce. In other embodiments, the controller set analyzes the aggregated statistics to generate and/or to adjust service policies, service rules and/or LB criteria, which the controller set then distributes to the agents 135 of the hosts for their inline switches and load balancers to enforce.
  • the controller set distributes the same policies, rules and/or criteria to each ISS in a group of associated ISS, while in other embodiments, the controller set distributes different policies, rules and/or criteria to different ISS in a group of associated ISS. In some embodiments, the controller set distributes updated policies, rules and/or criteria to some of the inline switches in an associated group of switches, while not distributing the updated policies, rules and/or criteria to other inline switches in the associated group. In some embodiments, the controller set updates and distributes some policies, rules or criteria based on the aggregated statistics, while also distributing some or all aggregated statistics to the hosts so that their agents can generate other rules or criteria.
  • the policies, rules and/or criteria are not always adjusted based on the aggregated statistics, but rather are modified only when the aggregated statistics require such modification.
  • the collection and aggregation of the data traffic statistics allows the switching rules or criteria to be dynamically adjusted. For instance, when the statistics show one service node as being too congested with data traffic, the load balancing rules or criteria can be adjusted dynamically for the load balancers that send data messages to this service node, in order to reduce the load on this service node while increasing the load on one or more other service nodes in the same service node cluster.
  • the collection and aggregation of the data traffic statistics also allows the controller set 120 to reduce the load on any service node in a service-node cluster by dynamically directing a service-node management controller set (not shown) to provision new service node(s) or allocate previously provisioned service node(s) to the service cluster.
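One way the controller set could turn the aggregated per-node statistics into new load-balancing criteria is sketched below: nodes that reported more traffic receive smaller weights in the next weighted round-robin epoch. The inverse-load heuristic, the weight budget, and the node names are assumptions made for illustration only.

```python
# Hypothetical recomputation of weighted round-robin criteria from gathered stats.
def recompute_weights(byte_counts, total_weight=10):
    # byte_counts: node id -> bytes observed since the last adjustment.
    # Give each node a share of the weight budget inversely proportional to its
    # observed load, so lightly loaded nodes attract more of the new flows.
    inverse = {node: 1.0 / max(count, 1) for node, count in byte_counts.items()}
    norm = sum(inverse.values())
    return {node: max(1, round(total_weight * inv / norm)) for node, inv in inverse.items()}

stats = {"svc-A": 9_000_000, "svc-B": 1_000_000, "svc-C": 2_000_000}
print(recompute_weights(stats))   # {'svc-A': 1, 'svc-B': 6, 'svc-C': 3}
```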
  • FIG. 2 illustrates a process 200 that an ISS 105 of a VM 115 performs for a data message sent by the VM.
  • the ISS 105 (1) determines whether the data message should be processed by one or more service nodes or clusters, and (2) if so, selects a service node or cluster for processing the data message and forwards the data message to the selected node or cluster through a tunnel.
  • the ISS performs a load balancing operation to ensure that the data message flows that it processes are distributed among several service nodes or clusters based on a set of load balancing criteria.
  • the process 200 will be described below by reference to FIGS. 3-5 .
  • FIGS. 4 and 5 respectively show an ISS 105 distributing data message flows to service nodes 405 in one service node cluster 410 , and distributing data message flows to different service-node clusters 505 that perform the same service.
  • the process 200 starts when the ISS 105 receives a data message that its associated VM sends.
  • the ISS 105 is deployed in the VM's egress datapath so that it can intercept the data messages sent by its VM.
  • the ISS 105 is called by the VM's VNIC or by the SFE port that communicatively connects to the VM's VNIC.
  • the process determines (at 210 ) whether the data message is part of a data message flow for which the process has processed other data messages. In some embodiments, the process makes this determination by examining a connection storage that the ISS maintains to keep track of the data message flows that it has recently processed. Two data messages are part of the same flow when they share the same message headers. For example, two packets are part of the same flow when they have the same five-tuple identifier, which includes the source IP address, destination IP address, source port, destination port, and protocol.
  • the connection storage stores one record for each data message flow that the ISS has recently processed.
  • This record stores a description of the set of service rules that have to be applied to the flow's data messages or has a reference (e.g., a pointer) to this description.
  • When the operation of the service rule set requires the data message to be dropped, the connection-storage record also specifies this action, or specifies this action in lieu of the service rule description.
  • the connection-storage record indicates that the ISS should allow the received data message to pass along the VM's egress datapath.
  • this record stores the flow's identifier (e.g., the five-tuple identifier).
  • the connection storage is hash addressable (e.g., locations in the connection storage are identified based on a hash of the flow's identifier) in some embodiments.
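A minimal sketch of such a hash-addressable connection storage follows; the field names and the string form of the stored actions are illustrative, not the patent's data layout.

```python
# Illustrative connection tracker keyed by a flow's five tuple.
from dataclasses import dataclass

@dataclass(frozen=True)
class FiveTuple:
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: str

class ConnectionStorage:
    def __init__(self):
        self._records = {}                    # hash-addressable via the dict's hashing

    def lookup(self, key: FiveTuple):
        return self._records.get(key)         # None -> first data message of the flow

    def record(self, key: FiveTuple, actions):
        self._records[key] = actions          # e.g., ["firewall@svc-node-2"], ["drop"], ["allow"]

store = ConnectionStorage()
flow = FiveTuple("10.0.0.5", "10.0.1.9", 34567, 443, "TCP")
if store.lookup(flow) is None:
    store.record(flow, ["firewall@svc-node-2"])
print(store.lookup(flow))
```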
  • the process 200 After performing these service operations, the process 200 provides (at 215 ) a data message to the module (e.g., SFE port or VNIC) that called it, assuming that the service operations do not require the data message to be dropped, in which case the process so notifies the calling module.
  • the data message that the process 200 returns to the calling module is a modified version of the data message received at 205 .
  • the modified data message may have different header values and/or a different datagram (i.e., payload) than the received data message.
  • the returned data message might be identical to the received data message.
  • the process determines (at 220 ) whether the service rules that it enforces require one or more service actions to be performed on the received data message.
  • the ISS has a service rule storage that stores several service rules that the ISS enforces. Each service rule can be associated with one or more data message flows from the inline switch's VM, and different service rules can be associated with different data message flows from this VM.
  • each service rule in the service rule storage has (1) an associated set of data message identifiers (e.g., packet header values) and (2) a set of one or more actions.
  • the process 200 determines (at 220 ) whether the received data message's identifiers (e.g., five tuples) match the data message identifiers of any service rule in its service rule storage.
  • the process 200 of some embodiments only performs the set of actions that is specified by the highest priority matching service rule.
  • the service rule storage stores the rules according to a sort that is based on their priorities so that the process 200 first matches the data message to a higher priority rule before being able to match it to a lower priority rule, when more than one rule matches the data message.
  • When the received data message does not match any service rule, the process 200 determines that it does not need to forward the data message to any service node to perform any service action. Hence, it creates (at 222 ) a record in the connection storage to specify that no service action is needed for data messages that are part of the same flow as the received data message. For some embodiments of the invention, the structure of the connection storage was described above and is further described below.
  • the process also notifies the module (e.g., SFE port or the VM VNIC) that called it that the process has finished processing the data message.
  • this notification is not accompanied by the data message, while in other embodiments, this notification is accompanied by the data message.
  • the process 200 is allowing the received data message to pass without any service being performed on it. After 222 , the process ends.
  • each service rule can specify only one action, while in other embodiments, a service rule can specify a sequence of one or more actions.
  • a service action in some embodiments entails forwarding the matching data messages to a service node or cluster. For such an action, the service rule identifies directly, or through another record (to which the rule refers), the service nodes of a cluster or service-node clusters of a group of service clusters for performing the service. As further described below, the process 200 selects one of the identified service nodes or clusters.
  • FIG. 3 illustrates several examples of service rules specifying service actions.
  • This figure illustrates a service rule storage 300 that stores multiple service rules.
  • Each service rule has an associated service rule identifier set 305 that is expressed in terms of one or more data message header values (e.g., one or more five tuple values, as described above).
  • the process 200 compares the service rule identifier set to a data message's header values in order to determine whether the service rule matches a received data message.
  • Each service rule also specifies one or more actions, with each action being specified in terms of an action type 310 (e.g., firewall action type, IPS action type, IDS action type, etc.) and a tunnel ID set 315 .
  • the tunnel ID set of each action of a service rule identifies (1) one or more tunnels between the ISS and one or more service nodes in a cluster, or (2) one or more service clusters in a service cluster group that provides the service.
  • the tunnel ID sets of the service rules are supplied as a part of the data initially supplied by the ISS controller set (e.g., in order to configure the ISS) or are supplied in subsequent updates that are provided by the controller set.
  • When a service rule specifies more than one action, the actions can be associated with more than one service. In this manner, a service rule can specify a sequence of service operations that need to be performed on a matching data message.
  • some embodiments store the service rules in the data storage 300 according to a sort that is based on the rule priorities, because the process 200 in these embodiments matches a data message to only one service rule, and the sorted order allows the process to match a data message to a matching higher priority rule instead of a lower priority matching rule.
  • service rule 350 has one associated action, while service rule 355 has multiple associated actions.
  • In other embodiments, each service rule can only specify one service action.
  • In some embodiments, the service rule does not directly identify the tunnel ID for the service node or cluster. For instance, the process 200 identifies the tunnel ID by using a service-node identifier or service-cluster identifier to retrieve the tunnel ID from a table that identifies these IDs.
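A small sketch of the rule structure of FIG. 3 and the highest-priority-first matching follows. For simplicity it stores the tunnel IDs directly in each action (rather than resolving them through a separate table, as some embodiments do), assumes a smaller number means higher priority, and uses made-up field names.

```python
# Illustrative service-rule table with priority-first matching.
from dataclasses import dataclass, field

@dataclass
class ServiceRule:
    priority: int                    # smaller value = higher priority (an assumption)
    match: dict                      # header fields to match, e.g. {"dst_port": 80}
    actions: list = field(default_factory=list)   # [(action_type, [tunnel IDs]), ...]

def find_matching_rule(rules, headers):
    # Rules are examined in priority order so the first hit is the one applied.
    for rule in sorted(rules, key=lambda r: r.priority):
        if all(headers.get(k) == v for k, v in rule.match.items()):
            return rule
    return None                      # no service action: let the message pass

rules = [
    ServiceRule(10, {"dst_port": 80},    [("firewall", ["tun-1", "tun-2"])]),
    ServiceRule(20, {"protocol": "TCP"}, [("IPS", ["tun-7"]), ("firewall", ["tun-1"])]),
]
print(find_matching_rule(rules, {"dst_port": 80, "protocol": "TCP"}).priority)  # 10
```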
  • the process selects (at 225 ) a service action of a service rule that matches the received data message's header values.
  • When a matching service rule specifies a sequence of two or more service actions, the process 200 maintains a record (e.g., a count) that identifies where it is in the sequence of actions that it has to perform, so that when it returns to 225 it will know which service action it has to select next in the sequence. This will be further described below.
  • this service action has an associated tunnel ID set 315 that specifies one or more tunnels of one or more service nodes or service node clusters that perform the service action.
  • the process 200 uses the load balancer of the ISS to select, in a load-balanced way, one service node or one service-node cluster for the data message from the set of service nodes or service-node clusters that are identified by the tunnel ID set.
  • the ISS load balancer distributes the load in a stateful manner so that data messages that are part of the same flow are processed by the same service node or the same service node cluster.
  • each service rule in some embodiments specifies a set of weight values (not shown) for each of the rule's specified tunnel ID set.
  • each service rule refers to another record that identifies the weight value set for each tunnel ID set identified for the rule.
  • Each weight value set specifies a weight value for each tunnel ID in the associated tunnel ID set, and provides the load-balancing criteria for the ISS load balancer to spread the traffic to the service nodes or clusters that are identified by the tunnel ID set.
  • the ISS load balancer uses these weight values to implement a weighted round robin scheme to spread the traffic to the nodes or clusters.
  • For example, assume that the tunnel ID set has five tunnel IDs and the weight values for the tunnel IDs are 1, 3, 1, 3, and 2. Based on these values, the ISS load balancer would distribute data messages that are part of ten new flows as follows: 1 to the first tunnel ID, 3 to the second tunnel ID, 1 to the third tunnel ID, 3 to the fourth tunnel ID, and 2 to the fifth tunnel ID.
  • the weight values for a service rule are generated and adjusted by the ISS agent 135 and/or ISS controller set 120 in some embodiments based on the statistics that the controller set collects from the inline switches.
  • a tunnel ID set can have multiple weight value sets and the service rule in some embodiments can specify different time periods during which different weight values (i.e., different load balancing criteria) of the tunnel ID set are valid.
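The weighted round-robin spread can be sketched in a few lines; the example below reproduces the 1, 3, 1, 3, 2 distribution of ten new flows described above. The slot-expansion scheme is just one possible implementation, not the patent's.

```python
# Sketch of a weighted round-robin picker over tunnel IDs (illustrative only).
from collections import Counter
from itertools import cycle

def weighted_schedule(tunnel_weights):
    # Expand each tunnel ID into as many slots as its weight, then cycle over
    # the slots so every ten new flows land 1/3/1/3/2 on the five tunnels.
    slots = [tid for tid, weight in tunnel_weights for _ in range(weight)]
    return cycle(slots)

weights = [("tun-1", 1), ("tun-2", 3), ("tun-3", 1), ("tun-4", 3), ("tun-5", 2)]
picker = weighted_schedule(weights)
first_ten_flows = [next(picker) for _ in range(10)]
print(Counter(first_ten_flows))
# Counter({'tun-2': 3, 'tun-4': 3, 'tun-5': 2, 'tun-1': 1, 'tun-3': 1})
```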
  • the process (at 235 ) identifies a tunnel key, encapsulates the data message with a tunnel header (that includes the identified tunnel key) for the tunnel to the selected service node or service-node cluster, and provides this tunnel-header encapsulated data message to its host's SFE for forwarding to the selected service node or service-node cluster.
  • Examples of such tunnels and keys include GRE tunnels, Geneve tunnels, GRE keys, Geneve keys, etc.
  • the inline switches of some embodiments also use other redirection mechanisms (such as MAC redirect, destination network address translation, etc.) to forward data messages to some of the service nodes and service-node clusters.
  • Tunnel keys allow multiple data message flows to share the same tunnel.
  • the process in some embodiments uses one GRE key to send the flow's data messages to the service node or cluster at the other end of the tunnel and to receive from this node or cluster responsive data messages in response to the sent data messages.
  • the tunnel key also allows the process 200 to associate a responsive data message with the data message that the process sent to the service node or cluster, as in the sketch below.
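The sketch below shows, with an invented header layout rather than a real GRE or Geneve encoding, how per-flow tunnel keys let many flows share one tunnel while responses are matched back to the flow they answer.

```python
# Illustrative tunnel endpoint: one key per flow, shared tunnel, response matching.
import itertools

class TunnelEndpoint:
    def __init__(self, remote):
        self.remote = remote
        self._keys = itertools.count(1)
        self.key_by_flow = {}      # five tuple -> tunnel key
        self.flow_by_key = {}      # tunnel key -> five tuple

    def encapsulate(self, flow, payload):
        if flow not in self.key_by_flow:
            key = next(self._keys)
            self.key_by_flow[flow] = key
            self.flow_by_key[key] = flow
        # A real implementation would build an actual GRE/Geneve header here.
        return {"outer_dst": self.remote,
                "tunnel_key": self.key_by_flow[flow],
                "inner": payload}

    def match_response(self, response):
        # The key carried on the returning message identifies the original flow.
        return self.flow_by_key.get(response["tunnel_key"])

ep = TunnelEndpoint("svc-node-1")
pkt = ep.encapsulate(("10.0.0.5", "10.0.1.9", 34567, 443, "TCP"), b"request")
print(ep.match_response({"tunnel_key": pkt["tunnel_key"]}))
```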
  • FIG. 4 presents an example that shows the inline service switches 105 , of several related VMs 115 executing on the same host or on different hosts, using several tunnels 450 to distribute their VM data messages to several service nodes 405 of a service node cluster 410 that perform the same service (e.g., a firewall service or an IPS service) on these messages.
  • An ISS performs a load balancing operation to select the service node for each data message flow.
  • each tunnel is established between an ISS 105 and a service node 405 in the cluster.
  • an ISS 105 uses different tunnel keys so that different flows can share the same tunnel.
  • the ISS receives data messages in response to the data messages that it sends to the service node, and uses the tunnel keys to associate each responsive data message with a data message that it sent.
  • each service node 405 is a standalone appliance.
  • one or more service nodes 405 are servers executing on a host computer.
  • the tunnels 450 in some embodiments are tunnels that are provisioned for the host computer, or for an SFE of the host computer, on which the service node executes.
  • the tunnel can also be provisioned at the host level in some embodiments.
  • two or more inline switches 105 that execute on the same host computer use the same tunnel to a service node.
  • FIG. 5 presents an example that shows the inline service switches 105 , of several related VMs 115 executing on the same host or on different hosts, using several tunnels 550 to distribute their VM data messages to several service-node clusters 505 that perform the same service (e.g., a firewall service or an IPS service) on these messages.
  • an ISS performs a load balancing operation to select the service cluster for each data message flow.
  • different tunnel keys are used to identify data messages of different flows that share the same tunnel in the example of FIG. 5 .
  • each service cluster 505 has multiple service nodes 510 that perform the same service, and a load-balancing webserver set 515 (with one or more webservers) that distributes the received data messages to the service nodes of its cluster.
  • each tunnel is established between the ISS 105 and a load-balancing webserver 515 of the cluster.
  • the ISS selects one cluster in the group of clusters of FIG. 5 , in order to distribute the service load to the different clusters that perform the same service.
  • the load-balancing webservers 515 of each cluster then have the task of distributing each cluster's load among the cluster's service nodes. In some embodiments, these webservers distribute the load in a stateful manner so that the same service node in the cluster processes data messages that are part of the same flow.
  • the different service clusters of a service cluster group illustrated in FIG. 5 are in different datacenters at different locations. Having different service clusters in different locations that perform the same service can be advantageous in that it allows different ISS in different locations to bias their service cluster selection to service clusters that are closer to the ISS location. Also, having different service clusters perform the same service action also provides different tenants in a datacenter the ability to pick different service providers for the same service and to easily switch between these providers without the need to reconfigure the inline switches or their servers (e.g., their VMs or containers). In other embodiments, one or more of these service clusters 505 are in the same datacenter. Such service clusters might be created when different service providers provide the same service in one datacenter.
  • the architecture illustrated in FIG. 5 is also used in some embodiments to terminate tunnels on non-service node elements (e.g., on load balancers such as load balancers 515 ) that distribute data messages that they receive from the inline switches 105 to one or more service nodes that perform the same service or different services.
  • service nodes 510 of one service provider can be in different clusters 505 .
  • each service cluster can have just one service node.
  • the tunnel that an inline switch uses to forward data message to a service node does not necessarily have to terminate (i.e., does not have to be provisioned) at the service node, but can terminate at a machine or appliance that forwards the data messages it receives through the tunnel to the service node.
  • In some embodiments, the confirmation that the process receives (at 240 ) from the selected service node or cluster is part of one or more data messages that are received from the service node or cluster and that are encapsulated with the tunnel header with the tunnel key.
  • the tunnel key allows the process 200 to associate the received data message(s) with the sent data message (i.e., the data message sent at 235 ).
  • the received confirmation might indicate that the data message should be dropped (e.g., when the service node performs a security service operation (e.g., firewall, IPS, IDS, etc.) that determines that the data message should be dropped).
  • the confirmation data message(s) might return a data message with one or more modified data message headers. These modified header values may re-direct the data message to a different destination once the process 200 completes its processing of the data message.
  • the confirmation data message(s) in some embodiments might return a new or modified payload to replace the payload of the data message that was sent at 235 to the service node or cluster.
  • the new payload might be the encrypted or compressed version of the payload of the sent data message.
  • the process 200 replaces the sent data message payload with the received new or modified payload before having another service node or cluster perform another service on the data message, or before having the SFE forward the data message to its eventual destination.
  • the process 200 determines (at 245 ) whether it should continue processing the data message.
  • When the process determines (at 245 ) that it should not continue processing the data message, the process 200 transitions to 255 , where it creates a record in the ISS connection storage to specify that data messages that are part of the same flow (as the data message received at 205 ) should be dropped. This record is created so that, for subsequent data messages that are part of the same flow, the process does not have to search the service rule data storage and perform the service actions before it determines that it should drop the data message.
  • the process 200 also updates the statistics that it maintains in the ISS STAT storage to reflect the current data message's processing by the service node or nodes that processed this data message before it was dropped.
  • When the process determines (at 245 ) that it should continue processing the data message, it determines (at 250 ) whether its service rule check at 220 identified any other service actions that it has to perform on the current data message.
  • the process in some embodiments can identify multiple matching service rules with multiple service actions that have to be performed on the data message. In other embodiments, the process can only identify one matching service rule to the data message. However, in some embodiments, a matching service rule might specify multiple service actions that have to be performed on a data message.
  • When the process 200 determines (at 250 ) that it needs to perform another service action on the data message, it returns to 225 to select another service action and to repeat operations 230 - 250 .
  • When a matching service rule specifies a sequence of two or more service actions, the process 200 maintains a record (e.g., a count) that identifies where it is in the sequence of actions that it has to perform, so that when it returns to 225 it will know which service action it has to select next in the sequence. In other words, this record maintains the state of where the process is in the service policy chain that it has to implement for a received data message, as in the sketch below.
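A toy version of such a chain-position record is sketched below; the ServiceChainState name and the (action, tunnel) tuples are assumptions, and a real switch would of course resume the walk only after the confirmation from each service node arrives.

```python
# Illustrative per-message record of the position in a sequence of service actions.
class ServiceChainState:
    def __init__(self, actions):
        self.actions = list(actions)   # e.g., [("firewall", "tun-1"), ("IPS", "tun-7")]
        self.position = 0              # index of the next action to perform

    def next_action(self):
        if self.position >= len(self.actions):
            return None                # chain finished; hand the message back to the caller
        action = self.actions[self.position]
        self.position += 1
        return action

state = ServiceChainState([("firewall", "tun-1"), ("compression", "tun-9")])
while (step := state.next_action()) is not None:
    print("forwarding over", step[1], "for", step[0])
```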
  • FIG. 6 illustrates an example of an ISS sequentially calling multiple different service nodes of different clusters that perform different services in order to implement a complex service policy that involves multiple different individual service policies.
  • This figure illustrates an ISS 105 of a VM 115 sequentially using X service nodes 605 of X different service clusters 610 to perform a complex service policy that involves X individual service actions, where X is an integer.
  • the ISS uses different tunnels 650 to send data messages to the X service nodes.
  • FIG. 6 shows the tunnels that are used to process the data message as solid lines, while showing other candidate tunnels that the ISS 105 does not select as dashed lines.
  • the use of the tunnels allows some or all of the clusters to be in the cloud. In other words, the tunnels allow the ISS to seamlessly implement a cloud-based XaaS model.
  • the different service clusters 610 can be located in the same datacenter with each other, or in different datacenters. Also, a service cluster 610 can be located in the same datacenter as the VM 115 and ISS 105 , or it can be in a different datacenter.
  • the VM 115 is in a private datacenter (e.g., in an enterprise datacenter) while the one or more service clusters are in a public multi-tenant datacenter in a different location.
  • the tunnel-based approach for distributing data messages to service nodes/clusters in the same datacenter or different datacenters is advantageous for seamlessly implementing a cloud-based XaaS model, in which any number of services are provided by service providers in the cloud.
  • When an inline switch 105 sequentially calls multiple service nodes or clusters to perform multiple service actions for a data message that the switch has received, the inline switch sends to each service node or cluster a data message that is identical to the data message that the inline service switch initially receives when the process 200 starts, or identical to the data message that the inline service switch receives from a previous service node that performed a previous service action on a data message that the inline service switch sent to the previous service node.
  • In other words, the inline switch just relays, over the tunnels that connect it to the service nodes or clusters, the data messages that it receives (at 205 ) at the start of the process 200 and that it receives (at 240 ) from the service nodes. In these situations, the inline switch just places a tunnel packet header on the data message that it receives before forwarding it to the next service action node.
  • In performing its service action on a received data message, one service node might modify the data message's header values and/or its datagram before sending back the modified data message. Notwithstanding this modification, the discussion in this document refers to all the data messages that are received by the inline switch during the execution of the process 200 (i.e., while this switch is directing the service node(s) or cluster(s) to perform a desired sequence of service operations that are initiated when the first data message is received at 205 to start the process 200 ) as the received data message.
  • In some cases, the data message can be modified so that the resulting message is not similar (e.g., has a different header value or a different datagram) to the message on which the operation was performed.
  • the inline switch might just send a portion of a received data message to the service node.
  • the inline switch might send only the header of a data message, a portion of this header, the payload of the data message, or a portion of the payload.
  • the service nodes in some embodiments do not send back a data message that is a modified version of a data message that they receive, but instead send back a value (e.g., Allow, Drop, etc.).
  • When the process determines (at 250) that it has performed all service actions that it identified for the data message received at 205, the process creates (at 255) a record in the ISS connection storage to specify the service action or service-action sequence that should be performed for data messages that are part of the same flow (as the data message received at 205). This record is created so that, for subsequent data messages that are part of the same flow, the process does not have to search the service rule data storage. Instead, at 210, the process can identify for these subsequent data messages the service action(s) that it has to perform from the record in the connection storage, and it can perform these actions at 215.
  • For each service action that the process 200 identifies in the connection storage, the process also identifies, in the connection storage record, the service node or cluster (i.e., the node or cluster identified at 225) that has to perform the service action, so that all the data messages of the same flow are processed by the same service node or cluster for that service action.
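  • One non-limiting way to picture such a connection-storage record is as a mapping from a flow identifier to the ordered list of (service action, chosen node) pairs, as in the following hypothetical sketch:

      # Hypothetical sketch: per-flow connection record for a sticky service chain.
      connection_storage = {}   # five tuple -> list of (service action, service node)

      def record_flow(five_tuple, action_node_pairs):
          connection_storage[five_tuple] = list(action_node_pairs)

      def lookup_flow(five_tuple):
          """Return the cached action/node sequence for the flow, or None on a miss."""
          return connection_storage.get(five_tuple)

      flow = ("10.0.0.1", "10.0.0.2", 12345, 443, "TCP")
      record_flow(flow, [("firewall", "fw-node-2"), ("dpi", "dpi-node-1")])
      assert lookup_flow(flow)[0] == ("firewall", "fw-node-2")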
  • the process 200 also updates the statistics that it maintains in the ISS STAT storage to reflect the current data message's processing by the service node or nodes that processed this data message.
  • the process 200 provides (at 255 ) a data message to the module (e.g., SFE port or VNIC) that called it, assuming that the service operations do not require the data message to be dropped, in which case the process so notifies the calling module.
  • the data message that the process 200 returns to the calling module is typically a modified version of the data message received at 205 (e.g., has one or more different header value and/or a modified payload), but in some cases, the returned data message might be identical to the received data message.
  • the process ends.
  • the inline switch selects in a load-balanced manner a service node or cluster for processing a data message, and then sends the data message to the selected node or cluster through a tunnel.
  • the inline switch does not select a service node from several service nodes, nor does it select a service cluster from several service clusters.
  • the inline switch simply relays a data message along one tunnel to a service cluster so that a load-balancing node at the service cluster can then select a service node of the cluster to perform the service.
  • At least one service cluster implements an elastic model in which one primary service node receives the cluster's data messages from the inline service switches. This service node then either performs the service on the data message itself or directs the data message (e.g., through L3 and/or L4 network address translation, through MAC redirect, etc.) to one of the other service nodes (called secondary service nodes) in the cluster to perform the service on the data message.
  • the primary service node in some embodiments elastically shrinks or grows the number of secondary service nodes in the cluster based on the received data message load.
  • FIG. 7 illustrates an example of such an elastic service model that uses one primary service node and zero or more secondary service nodes. This example is illustrated in three stages 705 - 715 that illustrate the operation of a service node cluster 700 at three different instances in time.
  • the first stage 705 illustrates that at a time T1, the cluster includes just one primary service node (PSN) 720 .
  • the PSN 720 has a load balancer (LB) and a service virtual machine (SVM).
  • the PSN receives all data messages on which the cluster has to perform its service. These are the data messages that an inline switch 105 captures from its VM and sends to the cluster 700 through a tunnel 750. In the first stage 705, the PSN's SVM 730 performs the needed service on these messages, and then directs these messages back to the inline switch 105 through the tunnel 750.
  • the second stage 710 illustrates that at time T2, the cluster has been expanded to include another service node, SSN1, which is implemented by a second service virtual machine.
  • the service node SSN1 is added to the cluster because the data message load on the cluster has exceeded a first threshold value.
  • a service-node controller set (not shown) adds SSN1 when it detects that the data message load has exceeded the first threshold value, or when the PSN detects this condition and directs the controller set to add SSN1.
  • the service-node controller set obtains the data message load from the PSN.
  • the controller set or the PSN in different embodiments quantifies the data message load based on different metrics.
  • these metrics include one or more of the following parameters: (1) number of flows being processed by the cluster or by individual service nodes in the cluster, (2) number of packets being processed by the cluster or by individual service nodes in the cluster, (3) amount of packet data being processed by the cluster or by individual service nodes in the group.
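  • A toy, hypothetical sketch of how such a metric could drive the decision to grow or shrink the set of secondary service nodes (the threshold value and metric names are invented) is:

      # Hypothetical sketch: threshold-based elastic sizing of the secondary-node set.
      def desired_secondary_count(flows, packets_per_sec, bytes_per_sec,
                                  flow_threshold=10_000):
          """Grow the cluster by one secondary node per threshold's worth of flows."""
          # Only the flow count drives this toy example; a real controller might
          # combine all three of the metrics listed above.
          return max(0, flows // flow_threshold)

      current_secondaries = 1
      target = desired_secondary_count(flows=23_500, packets_per_sec=80_000,
                                       bytes_per_sec=40_000_000)
      if target > current_secondaries:
          print("add", target - current_secondaries, "secondary service node(s)")
      elif target < current_secondaries:
          print("remove", current_secondaries - target, "secondary service node(s)")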
  • the second stage 710 also illustrates that at time T2 the PSN performs the cluster's service on some of the data message flows, while directing other data message flows to SSN1 so that this service node can perform this service on these other flows.
  • When the PSN performs the cluster's service on a data message, the PSN then directs the processed data message back to the ISS 105.
  • When SSN1 performs the service on a data message, this service node in some embodiments returns the data message to the PSN, which directs it back to the ISS. In other embodiments, the SSNs return the processed data messages directly to the inline switch.
  • the SSNs and the inline switches are configured to insert the appropriate packet header values and to examine the appropriate packet header values to identify data messages that have been processed by the SSNs.
  • the SSNs establish tunnels with the inline switches (e.g., with the hosts of the inline switches) once the SSNs are provisioned so that they can return their processed messages directly to the inline switches.
  • the load balancer 725 of the PSN performs a load balancing operation that selects which service node (primary or secondary) in the cluster should perform the group's service on each data message that the PSN receives.
  • the load balancer 725 distributes the data messages based on a hash of the different tunnel keys that the ISS 105 uses to send different data-message flows through the tunnel 750 . This hashing ensures that the data messages that are part of the same flows are processed by the same service node in the cluster.
  • the load balancing is also based on some of the inner packet header values in some embodiments. In other embodiments, the load balancing is just based on the inner packet header values (i.e., it is not based on the tunnel keys).
  • the load balancer 725 stores in a connection storage a record of each service node selection for each data-message flow, and uses this record to forego re-assessing selection of a service node for a flow after picking a service node for the first data message in the flow.
  • the load balancer of the PSN also determines when service nodes should be added to or removed from the cluster.
  • the third stage 715 illustrates that at time T3, the cluster has been expanded to include yet another service node, SSN2, which is a third service virtual machine.
  • the service node SSN2 is added to the cluster because the data message load on the cluster has exceeded a second threshold value, which is the same as the first threshold value in some embodiments or is different than the first threshold value in other embodiments.
  • Some embodiments add the service node SSN2 when the load on either the PSN or SSN1 exceeds a second threshold amount.
  • Other embodiments add a new service node when the load on N (e.g., two or three) service nodes exceeds a threshold value.
  • the service-node controller set in some embodiments adds SSN2 when it or the PSN detects that the data message load has exceeded the second threshold value.
  • the third stage 715 also illustrates that at time T3, the PSN performs the cluster's service on some of the data message flows, while directing other data message flows to SSN1 or SSN2, so that these service nodes can perform this service on these other flows.
  • After the PSN, SSN1, or SSN2 performs the service on a data message, the PSN returns the processed data message to the ISS 105 through the tunnel 750.
  • SSN2, like SSN1, provides its reply data message to the PSN so that the PSN can forward this message to the ISS 105 through the tunnel 750.
  • FIG. 8 illustrates an example where the ISS 105 of a VM 115 sequentially forwards a data message from the VM to different clusters of elastically adjusted service-node clusters.
  • different service clusters perform different service operations on the data message.
  • SSNs of one cluster can be PSNs of other clusters, when the multiple clusters reside in the same location.
  • the ISS 105 connects to the PSN of each service cluster through a tunnel, which allows each service cluster to reside outside of the ISS' local area network. By sequentially relaying the data message to different service clusters, the ISS 105 can implement a complex service policy with multiple service actions (X in this example) on the data message.
  • the use of the tunnels allows some or all of the clusters to be in the cloud. In other words, the tunnels allow the ISS to seamlessly implement a cloud-based XaaS model.
  • FIG. 9 illustrates a process 900 that the ISS 105 performs in some embodiments to process data messages with one or more elastically adjusted service node clusters.
  • This process is identical to the process 200 of FIG. 2 except that process 900 does not perform the load-balancing operation 230 to select a service node in the cluster.
  • the process 900 just forwards (at 235 ) the data message to the service cluster along the tunnel that connects the ISS to the service cluster.
  • FIG. 10 conceptually illustrates a process 1000 that such a PSN performs whenever the PSN receives a data message in some embodiments.
  • the process 1000 identifies one service node in the PSN's SN group that should process the received data message, and then directs the identified service node to perform the SN group's service for the received data message.
  • the identified service node can be the PSN itself, or it can be an SSN in the SN group.
  • the process 1000 starts (at 1005 ) when the PSN receives a data message through a tunnel from an ISS filter. After receiving the data message, the process determines (at 1010 ) whether the received message is part of a particular data message flow for which the PSN has previously processed at least one data message.
  • the process examines (at 1010 ) a connection-state data storage that stores (1) the identity of each of several data message flows that the PSN previously processed, and (2) the identity of the service node that the PSN previously identified as the service node for processing the data messages of each identified flow.
  • the process identifies each flow in the connection-state data storage in terms of one or more flow attributes, e.g., the flow's five tuple identifier.
  • the connection-state data storage is hash indexed based on the hash of the flow attributes (e.g., of the flow's five tuple header values).
  • For such a storage, the PSN generates a hash value from the header parameter set of a data message, and then uses this hash value to identify one or more locations in the storage to examine for a matching header parameter set (i.e., for a matching data message flow attribute set).
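  • A minimal, hypothetical sketch of such a hash-indexed connection-state lookup (using a generic hash in place of whatever hash the PSN actually uses) is:

      # Hypothetical sketch: hash-indexed connection-state table keyed by five tuple.
      NUM_BUCKETS = 1024
      buckets = [[] for _ in range(NUM_BUCKETS)]   # each bucket: list of (flow, node)

      def _bucket(five_tuple):
          return buckets[hash(five_tuple) % NUM_BUCKETS]

      def remember(five_tuple, service_node):
          _bucket(five_tuple).append((five_tuple, service_node))

      def lookup(five_tuple):
          """Scan only the bucket selected by the hash for a matching flow."""
          for flow, node in _bucket(five_tuple):
              if flow == five_tuple:
                  return node
          return None

      flow = ("192.168.1.5", "10.0.0.9", 33012, 80, "TCP")
      remember(flow, "SSN1")
      assert lookup(flow) == "SSN1"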
  • When the process identifies (at 1010) an entry in the flow connection-state data storage that matches the received data message flow's attributes (i.e., when the process determines that it previously processed another data message that is part of the same flow as the received data message), the process directs (at 1015) the received data message to the service node (in the SN group) that is identified in the matching entry of the connection-state data storage (i.e., to the service node that the PSN previously identified for processing the data messages of the particular data message flow). This service node, which can be the PSN itself or an SSN in the SN group, then performs the service on the data message. After performing (at 1015) the service on the data message, the service node returns a reply data message (e.g., the processed data message) to the ISS filter that called the PSN, and the process then ends.
  • When the process determines (at 1010) that the connection-state data storage does not store an entry for the received data message (i.e., determines that it previously did not process another data message that is part of the same flow as the received data message), the process transitions to 1020.
  • the connection-state data storage periodically removes old entries that have not matched any received data messages in a given duration of time. Accordingly, in some embodiments, when the process determines (at 1010 ) that the connection-state data storage does not store an entry for the received data message, the process may have previously identified a service node for the data message's flow, but the matching entry might have been removed from the connection-state data storage.
  • At 1020, the process determines whether the received data message should be processed locally by the PSN or remotely by another service node of the SN group.
  • To make this determination, the PSN in some embodiments performs a load balancing operation that identifies the service node for the received data message flow based on the load balancing parameter set that the PSN maintains for the SN group at the time that the data message is received.
  • the load balancing parameter set is adjusted in some embodiments (1) based on updated statistic data regarding the traffic load on each service node in the SN group, and (2) based on service nodes that are added to or removed from the SN group.
  • the process 1000 performs different load balancing operations (at 1020 ) in different embodiments.
  • In some embodiments, the load balancing operation relies on L2 parameters of the data message flows (e.g., it generates hash values from the L2 parameters, such as source MAC addresses, to identify hash ranges that specify service nodes for the generated hash values) to distribute the data messages to service nodes, while in other embodiments, the load balancing operation relies on L3/L4 parameters of the flows (e.g., it generates hash values from the L3/L4 parameters, such as five tuple header values, to identify hash ranges that specify service nodes for the generated hash values) to distribute the data messages to service nodes.
  • the load balancing operations use different techniques (e.g., round robin techniques) to distribute the load amongst the service nodes.
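  • The hash-range approach can be pictured as slicing a hash space into contiguous per-node ranges; whether L2 or L3/L4 header fields feed the hash is the only difference between the two variants described above. A hypothetical sketch:

      # Hypothetical sketch: hash-range load balancing across a service-node group.
      import hashlib

      def pick_node(flow_key, nodes):
          """Hash the chosen header fields and map the value into per-node hash ranges."""
          digest = hashlib.sha256(repr(flow_key).encode()).digest()
          value = int.from_bytes(digest[:4], "big")          # 32-bit hash value
          range_size = (2 ** 32) // len(nodes)               # equal hash ranges
          return nodes[min(value // range_size, len(nodes) - 1)]

      nodes = ["PSN", "SSN1", "SSN2"]
      l2_key = ("00:50:56:aa:bb:cc",)                        # L2 variant: source MAC
      l4_key = ("10.0.0.1", "10.0.0.2", 40000, 443, "TCP")   # L3/L4 variant: five tuple
      print(pick_node(l2_key, nodes), pick_node(l4_key, nodes))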
  • When the process determines (at 1020) that the PSN should process the received data message, the process directs (at 1025) a service module of the PSN to perform the SN group's service on the received data message.
  • the process 1000 also creates an entry in the flow connection-state data storage to identify the PSN as the service node for processing data messages that are part of the same flow as the received data message. In some embodiments, this entry identifies the PSN and identifies the received data message header values (e.g., five tuple values) that specify the message's flow.
  • After performing (at 1025) the service on the data message, the PSN returns a reply data message (e.g., the processed data message) to the ISS filter that called it, and the process then ends.
  • When the process determines (at 1020) that, based on its load balancing parameter set, the PSN should not process the received data message, the process identifies (at 1020) another service node in the PSN's SN group to perform the service on the data message. In this situation, the process directs (at 1030) the message to the other service node in the PSN's SN group.
  • To direct the data message to the other service node, the PSN in different embodiments uses different techniques, such as MAC redirect (for L2 forwarding), IP destination network address translation (for L3 forwarding), port address translation (for L4 forwarding), L2/L3 tunneling, etc.
  • To redirect the data message through MAC redirect, the process 1000 in some embodiments changes the data message's MAC address to a MAC address of the service node that it identifies at 1020. For instance, in some embodiments, the process changes the MAC address to a MAC address of another SFE port in a port group that contains the SFE port connected with the PSN. More specifically, in some embodiments, the service nodes (e.g., SVMs) of an SN group are assigned ports of one port group that can be specified on the same host or different hosts.
  • When the PSN wants to redirect the data message to another service node, it replaces the MAC address of the PSN's port in the data message with the MAC address of the port of the other service node, and then provides this data message to the SFE so that the SFE can forward it directly or indirectly (through other intervening forwarding elements) to the port of the other service node.
  • To redirect the data message to the other service node through IP destination network address translation (DNAT), the PSN replaces the destination IP address in the data message with the destination IP address of the other service node, and then provides this data message to the SFE so that the SFE can forward it directly or indirectly (through other intervening forwarding elements) to the other service node.
  • To redirect the data message to the other service node through port address translation, the PSN replaces the destination port address in the data message with the destination port address of the other service node, and then uses this new port address to direct the data message to the other service node.
  • the PSN's network address translation may include changes to two or more of the MAC address, IP address, and port address.
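  • A schematic, hypothetical sketch of these three rewrites, operating on a toy header dictionary rather than real packets, is:

      # Hypothetical sketch: redirect a message by rewriting different header fields.
      def mac_redirect(header, node_mac):      # L2 forwarding
          return {**header, "dst_mac": node_mac}

      def dnat_redirect(header, node_ip):      # L3 forwarding
          return {**header, "dst_ip": node_ip}

      def port_redirect(header, node_port):    # L4 forwarding
          return {**header, "dst_port": node_port}

      header = {"dst_mac": "00:00:00:00:00:01", "dst_ip": "10.0.0.10", "dst_port": 8080}
      print(mac_redirect(header, "00:00:00:00:00:02"))
      print(dnat_redirect(header, "10.0.0.20"))
      print(port_redirect(header, 9090))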
  • the process After directing (at 1030 ) the data message to the other service node, the process creates (at 1035 ) an entry in the connection-state data storage to identify the other service node as the service node for processing data messages that are part of the same flow as the received data message. In some embodiments, this entry identifies (1) the other service node and (2) the received data message header values (e.g., five tuple values) that specify the message's flow.
  • After performing the service on the data message, the SSN returns a reply data message (e.g., the processed data message) to the ISS filter that called the PSN, and the process then ends. In some embodiments, the SSN returns the reply data message directly to the ISS filter, while in other embodiments, the SSN returns this reply data message to the ISS filter through the PSN.
  • the inline service switch of some embodiments statefully distributes the service load to a number of service nodes based on one or more L4+ parameters.
  • Examples of L4+ parameters include session keys, session cookies (e.g., SSL session identifiers), file names, database server attributes (e.g., user name), etc.
  • the inline service switch in some embodiments establishes layer 4 connection sessions (e.g., TCP/IP sessions) with the data-message SCNs and the service nodes, so that the switch (1) can examine one or more of the initial payload packets that are exchanged for a session, and (2) can extract and store the L4+ session parameters for later use in its subsequent load balancing operations for the session.
  • FIG. 11 illustrates an example of a multi-host system 1100 of some embodiments with inline service switches 1105 that statefully distribute the service load to a number of service nodes based on one or more L4+ parameters.
  • the system 1100 is identical to the system 100 of FIG. 1, except that the inline service switches 1105 of its hosts 1110 establish layer 4 connection sessions (e.g., TCP/IP sessions) with their associated VMs and with the service nodes.
  • an ISS 1105 (1) can examine one or more of the initial payload packets that are exchanged for a session, and (2) can extract and store the L4+ session parameters for later use in its subsequent load balancing operation for its VM.
  • After establishing the L4 sessions with its VM and the service node, the ISS filter (1) receives a data packet from a session end point (i.e., from the VM or the service node), (2) extracts the old packet header, (3) examines the packet payload (i.e., the datagram after the L3 and L4 packet header values) to identify any L4+ session parameter that it needs to extract, (4) extracts any needed L4+ session parameter if one such parameter is found, (5) stores any extracted session parameter (e.g., in the connection storage 1190 on its host 1110), and (6) re-encapsulates the payload with a new packet header before relaying the packet to the session's other end point (i.e., to the service node or the VM).
  • the new and old packet headers are similar except for specifying different TCP sequence numbers as further described below.
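  • A toy, hypothetical sketch of this header rewrite, showing only the sequence-number offset that differs between the two sessions that the switch maintains (all field names are invented), is:

      # Hypothetical sketch: relay a packet between two L4 sessions whose initial
      # sequence numbers differ by a fixed offset; the payload is left untouched.
      def relay(packet, seq_offset):
          """Shift the TCP sequence number by the offset between the two sessions;
          packets in the reverse direction would apply the negated offset to the ack."""
          new_header = dict(packet["header"])
          new_header["seq"] = (new_header["seq"] + seq_offset) % (2 ** 32)
          return {"header": new_header, "payload": packet["payload"]}

      packet = {"header": {"seq": 1000, "ack": 500}, "payload": b"GET /secure HTTP/1.1"}
      print(relay(packet, seq_offset=250)["header"])   # {'seq': 1250, 'ack': 500}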
  • FIG. 12 illustrates an ISS 1105 extracting and re-using a session parameter by establishing an L4 connection session with its VM and a service node 1205 of a service cluster 1250 .
  • the service cluster 1250 includes several servers (service nodes) that perform a service (e.g., provide access to secure content) through SSL (secure sockets layer) sessions.
  • the extracted and re-used session parameters are SSL session cookies.
  • FIG. 12 presents a messaging diagram that illustrates how the ISS 1105 relays two different sets 1230 and 1235 of SSL packets from its associated VM to a service node 1205 .
  • the ISS 1105 first establishes a TCP session with the VM by performing a 3-way TCP handshake. After establishing the first TCP session with its VM (for the first set of SSL packets 1230 ), the ISS 1105 examines an initial set of one or more packets that its VM 115 sends and determines that the VM is requesting an SSL service session. The ISS 1105 then determines that the requested SSL service session is a new one as this request is not accompanied by an SSL session cookie.
  • the ISS 1105 determines that it has to select a service node for the requested SSL session from the service cluster 1250 , and that it has to monitor the packets exchanged between the VM and this service node so that it can record the SSL session cookie for this session. In some embodiments, the ISS 1105 selects the service node 1205 in the cluster based on a set of load balancing criteria that it considers for the service cluster 1250 .
  • After selecting the service node 1205, the ISS 1105 performs a 3-way TCP handshake with the service node 1205 in order to establish an L4 connection session with the service node 1205. Once this session is established, the ISS 1105 starts to relay the packets that it receives from its VM 115 to the service node 1205, and to relay the packets that it receives from the service node 1205 to its VM 115. In relaying the data packets between the VM 115 and the service node 1205, the ISS 1105 in some embodiments can adjust the sequence numbers of the relayed data messages to address differences in sequence numbers between the VM and the service node. In some embodiments, the ISS 1105 sends packets to and receives packets from the service node 1205 through a tunnel.
  • In relaying one or more responsive packets from the service node 1205 to the VM 115, the ISS 1105 identifies, in an initial set of packets, an SSL session ID that is generated by the service node 1205.
  • This session ID is often referred to as SSL session ID or cookie.
  • an SSL session key is generated for the session, e.g., by the VM based on an SSL certificate of the service node. Generation of an SSL session key is computationally intensive.
  • the ISS 1105 can extract the SSL session cookie from the initial set of one or more packets that the service node 1205 sends. As shown, the ISS 1105 stores the SSL session cookie in the connection storage 1190 .
  • the connection storage record that stores this SSL session cookie also includes the identity of the service node 1205 as the service node that generated this cookie. In some embodiments, this record also includes one or more packet header attributes of the current flow (such as source IP, destination IP, destination port, and protocol of the current flow).
  • the VM stops communicating with the service node for a time period. It then resumes this communication by sending a second set of data packets. Because the VM wants to continue using the same SSL session as before, the VM sends the SSL session cookie that it obtained previously. However, in such situations, it is not unusual for the VM to use a different source port for these new data packets. Because of the different source port, the ISS 1105 initially assumes that the new data packets are for a new flow.
  • the ISS 1105 establishes another TCP session with the VM by performing another 3-way TCP handshake.
  • the ISS 1105 examines an initial set of one or more packets sent by its VM 115 and determines this set of packets includes an SSL session cookie. As shown, the ISS 1105 extracts this cookie, compares it with the cookies in its connection storage 1190 , identifies the record that stores this cookie (i.e., determines that it has previously stored this cookie) and from this record, identifies service node 1205 as the service node for processing the SSL session associated with this request.
  • the ISS 1105 then performs another 3-way TCP handshake with the service node 1205 in order to establish another L4 connection session with the service node 1205, because it has determined that this service node is the node that should process the requested SSL session.
  • the ISS 1105 starts to relay packets back and forth between its VM 115 and the service node 1205 .
  • In this manner, the ISS 1105 can properly route subsequent data packets from its VM 115 that include this session's cookie to the same service node 1205. This is highly beneficial in that it allows the SSL session to quickly resume, and it saves the computational resources that would otherwise be spent generating another session key.
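  • The stickiness in this example reduces to a cookie-to-node table that is consulted before any load balancing, as in the following hypothetical sketch:

      # Hypothetical sketch: sticky SSL-session routing keyed by the session cookie.
      import random

      cookie_table = {}                 # SSL session cookie -> node that issued it
      service_nodes = ["node-a", "node-b", "node-c"]

      def choose_node(ssl_cookie=None):
          """Reuse the issuing node for a known cookie; otherwise load balance."""
          if ssl_cookie is not None and ssl_cookie in cookie_table:
              return cookie_table[ssl_cookie]
          return random.choice(service_nodes)   # stand-in for the real LB criteria

      first_node = choose_node()                      # new session: no cookie yet
      cookie_table["cookie-123"] = first_node         # cookie seen in the node's reply
      assert choose_node("cookie-123") == first_node  # resumed session sticks to it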
  • the inline service switches of some embodiments can extract and store different L4+ session parameters for later use in facilitating efficient distribution of service requests from VMs to service nodes in service-node clusters.
  • Other examples include session keys, file names, database server attributes (e.g., user name), etc.
  • FIG. 13 illustrates an example of a file name as the extracted L4+ session parameter.
  • the file name is the name of a piece of content (e.g., image, video, etc.) that is requested by a VM 115 and that is provided by the servers of a service cluster 1350 .
  • the VM's ISS 1105 stores the requested file name as part of a first set of content processing messages 1330 .
  • the ISS (1) performs an initial TCP 3-way handshake, (2) receives the VM's initial request, and (3) extracts the file name from the request.
  • the VM's initial request is in the form of a URL (uniform resource locator), and the ISS 1105 extracts the file name from this URL.
  • the URL often contains the name or acronym of the type of content being requested (e.g., it contains .mov, .img, .jpg, or other similar designations that are postscripts identifying the requested content).
  • the ISS in some embodiments stores the extracted file name in its connection storage 1190 in a record that identifies the service node 1305 that it selects to process this request. From the servers of the cluster 1350 , the ISS identifies the service node 1305 by performing a load balancing operation based on a set of load balancing criteria that it processes for content requests that it distributes to the cluster 1350 .
  • the ISS 1105 performs a 3-way TCP handshake with the service node 1305 in order to establish an L4 connection session with the service node 1305 .
  • the ISS 1105 relays the content request to the service node 1305 .
  • In relaying the data packets between the VM 115 and the service node 1305, the ISS 1105 in some embodiments can adjust the sequence numbers of the relayed data packets to address differences in sequence numbers between the VM and the service node 1305.
  • the ISS 1105 sends packets to and receives packets from the service node 1305 through a tunnel.
  • the ISS 1105 then receives one or more responsive packets from the service node 1305 and relays these packets to the VM 115 .
  • This set of packets includes the requested content piece.
  • the ISS 1105 creates the record in the connection storage 1190 to identify the service node 1305 as the server that retrieved the requested content piece only after receiving the responsive packets from this server.
  • the service node 1305 directly sends its reply packets to the VM 115 .
  • the ISS 1105 provides a TCP sequence number offset to the service node, so that this node can use this offset in adjusting its TCP sequence numbers that it uses in its reply packets that respond to packets from the VM 115 .
  • the ISS 1105 provides the TCP sequence number offset in the encapsulating tunnel packet header of a tunnel that is used to relay packets from the ISS to the service node 1305 .
  • the inline service switch 1105 is configured to, or is part of a filter architecture that is configured to, establish the L4 connection session for its associated VM. In these embodiments, the ISS 1105 would not need to establish a L4 connection session with its VM in order to examine L4 parameters sent by the VM.
  • the VM 115 starts a second set of content processing messages 1335 by requesting the same content piece.
  • the ISS 1105 initially assumes that the new data packets are for a new flow.
  • the ISS 1105 establishes another TCP session with its VM by performing a 3-way TCP handshake.
  • the ISS 1105 examines an initial set of one or more packets sent by its VM 115 and determines this set of packets includes a content request.
  • The ISS 1105 extracts the file name from the URL of this request, compares this file name with the file names stored in its connection storage 1190, and determines that it has previously processed a request for this content piece by using the service node 1305.
  • the ISS 1105 performs another 3-way TCP handshake with the service node 1305 in order to establish another L4 connection session with the service node 1305 . Once this session is established, the ISS 1105 relays the content request to this service node, and after obtaining the responsive data packets, relays them to its VM.
  • This approach is highly beneficial in that it saves the service cluster's resources from having to obtain the same piece of content twice. In other words, going to the same service node is efficient as the service node 1305 probably still has the requested content in its cache or memory.
  • this approach is also beneficial in that it allows the ISS of one VM to go to the same service node as the ISS of another VM when both VMs request the same piece of content within a particular time period.
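  • The file-name-based stickiness walked through above can likewise be pictured as extracting the last URL path segment and recording the node chosen for it, as in this hypothetical sketch:

      # Hypothetical sketch: file-name-based stickiness for content requests.
      from urllib.parse import urlparse

      content_table = {}   # file name -> service node that served the content

      def file_name_from_url(url):
          """Take the last path segment, e.g. 'clip.mov' from '/videos/clip.mov'."""
          return urlparse(url).path.rsplit("/", 1)[-1]

      def node_for_request(url, fallback_node):
          return content_table.get(file_name_from_url(url), fallback_node)

      url = "http://content.example.com/videos/clip.mov?session=9"
      content_table[file_name_from_url(url)] = "cache-node-3"
      assert node_for_request(url, "cache-node-1") == "cache-node-3"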
  • FIG. 14 illustrates a process 1400 that an ISS 1105 of a VM 115 performs to process a service request in a sticky manner from an associated VM.
  • the ISS 1105 (1) determines whether the request is associated with a service request previously processed by a service node of a service-node cluster, and (2) if so, directs the service request to the service node that was previously used.
  • the ISS 1105 determines whether the request is associated with a previously processed request by examining L4+ session parameters that it stored for previous requests in its connection storage 1190 .
  • the process 1400 starts when the ISS 1105 receives a data message sent by its associated VM.
  • the ISS 1105 is deployed in the VM's egress datapath so that it can intercept the data messages sent by its VM.
  • the ISS 1105 is called by the VM's VNIC or by the SFE port that communicatively connects to the VM's VNIC.
  • the received data message is addressed to a destination address (e.g., destination IP or virtual IP address) associated with a service node cluster. Based on this addressing, the ISS ascertains (at 1405 ) that the data message is a request for a service that is performed by the service nodes of the cluster.
  • the process determines whether the data message is part of a data message flow for which the process has processed other data messages. In some embodiments, the process makes this determination by examining its connection storage 1190 , which stores records of the data message flows that it has recently processed as further described below by reference to 1445 . Each record stores one or more service parameters that the process previously extracted from the previous data messages that it processed. Examples of such session parameters include session cookies, session keys, file names, database server attributes (e.g., user name), etc. Each record also identifies the service node that previously processed data messages that are part of the same flow. In some embodiments, this record also stores the flow's identifier (e.g., the five tuple identifier). In addition, the connection storage is hash addressable (e.g., locations in the connection storage are identified based on a hash of the flow's identifier) in some embodiments.
  • When the process determines (at 1410) that it has previously processed a data message from the same flow as the received data message, it transitions to 1415.
  • the process retrieves from the connection storage 1190 the identity of the service node that it used to process previous data messages of the same flow, and forwards the received data message to the identified service node to process.
  • the process also (1) retrieves the previously stored session parameter(s) (e.g., session cookie) for the data message's flow from the connection storage 1190 , and (2) forwards the retrieved parameter(s) to the identified service node so that this node can use the parameter(s) to process the forwarded data message.
  • Instead of forwarding the retrieved service parameter(s) to the service node, the process 1400 in some embodiments uses the retrieved service parameter(s) to perform an operation on the received data message before forwarding the data message to the identified service node. Also, in some embodiments, the process provides additional context information (e.g., Tenant ID, Network ID, etc.), which cannot be encoded in the tunnel key. After 1415, the process 1400 ends.
  • When the process determines (at 1410) that it has not previously processed a data message from the same data message flow, the process establishes (at 1420) an L4 session with the VM (e.g., by performing a three-way TCP handshake with the VM). After establishing the L4 session with its VM, the process determines (at 1425) whether an initial set of one or more packets sent by its VM contains one or more L4 service parameters that the process can use to determine whether it has previously processed a similar service request.
  • Examples of such session parameters include session cookies, session keys, file names, database server attributes (e.g., user name), etc.
  • When such parameters are found, the process determines (at 1420) whether the connection storage 1190 contains a record for the identified L4 service parameter(s). If so, the process transitions to 1415 to forward the data message to the record's identified service node. In some embodiments, the process 1400 also performs other operations at 1415, as described above. The process 1400 can transition from either 1410 or 1420 to 1415 because the process can determine that the same session record is applicable based either on the outer packet header values (e.g., L2, L3 and L4 values) of one message flow, or on the inner packet values (e.g., L4+ parameters) of another message flow.
  • the inner packet values might match a session record when the VM uses a different source port for a service session that follows an earlier related service session, as described above by reference to FIG. 12 . This would also result when the VM requests the same file and the file name is used to identify the same service node, as described above by reference to FIG. 13 .
  • When no matching record is found, the process 1400 uses (at 1430) the load balancer of the ISS to select a service node in a service node cluster to process the service request from the VM.
  • the process 1400 uses a service rule that matches the received message flow attributes.
  • the service rule specifies a set of service nodes, and a set of load-balancing criteria (e.g., weight values) for each of the rule's specified service nodes.
  • After selecting (at 1430) a service node for the data message, the process establishes (at 1435) an L4 session with the service node (e.g., through a three-way TCP handshake with the service node), because it soft-terminated the session with the VM.
  • the process uses this connection session to forward the data messages that it receives from the VM to the selected service node.
  • the process also receives responsive data messages from the selected service node, and it forwards these received data messages to the VM through its connection session with the VM.
  • the process in some embodiments adjusts the TCP sequence numbers of the data messages, as described above.
  • the process exchanges messages with the selected service node through a tunnel.
  • the process encapsulates the data messages that it relays to the service node with a tunnel header, and it removes this tunnel header from the data messages that it passes from the service node to the VM.
  • the process 1400 in some embodiments updates the statistics that it maintains in the ISS STAT storage to keep track of the data messages that it is directing to different service nodes.
  • the process stores in the connection storage 1190 one or more L4+ parameters that it extracts from the data messages that it relays between the VM and the selected service node.
  • the process stores the L4+ parameter set in a record that identifies the selected service node, as mentioned above. By storing the selected service node's identity for the extracted L4+ parameter set, the process can later re-use the selected service node for processing data messages that related to the same L4+ parameter set.
  • the record created at 1445 also stores the flow identifier of the data message received at 1405 , so that this record can also be identified based on the outer packet header attributes of the flow. After 1445 , the process ends.
  • the inline service switches of the embodiments described above by reference to FIGS. 12-14 select service nodes in a service node cluster, and relay data messages to the selected service nodes. However, as described above, the inline service switches of some embodiments select service node clusters in a group of service node clusters, and forward data messages to the selected clusters.
  • the inline service switches of some embodiments implement sticky service request processing by forwarding data messages to service clusters (that perform the same service) in a sticky manner.
  • an inline switch in these embodiments stores L4+ session parameters that allow this switch to forward the same or similar service session requests to the same service node clusters in a cluster group that performs the same service.
  • FIG. 15 illustrates a more detailed architecture of a host 1500 that executes the ISS filters of some embodiments of the invention.
  • the host 1500 executes multiple VMs 1505 , an SFE 1510 , multiple ISS filters 1530 , multiple load balancers 1515 , an agent 1520 , and a publisher 1522 .
  • Each ISS filter has an associated ISS rule storage 1550 , a statistics (STAT) data storage 1554 , and a connection state storage 1590 .
  • the host also has an aggregated (global) statistics data storage 1586 .
  • the VMs execute on top of a hypervisor, which is a software layer that enables the virtualization of the shared hardware resources of the host.
  • In some embodiments, the hypervisor provides the ISS filters in order to support inline service switching services for its VMs.
  • the SFE 1510 executes on the host to communicatively couple the VMs of the host to each other and to other devices outside of the host (e.g., other VMs on other hosts) through one or more forwarding elements (e.g., switches and/or routers) that operate outside of the host.
  • the SFE 1510 includes a port 1532 to connect to a physical network interface card (not shown) of the host, and a port 1535 that connects to each VNIC 1525 of each VM.
  • the VNICs are software abstractions of the physical network interface card (PNIC) that are implemented by the virtualization software (e.g., by a hypervisor).
  • Each VNIC is responsible for exchanging data messages between its VM and the SFE 1510 through its corresponding SFE port.
  • a VM's ingress datapath for its data messages includes the SFE port 1532 , the SFE 1510 , the SFE port 1535 , and the VM's VNIC 1525 .
  • a VM's egress datapath for its data messages involves the same components but in the opposite direction, specifically from the VNIC 1525 , to the port 1535 , to the SFE 1510 , and then to the port 1532 .
  • the SFE 1510 connects to the host's PNIC to send outgoing packets and to receive incoming packets.
  • the SFE 1510 performs message-processing operations to forward messages that it receives on one of its ports to another one of its ports. For example, in some embodiments, the SFE tries to use header values in the VM data message to match the message to flow based rules, and upon finding a match, to perform the action specified by the matching rule (e.g., to hand the packet to one of its ports 1532 or 1535 , which directs the packet to be supplied to a destination VM or to the PNIC).
  • the SFE extracts from a data message a virtual network identifier (VNI) and a MAC address.
  • the SFE in these embodiments uses the extracted VNI to identify a logical port group, and then uses the MAC address to identify a port within the port group.
  • the SFE 1510 is a software switch, while in other embodiments it is a software router or a combined software switch/router.
  • the SFE 1510 in some embodiments implements one or more logical forwarding elements (e.g., logical switches or logical routers) with SFEs executing on other hosts in a multi-host environment.
  • a logical forwarding element in some embodiments can span multiple hosts to connect VMs that execute on different hosts but belong to one logical network.
  • different logical forwarding elements can be defined to specify different logical networks for different users, and each logical forwarding element can be defined by multiple SFEs on multiple hosts.
  • Each logical forwarding element isolates the traffic of the VMs of one logical network from the VMs of another logical network that is serviced by another logical forwarding element.
  • a logical forwarding element can connect VMs executing on the same host and/or different hosts.
  • the SFE ports 1535 in some embodiments include one or more function calls to one or more modules that implement special input/output (I/O) operations on incoming and outgoing packets that are received at the ports.
  • One of these function calls for a port is to an ISS filter 1530 .
  • the ISS filter performs the service switch operations on outgoing data messages from the filter's VM.
  • each port 1535 has its own ISS filter 1530 .
  • some or all of the ports 1535 share the same ISS filter 1530 (e.g., all the ports on the same host share one ISS filter, or all ports on a host that are part of the same logical network share one ISS filter).
  • Examples of other I/O operations that are implemented through function calls by the ports 1535 include firewall operations, encryption operations, etc. By implementing a stack of such function calls, the ports can implement a chain of I/O operations on incoming and/or outgoing messages in some embodiments.
  • the ISS filters are called from the ports 1535 for a data message transmitted by a VM.
  • Other embodiments call the ISS filter from the VM's VNIC or from the port 1532 of the SFE for a data message sent by the VM, or call this filter from the VM's VNIC 1525, the port 1535, or the port 1532 for a data message received for the VM (i.e., deploy the service operation call along the ingress path for the VM).
  • an ISS filter 1530 enforces one or more service rules that are stored in the ISS rule storage 1550 . These service rules implement one or more service policies. Based on the service rules, the ISS filter (1) determines whether a sent data message should be processed by one or more service nodes or clusters, and (2) if so, selects a service node or cluster for processing the data message and forwards the data message to the selected node or cluster (e.g., through a tunnel).
  • In some embodiments, each service rule in the service rule storage 1550 has (1) an associated set of data message identifiers (e.g., packet header values), (2) a set of one or more actions, (3) for each action, a set of service nodes or service node clusters that perform the action, and (4) for each action, a set of load balancing criteria for selecting a service node or cluster from the rule's set of service nodes or service node clusters.
  • a rule in some embodiments can identify a service node or cluster by providing an identifier for the tunnel connected to the service node or cluster (e.g., from the host, or the SFE, or the ISS filter).
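  • The rule structure described above could be modeled roughly as follows (the field names are invented for illustration and are not the claimed data layout):

      # Hypothetical sketch: an inline-service-switch rule with per-action node sets.
      from dataclasses import dataclass, field

      @dataclass
      class ServiceAction:
          name: str                                        # e.g. "firewall", "dpi"
          tunnel_ids: list = field(default_factory=list)   # one tunnel per node or cluster
          weights: list = field(default_factory=list)      # load-balancing criteria

      @dataclass
      class ServiceRule:
          match: dict      # data message identifiers (header values) the rule matches
          actions: list    # ordered ServiceAction entries

      rule = ServiceRule(
          match={"dst_ip": "203.0.113.10", "dst_port": 443, "proto": "TCP"},
          actions=[ServiceAction("firewall", tunnel_ids=[11, 12], weights=[2, 1]),
                   ServiceAction("dpi", tunnel_ids=[21], weights=[1])])
      print(rule.actions[0].tunnel_ids)   # tunnels for the first action's node set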
  • the ISS filter 1530 determines whether the received data message's identifiers (e.g., five tuples) match the data message identifiers of a service rule in its service rule storage. When the received data message's header values do not match the rule-matching identifier of one or more service rules in the service rule storage, the ISS filter 1530 informs the port 1535 that it has completed processing of the data message, without performing any service on the data message. The ISS filter also stores a record of this decision in its connection storage 1590 . This record identifies the data message flow identifier (e.g., its five tuple identifier) and identifies that no service action needs to be performed for this data message flow. This record can be used for quick processing of subsequent data messages of the same flow.
  • When a data message's header values match a service rule, the ISS filter performs the set of actions specified with the matching service rule. When the set of actions includes more than one action, the ISS filter performs the service actions sequentially.
  • a service action of a matching service rule is performed by a service node of a SN group or a SN cluster of a SN cluster group. Accordingly, to perform such a service action, the ISS filter selects a service node or cluster for processing the data message and forwards the data message to the selected node or cluster.
  • the ISS filter 1530 forwards the data message to the selected node or cluster through a tunnel.
  • the ISS filter 1530 connects to some service nodes/clusters through tunnels, while not using tunnels to connect to other service nodes/clusters.
  • the ISS filter 1530 might use L3 or L4 destination network address translation (DNAT), or MAC redirect, to forward data messages to some of the service nodes.
  • one or more service nodes might be executing on the same host computer 1500 as the ISS filter 1530 , and in these embodiments the ISS filter 1530 directs the data messages to these service nodes through DNAT, MAC redirect or some other forwarding mechanism that is part of the filter framework of some embodiments.
  • In some embodiments, service rules have identifiers that specify different re-direction mechanisms, because one rule (or different rules) can identify different service nodes or SN clusters that are accessible through different re-direction mechanisms.
  • When the ISS filter 1530 uses a tunnel to send a data message to a service node or cluster, the ISS filter in some embodiments encapsulates the data message with a tunnel packet header. This packet header includes a tunnel key in some embodiments. In other embodiments, the ISS filter 1530 has another I/O chain filter encapsulate the data messages with tunnel packet headers.
  • In some embodiments, the ISS filter 1530 has to establish an L4 connection session with the service node. In some of these embodiments, the ISS filter also has to establish an L4 connection session with its VM. To establish an L4 connection session, the ISS filter performs a three-way TCP/IP handshake with the other end of the connection (e.g., with the service node or VM) in some embodiments.
  • a matching service rule in some embodiments specifies a set of load balancing criteria for each set of service nodes or clusters that perform a service action specified by the rule.
  • In these embodiments, the ISS filter 1530 has its associated load balancer 1515 use the rule's specified load balancing criteria to select a service node from the specified SN group, or a service cluster from the specified SN cluster group.
  • the load balancer distributes the data message load for performing a service action to the service nodes or the SN clusters in a load balanced manner specified by the load balancing criteria.
  • In some embodiments, the load balancing criteria are weight values associated with the service nodes or SN clusters.
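  • A minimal, hypothetical sketch of weighted selection, in which the weight values play the role of the load balancing criteria mentioned above, is:

      # Hypothetical sketch: weight-value load balancing across service nodes.
      import random

      def pick_weighted(nodes, weights):
          """Select a node with probability proportional to its weight value."""
          return random.choices(nodes, weights=weights, k=1)[0]

      nodes, weights = ["sn1", "sn2", "sn3"], [5, 3, 2]   # e.g. 50%/30%/20% of new flows
      counts = {n: 0 for n in nodes}
      for _ in range(10_000):
          counts[pick_weighted(nodes, weights)] += 1
      print(counts)   # roughly follows the configured weights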
  • In some embodiments, each ISS filter 1530 has its own load balancer 1515, while in other embodiments, multiple ISS filters 1530 share the same load balancer 1515 (e.g., ISS filters of VMs that are part of one logical network use one load balancer 1515 on each host).
  • the ISS filter 1530 stores in the connection state storage 1590 data records that maintain connection state for data message flows that the ISS filter 1530 has previously processed. This connection state allows the ISS filter 1530 to distribute data messages that are part of the same flow statefully to the same service node.
  • each record in the connection storage corresponds to a data message flow that the ISS filter 1530 has previously processed.
  • Each record stores a description of the set of service rules that have to be applied to the flow's data messages or has a reference (e.g., a pointer) to this description.
  • When the operation of the service rule set requires the data message to be dropped, the connection-storage record also specifies this action, or specifies this action in lieu of the service rule description.
  • When no service has to be performed for data messages of this flow, the connection-storage record indicates that the ISS should allow the received data message to pass along the VM's egress datapath. In some embodiments, this record stores the flow's identifier (e.g., the five tuple identifiers).
  • The connection storage is hash addressable (e.g., locations in the connection storage are identified based on a hash of the flow's identifier) in some embodiments.
  • As mentioned above, the ISS filter 1530 in some embodiments stores L4+ session parameters; in some of these embodiments, the ISS filter stores these parameters in the connection state storage 1590.
  • Each time an ISS filter directs a message to a service node or SN cluster, the ISS filter updates the statistics that it maintains in its STAT data storage 1554 for the data traffic that it relays to the service nodes and/or clusters. Examples of such statistics include the number of data messages (e.g., number of packets), data message flows, and/or data message bytes relayed to each service node or cluster. In some embodiments, the metrics can be normalized to units of time, e.g., per second, per minute, etc.
  • the agent 1520 gathers (e.g., periodically collects) the statistics that the ISS filters store in the STAT data storages 1554 , and relays these statistics to a controller set. Based on statistics that the controller set gathers from various agents 1520 of various hosts, the controller set (1) distributes the aggregated statistics to each host's agent 1520 so that each agent can define and/or adjust the load balancing criteria for the load balancers on its host, and/or (2) analyzes the aggregated statistics to specify and distribute some or all of the load balancing criteria to the hosts. In some embodiments where the controller set generates the load balancing criteria from the aggregated statistics, the controller set distributes the generated load balancing criteria to the agents 1520 of the hosts.
  • When the agent 1520 receives new load balancing criteria or new ISS rules from the controller set, the agent 1520 stores these criteria or new rules in the host-level rule storage 1588 for propagation to the ISS rule storages 1550.
  • When the agent 1520 receives aggregated statistics from the controller set, the agent 1520 stores the aggregated statistics in the global statistics data storage 1586.
  • In some embodiments, the agent 1520 analyzes the aggregated statistics in this storage 1586 to define and/or adjust the load balancing criteria (e.g., weight values), which it then stores in the rule storage 1588 for propagation to the ISS rule storages 1550.
  • the publisher 1522 retrieves each service rule and/or updated load balancing criteria that the agent 1520 stores in the rule storage 1588 , and stores the retrieved rule or criteria in the ISS rule storage 1550 of each ISS filter that needs to enforce this rule or criteria.
  • the agent 1520 not only propagates service rule updates based on newly received aggregated statistics, but it also propagates service rules or updates service rules based on updates to SN group or cluster group that it receives from the controller set. Again, the agent 1520 stores such updated rules in the rule data storage 1588 , from where the publisher propagates them to ISS rule storages 1550 of the ISS filters 1530 that need to enforce these rules.
  • the controller set provides the ISS agent 1520 with high level service policies that the ISS agent converts into service rules for the ISS filters to implement.
  • the agent 1520 communicates with the controller set through an out-of-band control channel.
  • In some embodiments, the controller set 120 provides a host computer with parameters for establishing several tunnels, each between the host computer and a service node that can be in the same datacenter as the host computer or at a different location than the datacenter.
  • the provided tunnel-establishing parameters include tunnel header packet parameters in some embodiments. These parameters in some embodiments also include tunnel keys, while in other embodiments, these parameters include parameters for generating the tunnel keys. Tunnel keys are used in some embodiments to allow multiple different data message flows to use one tunnel from a host to a service node.
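  • One non-limiting way to read the tunnel-key remark is that a single tunnel carries many flows, each tagged with its own key so that the endpoints can demultiplex them, as in this hypothetical sketch:

      # Hypothetical sketch: multiplex several flows over one tunnel using tunnel keys.
      next_key = 0
      flow_to_key = {}   # five tuple -> tunnel key used on the shared tunnel

      def key_for_flow(five_tuple):
          """Assign each new flow its own key; reuse that key for later messages."""
          global next_key
          if five_tuple not in flow_to_key:
              flow_to_key[five_tuple] = next_key
              next_key += 1
          return flow_to_key[five_tuple]

      flow_a = ("10.0.0.1", "10.0.0.9", 40001, 443, "TCP")
      flow_b = ("10.0.0.2", "10.0.0.9", 40002, 443, "TCP")
      print(key_for_flow(flow_a), key_for_flow(flow_b), key_for_flow(flow_a))   # 0 1 0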
  • establishing a tunnel entails configuring modules at the tunnel endpoints with provisioned tunnel parameters (e.g., tunnel header parameters, tunnel keys, etc.).
  • the tunnels connect the host computer with several service nodes of one or more service providers that operate in the same datacenter or outside of the datacenter.
  • only one tunnel is established between each host and a service node and all ISS filters on the host use the same tunnel for relaying data messages to the service node. This is done to reduce the number of tunnels.
  • This approach can be viewed as establishing one tunnel between the host's SFE and the service node.
  • more than one tunnel is established between a host and a service node. For instance, in some deployments, one tunnel is established between each ISS filter on the host and the service node.
  • the controller set 120 defines data-message distribution rules for SCNs in the datacenter, and pushes these rules to the ISS filters of the SCNs.
  • the ISS filters then distribute the data messages to the data compute nodes (DCNs) that are identified by the distribution rules as the DCNs for the data messages.
  • the controller set 120 defines data-message distribution policies for SCNs in the datacenter, and pushes these policies to the hosts that execute the SCNs. The hosts then generate distribution rules from these policies and configure their ISS filters based on these rules.
  • a distribution rule includes (1) a rule identifier that is used to identify data message flows that match the rule, and (2) a set of service actions for data message flows that match the rule.
  • the rule identifier can be defined in terms of group identifiers (such as virtual IP addresses (VIPs)) or metadata tags assigned by application level gateways.
  • each service action of a rule is defined by reference to an identifier that identifies a set of service nodes for performing the service action.
  • Some rules can specify two or more service actions that are performed by two or more sets of service nodes of two or more service providers.
  • each service-node set is a service node cluster and is defined in the rule by reference to a set of tunnel identifiers (1) that identifies one tunnel to the service node cluster, or (2) that identifies one tunnel to each service node in the service-node cluster.
  • a distribution rule also includes a set of selection criteria for each service action set of the rule.
  • the selection criteria set includes one or more criteria that are dynamically assessed (e.g., based on the identity of SCNs executing on the host, etc.).
  • the selection criteria set is a load balancing criteria set that specifies criteria for distributing new data message flows amongst the service nodes that perform the service action.
  • This controller-driven method can seamlessly reconfigure the application or service layer deployment in the datacenter without having to configure the SCNs to use new group addresses or tags (e.g., new VIPs).
  • the controller set only needs to provide the inline switches with new distribution rules that dictate new traffic distribution patterns based on previously configured group addresses or tags.
  • the seamless reconfiguration can be based on arbitrary packet header parameters (e.g., L2, L3, L4 or L7 parameters) that are used by the SCNs. In other words, these packet header parameters in some cases would not have to include group addresses or tags.
  • the inline switches in some embodiments can be configured to distribute data messages based on metadata tags that are associated with the packets, and injected into the packets (e.g., as L7 parameters) by application level gateways (ALGs).
  • the controller set in some embodiments is configured to push new distribution policies and/or rules to the inline switches that configure these switches to implement new application or service layer deployment in the network domain.
  • FIG. 16 illustrates an example of a controller re-configuring the application layer deployment to insert a firewall service operation between a set of webservers 1605 and a set of application servers 1610 .
  • This figure illustrates a datacenter that implements a three-layer server deployment, in which the first layer includes one or more webservers 1605, the second layer includes one or more application servers 1610, and the third layer includes one or more database servers 1615.
  • a controller 1620 initially configures the inline switches 1630 of the webservers 1605 with message distribution rules that direct the switches to forward received packet flows that have a particular VIP (VIP1) as their destination IP address to the application servers.
  • IP address set 1 is the set of IP addresses of the application servers 1610.
  • the controller 1620 re-configures these switches 1630 with new packet distribution rules 1655 that direct the switches (1) to first forward such a packet flow (i.e., a packet flow with VIP1 for their destination IP address) to a set of firewall servers 1625 , and then (2) if the firewall servers do not direct the webservers to drop the packet flow, to forward the packets of this packet flow to the application servers 1610 .
  • each rule 1655 specifies (1) VIP1 as a flow-matching attribute, (2) FW (firewall) type as the first action's type, (3) IP address set 2 as the set of IP addresses of the firewall servers 1625, (4) AS (application server) type as the second action's type, and (5) IP address set 1 as the set of IP addresses of the application servers 1610 (a schematic sketch of such a rule appears after this list).
  • the new packet distribution rule that the controller 1620 provides to the webservers' inline switches 1630 specifies, for flows with VIP1 as the destination IP, a service policy chain that (1) first identifies a firewall operation and then (2) identifies an application-level operation. This new rule replaces a prior rule that specified only the application-level operation for flows with VIP1 as the destination IP.
  • for each operation that the rule specifies, the rule includes or refers to (1) identifiers (e.g., IP addresses, tunnel identifiers, etc.) of a set of servers that perform that operation, and (2) load balancing criteria for distributing different flows to different servers in the set.
  • the inline switches perform load-balancing operations based on the load balancing criteria to spread the packet flow load among the firewalls 1625 .
  • the controller 1620 configures the inline switches 1630 with multiple different rules for multiple different VIPs that are associated with multiple different service policy sets.
  • the controller re-configures the webservers 1605 (1) to direct a packet flow with VIP1 as the destination IP address to the firewall servers, and then, after receiving the firewall servers' assessment that the packet flow should not be dropped, (2) to forward the packets of this flow to the application servers.
  • FIG. 17 illustrates that in other embodiments, the controller 1720 (1) re-configures the inline switches 1730 of the webservers 1705 to forward all packets with the destination IP address VIP1 to the firewall servers 1725, and (2) configures the firewall servers 1725 to forward these packets directly to the application servers 1710 if the firewall servers 1725 determine that the packets should not be dropped.
  • In this approach, the controller 1720 initially configures the inline switches with the rule 1650, which was described above.
  • the controller then re-configures the inline switches with the rule 1755 , which specifies (1) VIP1 as a flow-matching attribute, (2) FW (firewall) type as the action type, and (3) the IP address set 2 as the set of IP addresses of the firewall servers 1725 .
  • the controller then configures the firewall servers 1725 to forward any passed-through packets directly to the application servers 1710 .
  • the controller configures the firewall servers by configuring the inline switches that are placed in the egress paths of the firewall servers to forward the firewall processed packets to the application servers 1710 .
  • FIG. 18 illustrates a process 1800 that a controller 1620 performs to define the service policy rules for an inline switch of a VM that is being provisioned on a host.
  • the process 1800 initially identifies (at 1805 ) a new inline switch to configure.
  • the process selects a virtual identifier (e.g., a VIP, a virtual address, etc.) that may be used to identify DCN groups or security policies/rules in packet flows that the inline switch may receive.
  • the process 1800 identifies a service policy set that is associated with the selected virtual identifier.
  • a service policy set specifies one or more service actions that need to be performed for packet flows that are associated with the selected virtual identifier.
  • the process then defines (at 1820 ) a service rule for the identified service policy set. For each service action in the service policy set, the service rule specifies a set of service nodes or service-node clusters that performs the service action.
  • the process selects a service action in the identified service policy set.
  • the process generates and stores in the defined rule (i.e., the rule defined at 1820 ) load balancing criteria for the set of service nodes or service-node clusters that perform the selected service action.
  • the process generates the load balancing criteria based on the membership of the set of service nodes or service-node clusters, and statistics regarding the packet flow load on the service-node or service-cluster set that the controller collects from the inline switches.
  • the process determines whether it has examined all the service actions in the identified service policy set. If not, the process selects (at 1840 ) another service action in the identified service policy set, and then transitions back to 1830 to generate and store load balancing criteria for the set of service nodes or service-node clusters that perform the selected service action.
  • the process determines (at 1845 ) whether it has processed all virtual identifiers that may be used to identify DCN groups or security policies/rules in packet flows that the inline switch may receive.
  • the process selects (at 1850 ) another virtual identifier that may be used to identify DCN groups or security policies/rules in packet flows that the inline switch may receive. After 1850 , the process returns to 1815 to repeat operations 1815 - 1850 for the selected virtual identifier. When the process determines (at 1845 ) that it has examined all virtual identifiers for the inline switch, it ends.
  • a service policy set is associated with a virtual identifier that may be used in a packet flow that an inline switch may receive.
  • the controller can define a service rule for a service policy set that is associated with a set of two or more virtual identifiers (e.g., a VIP and an L7 tag), or with a virtual identifier and one or more other packet header values (e.g., source IP address, source port address, etc.). More generally, the controller in some embodiments can define a service rule that defines one or more service actions to implement a service policy set and can associate this service rule with any arbitrary combination of physical and/or virtual packet header values.
  • a controller in some embodiments can seamlessly reconfigure the application or service layer deployment in the datacenter without having to configure the SCNs to use new DCN group addresses (e.g., new VIPs).
  • the controller only needs to provide the inline switches with new distribution rules that dictate new traffic distribution patterns based on previously configured DCN group addresses and/or based on any arbitrary packet header parameters (e.g., L2, L3, L4 or L7 parameters) that are used by the SCNs.
  • FIG. 19 illustrates a process 1900 for modifying a service rule and reconfiguring inline service switches that implement this service rule.
  • This process is performed by each controller in a set of one or more controllers in some embodiments.
  • the process 1900 starts (at 1905 ) when it receives a modification to a service policy set for which the controller set has previously generated a service rule and distributed this service rule to a set of one or more inline switches that implements the service policy set.
  • the received modification may involve the removal of one or more service actions from the service policy set or the addition of one or more service actions to the service policy set. Alternatively or conjunctively, the received modification may involve the reordering of one or more service actions in the service policy set.
  • the process 1900 changes the service action chain in the service rule to account for the received modification.
  • This change may insert one or more service actions in the rule's action chain, may remove one or more service actions from the rule's action chain, or may reorder one or more service actions in the rule's action chain.
  • a service rule specifies a service action chain by specifying (1) two or more service action types and (2) for each service action type, specifying a set of IP addresses that identify a set of service nodes or service-node clusters that perform the service action type.
  • Each service rule in some embodiments also specifies a set of load balancing criteria for each action type's set of IP addresses.
  • For each new service action in the service action chain, the process 1900 then defines (at 1915) the set of load balancing criteria (e.g., a set of weight values for a weighted round-robin load balancing scheme).
  • the process generates the load balancing criteria set based on (1) the membership of the set of service nodes or service-node clusters that perform the service action, and (2) statistics regarding the packet flow load on the service-node or service-cluster set that the controller collects from the inline switches.
  • the process distributes the modified service rule to the hosts that execute the inline service switches that process the service rule. These are the inline service switches that may encounter packets associated with the modified service rule. After 1920 , the process ends.
  • A computer readable storage medium is also referred to as a computer readable medium.
  • Processing unit(s) include, for example, one or more processors, cores of processors, or other processing units.
  • Examples of computer readable media include, but are not limited to, CD-ROMs, flash drives, RAM chips, hard drives, EPROMs, etc.
  • the computer readable media do not include carrier waves and electronic signals passing wirelessly or over wired connections.
  • the term “software” is meant to include firmware residing in read-only memory or applications stored in magnetic storage, which can be read into memory for processing by a processor.
  • multiple software inventions can be implemented as sub-parts of a larger program while remaining distinct software inventions.
  • multiple software inventions can also be implemented as separate programs.
  • any combination of separate programs that together implement a software invention described here is within the scope of the invention.
  • the software programs when installed to operate on one or more electronic systems, define one or more specific machine implementations that execute and perform the operations of the software programs.
  • FIG. 20 conceptually illustrates an electronic system 2000 with which some embodiments of the invention are implemented.
  • the electronic system 2000 can be used to execute any of the control, virtualization, or operating system applications described above.
  • the electronic system 2000 may be a computer (e.g., a desktop computer, personal computer, tablet computer, server computer, mainframe, a blade computer etc.), phone, PDA, or any other sort of electronic device.
  • Such an electronic system includes various types of computer readable media and interfaces for various other types of computer readable media.
  • Electronic system 2000 includes a bus 2005 , processing unit(s) 2010 , a system memory 2025 , a read-only memory 2030 , a permanent storage device 2035 , input devices 2040 , and output devices 2045 .
  • the bus 2005 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of the electronic system 2000 .
  • the bus 2005 communicatively connects the processing unit(s) 2010 with the read-only memory 2030 , the system memory 2025 , and the permanent storage device 2035 .
  • the processing unit(s) 2010 retrieves instructions to execute and data to process in order to execute the processes of the invention.
  • the processing unit(s) may be a single processor or a multi-core processor in different embodiments.
  • the read-only-memory (ROM) 2030 stores static data and instructions that are needed by the processing unit(s) 2010 and other modules of the electronic system.
  • the permanent storage device 2035 is a read-and-write memory device. This device is a non-volatile memory unit that stores instructions and data even when the electronic system 2000 is off. Some embodiments of the invention use a mass-storage device (such as a magnetic or optical disk and its corresponding disk drive) as the permanent storage device 2035 .
  • the system memory 2025 is a read-and-write memory device. However, unlike storage device 2035, the system memory is a volatile read-and-write memory, such as a random access memory.
  • the system memory stores some of the instructions and data that the processor needs at runtime.
  • the invention's processes are stored in the system memory 2025 , the permanent storage device 2035 , and/or the read-only memory 2030 . From these various memory units, the processing unit(s) 2010 retrieves instructions to execute and data to process in order to execute the processes of some embodiments.
  • the bus 2005 also connects to the input and output devices 2040 and 2045 .
  • the input devices enable the user to communicate information and select commands to the electronic system.
  • the input devices 2040 include alphanumeric keyboards and pointing devices (also called “cursor control devices”).
  • the output devices 2045 display images generated by the electronic system.
  • the output devices include printers and display devices, such as cathode ray tubes (CRT) or liquid crystal displays (LCD). Some embodiments include devices such as a touchscreen that function as both input and output devices.
  • bus 2005 also couples electronic system 2000 to a network 2065 through a network adapter (not shown).
  • the computer can be a part of a network of computers (such as a local area network (“LAN”), a wide area network (“WAN”), or an Intranet), or a network of networks, such as the Internet. Any or all components of electronic system 2000 may be used in conjunction with the invention.
  • Some embodiments include electronic components, such as microprocessors, storage and memory that store computer program instructions in a machine-readable or computer-readable medium (alternatively referred to as computer-readable storage media, machine-readable media, or machine-readable storage media).
  • computer-readable media include RAM, ROM, read-only compact discs (CD-ROM), recordable compact discs (CD-R), rewritable compact discs (CD-RW), read-only digital versatile discs (e.g., DVD-ROM, dual-layer DVD-ROM), a variety of recordable/rewritable DVDs (e.g., DVD-RAM, DVD-RW, DVD+RW, etc.), flash memory (e.g., SD cards, mini-SD cards, micro-SD cards, etc.), magnetic and/or solid state hard drives, read-only and recordable Blu-Ray® discs, ultra density optical discs, any other optical or magnetic media, and floppy disks.
  • the computer-readable media may store a computer program that is executable by at least one processing unit and includes sets of instructions for performing various operations.
  • Examples of computer programs or computer code include machine code, such as is produced by a compiler, and files including higher-level code that are executed by a computer, an electronic component, or a microprocessor using an interpreter.
  • Some embodiments are performed by one or more integrated circuits, such as application specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs), which execute instructions that are stored on the circuit itself.
  • the terms “computer”, “server”, “processor”, and “memory” all refer to electronic or other technological devices. These terms exclude people or groups of people.
  • display or displaying means displaying on an electronic device.
  • the terms “computer readable medium,” “computer readable media,” and “machine readable medium” are entirely restricted to tangible, physical objects that store information in a form that is readable by a computer. These terms exclude any wireless signals, wired download signals, and any other ephemeral signals.
  • the inline switches intercept the data messages along the egress datapath of the SCNs. In other embodiments, however, the inline switches intercept the data messages along the ingress datapath of the SCNs.
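The re-configuration that FIGS. 16 and 17 describe can be pictured as swapping, for the same VIP, a single-action rule for a rule whose action chain names the firewall servers first and the application servers second. The following Python sketch shows one plausible shape for such rule records; the class names, field names, IP addresses, and weight values are illustrative assumptions rather than structures recited above.

```python
# Illustrative sketch only: the names, fields, and addresses below are
# assumptions, not the patent's defined data model.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class ServiceAction:
    action_type: str                  # e.g. "FW" (firewall) or "AS" (application server)
    server_ips: List[str]             # IP address set of the nodes performing the action
    lb_weights: List[int] = field(default_factory=list)   # load balancing criteria


@dataclass
class DistributionRule:
    match_vip: str                    # flow-matching attribute (e.g. VIP1)
    action_chain: List[ServiceAction]


# Rule 1650: flows addressed to VIP1 go straight to the application servers.
rule_1650 = DistributionRule(
    match_vip="VIP1",
    action_chain=[ServiceAction("AS", ["10.0.2.1", "10.0.2.2"], [1, 1])],
)

# Rule 1655: the same flows are first sent to the firewall servers and, if not
# dropped, then to the application servers.
rule_1655 = DistributionRule(
    match_vip="VIP1",
    action_chain=[
        ServiceAction("FW", ["10.0.9.1", "10.0.9.2"], [2, 1]),
        ServiceAction("AS", ["10.0.2.1", "10.0.2.2"], [1, 1]),
    ],
)

# Pushing rule_1655 replaces rule_1650 in the inline switches' rule storage.
rules_by_vip: Dict[str, DistributionRule] = {rule_1655.match_vip: rule_1655}
```

Because the webservers keep addressing VIP1, pushing the second rule to their inline switches changes the traffic pattern without reconfiguring the webservers themselves.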

Abstract

Some embodiments provide novel inline switches that distribute data messages from source compute nodes (SCNs) to different groups of destination service compute nodes (DSCNs). In some embodiments, the inline switches are deployed in the source compute nodes' datapaths (e.g., egress datapath). The inline switches in some embodiments are service switches that (1) receive data messages from the SCNs, (2) identify service nodes in a service-node cluster for processing the data messages based on service policies that the switches implement, and (3) use tunnels to send the received data messages to their identified service nodes. Alternatively, or conjunctively, the inline service switches of some embodiments (1) identify service-node clusters for processing the data messages based on service policies that the switches implement, and (2) use tunnels to send the received data messages to the identified service-node clusters. The service-node clusters can perform the same service or can perform different services in some embodiments. This tunnel-based approach for distributing data messages to service nodes/clusters is advantageous for seamlessly implementing in a datacenter a cloud-based XaaS model (where XaaS stands for X as a service, and X stands for anything), in which any number of services are provided by service providers in the cloud.

Description

BACKGROUND
Datacenters today use a very static, configuration intensive way to distribute data messages between different application layers and to different service layers. A common approach today is to configure the virtual machines to send packets to virtual IP addresses, and then configure the forwarding elements and load balancers in the datacenter with forwarding rules that direct them to forward VIP addressed packets to appropriate application and/or service layers. Another problem with existing message distribution schemes is that today's load balancers often are chokepoints for the distributed traffic. Accordingly, there is a need in the art for a new approach to seamlessly distribute data messages in the datacenter between different application and/or service layers. Ideally, this new approach would allow the distribution scheme to be easily modified without reconfiguring the servers that transmit the data messages.
BRIEF SUMMARY
Some embodiments provide novel inline switches that distribute data messages from source compute nodes (SCNs) to different groups of destination service compute nodes (DSCNs). In some embodiments, the inline switches are deployed in the source compute nodes' datapaths (e.g., egress datapath). The inline switches in some embodiments are service switches that (1) receive data messages from the SCNs, (2) identify service nodes in a service-node cluster for processing the data messages based on service policies that the switches implement, and (3) use tunnels to send the received data messages to their identified service nodes.
Alternatively, or conjunctively, the inline service switches of some embodiments (1) identify service-node clusters for processing the data messages based on service policies that the switches implement, and (2) use tunnels to send the received data messages to the identified service-node clusters. The service-node clusters can perform the same service or can perform different services in some embodiments. This tunnel-based approach for distributing data messages to service nodes/clusters is advantageous for seamlessly implementing in a datacenter a cloud-based XaaS model (where XaaS stands for X as a service, and X stands for anything), in which any number of services are provided by service providers in the cloud.
In some embodiments, an inline service switch performs load-balancing operations to distribute data messages among several service nodes or service-node clusters that perform the same service. Alternatively, or conjunctively, a service cluster in some embodiments can have one or more load balancers that distribute data messages received for the cluster among the service nodes of the service cluster.
In some embodiments, at least one service cluster implements an elastic model in which one primary service node receives the cluster's data messages from the inline service switches. This service node then either performs the service on the data message itself or directs the data message (e.g., through L3 and/or L4 network address translation, through MAC redirect, etc.) to one of the other service nodes (called secondary service nodes) in the cluster to perform the service on the data message. The primary service node in some embodiments elastically shrinks or grows the number of secondary service nodes in the cluster based on the received data message load.
Some embodiments provide an inline load-balancing switch that statefully distributes the service load to a number of service nodes based on one or more L4+ parameters, which are packet header parameters that are above L1-L4 parameters. Examples of L4+ parameters include session keys, session cookies (e.g., SSL session identifiers), file names, database server attributes (e.g., user name), etc. To statefully distribute the service load among server nodes, the inline load-balancing switch in some embodiments establishes layer 4 connection sessions (e.g., TCP/IP sessions) with the data-message SCNs and the service nodes, so that the switch (1) can monitor one or more of the initial payload packets that are exchanged for the session, and (2) can extract and store the L4+ session parameters for later use in its subsequent load balancing operation.
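As a rough illustration of that statefulness, the sketch below keys a table on an extracted L4+ parameter (for example, an SSL session identifier that the switch pulled out of the monitored handshake payload) and reuses whichever service node was recorded for that key; the class and method names are assumptions made for this example.

```python
# Illustrative sketch of sticky distribution keyed on an L4+ session parameter.
import random
from typing import Dict, List, Optional


class StickyBalancer:
    def __init__(self, service_nodes: List[str]) -> None:
        self.service_nodes = service_nodes
        self.session_to_node: Dict[bytes, str] = {}   # extracted L4+ key -> chosen node

    def _pick_node(self) -> str:
        # Placeholder selection; in practice the switch's load balancing
        # criteria (e.g. weighted round robin) would be applied here.
        return random.choice(self.service_nodes)

    def node_for_session(self, session_key: Optional[bytes]) -> str:
        if session_key and session_key in self.session_to_node:
            return self.session_to_node[session_key]    # sticky: reuse the recorded node
        node = self._pick_node()
        if session_key:
            self.session_to_node[session_key] = node    # remember the choice for later flows
        return node


balancer = StickyBalancer(["sn-1", "sn-2", "sn-3"])
first = balancer.node_for_session(b"ssl-session-abc")
again = balancer.node_for_session(b"ssl-session-abc")
assert first == again    # later flows carrying the same session identifier stay "sticky"
```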
In some embodiments, the inline switch establishes a layer 4 connection session with an SCN and another session with a service node by performing a three-way TCP handshake with the SCN and another one with the service node. To relay data messages between the SCN and the service node, the inline switch in some embodiments can adjust the sequence numbers of the relayed data messages to address differences in sequence numbers between the SCN and the service node.
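Because the switch completes one handshake with the SCN and a separate handshake with the service node, the two connections generally start from different initial sequence numbers, so relayed segments need their sequence and acknowledgment numbers shifted by per-direction offsets. A simplified sketch of that arithmetic, with 32-bit wraparound, follows; the helper names are assumptions.

```python
# Illustrative sketch of sequence-number translation when splicing the
# SCN-facing TCP session onto the service-node-facing TCP session.
from typing import Tuple

MOD32 = 2 ** 32


def seq_offset(isn_old: int, isn_new: int) -> int:
    """Offset that maps sequence numbers from one session's numbering to the other's."""
    return (isn_new - isn_old) % MOD32


def translate(seq: int, ack: int, fwd_offset: int, rev_offset: int) -> Tuple[int, int]:
    # The sequence number belongs to the forward (SCN-to-node) byte stream and is
    # shifted by that direction's offset; the acknowledgment number refers to the
    # reverse byte stream and is shifted by the reverse direction's offset.
    return (seq + fwd_offset) % MOD32, (ack + rev_offset) % MOD32


# Example: the SCN-facing and node-facing handshakes chose different ISNs.
fwd = seq_offset(isn_old=1000, isn_new=7000)       # SCN->switch stream vs switch->node stream
rev = seq_offset(isn_old=90000, isn_new=42000)     # switch->SCN stream vs node->switch stream
print(translate(seq=1010, ack=90010, fwd_offset=fwd, rev_offset=rev))   # (7010, 42010)
```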
Some embodiments provide a controller-driven method for reconfiguring the application or service layer deployment in a datacenter. In some embodiments, one or more controllers define data-message distribution policies for SCNs in the datacenter, and push these policies, or rules based on these policies, to the inline switches of the SCNs. The inline switches then distribute the data messages to the data compute nodes (DCNs) that are identified by the distribution policies/rules as the DCNs for the data messages. In some embodiments, a distribution policy or rule is expressed in terms of a DCN group address (e.g., a virtual IP address (VIP)) that the SCNs use to address several DCNs that are in a DCN group.
This controller-driven method can seamlessly reconfigure the application or service layer deployment in the datacenter without having to configure the SCNs to use new DCN group addresses (e.g., new VIPs). The controller set only needs to provide the inline switches with new distribution policies or rules that dictate new traffic distribution patterns based on previously configured DCN group addresses. In some embodiments, the seamless reconfiguration can be based on arbitrary packet header parameters (e.g., L2, L3, L4 or L7 parameters) that are used by the SCNs. In other words, these packet header parameters in some cases would not have to include DCN group addresses. In some embodiments, the inline switches can be configured to distribute data messages based on metadata tags that are associated with the packets, and injected into the packets (e.g., as L7 parameters) by application level gateways (ALGs). For example, as ALGs are configured to inspect and tag packets as the packets enter a network domain (e.g., a logical domain), the controller set in some embodiments is configured to push new distribution policies and/or rules to the inline switches that configure these switches to implement new application or service layer deployment in the network domain.
The preceding Summary is intended to serve as a brief introduction to some embodiments of the invention. It is not meant to be an introduction or overview of all inventive subject matter disclosed in this document. The Detailed Description that follows and the Drawings that are referred to in the Detailed Description will further describe the embodiments described in the Summary as well as other embodiments. Accordingly, to understand all the embodiments described by this document, a full review of the Summary, Detailed Description, the Drawings and the Claims is needed. Moreover, the claimed subject matters are not to be limited by the illustrative details in the Summary, Detailed Description and the Drawing.
BRIEF DESCRIPTION OF THE DRAWINGS
The novel features of the invention are set forth in the appended claims. However, for purposes of explanation, several embodiments of the invention are set forth in the following figures.
FIG. 1 illustrates an example of a multi-host system with the inline service switches.
FIG. 2 conceptually illustrates a process that an inline service switch performs in some embodiments.
FIG. 3 illustrates different examples of service rules.
FIG. 4 conceptually illustrates distributing data message flows to services nodes in one service node cluster.
FIG. 5 conceptually illustrates distributing data message flows to different service node clusters that perform the same service.
FIG. 6 illustrates an example of an ISS sequentially calling multiple different service nodes of different clusters.
FIG. 7 illustrates an example of an elastic service model that uses one primary service node and zero or more secondary service nodes.
FIG. 8 illustrates an example of sequentially forwarding a data message from a VM to different elastically adjustable service cluster.
FIG. 9 conceptually illustrates another process that the inline service switch performs in some embodiments.
FIG. 10 conceptually illustrates a process that a primary service node performs in some embodiments of the invention.
FIG. 11 illustrates an example of a multi-host system with inline service switches that statefully distribute the service load to service nodes.
FIG. 12 conceptually illustrates an example of extracting and re-using a session parameter.
FIG. 13 conceptually illustrates another example of extracting and re-using a session parameter.
FIG. 14 conceptually illustrates a process of some embodiments for processing a service request in a sticky manner from an associated VM.
FIG. 15 illustrates a more detailed architecture of a host computing device
FIG. 16 illustrates an example of a controller re-configuring the application layer deployment.
FIG. 17 illustrates another example of a controller re-configuring the application layer deployment.
FIG. 18 conceptually illustrates a process of some embodiments for defining service policy rules for an inline switch.
FIG. 19 conceptually illustrates a process of some embodiments for modifying a service rule and reconfiguring inline service switches that implement this service rule.
FIG. 20 conceptually illustrates an electronic system with which some embodiments of the invention are implemented.
DETAILED DESCRIPTION
In the following detailed description of the invention, numerous details, examples, and embodiments of the invention are set forth and described. However, it will be clear and apparent to one skilled in the art that the invention is not limited to the embodiments set forth and that the invention may be practiced without some of the specific details and examples discussed.
Some embodiments provide novel inline switches that distribute data messages from source compute nodes (SCNs) to different groups of destination service compute nodes (DSCNs). In some embodiments, the inline switches are deployed in the source compute nodes' datapaths (e.g., egress datapaths). The inline switches in some embodiments are service switches that (1) receive data messages from the SCNs, (2) identify service nodes in a service-node cluster for processing the data messages based on service policies that the switches implement, and (3) use tunnels to send the received data messages to their identified service nodes.
Alternatively, or conjunctively, the inline service switches of some embodiments (1) identify service-node clusters for processing the data messages based on service policies that the switches implement, and (2) use tunnels to send the received data messages to the identified service-node clusters. The service-node clusters can perform the same service or can perform different services in some embodiments. This tunnel-based approach for distributing data messages to service nodes/clusters is advantageous for seamlessly implementing in a datacenter a cloud-based XaaS model (where XaaS stands for X as a service, and X stands for anything), in which any number of services are provided by service providers in the cloud.
A tunnel uses a tunnel header to encapsulate the packets from one type of protocol in the datagram of a different protocol. For example, VPN (virtual private network) tunnels use PPTP (point-to-point tunneling protocol) to encapsulate IP (Internet Protocol) packets over a public network, such as the Internet. GRE (generic routing encapsulation) tunnels use GRE headers to encapsulate a wide variety of network layer protocols inside virtual point-to-point links over an IP network. In other words, a GRE tunnel encapsulates a payload inside an outer IP packet.
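As a concrete, simplified picture of such encapsulation, the sketch below prepends a basic 4-byte GRE header (no checksum, key, or sequence-number options) to an inner packet; adding the outer IP delivery header is left to the tunnel endpoint, and the byte values shown are illustrative.

```python
# Illustrative sketch of GRE-style encapsulation: a 4-byte base GRE header
# (no optional checksum, key, or sequence fields) prepended to an inner packet.
import struct

GRE_PROTO_IPV4 = 0x0800    # EtherType identifying the encapsulated (inner) protocol


def gre_encapsulate(inner_packet: bytes) -> bytes:
    flags_and_version = 0x0000                     # all option bits clear, version 0
    gre_header = struct.pack("!HH", flags_and_version, GRE_PROTO_IPV4)
    # The outer IP delivery header would be added by the tunnel endpoint's stack.
    return gre_header + inner_packet


def gre_decapsulate(outer_payload: bytes) -> bytes:
    _, proto = struct.unpack("!HH", outer_payload[:4])
    assert proto == GRE_PROTO_IPV4                 # this sketch only handles IPv4-in-GRE
    return outer_payload[4:]


inner = b"\x45\x00..."                             # stand-in bytes for an inner IPv4 packet
assert gre_decapsulate(gre_encapsulate(inner)) == inner
```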
As used in this document, cloud refers to one or more sets of computers in one or more datacenters that are accessible through a network (e.g., through the Internet). In some embodiments, the XaaS model is implemented by one or more service providers that operate in the same datacenter or in different datacenters in different locations (e.g., different neighborhoods, cities, states, countries, etc.).
Also, as used in this document, a data message refers to a collection of bits in a particular format sent across a network. One of ordinary skill in the art will recognize that the term data message may be used herein to refer to various formatted collections of bits that may be sent across a network, such as Ethernet frames, IP packets, TCP segments, UDP datagrams, etc. Also, as used in this document, references to L2, L3, L4, and L7 layers are references respectively to the second data link layer, the third network layer, the fourth transport layer, and the seventh application layer of the OSI (Open System Interconnection) layer model.
In some embodiments, an inline service switch (ISS) performs load balancing operations to distribute data messages among several service nodes or service node clusters that perform the same service. Alternatively, or conjunctively, a service cluster in some embodiments can have one or more load balancers that distribute data messages received for the cluster among the service nodes of the service cluster.
In some embodiments, at least one service cluster implements an elastic model in which one primary service node receives the cluster's data messages from the inline service switches. This service node then either performs the service on the data message itself or directs the data message (e.g., through L3 and/or L4 network address translation, through MAC redirect, etc.) to one of the other service nodes (called secondary service nodes) in the cluster to perform the service on the data message. The primary service node in some embodiments elastically shrinks or grows the number of secondary service nodes in the cluster based on the received data message load.
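One way to picture this elastic model is a primary node that keeps a pool of secondaries, services a message itself when the pool is empty, redirects it otherwise, and resizes the pool as the observed message rate crosses thresholds. The sketch below is only a schematic of that control logic; the thresholds, names, and string return values are assumptions.

```python
# Schematic sketch of the elastic model: a primary service node either performs
# the service itself or redirects to a secondary, and grows or shrinks the
# secondary pool based on the observed message rate. Thresholds are illustrative.
from typing import List


class PrimaryServiceNode:
    GROW_AT = 100      # messages/sec above which a secondary is added
    SHRINK_AT = 20     # messages/sec below which a secondary is removed

    def __init__(self) -> None:
        self.secondaries: List[str] = []
        self._next = 0                       # round-robin index over the secondaries

    def observe_rate(self, msgs_per_sec: int) -> None:
        if msgs_per_sec > self.GROW_AT:
            self.secondaries.append(f"secondary-{len(self.secondaries) + 1}")   # grow the cluster
        elif msgs_per_sec < self.SHRINK_AT and self.secondaries:
            self.secondaries.pop()                                              # shrink the cluster

    def handle(self, message: bytes) -> str:
        if not self.secondaries:
            return "serviced-by-primary"     # perform the service on the data message itself
        node = self.secondaries[self._next % len(self.secondaries)]
        self._next += 1
        # The redirect itself would be done through, e.g., NAT or MAC redirect.
        return f"redirected-to-{node}"


primary = PrimaryServiceNode()
primary.observe_rate(150)                    # load spike: one secondary is provisioned
print(primary.handle(b"data-message"))       # redirected-to-secondary-1
```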
In some embodiments, an SCN can be a virtual machine (VM) or software container (such as a Docker container) that executes on a host along with other VMs or containers that serve as SCNs or destination compute nodes (DCNs). Examples of DCNs in some embodiments include compute end nodes that generate or consume data messages, or middlebox service nodes that perform some type of data processing on the data messages as these messages are being relayed between the data compute end nodes. Examples of data compute end nodes include webservers, application servers, database servers, etc., while examples of middlebox service nodes include firewalls, intrusion detection systems, intrusion prevention systems, etc.
A service node is a standalone appliance or is a DCN (e.g., a VM, container, or module) that executes on a host computer. The service nodes can be data compute end nodes (e.g., webservers, application servers, database servers, etc.), or can be middlebox service nodes (e.g., firewalls, intrusion detection systems, intrusion prevention systems, etc.).
In some embodiments, the inline service switch is another software module that executes on the same host as the SCN. Two or more of the SCNs on the host use the same inline service switch in some embodiments, while in other embodiments, each SCN on the host has its own inline service switch that executes on the host. The host also executes a software forwarding element (SFE) in some embodiments. The SFE communicatively couples the SCNs of the host to each other and to other devices (e.g., other SCNs) outside of the host. In some embodiments, the inline switches are inserted in the egress path of the SCNs before the SFE.
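Conceptually, then, the inline switch is a hook sitting between the VM's VNIC and the SFE port, so every outbound data message passes through it before reaching the forwarding element. The following sketch shows only that call pattern; the class and method names are assumptions introduced for illustration.

```python
# Schematic of the inline switch on the VM's egress path: the SFE port invokes
# the switch before handing the message to the software forwarding element.
from typing import Callable, List, Optional


class SoftwareForwardingElement:
    def forward(self, message: bytes) -> None:
        print(f"SFE forwarding {len(message)} bytes")


class InlineServiceSwitch:
    def __init__(self, service_hops: List[Callable[[bytes], Optional[bytes]]]) -> None:
        self.service_hops = service_hops           # e.g. "relay over tunnel to firewall cluster"

    def process_egress(self, message: bytes) -> Optional[bytes]:
        for hop in self.service_hops:
            result = hop(message)
            if result is None:                     # a service node decided to drop the flow
                return None
            message = result
        return message


class SfePort:
    """The port that the VM's VNIC connects to; it calls the ISS on egress."""
    def __init__(self, iss: InlineServiceSwitch, sfe: SoftwareForwardingElement) -> None:
        self.iss, self.sfe = iss, sfe

    def send(self, message: bytes) -> None:
        processed = self.iss.process_egress(message)
        if processed is not None:
            self.sfe.forward(processed)            # only serviced, non-dropped messages reach the SFE


port = SfePort(InlineServiceSwitch([lambda m: m + b"|fw-ok"]), SoftwareForwardingElement())
port.send(b"payload")
```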
In some embodiments, one or more controllers configure the inline service switches by providing the service distribution policies or by providing distribution rules that are defined based on the service distribution policies. One example of these controllers is the set of ISS controllers 120 of FIG. 1. This figure illustrates an example of a multi-host system 100 with the inline service switches 105 of some embodiments. This system includes multiple host computing devices 110, a set of ISS controllers 120, a set of one or more VM managing controllers 125, and multiple service node clusters 150. As shown in FIG. 1, the hosts 110, the ISS controller set 120, the VM manager set 125, and the service node clusters 150 communicatively couple through a network 175, which can include a local area network (LAN), a wide area network (WAN), or a network of networks (e.g., the Internet).
Each host computing device 110 (e.g., computer) executes one or more VMs 115, one or more SFEs 130 (e.g., a software switch, a software router, etc.), an ISS agent 135, and one or more inline service switches 105. The VMs include SCNs and DCNs in some embodiments. In some embodiments, an SFE 130 on a host communicatively couples the VMs of the host to each other and to devices outside of the host (e.g., to VMs of other hosts). Also, in some embodiments, an SFE of a host implements one or more logical networks with the SFEs executing on other hosts. The SFE 130 also communicatively couples an ISS 105 on the host to one or more service nodes or one or more service node clusters 150.
In some embodiments, each ISS 105 is associated with one VM on its host, while in other embodiments, one ISS 105 is associated with more than one VM on its host (e.g., is associated with all VMs on its host that are part of one logical network). For the data messages that are sent by its associated VM, an ISS 105 enforces one or more service rules that implement one or more service policies. Based on the service rules, the ISS (1) determines whether a sent data message should be processed by one or more service nodes or clusters, and (2) if so, selects a service node or cluster for processing the data message and forwards the data message to the selected node or cluster through a tunnel.
Each ISS 105 has a load balancer 160 that it uses to determine how to distribute the load for performing a service to one or more service nodes or one or more service node clusters that perform this service. In some embodiments, an ISS 105 connects to a service node or cluster through a tunnel. In other embodiments, the inline switches connect to some service nodes/clusters through tunnels, while not using tunnels to connect to other service nodes/clusters. In some embodiments, the service nodes are in different datacenters than the hosts 110 and controllers 120 and 125, while in other embodiments one or more of the service nodes are in the same datacenter as the hosts 110 and controllers 120 and 125. In some embodiments, some of the service nodes are service VMs that execute on hosts 110.
Also, in some embodiments, different service node clusters can provide the same service or can provide different services. For instance, in the example illustrated in FIG. 1, the service node clusters 150 a and 150 b provide the same service (e.g., firewall service), while the service node cluster 150 c provides a different service (e.g., intrusion detection). The tunnel-based approach for distributing data messages to service nodes/clusters in the same datacenter or different datacenters is advantageous for seamlessly implementing a cloud-based XaaS model, in which any number of services are provided by service providers in the cloud.
This tunnel-based, XaaS model architecture allows hosts 110 and VMs 115 in a private datacenter (e.g., in an enterprise datacenter) to seamlessly use one or more service clusters that are in one or more public multi-tenant datacenters in one or more locations. The private datacenter typically connects to a public datacenter through a public network, such as the Internet. Examples of cloud service providers include: firewall-service providers, email spam service providers, intrusion detection service providers, data compression service providers, etc. One provider can provide multiple cloud services (e.g., firewall, intrusion detection, etc.), while another provider can provide only one service (e.g., data compression).
In some embodiments, the ISS for a VM is deployed in the VM's egress datapath. For instance, in some embodiments, each VM has a virtual network interface card (VNIC) that connects to a port of the SFE. In some of these embodiments, the inline switch for a VM is called by the VM's VNIC or by the SFE port to which the VM's VNIC connects. In some embodiments, the VMs execute on top of a hypervisor, which is a software layer that enables the virtualization of the shared hardware resources of the host. In some of these embodiments, the hypervisor provides the inline switches that provide the inline switching and load balancing service to its VMs.
Multiple inline service switches that execute on multiple hosts can implement a distributed service switch. In a distributed service switch, the data messages from one group of related VMs on multiple different hosts get distributed to one or more service nodes or clusters according to the same service distribution policies. These data messages are distributed according to the same service distribution policies because the individual inline service switches for the SCN group are configured with the same policies or rules.
The VM managing controllers 125 provide control and management functionality for defining (e.g., allocating or instantiating) and managing one or more VMs on each host. The ISS controller set 120 configures the inline switches 105 and their associated load balancers 160 through the ISS agent 135. In some embodiments, one of these two controller sets 120 and 125 provides control and management functionality for defining and managing multiple logical networks that are defined on the common SFE physical infrastructure of the hosts. The controllers 120 and 125 communicate with their agents that execute on the hosts through out-of-band control channel communication in some embodiments. In some embodiments, controllers 120 and 125 are standalone servers or are servers executing on host machines along with other servers.
In some embodiments, the ISS controller set 120 provides the ISS agent with high level service policies that the ISS agent converts into service rules for the inline switches to implement. These service policies and rules include load balancing policies and rules that the load balancers of the inline switches implement. In some embodiments, the ISS controller set provides the ISS agent with service rules that the agent passes along to the inline switches and load balancers. In still other embodiments, the ISS controller set provides both service policies and service rules to the ISS agent. In these embodiments, the ISS agent converts the service policies to service rules, and then it provides the received and converted service rules to the inline switches and load balancers. In yet other embodiments, the ISS controller set directly configures the inline switches and load balancers without going through an ISS agent.
In some embodiments, the ISS controller set also provides to the ISS agents 135, service switches 105 or their load balancers 160, load balancing criteria that the load balancers use to perform their load balancing operations. For example, the load balancing criteria includes a set of weight values that specify how the load balancers should distribute the data message load among a set of service nodes in a weighted round robin approach. In some embodiments, the ISS controller set 120 distributes data-message load statistics and the service agents 135, ISS 105 or the load balancers 160 generate load balancing criteria based on these statistics.
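For instance, weight values of 3 and 1 over two service nodes mean roughly three of every four new flows are sent to the first node. A small sketch of a weighted round-robin selector driven by such controller-supplied weights follows; the names are illustrative.

```python
# Illustrative weighted round-robin selector driven by controller-supplied
# weight values (the load balancing criteria).
from typing import List, Tuple


class WeightedRoundRobin:
    def __init__(self, nodes_and_weights: List[Tuple[str, int]]) -> None:
        # Expand the weights into a repeating schedule, e.g. [("A", 3), ("B", 1)]
        # becomes A, A, A, B, A, A, A, B, ...
        self.schedule = [node for node, weight in nodes_and_weights for _ in range(weight)]
        self.index = 0

    def next_node(self) -> str:
        node = self.schedule[self.index]
        self.index = (self.index + 1) % len(self.schedule)
        return node


lb = WeightedRoundRobin([("service-node-A", 3), ("service-node-B", 1)])
picks = [lb.next_node() for _ in range(8)]
assert picks.count("service-node-A") == 6 and picks.count("service-node-B") == 2
```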
More specifically, in some embodiments, the ISS controller set 120 gathers statistics from inline switches and, based on the gathered statistics, dynamically adjusts the service policies, service rules and/or load balancing criteria that it distributes directly or indirectly (through the ISS agent) to the inline switches and load balancers. In some embodiments, each inline switch stores statistics regarding its data message distribution in a data storage (called STAT storage below) that it updates on its host. The ISS agent 135 periodically gathers the collected statistics from the STAT data storage (not shown in FIG. 1), and relays these statistics to the ISS controller set 120. In some embodiments, the agent 135 aggregates and/or analyzes some of the statistics before relaying processed statistics to the ISS controller set 120, while in other embodiments the agents relay collected raw statistics to the ISS controller set 120.
The ISS controller set 120 of some embodiments aggregates the statistics that it receives from the agents of the hosts. In some embodiments, the ISS controller set 120 then distributes the aggregated statistics to the agents that execute on the hosts. These agents then analyze the aggregated statistics to generate and/or to adjust rules or criteria that their associated inline switches or their load balancers enforce. In other embodiments, the controller set analyzes the aggregated statistics to generate and/or to adjust service policies, service rules and/or LB criteria, which the controller set then distributes to the agents 135 of the hosts for their inline switches and load balancers to enforce.
In some of these embodiments, the controller set distributes the same policies, rules and/or criteria to each ISS in a group of associated ISS, while in other embodiments, the controller set distributes different policies, rules and/or criteria to different ISS in a group of associated ISS. In some embodiments, the controller set distributes updated policies, rules and/or criteria to some of the inline switches in an associated group of switches, while not distributing the updated policies, rules and/or criteria to other inline switches in the associated group. In some embodiments, the controller set updates and distributes some policies, rules or criteria based on the aggregated statistics, while also distributing some or all aggregated statistics to the hosts so that their agents can generate other rules or criteria. One of ordinary skill in the art will realize that in some embodiments the policies, rules and/or criteria are not always adjusted based on the aggregated statistics, but rather are modified only when the aggregated statistics require such modification.
Irrespective of the implementation for updating the policies, rules, and/or criteria, the collection and aggregation of the data traffic statistics allows the switching rules or criteria to be dynamically adjusted. For instance, when the statistics show one service node as being too congested with data traffic, the load balancing rules or criteria can be adjusted dynamically for the load balancers that send data messages to this service node, in order to reduce the load on this service node while increasing the load on one or more other service nodes in the same service node cluster. In some embodiments, the collection and aggregation of the data traffic statistics also allows the controller set 120 to reduce the load on any service node in a service-node cluster by dynamically directing a service-node management controller set (not shown) to provision new service node(s) or allocate previously provisioned service node(s) to the service cluster.
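A toy version of this feedback loop is sketched below: given aggregated per-node traffic counts, new weight values are computed so that a congested node receives a smaller share of new flows. The normalization used here is only one plausible heuristic and is not a formula prescribed by the disclosure.

```python
# Illustrative heuristic only: derive new weight values from aggregated
# per-service-node traffic statistics so lightly loaded nodes receive more
# of the new flows.
from typing import Dict


def recompute_weights(bytes_per_node: Dict[str, int], scale: int = 10) -> Dict[str, int]:
    total = sum(bytes_per_node.values()) or 1
    weights = {}
    for node, sent in bytes_per_node.items():
        spare_share = 1.0 - (sent / total)           # less traffic seen -> more spare capacity
        weights[node] = max(1, round(spare_share * scale))
    return weights


stats = {"sn-1": 900_000, "sn-2": 50_000, "sn-3": 50_000}    # sn-1 looks congested
new_weights = recompute_weights(stats)
# sn-1 ends up with the minimum weight; sn-2 and sn-3 get roughly ten times more.
print(new_weights)
```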
FIG. 2 illustrates a process 200 that an ISS 105 of a VM 115 performs for a data message sent by the VM. In performing this process, the ISS 105 (1) determines whether the data message should be processed by one or more service nodes or clusters, and (2) if so, selects a service node or cluster for processing the data message and forwards the data message to the selected node or cluster through a tunnel. To select a service node or service-node cluster, the ISS performs a load balancing operation to ensure that the data message flows that it processes are distributed among several service nodes or clusters based on a set of load balancing criteria. The process 200 will be described below by reference to FIGS. 3-5. FIG. 3 illustrates different examples of service rules that the process 200 enforces in some embodiments. FIGS. 4 and 5 respectively show an ISS 105 distributing data message flows to service nodes 305 in one service node cluster 310, and distributing data message flows to different service-node clusters 405 that perform the same service.
The process 200 starts when the ISS 105 receives a data message that its associated VM sends. As mentioned above, the ISS 105 is deployed in the VM's egress datapath so that it can intercept the data messages sent by its VM. In some embodiments, the ISS 105 is called by the VM's VNIC or by the SFE port that communicatively connects to the VM's VNIC.
At 210, the process determines whether the data message is part of a data message flow for which the process has processed other data messages. In some embodiments, the process makes this determination by examining a connection storage that the ISS maintains to keep track of the data message flows that it has recently processed. Two data messages are part of the same flow when they share the same message headers. For example, two packets are part of the same flow when they have the same five tuples identifier, which includes the source IP address, destination IP address, source port, destination port, and protocol.
As further described below by reference to 255, the connection storage stores one record for each data message flow that the ISS has recently processed. This record stores a description of the set of service rules that have to be applied to the flow's data messages or has a reference (e.g., a pointer) to this description. In some embodiments, when the operation of the service rule set requires the data message to be dropped, the connection-storage record also specifies this action, or specifies this action in lieu of the service rule description. Also, when no service has to be performed for data messages of this flow, the connection-storage record indicates that the ISS should allow the received data message to pass along the VM's egress datapath.
In some embodiments, this record stores the flow's identifier (e.g., the five tuple identifiers). In addition, the connection storage is hash addressable (e.g., locations in the connection storage are identified based on a hash of the flow's identifier) in some embodiments. When the process determines (at 210) that it has previously processed a data message from the same flow as the received data message, it transitions to 215, where it performs the action or service-rule set that was previously specified for data messages of this flow in the connection-storage record for this flow. After performing these service operations, the process 200 provides (at 215) a data message to the module (e.g., SFE port or VNIC) that called it, assuming that the service operations do not require the data message to be dropped, in which case the process so notifies the calling module. Typically, because of the service operation(s) performed, the data message that the process 200 returns to the calling module is a modified version of the data message received at 205. The modified data message may have a different header value and/or datagram (i.e., payload) than the received data message. In some cases, the returned data message might be identical to the received data message. After 215, the process ends.
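The flow bookkeeping described above amounts to a hash-addressable table keyed on the five-tuple, whose record either describes the service-rule actions to repeat, records a drop decision, or records that the flow needs no service. A compact sketch of such a table, with illustrative names, is shown below.

```python
# Illustrative sketch of the ISS connection storage: a table addressed by a hash
# of the flow's five-tuple, storing the decision to repeat for later messages of
# the same flow. A production table would also keep the tuple itself to
# disambiguate hash collisions.
from dataclasses import dataclass
from typing import Dict, List, Optional


@dataclass(frozen=True)
class FiveTuple:
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: str


@dataclass
class FlowRecord:
    action: str                                    # "service", "drop", or "allow"
    service_actions: Optional[List[str]] = None    # description of the service-rule set, if any


class ConnectionStorage:
    def __init__(self) -> None:
        self._records: Dict[int, FlowRecord] = {}

    def lookup(self, flow: FiveTuple) -> Optional[FlowRecord]:
        return self._records.get(hash(flow))       # hash-addressable lookup

    def remember(self, flow: FiveTuple, record: FlowRecord) -> None:
        self._records[hash(flow)] = record


conns = ConnectionStorage()
flow = FiveTuple("10.0.1.5", "10.0.2.9", 49152, 443, "TCP")
conns.remember(flow, FlowRecord("service", ["firewall via tunnel-7", "forward to AS cluster"]))
assert conns.lookup(flow).action == "service"
```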
When the process determines (at 210) that it has not previously processed a data message from the same data message flow, the process determines (at 220) whether the service rules that it enforces require one or more service actions to be performed on the received data message. In some embodiments, the ISS has a service rule storage that stores several service rules that the ISS enforces. Each service rule can be associated with one or more data message flows from the inline switch's VM, and different service rules can be associated with different data message flows from this VM. In some embodiments, each service rule in the service rule storage has (1) an associated set of data message identifiers (e.g., packet header values) and (2) a set of one or more actions.
The process 200 determines (at 220) whether the received data message's identifiers (e.g., five tuples) match the data message identifiers of any service rule in its service rule storage. When a data message matches more than one service rule, the process 200 of some embodiments only performs the set of actions that is specified by the highest priority matching service rule. In some such embodiments, the service rule storage stores the rules according to a sort that is based on their priorities so that the process 200 first matches the data message to a higher priority rule before being able to match it to a lower priority rule, when more than one rule matches the data message.
When the received data message's header values do not match the rule-matching identifier of any service rule that specifies a service action in the service rule storage, the process 200 determines that it does not need to forward the data message to any service node to perform any service action. Hence, it creates (at 222) a record in the connection storage to specify that no service action is needed for data messages that are part of the same flow as the received data message. For some embodiments of the invention, the structure of the connection storage was described above and further described below. At 222, the process also notifies the module (e.g., SFE port or the VM VNIC) that called it that the process has finished processing the data message. In some embodiments, this notification is not accompanied by the data message, while in other embodiments, this notification is accompanied by the data message. In sending this notification, the process 200 is allowing the received data message to pass without any service being performed on it. After 222, the process ends.
When the received data message's identifiers match the rule-matching identifier of one or more service rules in the service rule storage, the process performs 225-250 to process the actions of the matching service rule or rules. In some embodiments, each service rule can specify only one action, while in other embodiments, a service rule can specify a sequence of one or more actions. A service action in some embodiments entails forwarding the matching data messages to a service node or cluster. For such an action, the service rule identifies directly, or through another record (to which the rule refers), the service nodes of a cluster or service-node clusters of a group of service clusters for performing the service. As further described below, the process 200 selects one of the identified service nodes or clusters.
FIG. 3 illustrates several examples of service rules specifying service actions. This figure illustrates a service rule storage 300 that stores multiple service rules. Each service rule has an associated service rule identifier set 305 that is expressed in terms of one or more data message header values (e.g., one or more five tuple values, as described above). The process 200 compares the service rule identifier set to a data message's header values in order to determine whether the service rule matches a received data message.
Each service rule also specifies one or more actions, with each action being specified in terms of an action type 310 (e.g., firewall action type, IPS action type, IDS action type, etc.) and a tunnel ID set 315. In some embodiments, the tunnel ID set of each action of a service rule identifies (1) one or more tunnels between the ISS and one or more service nodes in a cluster, or (2) one or more service clusters in a service cluster group that provides the service. In some embodiments, the tunnel ID sets of the service rules are supplied as a part of the data initially supplied by the ISS controller set (e.g., in order to configure the ISS) or are supplied in subsequent updates that are provided by the controller set.
When a service rule specifies more than one action, the actions can be associated with more than one service. In this manner, a service rule can specify a sequence of service operations that need to be performed on a matching data message. As mentioned above, some embodiments store the service rules in the data storage 300 according to a sort that is based on the rule priorities, because the process 200 in these embodiments matches a data message to only one service rule, and the sorted order allows the process to match a data message to a matching higher priority rule instead of a lower priority matching rule.
In the example illustrated in FIG. 3, service rule 350 has one associated action, while service rule 355 has multiple associated actions. In other embodiments, each service rule can only specify one service action. Also, in other embodiments, the service rule does not directly identify the tunnel ID for the service node or cluster. For instance, in some embodiments, the process 200 identifies the tunnel ID by using a service-node identifier or service-cluster identifier to retrieve the tunnel ID from a table that maps these identifiers to tunnel IDs.
At 225, the process selects a service action of a service rule that matches the received data message header value. When a matching service rule specifies a sequence of two or more service actions, the process 200 maintains a record (e.g., a count) that identifies where it is in the sequence of actions that it has to perform so that when it returns to 225 it will know which is the next service action that it has to select in the sequence. This will be further described below.
In some embodiments, this service action has an associated tunnel ID set 315 that specifies one or more tunnels of one or more service nodes or service node clusters that perform the service action. Accordingly, at 230, the process 200 uses the load balancer of the ISS to select, in a load-balanced manner, one service node or one service-node cluster for the data message from the set of service nodes or service-node clusters that are identified by the tunnel ID set. In some embodiments, the ISS load balancer distributes the load in a stateful manner so that data messages that are part of the same flow are processed by the same service node or the same service node cluster.
To select service nodes or service-node clusters in a load-balanced manner, each service rule in some embodiments specifies a set of weight values (not shown) for each of the rule's specified tunnel ID sets. Alternatively, in other embodiments, each service rule refers to another record that identifies the weight value set for each tunnel ID set identified for the rule. Each weight value set specifies a weight value for each tunnel ID in the associated tunnel ID set, and provides the load-balancing criteria for the ISS load balancer to spread the traffic to the service nodes or clusters that are identified by the tunnel ID set.
For instance, in some embodiments, the ISS load balancer uses these weight values to implement a weighted round robin scheme to spread the traffic to the nodes or clusters. As one example, assume that the tunnel ID set has five tunnel IDs and the weight values for the tunnel IDs are 1, 3, 1, 3, and 2. Based on these values, the ISS load balancer would distribute data messages that are part of ten new flows as follows: 1 to the first tunnel ID, 3 to the second tunnel ID, 1 to the third tunnel ID, 3 to the fourth tunnel ID, and 2 to the fifth tunnel ID. As further described below, the weight values for a service rule are generated and adjusted by the ISS agent 135 and/or ISS controller set 120 in some embodiments based on the statistics that the controller set collects from the inline switches. To gracefully switch between different load balancing criteria, a tunnel ID set can have multiple weight value sets and the service rule in some embodiments can specify different time periods during which different weight values (i.e., different load balancing criteria) of the tunnel ID set are valid.
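The following minimal sketch illustrates a weighted round-robin selector consistent with the example above (weights 1, 3, 1, 3, 2 over five tunnel IDs); the class name and tunnel labels are hypothetical.

```python
import itertools
from typing import List

class WeightedRoundRobin:
    def __init__(self, tunnel_ids: List[str], weights: List[int]) -> None:
        # Expand the schedule so each tunnel appears "weight" times per cycle.
        schedule = [tid for tid, w in zip(tunnel_ids, weights) for _ in range(w)]
        self._cycle = itertools.cycle(schedule)

    def next_tunnel(self) -> str:
        return next(self._cycle)

wrr = WeightedRoundRobin(["t1", "t2", "t3", "t4", "t5"], [1, 3, 1, 3, 2])
# Ten new flows are spread 1/3/1/3/2 across the five tunnels:
print([wrr.next_tunnel() for _ in range(10)])
# ['t1', 't2', 't2', 't2', 't3', 't4', 't4', 't4', 't5', 't5']
```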
After selecting (230) a service node or service-node cluster for the data message, the process (at 235) identifies a tunnel key, encapsulates the data message with a tunnel header (that includes the identified tunnel key) for the tunnel to the selected service node or service-node cluster, and provides this tunnel-header encapsulated data message to its host's SFE for forwarding to the selected service node or service-node cluster. Examples of such tunnels and keys are GRE tunnels, Geneve tunnels, GRE keys, Geneve keys, etc. As further described below, the inline switches of some embodiments also use other redirection mechanisms (such as MAC redirect, destination network address translation, etc.) to forward data messages to some of the service nodes and service-node clusters.
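For illustration, the following sketch builds a simplified keyed GRE encapsulation (using the RFC 2890 key extension); it omits the outer IP header, the Geneve variant, and checksums, and the function names are hypothetical.

```python
import struct
from typing import Tuple

GRE_KEY_PRESENT = 0x2000   # K bit in the GRE flags/version field
PROTO_IPV4 = 0x0800        # assumption: the inner packet is IPv4

def gre_encapsulate(inner_packet: bytes, tunnel_key: int) -> bytes:
    # 4-byte base header (flags, protocol) followed by the 4-byte key field.
    header = struct.pack("!HHI", GRE_KEY_PRESENT, PROTO_IPV4, tunnel_key)
    return header + inner_packet

def gre_decapsulate(outer_payload: bytes) -> Tuple[int, bytes]:
    flags, _proto, key = struct.unpack("!HHI", outer_payload[:8])
    assert flags & GRE_KEY_PRESENT, "expected a keyed GRE header"
    return key, outer_payload[8:]
```

Because each flow gets its own key, the key recovered by gre_decapsulate is what lets the switch associate a returned data message with the flow it sent.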
Tunnel keys (e.g., GRE keys) allow multiple data message flows to share the same tunnel. For each data message flow, the process in some embodiments uses one GRE key to send the flow's data messages to the service node or cluster at the other end of the tunnel and to receive responsive data messages from this node or cluster. For data messages from the service node or cluster, the tunnel key also allows the process 200 to associate each received data message with the data message that the process sent to the service node or cluster.
FIG. 4 presents an example that shows the inline service switches 105, of several related VMs 115 executing on the same host or on different hosts, using several tunnels 450 to distribute their VM data messages to several service nodes 405 of a service node cluster 410 that perform the same service (e.g., a firewall service or an IPS service) on these messages. An ISS performs a load balancing operation to select the service node for each data message flow.
In FIG. 4, each tunnel is established between an ISS 105 and a service node 405 in the cluster. For data messages of different flows that share the same tunnel to the same service node, an ISS 105 uses different tunnel keys so that different flows can share the same tunnel. Also, through each service-node tunnel, the ISS receives data messages in response to the data messages that it sends to the service node, and uses the tunnel keys to associate each responsive data message with a data message that it sent.
In some embodiments, each service node 405 is a standalone appliance. In other embodiments, one or more service nodes 405 are servers executing on a host computer. For such service nodes, the tunnels 450 in some embodiments are tunnels that are provisioned for the host computer, or for an SFE of the host computer, on which the service node executes. On the inline-switch side, the tunnel can also be provisioned at the host level in some embodiments. In other words, in some embodiments, two or more inline switches 105 that execute on the same host computer use the same tunnel to a service node.
FIG. 5 presents an example that shows the inline service switches 105, of several related VMs 115 executing on the same host or on different hosts, using several tunnels 550 to distribute their VM data messages to several service-node clusters 505 that perform the same service (e.g., a firewall service or an IPS service) on these messages. In this example, an ISS performs a load balancing operation to select the service cluster for each data message flow. As in the example of FIG. 4, different tunnel keys are used to identify data messages of different flows that share the same tunnel in the example of FIG. 5.
In the example illustrated in FIG. 5, each service cluster 505 has multiple service nodes 510 that perform the same service, and a load-balancing webserver set 515 (with one or more webservers) that distributes the received data messages to the service nodes of its cluster. In this example, each tunnel is established between the ISS 105 and a load-balancing webserver 515 of the cluster. Through its load balancing operation 230, the ISS selects one cluster in the group of clusters of FIG. 5, in order to distribute the service load to the different clusters that perform the same service. The load-balancing webservers 515 of each cluster then have the task of distributing each cluster's load among the cluster's service nodes. In some embodiments, these webservers distribute the load in a stateful manner so that the same service node in the cluster processes data messages that are part of the same flow.
In some embodiments, the different service clusters of a service cluster group illustrated in FIG. 5 are in different datacenters at different locations. Having different service clusters in different locations that perform the same service can be advantageous in that it allows different ISS in different locations to bias their service cluster selection to service clusters that are closer to the ISS location. Also, having different service clusters perform the same service action also provides different tenants in a datacenter the ability to pick different service providers for the same service and to easily switch between these providers without the need to reconfigure the inline switches or their servers (e.g., their VMs or containers). In other embodiments, one or more of these service clusters 505 are in the same datacenter. Such service clusters might be created when different service providers provide the same service in one datacenter.
The architecture illustrated in FIG. 5 is also used in some embodiments to terminate tunnels on non-service node elements (e.g., on load balancers such as load balancers 515) that distribute data messages that they receive from the inline switches 105 to one or more service nodes that perform the same service or different services. In one such approach, service nodes 510 of one service provider can be in different clusters 505. Also, in such an approach, each service cluster can have just one service node. In view of the foregoing, one of ordinary skill will realize that the tunnel that an inline switch uses to forward data messages to a service node does not necessarily have to terminate (i.e., does not have to be provisioned) at the service node, but can terminate at a machine or appliance that forwards the data messages it receives through the tunnel to the service node.
A time period after sending (at 235) the data message to the service node or cluster, the process receives (at 240) a service completion confirmation from the service node or cluster through the tunnel that was used to send the data message at 235. The confirmation is part of one or more data messages that are received from the service node or cluster and that are encapsulated with the tunnel header with the tunnel key. The tunnel key allows the process 200 to associate the received data message(s) with the sent data message (i.e., the data message sent at 235).
The received confirmation might indicate that the data message should be dropped (e.g., when the service node performs a security service operation (e.g., firewall, IPS, IDS, etc.) that determines that the data message should be dropped). Alternatively, the confirmation data message(s) might return a data message with one or more modified data message headers. These modified header values may re-direct the data message to a different destination once the process 200 completes its processing of the data message.
Also, the confirmation data message(s) in some embodiments might return a new or modified payload to replace the payload of the data message that was sent at 235 to the service node or cluster. For instance, when the service node or cluster performs an encryption or compression operation, the new payload might be the encrypted or compressed version of the payload of the sent data message. When the returned data message(s) provide a new or modified payload for the sent data message, the process 200 replaces the sent data message payload with the received new or modified payload before having another service node or cluster perform another service on the data message, or before having the SFE forward the data message to its eventual destination.
After receiving (at 240) the service completion confirmation, the process 200 determines (at 245) whether it should continue processing the data message. When the received confirmation indicates that the data message should be dropped, the process 200 transitions to 255, where it creates a record in the ISS connection storage to specify that data messages that are part of the same flow (as the data message received at 205) should be dropped. This record is created so that for subsequent data messages that are part of the same flow, the process does not have to search the service rule data storage and to perform the service actions before it determines that it should drop the data message. At 255, the process 200 also updates the statistics that it maintains in the ISS STAT storage to reflect the current data message's processing by the service node or nodes that processed this data message before it was dropped.
Alternatively, when the process determines (at 245) that it should continue processing the data message, it determines (at 250) whether its service rule check at 220 identified any other service actions that it has to perform on the current data message. As mentioned above, the process in some embodiments can identify multiple matching service rules with multiple service actions that have to be performed on the data message. In other embodiments, the process can only identify one matching service rule for the data message. However, in some embodiments, a matching service rule might specify multiple service actions that have to be performed on a data message.
Accordingly, when the process determines (at 250) that it needs to perform another service action on the data message, it returns to 225 to select another service action and to repeat operations 230-250. When a matching service rule specifies a sequence of two or more service actions, the process 200 maintains a record (e.g., a count) that identifies where it is in the sequence of actions that it has to perform so that when it returns to 225 it will know which is the next service action that it has to select in the sequence. In other words, this record maintains the state where the process is in the service policy chain that it has to implement for a received data message.
FIG. 6 illustrates an example of an ISS sequentially calling multiple different service nodes of different clusters that perform different services in order to implement a complex service policy that involves multiple different individual service policies. This figure illustrates an ISS 105 of a VM 115 sequentially using X service nodes 605 of X different service clusters 610 to perform a complex service policy that involves X individual service actions, where X is an integer. As shown, the ISS uses different tunnels 650 to send data messages to the X service nodes. FIG. 6 shows the tunnels that are used to process the data message as solid lines, while showing other candidate tunnels that the ISS 105 does not select as dashed lines. The use of the tunnels allows some or all of the clusters to be in the cloud. In other words, the tunnels allow the ISS to seamlessly implement a cloud-based XaaS model.
In some embodiments, the different service clusters 610 can be located in the same datacenter with each other, or in different datacenters. Also, a service cluster 610 can be located in the same datacenter as the VM 115 and ISS 105, or it can be in a different datacenter. In some embodiments, the VM 115 is in a private datacenter (e.g., an enterprise datacenter) while the one or more service clusters are in a public multi-tenant datacenter in a different location. As mentioned above, the tunnel-based approach for distributing data messages to service nodes/clusters in the same datacenter or different datacenters is advantageous for seamlessly implementing a cloud-based XaaS model, in which any number of services are provided by service providers in the cloud.
In some embodiments, when an inline switch 105 sequentially calls multiple service nodes or clusters to perform multiple service actions for a data message that the switch has received, the data message that the inline switch sends to each service node or cluster is identical either to the data message that the inline service switch initially received when the process 200 started, or to the data message that it received from the previous service node that performed the previous service action. In other words, in these embodiments, the inline switch simply relays, through the tunnels that connect it to the service nodes or clusters, the data messages that it receives (at 205) at the start of the process 200 and receives (at 240) from the service nodes. In these situations, the inline switch just places a tunnel packet header on the data message that it receives before forwarding it to the next service node.
In performing its service action on a received data message, one service node might modify the data message's header values and/or its datagram before sending back the modified data message. Notwithstanding this modification, the discussion in this document refers to all the data messages that are received by the inline switch during the execution of the process 200 (i.e., while this switch is directing the service node(s) or cluster(s) to perform a desired sequence of service operations that are initiated when the first data message is received at 205 to start the process 200) as the received data message. One of ordinary skill will realize that after each service operation, the data message can be modified so that the resulting message is not similar (e.g., has a different header value or different datagram) to the message on which the operation was performed.
Also, one of ordinary skill will realize that in some embodiments the inline switch might just send a portion of a received data message to the service node. For instance, in some embodiments, the inline switch might send only the header of a data message, a portion of this header, the payload of the data message, or a portion of the payload. Analogously, the service nodes in some embodiments do not send back a data message that is a modified version of a data message that they receive, but instead send back a value (e.g., Allow, Drop, etc.).
When the process determines (at 250) that it has performed all service actions that it identified for the data message received at 205, the process creates (at 255) a record in the ISS connection storage to specify the service action or service-action sequence that should be performed for data messages that are part of the same flow (as the data message received at 205). This record is created so that for subsequent data messages that are part of the same flow, the process does not have to search the service rule data storage. Instead, at 210, the process can identify for these subsequent data messages the service action(s) that it has to perform from the record in the connection storage, and it can perform these actions at 215. For each service action that the process 200 identifies in the connection storage, the process also identifies, in the connection storage record, the identified service node or cluster (i.e., the node or cluster identified at 225) that has to perform the service action, so that all the data messages of the same flow are processed by the same service node or cluster for that service action.
At 255, the process 200 also updates the statistics that it maintains in the ISS STAT storage to reflect the current data message's processing by the service node or nodes that processed this data message. After performing the service operations, the process 200 provides (at 255) a data message to the module (e.g., SFE port or VNIC) that called it, assuming that the service operations do not require the data message to be dropped, in which case the process so notifies the calling module. Again, because of the service operation(s) performed, the data message that the process 200 returns to the calling module is typically a modified version of the data message received at 205 (e.g., has one or more different header values and/or a modified payload), but in some cases, the returned data message might be identical to the received data message. After 255, the process ends.
In several examples described above by reference to FIGS. 2-6, the inline switch selects in a load-balanced manner a service node or cluster for processing a data message, and then sends the data message to the selected node or cluster through a tunnel. In other embodiments, the inline switch does not select a service node from several service nodes, nor does it select a service cluster from several service clusters. For instance, in some embodiments, the inline switch simply relays a data message along one tunnel to a service cluster so that a load-balancing node at the service cluster can then select a service node of the cluster to perform the service.
In some of these embodiments, at least one service cluster implements an elastic model in which one primary service node receives the cluster's data messages from the inline service switches. This service node then either performs the service on the data message itself or directs the data message (e.g., through L3 and/or L4 network address translation, through MAC redirect, etc.) to one of the other service nodes (called secondary service nodes) in the cluster to perform the service on the data message. The primary service node in some embodiments elastically shrinks or grows the number of secondary service nodes in the cluster based on the received data message load.
FIG. 7 illustrates an example of such an elastic service model that uses one primary service node and zero or more secondary service nodes. This example is illustrated in three stages 705-715 that illustrate the operation of a service node cluster 700 at three different instances in time. The first stage 705 illustrates that at a time T1, the cluster includes just one primary service node (PSN) 720. As shown, the PSN 720 has a load balancer (LB) and a service virtual machine (SVM).
In the first stage 705, the PSN receives all data messages on which the cluster has to perform its service. These are data messages that an inline switch 105 captures from its VM and sends to the cluster 700 through a tunnel 750. In the first stage 705, the PSN's SVM 730 performs the needed service on these messages, and then directs these messages back to the inline switch 105 through the tunnel 750.
The second stage 710 illustrates that at a time T2, the cluster has been expanded to include another service node, SSN1, which is implemented by a second service virtual machine. In some embodiments, the service node SSN1 is added to the cluster because the data message load on the cluster has exceeded a first threshold value. In some embodiments, a service-node controller set (not shown) adds SSN1 when it detects that the data message load has exceeded the first threshold value, or when the PSN detects this condition and directs the controller set to add SSN1. In some embodiments, the service-node controller set obtains the data message load from the PSN.
To assess whether the data message load exceeds a threshold value, the controller set or PSN in different embodiments quantify the data message load based on different metrics. In some embodiments, these metrics include one or more of the following parameters: (1) number of flows being processed by the cluster or by individual service nodes in the cluster, (2) number of packets being processed by the cluster or by individual service nodes in the cluster, (3) amount of packet data being processed by the cluster or by individual service nodes in the group.
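The following minimal sketch shows one hypothetical way a controller set or PSN could apply thresholds to such metrics to decide when to grow or shrink the cluster; the metric names and threshold values are assumptions made for illustration.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class NodeLoad:
    flows: int              # flows being processed by the node
    packets_per_sec: int    # packets being processed by the node
    bytes_per_sec: int      # amount of packet data being processed by the node

def should_add_node(loads: List[NodeLoad],
                    max_flows_per_node: int = 10_000,
                    max_pps_per_node: int = 500_000) -> bool:
    # Grow when any node (or, in other variants, N nodes) exceeds a threshold.
    return any(l.flows > max_flows_per_node or l.packets_per_sec > max_pps_per_node
               for l in loads)

def should_remove_node(loads: List[NodeLoad],
                       min_flows_per_node: int = 1_000) -> bool:
    # Shrink only when every node is lightly loaded and more than one node remains.
    return len(loads) > 1 and all(l.flows < min_flows_per_node for l in loads)
```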
The second stage 710 also illustrates that at time T2 the PSN performs the cluster's service on some of the data message flows, while directing other data message flows to SSN1 so that this service node can perform this service on these other flows. Once either the PSN or SSN1 performs the service on a data message, the PSN directs the data message to the ISS 105. As shown, once SSN1 performs the service, this service node in some embodiments returns the data message to the PSN, which directs it back to the ISS. In other embodiments, the SSNs return the processed data messages directly to the inline switch. In some of these embodiments, the SSNs and the inline switches are configured to insert the appropriate packet header values and to examine the appropriate packet header values to identify data messages that have been processed by the SSNs. In still other embodiments, the SSNs establish tunnels with the inline switches (e.g., with the hosts of the inline switches) once the SSNs are provisioned so that they can return their processed messages directly to the inline switches.
The load balancer 725 of the PSN performs a load balancing operation that selects which service node (primary or secondary) in the cluster should perform the group's service on each data message that the PSN receives. In some embodiments, the load balancer 725 distributes the data messages based on a hash of the different tunnel keys that the ISS 105 uses to send different data-message flows through the tunnel 750. This hashing ensures that the data messages that are part of the same flows are processed by the same service node in the cluster. The load balancing is also based on some of the inner packet header values in some embodiments. In other embodiments, the load balancing is just based on the inner packet header values (i.e., it is not based on the tunnel keys). In some embodiments, the load balancer 725 stores in a connection storage a record of each service node selection for each data-message flow, and uses this record to forego re-assessing selection of a service node for a flow after picking a service node for the first data message in the flow. In some embodiments, the load balancer of the PSN also determines when service nodes should be added to or removed from the cluster.
The third stage 715 illustrates that at a time T3, the cluster has been expanded to include yet another service node, SSN2, which is a third service virtual machine. In some embodiments, the service node SSN2 is added to the cluster because the data message load on the cluster has exceeded a second threshold value, which is the same as the first threshold value in some embodiments or is different from the first threshold value in other embodiments. Some embodiments add the service node SSN2 when the load on either the PSN or SSN1 exceeds a second threshold amount. Other embodiments add a new service node when the load on N (e.g., two or three) service nodes exceeds a threshold value. As before, the service-node controller set in some embodiments adds SSN2 when it or the PSN detects that the data message load has exceeded the second threshold value.
The third stage 715 also illustrates that at time T3, the PSN performs the cluster's service on some of the data message flows, while directing other data message flows to SSN1 or SSN2, so that these service nodes can perform this service on these other flows. As shown, once any of the service nodes, PSN, SSN1, or SSN2, performs the service on a data message, the PSN returns the data message to the ISS 105 through the tunnel 750. After processing the data message, SSN2, like SSN1, provides its reply data message to the PSN so that the PSN can forward this message to the ISS 105 through the tunnel 750.
FIG. 8 illustrates an example where the ISS 105 of a VM 115 sequentially forwards a data message from the VM to different clusters of elastically adjusted service-node clusters. In this example, different service clusters perform different service operations on the data message. In some embodiments, SSNs of one cluster can be PSNs of other clusters, when the multiple clusters reside in the same location.
The ISS 105 connects to the PSN of each service cluster through a tunnel, which allows each service cluster to reside outside of the ISS' local area network. By sequentially relaying the data message to different service clusters, the ISS 105 can implement a complex service policy with multiple service actions (X in this example) on the data message. The use of the tunnels allows some or all of the clusters to be in the cloud. In other words, the tunnels allow the ISS to seamlessly implement a cloud-based XaaS model.
FIG. 9 illustrates a process 900 that the ISS 105 performs in some embodiments to process data messages with one or more elastically adjusted service node clusters. This process is identical to the process 200 of FIG. 2 except that process 900 does not perform the load-balancing operation 230 to select a service node in the cluster. As shown, after identifying (at 225) a service action that is to be performed by a service node of a service cluster, the process 900 just forwards (at 235) the data message to the service cluster along the tunnel that connects the ISS to the service cluster.
FIG. 10 conceptually illustrates a process 1000 that such a PSN performs whenever the PSN receives a data message in some embodiments. The process 1000 identifies one service node in the PSN's SN group that should process the received data message, and then directs the identified service node to perform the SN group's service for the received data message. The identified service node can be the PSN itself, or it can be an SSN in the SN group.
As shown in FIG. 10, the process 1000 starts (at 1005) when the PSN receives a data message through a tunnel from an ISS filter. After receiving the data message, the process determines (at 1010) whether the received message is part of a particular data message flow for which the PSN has previously processed at least one data message.
To make this determination, the process examines (at 1010) a connection-state data storage that stores (1) the identity of each of several data message flows that the PSN previously processed, and (2) the identity of the service node that the PSN previously identified as the service node for processing the data messages of each identified flow. In some embodiments, the process identifies each flow in the connection-state data storage in terms of one or more flow attributes, e.g., the flow's five tuple identifier. Also, in some embodiments, the connection-state data storage is hash indexed based on the hash of the flow attributes (e.g., of the flow's five tuple header values). For such a storage, the PSN generates a hash value from the header parameter set of a data message, and then uses this hash value to identify one or more locations in the storage to examine for a matching header parameter set (i.e., for a matching data message flow attribute set).
When the process identifies (at 1010) an entry in the flow connection-state data storage that matches the received data message flow's attributes (i.e., when the process determines that it previously processed another data message that is part of the same flow as the received data message), the process directs (at 1015) the received data message to the service node (in the SN group) that is identified in the matching entry of the connection-state data storage (i.e., to the service node that the PSN previously identified for processing the data messages of the particular data message flow). This service node then performs the service on the data message. This service node can be the PSN itself, or it can be an SSN in the SN group. After performing (at 1015) the service on the data message, the SN returns a reply data message (e.g., the processed data message) to the ISS filter that called it, and then ends.
On the other hand, when the process determines (at 1010) that the connection-state data storage does not store an entry for the received data message (i.e., determines that it previously did not process another data message that is part of the same flow as the received data message), the process transitions to 1020. In some embodiments, the connection-state data storage periodically removes old entries that have not matched any received data messages in a given duration of time. Accordingly, in some embodiments, when the process determines (at 1010) that the connection-state data storage does not store an entry for the received data message, the process may have previously identified a service node for the data message's flow, but the matching entry might have been removed from the connection-state data storage.
At 1020, the process determines whether the received data message should be processed locally by the PSN, or remotely by another service node of the SN group. To make this determination, the PSN in some embodiments performs a load balancing operation that identifies the service node for the received data message flow based on the load balancing parameter set that the PSN maintains for the SN group at the time that the data message is received. The load balancing parameter set is adjusted in some embodiments (1) based on updated statistic data regarding the traffic load on each service node in the SN group, and (2) based on service nodes that are added to or removed from the SN group.
The process 1000 performs different load balancing operations (at 1020) in different embodiments. In some embodiments, the load balancing operation relies on L2 parameters of the data message flows (e.g., generates hash values from the L2 parameters, such as source MAC addresses, to identify hash ranges that specify service nodes for the generated hash values) to distribute the data messages to service nodes, while in other embodiments, the load balancing operation relies on L3/L4 parameters of the flows (e.g., generates hash values from the L3/L4 parameters, such as five tuple header values, to identify hash ranges that specify service nodes for the generated hash values) to distribute the data messages to service nodes. In yet other embodiments, the load balancing operations (at 1020) use different techniques (e.g., round robin techniques) to distribute the load amongst the service nodes.
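The following minimal sketch illustrates the hash-range idea with equal-sized ranges; the hash function, field choices, and node names are hypothetical.

```python
import hashlib
from typing import List, Sequence

def _hash(fields: Sequence[str]) -> int:
    # Hash the selected header fields into a 32-bit value.
    digest = hashlib.sha256("|".join(fields).encode()).digest()
    return int.from_bytes(digest[:4], "big")

def pick_node_by_hash(fields: Sequence[str], nodes: List[str]) -> str:
    # Equal-sized hash ranges, one per node; weighted ranges are a simple extension.
    return nodes[_hash(fields) % len(nodes)]

nodes = ["psn", "ssn1", "ssn2"]
print(pick_node_by_hash(["00:50:56:aa:bb:cc"], nodes))                          # L2 variant (source MAC)
print(pick_node_by_hash(["10.0.0.1", "10.0.0.2", "80", "443", "tcp"], nodes))   # L3/L4 variant (five tuple)
```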
When the process determines (at 1020) that the PSN should process the received data message, the process directs (at 1025) a service module of the PSN to perform the SN group's service on the received data message. At 1025, the process 1000 also creates an entry in the flow connection-state data storage to identify the PSN as the service node for processing data messages that are part of the same flow as the received data message. In some embodiments, this entry identifies the PSN and identifies the received data message header values (e.g., five tuple values) that specify the message's flow. After performing (at 1025) the service on the data message, the PSN returns a reply data message (e.g., the processed data message) to the ISS filter that called it, and then ends.
When the process determines (at 1020) that based on its load balancing parameter set, the PSN should not process the received data message, the process identifies (at 1020) another service node in the PSN's SN group to perform the service on the data message. Thus, in this situation, the process directs (at 1030) the message to another service node in the PSN's SN group. To redirect the data messages, the PSN in different embodiments uses different techniques, such as MAC redirect (for L2 forwarding), IP destination network address translation (for L3 forwarding), port address translation (for L4 forwarding), L2/L3 tunneling, etc.
To perform MAC redirect, the process 1000 in some embodiments changes the MAC address to a MAC address of the service node that it identifies at 1020. For instance, in some embodiments, the process changes the MAC address to a MAC address of another SFE port in a port group that contains the SFE port connected with the PSN. More specifically, in some embodiments, the service nodes (e.g., SVMs) of a SN group are assigned ports of one port group that can be specified on the same host or different hosts. In some such embodiments, when the PSN wants to redirect the data message to another service node, it replaces the MAC address of the PSN's port in the data message with the MAC address of the port of the other service node, and then provides this data message to the SFE so that the SFE can forward it directly or indirectly (through other intervening forwarding elements) to the port of the other service node.
Similarly, to redirect the data message to the other service node through IP destination network address translation (DNAT), the PSN replaces the destination IP address in the data message with the destination IP address of the other service node, and then provides this data message to the SFE so that the SFE can forward it directly or indirectly (through other intervening forwarding elements) to the other service node.
To redirect the data message to the other service node through port address translation, the PSN replaces the destination port address in the data message with the destination port address of the other service node, and then uses this new port address to direct the data message to the other service node. In some embodiments, the PSN's network address translation may include changes to two or more of the MAC address, IP address, and port address.
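The following toy sketch illustrates the three redirection mechanisms on a simplified header record; the Header fields and helper names are hypothetical and stand in for the real packet-rewriting machinery.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Header:
    dst_mac: str
    dst_ip: str
    dst_port: int

def mac_redirect(hdr: Header, node_mac: str) -> Header:
    # L2 forwarding: rewrite the destination MAC to the chosen node's port MAC.
    return replace(hdr, dst_mac=node_mac)

def dnat(hdr: Header, node_ip: str) -> Header:
    # L3 forwarding: rewrite the destination IP to the chosen node's IP.
    return replace(hdr, dst_ip=node_ip)

def pat(hdr: Header, node_port: int) -> Header:
    # L4 forwarding: rewrite the destination port to the chosen node's port.
    return replace(hdr, dst_port=node_port)

hdr = Header("00:50:56:aa:00:01", "192.168.1.10", 8080)
print(dnat(mac_redirect(hdr, "00:50:56:aa:00:02"), "192.168.1.20"))  # combined L2/L3 rewrite
```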
After directing (at 1030) the data message to the other service node, the process creates (at 1035) an entry in the connection-state data storage to identify the other service node as the service node for processing data messages that are part of the same flow as the received data message. In some embodiments, this entry identifies (1) the other service node and (2) the received data message header values (e.g., five tuple values) that specify the message's flow. After performing the service on the data message, the SSN returns a reply data message (e.g., the processed data message) to the ISS filter that called it, and then ends. In some embodiments, the SSN returns the reply data message directly to the ISS filter, while in other embodiments, the SSN returns this reply data message to the ISS filter through the PSN.
The inline service switch of some embodiments statefully distributes the service load to a number of service nodes based on one or more L4+ parameters. Examples of L4+ parameters include session keys, session cookies (e.g., SSL session identifiers), file names, database server attributes (e.g., user name), etc. To statefully distribute the service load among server nodes, the inline service switch in some embodiments establishes layer 4 connection sessions (e.g., TCP/IP sessions) with the data-message SCNs and the service nodes, so that the switch (1) can examine one or more of the initial payload packets that are exchanged for a session, and (2) can extract and store the L4+ session parameters for later use in its subsequent load balancing operation of a session.
FIG. 11 illustrates an example of a multi-host system 1100 of some embodiments with the inline service switches 1105 that statefully distribute the service load to a number of service nodes based on one or more L4+ parameters. The system 1100 is identical to the system 100 of FIG. 1, except that the inline service switches 1105 of the hosts 1110 establish layer 4 connection sessions (e.g., TCP/IP sessions) with their associated VMs and with the service nodes.
Through the layer 4 sessions with its VM and a service node that it selects, an ISS 1105 (1) can examine one or more of the initial payload packets that are exchanged for a session, and (2) can extract and store the L4+ session parameters for later use in its subsequent load balancing operation for its VM. After establishing the L4 sessions with its VM and the service node, the ISS filter (1) receives a data packet from a session end point (i.e., from the VM or the service node), (2) extracts the old packet header, (3) examines the packet payload (i.e., the datagram after the L3 and L4 packet header values) to identify any L4+ session parameter that it needs to extract, (4) extracts any needed L4+ session parameter if one such parameter is found, (5) stores any extracted session parameter (e.g., in the connection storage 1190 on its host 1110), and (6) re-encapsulates the payload with a new packet header before relaying the packet to the other session's end point (i.e., to the service node or the VM). In some embodiments, the new and old packet headers are similar except for specifying different TCP sequence numbers as further described below.
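The following abstract sketch shows only the ordering of the relay steps enumerated above; all of the helper callables (strip_header, extract_param, rebuild_header, send) are hypothetical placeholders.

```python
from typing import Callable, Dict, Optional, Tuple

def relay_packet(packet: bytes,
                 strip_header: Callable[[bytes], Tuple[dict, bytes]],
                 extract_param: Callable[[bytes], Optional[str]],
                 connection_storage: Dict[str, str],
                 service_node: str,
                 rebuild_header: Callable[[dict, bytes], bytes],
                 send: Callable[[bytes], None]) -> None:
    old_header, payload = strip_header(packet)        # (2) extract the old packet header
    param = extract_param(payload)                    # (3)/(4) look for an L4+ session parameter
    if param is not None:
        connection_storage[param] = service_node      # (5) remember which node owns this session
    send(rebuild_header(old_header, payload))         # (6) re-encapsulate and relay the payload
```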
FIG. 12 illustrates an ISS 1105 extracting and re-using a session parameter by establishing an L4 connection session with its VM and a service node 1205 of a service cluster 1250. In this example, the service cluster 1250 includes several servers (service nodes) that perform a service (e.g., provide access to secure content) through SSL (secure sockets layer) sessions. Also, in this example, the extracted and re-used session parameters are SSL session cookies.
FIG. 12 presents a messaging diagram that illustrates how the ISS 1105 relays two different sets 1230 and 1235 of SSL packets from its associated VM to a service node 1205. As shown, in both the messaging flows, the ISS 1105 first establishes a TCP session with the VM by performing a 3-way TCP handshake. After establishing the first TCP session with its VM (for the first set of SSL packets 1230), the ISS 1105 examines an initial set of one or more packets that its VM 115 sends and determines that the VM is requesting an SSL service session. The ISS 1105 then determines that the requested SSL service session is a new one as this request is not accompanied by an SSL session cookie.
Hence, the ISS 1105 determines that it has to select a service node for the requested SSL session from the service cluster 1250, and that it has to monitor the packets exchanged between the VM and this service node so that it can record the SSL session cookie for this session. In some embodiments, the ISS 1105 selects the service node 1205 in the cluster based on a set of load balancing criteria that it considers for the service cluster 1250.
After selecting the service node 1205, the ISS 1105 performs a 3-way TCP handshake with the service node 1205 in order to establish an L4 connection session with the service node 1205. Once this session is established, the ISS 1105 starts to relay the packets that it receives from its VM 115 to the service node 1205, and to relay the packets that it receives from the service node 1205 to its VM 115. In relaying the data packets between the VM 115 and the service node 1205, ISS 1105 in some embodiments can adjust the sequence numbers of the relayed data messages to address differences in sequence numbers between the VM and the service node. In some embodiments, the ISS 1105 sends packets to and receives packets from the service node 1205 through a tunnel.
In relaying one or more responsive packets from the service node 1205 to the VM 115, the ISS 1105 identifies in an initial set of packets an SSL session ID that is generated by the service node 1205. This session ID is often referred to as an SSL session ID or cookie. After the SSL session ID is created, an SSL session key is generated, e.g., by the VM based on an SSL certificate of the service node. Generation of an SSL session key is computationally intensive.
As the ISS 1105 has established an L4 connection with the service node 1205, it can extract the SSL session cookie from the initial set of one or more packets that the service node 1205 sends. As shown, the ISS 1105 stores the SSL session cookie in the connection storage 1190. In some embodiments, the connection storage record that stores this SSL session cookie also includes the identity of the service node 1205 as the service node that generated this cookie. In some embodiments, this record also includes one or more packet header attributes of the current flow (such as source IP, destination IP, destination port, and protocol of the current flow).
In the example illustrated in FIG. 12, the VM stops communicating with the service node for a time period. It then resumes this communication by sending a second set of data packets. Because the VM wants to continue using the same SSL session as before, the VM sends the SSL session cookie that it obtained previously. However, in such situations, it is not unusual for the VM to use a different source port for these new data packets. Because of the different source port, the ISS 1105 initially assumes that the new data packets are for a new flow.
Hence, the ISS 1105 establishes another TCP session with the VM by performing another 3-way TCP handshake. After establishing this second TCP session with its VM, the ISS 1105 examines an initial set of one or more packets sent by its VM 115 and determines this set of packets includes an SSL session cookie. As shown, the ISS 1105 extracts this cookie, compares it with the cookies in its connection storage 1190, identifies the record that stores this cookie (i.e., determines that it has previously stored this cookie) and from this record, identifies service node 1205 as the service node for processing the SSL session associated with this request.
The ISS 1105 then performs another 3-way TCP handshake with the service node 1205 in order to establish another L4 connection session with the service node 1205, because it has determined that this service node is the node that should process the requested SSL session. Once this session is established, the ISS 1105 starts to relay packets back and forth between its VM 115 and the service node 1205. By extracting and storing the SSL session cookie when the SSL session was initially established, the ISS 1105 can properly route subsequent data packets from its VM 115 that include this session's cookie to the same service node 1205. This is highly beneficial in that it allows the SSL session to quickly resume, and saves computational resources by avoiding the generation of another session key.
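The following minimal sketch shows cookie-keyed stickiness of this kind; the table layout and function names are hypothetical.

```python
from typing import Callable, Dict, Optional

def pick_service_node(session_cookie: Optional[bytes],
                      cookie_table: Dict[bytes, str],
                      load_balance: Callable[[], str]) -> str:
    if session_cookie is not None and session_cookie in cookie_table:
        return cookie_table[session_cookie]   # resume on the node that issued the cookie
    return load_balance()                     # new SSL session: pick a node in a load-balanced way

def record_cookie(session_cookie: bytes, node: str,
                  cookie_table: Dict[bytes, str]) -> None:
    cookie_table[session_cookie] = node       # learned from the node's initial responsive packets
```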
As mentioned above, the inline service switches of some embodiments can extract and store different L4+ session parameters for later use in facilitating efficient distribution of service requests from VMs to service nodes in service-node clusters. Other examples include session keys, file names, database server attributes (e.g., user name), etc. FIG. 13 illustrates an example of a file name as the extracted L4+ session parameter. The file name is the name of a piece of content (e.g., image, video, etc.) that is requested by a VM 115 and that is provided by the servers of a service cluster 1350.
In the example of FIG. 13, the VM's ISS 1105 stores the requested file name as part of a first set of content processing messages 1330. As part of these messages, the ISS (1) performs an initial TCP 3-way handshake, (2) receives the VM's initial request, and (3) extracts the file name from the request. In some embodiments, the VM's initial request is in the form of a URL (uniform resource locator), and the ISS 1105 extracts the file name from this URL. The URL often contains the name or acronym of the type of content being requested (e.g., contains .mov, .img, .jpg, or other similar suffixes that identify the requested content). The ISS in some embodiments stores the extracted file name in its connection storage 1190 in a record that identifies the service node 1305 that it selects to process this request. From the servers of the cluster 1350, the ISS identifies the service node 1305 by performing a load balancing operation based on a set of load balancing criteria that it processes for content requests that it distributes to the cluster 1350.
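The following minimal sketch shows one hypothetical way to extract the file name from a request URL and use it as the sticky key; the table contents and node name are made up for illustration.

```python
import posixpath
from typing import Dict, Optional
from urllib.parse import urlparse

def file_name_from_url(url: str) -> str:
    # e.g. "http://.../videos/intro.mov?tok=abc" -> "intro.mov"
    return posixpath.basename(urlparse(url).path)

def sticky_node_for(url: str, table: Dict[str, str]) -> Optional[str]:
    # Returns the node that already served this content piece, if any.
    return table.get(file_name_from_url(url))

table = {"intro.mov": "service-node-1305"}
print(file_name_from_url("http://cdn.example.com/videos/intro.mov?tok=abc"))
print(sticky_node_for("http://cdn.example.com/videos/intro.mov", table))
```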
Next, the ISS 1105 performs a 3-way TCP handshake with the service node 1305 in order to establish an L4 connection session with the service node 1305. Once this session is established, the ISS 1105 relays the content request to the service node 1305. In relaying this request to the service node 1305, ISS 1105 in some embodiments can adjust the sequence numbers of the relayed data packets to address differences in sequence numbers between the VM and the service node 1305. In some embodiments, the ISS 1105 sends packets to and receives packets from the service node 1305 through a tunnel.
The ISS 1105 then receives one or more responsive packets from the service node 1305 and relays these packets to the VM 115. This set of packets includes the requested content piece. In some embodiments, the ISS 1105 creates the record in the connection storage 1190 to identify the service node 1305 as the server that retrieved the requested content piece only after receiving the responsive packets from this server.
In some embodiments, the service node 1305 directly sends its reply packets to the VM 115. In some of these embodiments, the ISS 1105 provides a TCP sequence number offset to the service node, so that this node can use this offset to adjust the TCP sequence numbers in its reply packets that respond to packets from the VM 115. In some embodiments, the ISS 1105 provides the TCP sequence number offset in the encapsulating tunnel packet header of a tunnel that is used to relay packets from the ISS to the service node 1305. Also, in some embodiments, the inline service switch 1105 is configured to, or is part of a filter architecture that is configured to, establish the L4 connection session for its associated VM. In these embodiments, the ISS 1105 would not need to establish an L4 connection session with its VM in order to examine L4 parameters sent by the VM.
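The following minimal sketch shows the arithmetic of such a sequence-number offset; the function names are hypothetical and the offset handling is simplified to a fixed shift modulo 2^32.

```python
MOD = 2 ** 32  # TCP sequence numbers wrap at 32 bits

def adjust_outbound_seq(seq: int, offset: int) -> int:
    # Applied to packets relayed from the VM-side session toward the service node.
    return (seq + offset) % MOD

def adjust_inbound_ack(ack: int, offset: int) -> int:
    # Applied to acknowledgments coming back, so the VM sees numbers from its own session.
    return (ack - offset) % MOD
```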
A time period after its initial request for the content piece, the VM 115 starts a second set of content processing messages 1335 by requesting the same content piece. In such situations, it is not unusual for the VM to use a different source port for these new data packets. Because of the different source port, the ISS 1105 initially assumes that the new data packets are for a new flow. Hence, the ISS 1105 establishes another TCP session with its VM by performing a 3-way TCP handshake. After establishing this second TCP session with its VM, the ISS 1105 examines an initial set of one or more packets sent by its VM 115 and determines that this set of packets includes a content request. The ISS 1105 then extracts the file name from the URL of this request, compares this file name with the file names stored in its connection storage 1190, and determines that it has previously processed a request for this content piece by using service node 1305.
Accordingly, the ISS 1105 performs another 3-way TCP handshake with the service node 1305 in order to establish another L4 connection session with the service node 1305. Once this session is established, the ISS 1105 relays the content request to this service node, and after obtaining the responsive data packets, relays them to its VM.
This approach is highly beneficial in that it saves the service cluster's resources from having to obtain the same piece of content twice. In other words, going to the same service node is efficient as the service node 1305 probably still has the requested content in its cache or memory. When multiple ISS 1105 on the same host share the same connection storage, this approach is also beneficial in that it allows one ISS of one VM to go to the same service node as the ISS of another VM when both VMs requested the same piece of content within a particular time period.
FIG. 14 illustrates a process 1400 that an ISS 1105 of a VM 115 performs to process a service request in a sticky manner from an associated VM. In performing this process, the ISS 1105 (1) determines whether the request is associated with a service request previously processed by a service node of a service-node cluster, and (2) if so, directs the service request to the service node that was previously used. The ISS 1105 determines whether the request is associated with a previously processed request by examining L4+ session parameters that it stored for previous requests in its connection storage 1190.
The process 1400 starts when the ISS 1105 receives a data message sent by its associated VM. In some embodiments, the ISS 1105 is deployed in the VM's egress datapath so that it can intercept the data messages sent by its VM. In some embodiments, the ISS 1105 is called by the VM's VNIC or by the SFE port that communicatively connects to the VM's VNIC. In some embodiments, the received data message is addressed to a destination address (e.g., destination IP or virtual IP address) associated with a service node cluster. Based on this addressing, the ISS ascertains (at 1405) that the data message is a request for a service that is performed by the service nodes of the cluster.
At 1410, the process determines whether the data message is part of a data message flow for which the process has processed other data messages. In some embodiments, the process makes this determination by examining its connection storage 1190, which stores records of the data message flows that it has recently processed as further described below by reference to 1445. Each record stores one or more service parameters that the process previously extracted from the previous data messages that it processed. Examples of such session parameters include session cookies, session keys, file names, database server attributes (e.g., user name), etc. Each record also identifies the service node that previously processed data messages that are part of the same flow. In some embodiments, this record also stores the flow's identifier (e.g., the five tuple identifier). In addition, the connection storage is hash addressable (e.g., locations in the connection storage are identified based on a hash of the flow's identifier) in some embodiments.
When the process determines (at 1410) that it has previously processed a data message from the same flow as the received data message, it transitions to 1415. At 1415, the process retrieves from the connection storage 1190 the identity of the service node that it used to process previous data messages of the same flow, and forwards the received data message to the identified service node to process. In some cases, at 1415, the process also (1) retrieves the previously stored session parameter(s) (e.g., session cookie) for the data message's flow from the connection storage 1190, and (2) forwards the retrieved parameter(s) to the identified service node so that this node can use the parameter(s) to process the forwarded data message. Instead of forwarding the retrieved service parameter(s) to the service node, the process 1400 in some embodiments uses the retrieved service parameter(s) to perform an operation on the received data message, before forwarding the data message to the identified service node. Also, in some embodiments, the process provides additional context information (e.g., Tenant ID, Network ID, etc.), which cannot be encoded in the tunnel key. After 1415, the process 1400 ends.
When the process determines (at 1410) that it has not previously processed a data message from the same data message flow, the process establishes (at 1420) an L4 session with the VM (e.g., by performing a three-way TCP handshake with the VM). After establishing the L4 session with its VM, the process determines (at 1425) whether an initial set of one or more packets sent by its VM contains one or more L4 service parameters that the process can use to determine whether it has previously processed a similar service request. Again, examples of such session parameters include session cookies, session keys, file names, database server attributes (e.g., user name), etc.
When the set of packets includes one or more such L4 service parameters, the process determines (at 1420) whether the connection storage 1190 contains a record for the identified L4 service parameter(s). If so, the process transitions to 1415 to forward the data message to the record's identified service node. In some embodiments, the process 1400 also performs other operations at 1415, as described above. The process 1400 can transition from either 1410 or 1420 to 1415, because the process can determine that the same session record is applicable based either on outer packet header values (e.g., L2, L3 and L4 values) of one message flow, or on inner packet values (e.g., L4+ parameters) of another message flow. The inner packet values might match a session record when the VM uses a different source port for a service session that follows an earlier related service session, as described above by reference to FIG. 12. This would also result when the VM requests the same file and the file name is used to identify the same service node, as described above by reference to FIG. 13.
When the process 1400 determines that the examined packets do not include an L4+ service parameter for which the connection storage stores a record that identifies a service node as the service node for processing the VM's service request, the process uses (at 1430) the load balancer of the ISS to select a service node in a service node cluster to process the service request from the VM. To select service nodes in a load-balanced manner, the process 1400 uses a service rule that matches the received message flow attributes. The service rule specifies a set of service nodes, and a set of load-balancing criteria (e.g., weight values) for each of the rule's specified service nodes. Different service rules in some embodiments specify different service action sets that have to be performed, and the load-balancing criteria for each service action of the rule specify the criteria for distributing data messages amongst the service nodes for that action.
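As one illustration of selecting a service node in a load-balanced manner using per-node weight values, the sketch below performs a weighted random choice over a rule's service-node set; the node names and weights are hypothetical.

```python
import random

def select_service_node(service_nodes, weights, rng=random):
    """Pick one service node with probability proportional to its weight value."""
    total = sum(weights)
    pick = rng.uniform(0, total)
    cumulative = 0.0
    for node, weight in zip(service_nodes, weights):
        cumulative += weight
        if pick <= cumulative:
            return node
    return service_nodes[-1]          # guard against floating-point rounding

# Hypothetical rule: three service nodes, with "sn-1" expected to receive ~60% of new flows.
nodes = ["sn-1", "sn-2", "sn-3"]
print(select_service_node(nodes, [3, 1, 1]))
```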
After selecting (at 1430) a service node for the data message, the process establishes (at 1435) an L4 session with the service node (e.g., through a three-way TCP handshake with the service node), because it soft-terminated the session with the VM. Next, at 1440, the process uses this connection session to forward the data messages that it receives from the VM to the selected service node.
Through this connection, the process also receives responsive data messages from the selected service node, and it forwards these received data messages to the VM through its connection session with the VM. In relaying the data messages back and forth, the process in some embodiments adjusts the TCP sequence numbers of the data messages, as described above. In some embodiments, the process exchanges messages with the selected service node through a tunnel. Hence, in these embodiments, the process encapsulates the data messages that it relays to the service node with a tunnel header, and it removes this tunnel header from the data messages that it passes from the service node to the VM. As the process 1400 relays data messages to the service node, it updates in some embodiments the statistics that it maintains in the ISS STAT storage to keep track of the data messages that it is directing to different service nodes.
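Because the ISS soft-terminates one L4 session with the VM and establishes another with the service node, the two legs generally have different initial sequence numbers. The sketch below shows one way the sequence-number adjustment mentioned above could be performed, by shifting sequence and acknowledgment numbers by a fixed delta; it is an assumption about the mechanics, not the patented implementation.

```python
def make_seq_translators(vm_isn, node_isn):
    """Return two helpers that shift TCP sequence/acknowledgment numbers between
    the soft-terminated VM-side session and the service-node-side session.
    The initial sequence numbers (ISNs) are assumed to be learned from the two
    three-way handshakes; all arithmetic is modulo 2**32."""
    delta = (node_isn - vm_isn) % (1 << 32)

    def vm_to_node(seq):
        return (seq + delta) % (1 << 32)

    def node_to_vm(seq):
        return (seq - delta) % (1 << 32)

    return vm_to_node, node_to_vm

vm_to_node, node_to_vm = make_seq_translators(vm_isn=1000, node_isn=50000)
assert node_to_vm(vm_to_node(1234)) == 1234
```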
At 1445, the process stores in the connections storage 1190 one or more L4+ parameters that it extracts from the data messages that it relays between the VM and selected service node. In some embodiments, the process stores the L4+ parameter set in a record that identifies the selected service node, as mentioned above. By storing the selected service node's identity for the extracted L4+ parameter set, the process can later re-use the selected service node for processing data messages that related to the same L4+ parameter set. In some embodiments, the record created at 1445 also stores the flow identifier of the data message received at 1405, so that this record can also be identified based on the outer packet header attributes of the flow. After 1445, the process ends.
The inline service switches of the embodiments described above by reference to FIGS. 12-14 select service nodes in a service node cluster, and relay data messages to the selected service nodes. However, as described above, the inline service switches of some embodiments select service node clusters in a group of service node clusters, and forward data messages to the selected clusters. One of ordinary skill will realize that the inline service switches of some embodiments implement sticky service request processing by forwarding data messages to service clusters (that perform the same service) in a sticky manner. In other words, an inline switch in these embodiments stores L4+ session parameters that allow this switch to forward the same or similar service session requests to the same service node clusters in a cluster group that performs the same service.
FIG. 15 illustrates a more detailed architecture of a host 1500 that executes the ISS filters of some embodiments of the invention. As shown, the host 1500 executes multiple VMs 1505, an SFE 1510, multiple ISS filters 1530, multiple load balancers 1515, an agent 1520, and a publisher 1522. Each ISS filter has an associated ISS rule storage 1550, a statistics (STAT) data storage 1554, and a connection state storage 1590. The host also has an aggregated (global) statistics data storage 1586.
In some embodiments, the VMs execute on top of a hypervisor, which is a software layer that enables the virtualization of the shared hardware resources of the host. In some of these embodiments, the hypervisor provides the ISS filters in order to support inline service switching services for its VMs.
The SFE 1510 executes on the host to communicatively couple the VMs of the host to each other and to other devices outside of the host (e.g., other VMs on other hosts) through one or more forwarding elements (e.g., switches and/or routers) that operate outside of the host. As shown, the SFE 1510 includes a port 1532 to connect to a physical network interface card (not shown) of the host, and a port 1535 that connects to each VNIC 1525 of each VM.
In some embodiments, the VNICs are software abstractions of the physical network interface card (PNIC) that are implemented by the virtualization software (e.g., by a hypervisor). Each VNIC is responsible for exchanging data messages between its VM and the SFE 1510 through its corresponding SFE port. As shown, a VM's ingress datapath for its data messages includes the SFE port 1532, the SFE 1510, the SFE port 1535, and the VM's VNIC 1525. A VM's egress datapath for its data messages involves the same components but in the opposite direction, specifically from the VNIC 1525, to the port 1535, to the SFE 1510, and then to the port 1532.
Through its port 1532 and a NIC driver (not shown), the SFE 1510 connects to the host's PNIC to send outgoing packets and to receive incoming packets. The SFE 1510 performs message-processing operations to forward messages that it receives on one of its ports to another one of its ports. For example, in some embodiments, the SFE tries to use header values in the VM data message to match the message to flow-based rules, and upon finding a match, to perform the action specified by the matching rule (e.g., to hand the packet to one of its ports 1532 or 1535, which directs the packet to be supplied to a destination VM or to the PNIC). In some embodiments, the SFE extracts from a data message a virtual network identifier (VNI) and a MAC address. The SFE in these embodiments uses the extracted VNI to identify a logical port group, and then uses the MAC address to identify a port within the port group. In some embodiments, the SFE 1510 is a software switch, while in other embodiments it is a software router or a combined software switch/router.
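The two-stage VNI/MAC lookup described above can be pictured as a nested table lookup, as in the following sketch; the VNIs, MAC addresses, and port names are hypothetical.

```python
# Two-stage lookup: the VNI selects a logical port group, and the destination MAC
# selects a port within that group. Table contents are hypothetical.
forwarding_table = {
    5001: {"00:50:56:aa:bb:01": "port-1535-vm1",
           "00:50:56:aa:bb:02": "port-1535-vm2"},
    5002: {"00:50:56:cc:dd:01": "port-1532-uplink"},
}

def lookup_port(vni, dst_mac):
    port_group = forwarding_table.get(vni)
    if port_group is None:
        return None                   # unknown logical network
    return port_group.get(dst_mac)    # None -> handle per flood/drop policy

print(lookup_port(5001, "00:50:56:aa:bb:02"))   # port-1535-vm2
```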
The SFE 1510 in some embodiments implements one or more logical forwarding elements (e.g., logical switches or logical routers) with SFEs executing on other hosts in a multi-host environment. A logical forwarding element in some embodiments can span multiple hosts to connect VMs that execute on different hosts but belong to one logical network. In other words, different logical forwarding elements can be defined to specify different logical networks for different users, and each logical forwarding element can be defined by multiple SFEs on multiple hosts. Each logical forwarding element isolates the traffic of the VMs of one logical network from the VMs of another logical network that is serviced by another logical forwarding element. A logical forwarding element can connect VMs executing on the same host and/or different hosts.
The SFE ports 1535 in some embodiments include one or more function calls to one or more modules that implement special input/output (I/O) operations on incoming and outgoing packets that are received at the ports. One of these function calls for a port is to an ISS filter 1530. In some embodiments, the ISS filter performs the service switch operations on outgoing data messages from the filter's VM. In the embodiments illustrated in FIG. 15, each port 1535 has its own ISS filter 1530. In other embodiments, some or all of the ports 1535 share the same ISS filter 1530 (e.g., all the ports on the same host share one ISS filter, or all ports on a host that are part of the same logical network share one ISS filter).
Examples of other I/O operations that are implemented through function calls by the ports 1535 include firewall operations, encryption operations, etc. By implementing a stack of such function calls, the ports can implement a chain of I/O operations on incoming and/or outgoing messages in some embodiments. In the example illustrated in FIG. 15, the ISS filters are called from the ports 1535 for a data message transmitted by a VM. Other embodiments call the ISS filter from the VM's VNIC or from the port 1532 of the SFE for a data message sent by the VM, or call this filter from the VM's VNIC 1525, the port 1535, or the port 1532 for a data message received for the VM (i.e., deploy the service operation call along the ingress path for a VM).
For the data messages that are sent by its associated VM, an ISS filter 1530 enforces one or more service rules that are stored in the ISS rule storage 1550. These service rules implement one or more service policies. Based on the service rules, the ISS filter (1) determines whether a sent data message should be processed by one or more service nodes or clusters, and (2) if so, selects a service node or cluster for processing the data message and forwards the data message to the selected node or cluster (e.g., through a tunnel).
In some embodiments, each service rule in the service rule storage 1550 has (1) an associated set of data message identifiers (e.g., packet header values), (2) a set of one or more actions, (3) for each action, a set of service nodes or service node clusters that perform the action, and (4) for each action, a set of load balancing criteria for selecting a service node or cluster from the rule's set of service nodes or service node clusters. As further described below, a rule in some embodiments can identify a service node or cluster by providing an identifier for the tunnel connected to the service node or cluster (e.g., from the host, or the SFE, or the ISS filter).
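The following sketch shows one possible in-memory shape for such a service rule, with a match clause, an ordered list of actions, and per-action service-node sets (identified here by tunnel identifiers) with load-balancing weights. The field names are assumptions rather than the actual schema of the rule storage 1550.

```python
# Illustrative shape of a service rule; field names are assumptions, not the
# actual schema of the ISS rule storage 1550.
service_rule = {
    "match": {"dst_ip": "10.1.2.3", "dst_port": 443, "protocol": "TCP"},
    "actions": [
        {   # first action: firewall, performed by nodes reached over two tunnels
            "type": "FIREWALL",
            "service_nodes": ["tunnel-fw-1", "tunnel-fw-2"],
            "lb_weights": [2, 1],
        },
        {   # second action: hand off to the application-server cluster
            "type": "APP_SERVER",
            "service_nodes": ["tunnel-as-1", "tunnel-as-2", "tunnel-as-3"],
            "lb_weights": [1, 1, 1],
        },
    ],
}

def matches(rule, message_headers):
    """Compare a data message's header values to the rule's identifiers."""
    return all(message_headers.get(k) == v for k, v in rule["match"].items())

msg = {"src_ip": "10.0.0.5", "dst_ip": "10.1.2.3", "dst_port": 443, "protocol": "TCP"}
assert matches(service_rule, msg)
```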
After being called to process a data message, the ISS filter 1530 in some embodiments determines whether the received data message's identifiers (e.g., five tuples) match the data message identifiers of a service rule in its service rule storage. When the received data message's header values do not match the rule-matching identifiers of any service rule in the service rule storage, the ISS filter 1530 informs the port 1535 that it has completed processing of the data message, without performing any service on the data message. The ISS filter also stores a record of this decision in its connection storage 1590. This record specifies the data message flow's identifier (e.g., its five tuple identifier) and indicates that no service action needs to be performed for this data message flow. This record can be used for quick processing of subsequent data messages of the same flow.
When a data message's header values match a service rule, the ISS filter performs the set of actions specified by the matching service rule. When the set of actions includes more than one action, the ISS filter performs the service actions sequentially. In some embodiments, a service action of a matching service rule is performed by a service node of a SN group or a SN cluster of a SN cluster group. Accordingly, to perform such a service action, the ISS filter selects a service node or cluster for processing the data message and forwards the data message to the selected node or cluster.
In some embodiments, the ISS filter 1530 forwards the data message to the selected node or cluster through a tunnel. In other embodiments, the ISS filter 1530 connects to some service nodes/clusters through tunnels, while not using tunnels to connect to other service nodes/clusters. For instance, in some embodiments, the ISS filter 1530 might use L3 or L4 destination network address translation (DNAT), or MAC redirect, to forward data messages to some of the service nodes. Also, in some embodiments, one or more service nodes might be executing on the same host computer 1500 as the ISS filter 1530, and in these embodiments the ISS filter 1530 directs the data messages to these service nodes through DNAT, MAC redirect or some other forwarding mechanism that is part of the filter framework of some embodiments. In some embodiments, service rules have identifiers that specify different re-direction mechanisms, as one rule, or different rules, can identify different service nodes or SN clusters that are accessible through different re-direction mechanisms.
When the ISS filter 1530 uses a tunnel to send a data message to a service node or cluster, the ISS filter in some embodiments encapsulates the data message with a tunnel packet header. This packet header includes a tunnel key in some embodiments. In other embodiments, the ISS filter 1530 has another I/O chain filter encapsulate the data messages with tunnel packet headers.
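The sketch below illustrates the general idea of encapsulating a relayed data message with a tunnel header that carries a tunnel key. The 8-byte header layout is a toy illustration; an actual deployment would use a standard encapsulation format.

```python
import struct

def encapsulate(inner_message: bytes, tunnel_key: int) -> bytes:
    """Prepend a toy 8-byte tunnel header that carries a 64-bit tunnel key."""
    return struct.pack("!Q", tunnel_key) + inner_message

def decapsulate(outer_message: bytes):
    """Strip the toy tunnel header and return (tunnel_key, inner_message)."""
    key, = struct.unpack("!Q", outer_message[:8])
    return key, outer_message[8:]

original = b"\x45\x00original data message bytes"
key, inner = decapsulate(encapsulate(original, tunnel_key=42))
assert key == 42 and inner == original
```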
In some embodiments, the ISS filter 1530 has to establish an L4 connection session with the service node. In some of these embodiments, the ISS filter also has to establish an L4 connection session with its VM. To establish an L4 connection session, the ISS filter performs a three-way TCP/IP handshake with the other end of the connection (e.g., with the service node or VM) in some embodiments.
As mentioned above, a matching service rule in some embodiments specifies a set of load balancing criteria for each set of service nodes or clusters that perform a service action specified by the rule. In these embodiments, the ISS filter 1530 has its associated load balancer 1515 use the rule's specified load balancing criteria to select a service node from the specified SN group, or a service cluster from the specified SN cluster group.
The load balancer distributes the data message load for performing a service action to the service nodes or the SN clusters in a load balanced manner specified by the load balancing criteria. In some embodiments, the load balancing criteria are weight values associated with the service node or SN clusters. One example of using weight values to distribute new data message flows to service nodes in a load balancing way was described above.
In some embodiments, the weight values are generated and adjusted by the agent 1520 and/or a controller set based on the load statistics. In some embodiments, each ISS filter 1530 has its own load balancer 1515, while in other embodiments, multiple ISS filters 1530 share the same load balancer 1515 (e.g., ISS filters of VMs that are part of one logical network use one load balancer 1515 on each host).
The ISS filter 1530 stores in the connection state storage 1590 data records that maintain connection state for data message flows that the ISS filter 1530 has previously processed. This connection state allows the ISS filter 1530 to distribute data messages that are part of the same flow statefully to the same service node. In some embodiments, each record in the connection storage corresponds to a data message flow that the ISS filter 1530 has previously processed.
Each record stores a description of the set of service rules that have to be applied to the flow's data messages or has a reference (e.g., a pointer) to this description. In some embodiments, when the operation of the service rule set requires the data message to be dropped, the connection-storage record also specifies this action, or specifies this action in lieu of the service rule description. Also, when no service has to be performed for data messages of this flow, the connection-storage record in some embodiments indicates that the ISS should allow the received data message to pass along the VM's egress datapath. In some embodiments, this record stores the flow's identifier (e.g., the five tuple identifiers). In addition, the connection storage is hash addressable (e.g., locations in the connection storage are identified based on a hash of the flow's identifier) in some embodiments. When the ISS filter 1530 stores an L4+ session parameter, the ISS filter 1530 in some of these embodiments stores this parameter in the connection state storage 1590.
In some embodiments, each time an ISS filter directs a message to a service node or SN cluster, the ISS filter updates the statistics that it maintains in its STAT data storage 1554 for the data traffic that it relays to the service nodes and/or clusters. Examples of such statistics include the number of data messages (e.g., number of packets), data message flows and/or data message bytes relayed to each service node or cluster. In some embodiments, the metrics can be normalized to units of time, e.g., per second, per minute, etc.
In some embodiments, the agent 1520 gathers (e.g., periodically collects) the statistics that the ISS filters store in the STAT data storages 1554, and relays these statistics to a controller set. Based on statistics that the controller set gathers from various agents 1520 of various hosts, the controller set (1) distributes the aggregated statistics to each host's agent 1520 so that each agent can define and/or adjust the load balancing criteria for the load balancers on its host, and/or (2) analyzes the aggregated statistics to specify and distribute some or all of the load balancing criteria to the hosts. In some embodiments where the controller set generates the load balancing criteria from the aggregated statistics, the controller set distributes the generated load balancing criteria to the agents 1520 of the hosts.
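One simple way the agent 1520 or the controller set could turn the aggregated load statistics into adjusted weight values is to make each service node's weight inversely proportional to its observed load, as in the sketch below; the formula is an illustrative assumption, not a prescribed algorithm.

```python
def weights_from_load(load_by_node, floor=0.1):
    """Derive normalized weight values that are inversely proportional to each
    service node's observed load (e.g. flows or bytes per second); 'floor'
    prevents division by zero for idle nodes."""
    inverse = {node: 1.0 / max(load, floor) for node, load in load_by_node.items()}
    total = sum(inverse.values())
    return {node: value / total for node, value in inverse.items()}

# Hypothetical aggregated statistics gathered from the hosts' STAT storages.
print(weights_from_load({"sn-1": 900.0, "sn-2": 300.0, "sn-3": 300.0}))
```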
In the embodiments where the agent 1520 receives new load balancing criteria or new ISS rules from the controller set, the agent 1520 stores these criteria or new rules in the host-level rule storage 1588 for propagation to the ISS rule storages 1550. In the embodiments where the agent 1520 receives aggregated statistics from the controller set, the agent 1520 stores the aggregated statistics in the global statistics data storage 1586. In some embodiments, the agent 1520 analyzes the aggregated statistics in this storage 1586 to define and/or adjust the load balancing criteria (e.g., weight values), which it then stores in the rule storage 1588 for propagation to the ISS rule storages 1550. The publisher 1522 retrieves each service rule and/or updated load balancing criteria that the agent 1520 stores in the rule storage 1588, and stores the retrieved rule or criteria in the ISS rule storage 1550 of each ISS filter that needs to enforce this rule or criteria.
The agent 1520 not only propagates service rule updates based on newly received aggregated statistics, but it also propagates service rules or updates service rules based on updates to SN group or cluster group that it receives from the controller set. Again, the agent 1520 stores such updated rules in the rule data storage 1588, from where the publisher propagates them to ISS rule storages 1550 of the ISS filters 1530 that need to enforce these rules. In some embodiments, the controller set provides the ISS agent 1520 with high level service policies that the ISS agent converts into service rules for the ISS filters to implement. In some embodiments, the agent 1520 communicates with the controller set through an out-of-band control channel.
Some embodiments provide a controller-driven method for reconfiguring the application or service layer deployment in a datacenter. In some embodiments, the controller set 120 provides a host computer with parameters for establishing several tunnels, each between the host computer and a service node that can be in the same datacenter as the host computer or can be at a different location as the datacenter. The provided tunnel-establishing parameters include tunnel header packet parameters in some embodiments. These parameters in some embodiments also include tunnel keys, while in other embodiments, these parameters include parameters for generating the tunnel keys. Tunnel keys are used in some embodiments to allow multiple different data message flows to use one tunnel from a host to a service node. In some embodiments, establishing a tunnel entails configuring modules at the tunnel endpoints with provisioned tunnel parameters (e.g., tunnel header parameters, tunnel keys, etc.).
In some embodiments, the tunnels connect the host computer with several service nodes of one or more service providers that operate in the same datacenter or outside of the datacenter. In some deployments, only one tunnel is established between each host and a service node and all ISS filters on the host use the same tunnel for relaying data messages to the service node. This is done to reduce the number of tunnels. This approach can be viewed as establishing one tunnel between the host's SFE and the service node. In other deployments, more than one tunnel is established between a host and a service node. For instance, in some deployments, one tunnel is established between each ISS filter on the host and the service node.
In some embodiments, the controller set 120 defines data-message distribution rules for SCNs in the datacenter, and pushes these rules to the ISS filters of the SCNs. The ISS filters then distribute the data messages to the data compute nodes (DCNs) that are identified by the distribution rules as the DCNs for the data messages. In other embodiments, the controller set 120 defines data-message distribution policies for SCNs in the datacenter, and pushes these policies to the hosts that execute the SCNs. The hosts then generate distribution rules from these policies and configure their ISS filters based on these rules.
In some embodiments, a distribution rule includes (1) a rule identifier that is used to identify data message flows that match the rule, and (2) a set of service actions for data message flows that match the rule. In some embodiments, the rule identifier can be defined in terms of group identifiers (such as virtual IP addresses (VIPs)) or metadata tags assigned by application level gateways.
In some embodiments, each service action of a rule is defined by reference to an identifier that identifies a set of service nodes for performing the service action. Some rules can specify two or more service actions that are performed by two or more sets of service nodes of two or more service providers. In some embodiments, each service-node set is a service node cluster and is defined in the rule by reference to a set of tunnel identifiers (1) that identifies one tunnel to the service node cluster, or (2) that identifies one tunnel to each service node in the service-node cluster.
For each service action of the rule, a distribution rule also includes a set of selection criteria. In some embodiments, the selection criteria set includes one or more criteria that are dynamically assessed (e.g., based on the identity of SCNs executing on the host, etc.). In some embodiments, the selection criteria set is a load balancing criteria set that specifies criteria for distributing new data message flows amongst the service nodes that perform the service action.
This controller-driven method can seamlessly reconfigure the application or service layer deployment in the datacenter without having to configure the SCNs to use new group addresses or tags (e.g., new VIPs). The controller set only needs to provide the inline switches with new distribution rules that dictate new traffic distribution patterns based on previously configured group addresses or tags. In some embodiments, the seamless reconfiguration can be based on arbitrary packet header parameters (e.g., L2, L3, L4 or L7 parameters) that are used by the SCNs. In other words, these packet header parameters in some cases would not have to include group addresses or tags.
As mentioned above, the inline switches in some embodiments can be configured to distribute data messages based on metadata tags that are associated with the packets, and injected into the packets (e.g., as L7 parameters) by application level gateways (ALGs). For example, as ALGs are configured to inspect and tag packets as the packets enter a network domain (e.g., a logical domain), the controller set in some embodiments is configured to push new distribution policies and/or rules to the inline switches that configure these switches to implement new application or service layer deployment in the network domain.
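The sketch below illustrates how an inline switch could match flows on either a previously configured VIP or an ALG-injected metadata tag when classifying a data message against its distribution rules; the tag name and rule layout are hypothetical.

```python
# A distribution rule's identifier may be a group identifier (e.g. a VIP) or a
# metadata tag injected by an ALG; both the tag and the rule layout are hypothetical.
rules = [
    {"match": {"tag": "inspected-web"}, "action": ("FIREWALL", ["fw-1", "fw-2"])},
    {"match": {"vip": "10.1.2.3"},      "action": ("APP_SERVER", ["as-1", "as-2"])},
]

def classify(message_attributes):
    for rule in rules:
        if all(message_attributes.get(k) == v for k, v in rule["match"].items()):
            return rule["action"]
    return None     # no service action; the message continues along its datapath

print(classify({"vip": "10.1.2.3"}))                          # routed to app servers
print(classify({"vip": "10.9.9.9", "tag": "inspected-web"}))  # routed to firewalls
```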
The controller-driven method of some embodiments will now be described by reference to FIGS. 16-19. FIG. 16 illustrates an example of a controller re-configuring the application layer deployment to insert a firewall service operation between a set of webservers 1605 and a set of application servers 1610. This figure illustrates a datacenter that implements a three-layer server deployment, in which the first layer includes one or more webservers 1605, the second layer includes one or more application servers 1610, and the third layer includes one or more database servers 1615.
As shown, a controller 1620 initially configures the inline switches 1630 of the webservers 1605 with message distribution rules that direct the switches to forward received packet flows that have a particular VIP (VIP1) as their destination IP address to the application servers. FIG. 16 illustrates an example of this rule 1650. As shown, this rule specifies VIP1 as a flow-matching attribute, AS (application server) type as the action type to perform, and the IP address set 1 as the set of IP addresses of the application servers 1610.
A time period after initially configuring the inline switches 1630, the controller 1620 re-configures these switches 1630 with new packet distribution rules 1655 that direct the switches (1) to first forward such a packet flow (i.e., a packet flow with VIP1 for their destination IP address) to a set of firewall servers 1625, and then (2) if the firewall servers do not direct the webservers to drop the packet flow, to forward the packets of this packet flow to the application servers 1610. As shown, each rule 1655 specifies (1) VIP1 as a flow-matching attribute, (2) FW (firewall) type as the first action's type, (3) the IP address set 2 as the set of IP addresses of the firewall servers 1625, (4) AS (application server) type as the second action's type, and (5) the IP address set 1 as the set of IP addresses of the application servers 1610.
In some embodiments, the new packet distribution rule that the controller 1620 provides to the webservers switches 1630 specifies, for flows with VIP1 destination IP, a service policy chain that (1) first identifies a firewall operation and then (2) identifies an application-level operation. This new rule replaces a prior rule that only specifies for flows with VIP1 destination IP the application-level operation.
In some embodiments, for each operation that the rule specifies, the rule includes, or refers to, (1) identifiers (e.g., IP addresses, tunnel identifiers, etc.) of a set of servers that perform that operation, and (2) load balancing criteria for distributing different flows to different servers in the set. In directing the data messages to the firewalls 1625, the inline switches perform load-balancing operations based on the load balancing criteria to spread the packet flow load among the firewalls 1625. In some embodiments, the controller 1620 configures the inline switches 1630 with multiple different rules for multiple different VIPs that are associated with multiple different service policy sets.
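Viewed as configuration data, the reconfiguration of FIG. 16 amounts to replacing a one-action rule with a two-action service chain for the same VIP, as in the sketch below; the IP addresses stand in for the unspecified IP address sets 1 and 2.

```python
# Before reconfiguration: flows addressed to VIP1 go straight to the application servers.
rule_1650 = {
    "match_vip": "VIP1",
    "actions": [{"type": "AS", "ip_set": ["10.2.0.1", "10.2.0.2"]}],   # IP address set 1
}

# After reconfiguration: the same flows are first load balanced across the firewalls.
rule_1655 = {
    "match_vip": "VIP1",
    "actions": [
        {"type": "FW", "ip_set": ["10.3.0.1", "10.3.0.2"]},            # IP address set 2
        {"type": "AS", "ip_set": ["10.2.0.1", "10.2.0.2"]},            # IP address set 1
    ],
}

def action_chain(rule):
    return [action["type"] for action in rule["actions"]]

assert action_chain(rule_1650) == ["AS"]
assert action_chain(rule_1655) == ["FW", "AS"]
```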
In the example of FIG. 16, the controller re-configures the webservers 1605 (1) to direct a packet flow with VIP1 as the destination IP address to the firewall servers, and then, after receiving the firewall servers' assessment as to whether the packet flow should not be dropped, (2) to forward the packets for this flow to the application server. FIG. 17 illustrates that in other embodiments, the controller 1720 (1) re-configures the inline switches 1730 of the webservers 1705 to forward all packets with the destination IP address VIP1 to the firewall servers 1725, and (2) configures the firewall servers 1725 to forward these packets directly to the application servers 1710 if the firewall servers 1725 determine that the packets should not be dropped.
As shown, the controller 1720 initially configures the inline switches with the rule 1650, which was described above. The controller then re-configures the inline switches with the rule 1755, which specifies (1) VIP1 as a flow-matching attribute, (2) FW (firewall) type as the action type, and (3) the IP address set 2 as the set of IP addresses of the firewall servers 1725. In the example of FIG. 17, the controller then configures the firewall servers 1725 to forward any passed-through packets directly to the application servers 1710. In some of these embodiments, the controller configures the firewall servers by configuring the inline switches that are placed in the egress paths of the firewall servers to forward the firewall processed packets to the application servers 1710.
FIG. 18 illustrates a process 1800 that a controller 1620 performs to define the service policy rules for an inline switch of a VM that is being provisioned on a host. As shown, the process 1800 initially identifies (at 1805) a new inline switch to configure. Next, at 1810, the process selects a virtual identifier (e.g., a VIP, a virtual address, etc.) that may be used to identify DCN groups or security policies/rules in packet flows that the inline switch may receive.
At 1815, the process 1800 identifies a service policy set that is associated with the selected virtual identifier. A service policy set specifies one or more service actions that need to be performed for packet flows that are associated with the selected virtual identifier. The process then defines (at 1820) a service rule for the identified service policy set. For each service action in the service policy set, the service rule specifies a set of service nodes or service-node clusters that performs the service action.
At 1825, the process then selects a service action in the identified service policy set. Next, at 1830, the process generates and stores in the defined rule (i.e., the rule defined at 1820) load balancing criteria for the set of service nodes or service-node clusters that perform the selected service action. The process generates the load balancing criteria based on the membership of the set of service nodes or service-node clusters, and statistics regarding the packet flow load on the service-node or service-cluster set that the controller collects from the inline switches.
At 1835, the process determines whether it has examined all the service actions in the identified service policy set. If not, the process selects (at 1840) another service action in the identified service policy set, and then transitions back to 1830 to generate and store load balancing criteria for the set of service nodes or service-node clusters that perform the selected service action. When the process determines that it has examined all the service actions in the identified service policy set, the process determines (at 1845) whether it has processed all virtual identifiers that may be used to identify DCN groups or security policies/rules in packet flows that the inline switch may receive.
If not, the process selects (at 1850) another virtual identifier that may be used to identify DCN groups or security policies/rules in packet flows that the inline switch may receive. After 1850, the process returns to 1815 to repeat operations 1815-1850 for the selected virtual identifier. When the process determines (at 1845) that it has examined all virtual identifiers for the inline switch, it ends.
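Operations 1805-1850 of process 1800 can be summarized as the nested loops in the following sketch, in which the callables stand in for the controller's policy store, group-membership records, and collected statistics; it is a schematic rendering, not the controller's actual code.

```python
def configure_inline_switch(virtual_ids, policy_for, nodes_for, load_of):
    """Schematic rendering of process 1800: for every virtual identifier, build one
    service rule whose per-action load-balancing weights are derived from collected
    load statistics. The callables stand in for the controller's data sources."""
    rules = []
    for vid in virtual_ids:                                  # 1810 / 1845 / 1850
        rule = {"match": vid, "actions": []}                 # 1815 / 1820
        for action in policy_for(vid):                       # 1825 / 1835 / 1840
            nodes = nodes_for(action)
            inverse = {n: 1.0 / max(load_of(n), 0.1) for n in nodes}
            total = sum(inverse.values())
            rule["actions"].append({                         # 1830
                "type": action,
                "nodes": nodes,
                "weights": {n: w / total for n, w in inverse.items()},
            })
        rules.append(rule)
    return rules

# Toy inputs: one VIP whose service policy chains a firewall action before the app tier.
rules = configure_inline_switch(
    ["VIP1"],
    policy_for=lambda vid: ["FW", "AS"],
    nodes_for=lambda action: {"FW": ["fw-1", "fw-2"], "AS": ["as-1", "as-2"]}[action],
    load_of=lambda node: {"fw-1": 100.0, "fw-2": 300.0, "as-1": 200.0, "as-2": 200.0}[node],
)
print(rules[0]["actions"][0]["weights"])
```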
In process 1800, a service policy set is associated with a virtual identifier that may be used in a packet flow that an inline switch may receive. In other embodiments, the controller can define a service rule for a service policy set that is associated with a set of two or more virtual identifiers (e.g., a VIP and an L7 tag), or with a virtual identifier and one or more other packet header values (e.g., source IP address, source port address, etc.). More generally, the controller in some embodiments can define a service rule that defines one or more service actions to implement a service policy set and can associate this service rule with any arbitrary combination of physical and/or virtual packet header values.
In this manner, a controller in some embodiments can seamlessly reconfigure the application or service layer deployment in the datacenter without having to configure the SCNs to use new DCN group addresses (e.g., new VIPs). The controller only needs to provide the inline switches with new distribution rules that dictate new traffic distribution patterns based on previously configured DCN group addresses and/or based on any arbitrary packet header parameters (e.g., L2, L3, L4 or L7 parameters) that are used by the SCNs.
FIG. 19 illustrates a process 1900 for modifying a service rule and reconfiguring inline service switches that implement this service rule. This process is performed by each controller in a set of one or more controllers in some embodiments. As shown, the process 1900 starts (at 1905) when it receives a modification to a service policy set for which the controller set has previously generated a service rule and distributed this service rule to a set of one or more inline switches that implements the service policy set. The received modification may involve the removal of one or more service actions from the service policy set or the addition of one or more service actions to the service policy set. Alternatively or conjunctively, the received modification may involve the reordering of one or more service actions in the service policy set.
Next, at 1910, the process 1900 changes the service action chain in the service rule to account for the received modification. This change may insert one or more service actions in the rule's action chain, may remove one or more service actions from the rule's action chain, or may reorder one or more service actions in the rule's action chain. In some embodiments, a service rule specifies a service action chain by specifying (1) two or more service action types and (2) for each service action type, specifying a set of IP addresses that identify a set of service nodes or service-node clusters that perform the service action type. Each service rule in some embodiments also specifies a set of load balancing criteria for each action type's set of IP addresses.
For each new service action in the service action chain, the process 1900 then defines (at 1915) the set of load balancing criteria (e.g., a set of weight values for a weighted, round-robin load balancing scheme). In some embodiments, the process generates the load balancing criteria set based on (1) the membership of the set of service nodes or service-node clusters that perform the service action, and (2) statistics regarding the packet flow load on the service-node or service-cluster set that the controller collects from the inline switches.
Lastly, at 1920, the process distributes the modified service rule to the hosts that execute the inline service switches that process the service rule. These are the inline service switches that may encounter packets associated with the modified service rule. After 1920, the process ends.
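Operations 1910 and 1915 of process 1900 can be pictured as in the sketch below, which replaces a rule's action chain and generates load-balancing criteria only for newly added actions; the helper names are hypothetical.

```python
def apply_modification(rule, new_chain, criteria_for):
    """Schematic rendering of operations 1910-1915: replace the rule's service-action
    chain with 'new_chain' (which may add, remove, or reorder actions) and generate
    load-balancing criteria only for newly added actions. 'criteria_for' stands in
    for the statistics-driven weight generation."""
    existing = {action["type"]: action for action in rule["actions"]}
    rule["actions"] = [
        existing[t] if t in existing else {"type": t, "weights": criteria_for(t)}
        for t in new_chain
    ]
    return rule

rule = {"match": "VIP1", "actions": [{"type": "AS", "weights": {"as-1": 1.0}}]}
apply_modification(rule, ["FW", "AS"], criteria_for=lambda t: {"fw-1": 0.5, "fw-2": 0.5})
print([action["type"] for action in rule["actions"]])   # ['FW', 'AS']
```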
Many of the above-described features and applications are implemented as software processes that are specified as a set of instructions recorded on a computer readable storage medium (also referred to as computer readable medium). When these instructions are executed by one or more processing unit(s) (e.g., one or more processors, cores of processors, or other processing units), they cause the processing unit(s) to perform the actions indicated in the instructions. Examples of computer readable media include, but are not limited to, CD-ROMs, flash drives, RAM chips, hard drives, EPROMs, etc. The computer readable media does not include carrier waves and electronic signals passing wirelessly or over wired connections.
In this specification, the term “software” is meant to include firmware residing in read-only memory or applications stored in magnetic storage, which can be read into memory for processing by a processor. Also, in some embodiments, multiple software inventions can be implemented as sub-parts of a larger program while remaining distinct software inventions. In some embodiments, multiple software inventions can also be implemented as separate programs. Finally, any combination of separate programs that together implement a software invention described here is within the scope of the invention. In some embodiments, the software programs, when installed to operate on one or more electronic systems, define one or more specific machine implementations that execute and perform the operations of the software programs.
FIG. 20 conceptually illustrates an electronic system 2000 with which some embodiments of the invention are implemented. The electronic system 2000 can be used to execute any of the control, virtualization, or operating system applications described above. The electronic system 2000 may be a computer (e.g., a desktop computer, personal computer, tablet computer, server computer, mainframe, a blade computer etc.), phone, PDA, or any other sort of electronic device. Such an electronic system includes various types of computer readable media and interfaces for various other types of computer readable media. Electronic system 2000 includes a bus 2005, processing unit(s) 2010, a system memory 2025, a read-only memory 2030, a permanent storage device 2035, input devices 2040, and output devices 2045.
The bus 2005 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of the electronic system 2000. For instance, the bus 2005 communicatively connects the processing unit(s) 2010 with the read-only memory 2030, the system memory 2025, and the permanent storage device 2035.
From these various memory units, the processing unit(s) 2010 retrieves instructions to execute and data to process in order to execute the processes of the invention. The processing unit(s) may be a single processor or a multi-core processor in different embodiments.
The read-only-memory (ROM) 2030 stores static data and instructions that are needed by the processing unit(s) 2010 and other modules of the electronic system. The permanent storage device 2035, on the other hand, is a read-and-write memory device. This device is a non-volatile memory unit that stores instructions and data even when the electronic system 2000 is off. Some embodiments of the invention use a mass-storage device (such as a magnetic or optical disk and its corresponding disk drive) as the permanent storage device 2035.
Other embodiments use a removable storage device (such as a floppy disk, flash drive, etc.) as the permanent storage device. Like the permanent storage device 2035, the system memory 2025 is a read-and-write memory device. However, unlike storage device 2035, the system memory is a volatile read-and-write memory, such as a random access memory. The system memory stores some of the instructions and data that the processor needs at runtime. In some embodiments, the invention's processes are stored in the system memory 2025, the permanent storage device 2035, and/or the read-only memory 2030. From these various memory units, the processing unit(s) 2010 retrieves instructions to execute and data to process in order to execute the processes of some embodiments.
The bus 2005 also connects to the input and output devices 2040 and 2045. The input devices enable the user to communicate information and select commands to the electronic system. The input devices 2040 include alphanumeric keyboards and pointing devices (also called “cursor control devices”). The output devices 2045 display images generated by the electronic system. The output devices include printers and display devices, such as cathode ray tubes (CRT) or liquid crystal displays (LCD). Some embodiments include devices such as a touchscreen that function as both input and output devices.
Finally, as shown in FIG. 20, bus 2005 also couples electronic system 2000 to a network 2065 through a network adapter (not shown). In this manner, the computer can be a part of a network of computers (such as a local area network (“LAN”), a wide area network (“WAN”), or an Intranet), or a network of networks (such as the Internet). Any or all components of electronic system 2000 may be used in conjunction with the invention.
Some embodiments include electronic components, such as microprocessors, storage and memory that store computer program instructions in a machine-readable or computer-readable medium (alternatively referred to as computer-readable storage media, machine-readable media, or machine-readable storage media). Some examples of such computer-readable media include RAM, ROM, read-only compact discs (CD-ROM), recordable compact discs (CD-R), rewritable compact discs (CD-RW), read-only digital versatile discs (e.g., DVD-ROM, dual-layer DVD-ROM), a variety of recordable/rewritable DVDs (e.g., DVD-RAM, DVD-RW, DVD+RW, etc.), flash memory (e.g., SD cards, mini-SD cards, micro-SD cards, etc.), magnetic and/or solid state hard drives, read-only and recordable Blu-Ray® discs, ultra density optical discs, any other optical or magnetic media, and floppy disks. The computer-readable media may store a computer program that is executable by at least one processing unit and includes sets of instructions for performing various operations. Examples of computer programs or computer code include machine code, such as is produced by a compiler, and files including higher-level code that are executed by a computer, an electronic component, or a microprocessor using an interpreter.
While the above discussion primarily refers to microprocessor or multi-core processors that execute software, some embodiments are performed by one or more integrated circuits, such as application specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs). In some embodiments, such integrated circuits execute instructions that are stored on the circuit itself.
As used in this specification, the terms “computer”, “server”, “processor”, and “memory” all refer to electronic or other technological devices. These terms exclude people or groups of people. For the purposes of the specification, the terms display or displaying means displaying on an electronic device. As used in this specification, the terms “computer readable medium,” “computer readable media,” and “machine readable medium” are entirely restricted to tangible, physical objects that store information in a form that is readable by a computer. These terms exclude any wireless signals, wired download signals, and any other ephemeral signals.
While the invention has been described with reference to numerous specific details, one of ordinary skill in the art will recognize that the invention can be embodied in other specific forms without departing from the spirit of the invention. For instance, in several embodiments described above, the inline switches intercept the data messages along the egress datapath of the SCNs. In other embodiments, however, the inline switches intercept the data messages along the ingress datapath of the SCNs.
In addition, a number of the figures conceptually illustrate processes. The specific operations of these processes may not be performed in the exact order shown and described. The specific operations may not be performed in one continuous series of operations, and different specific operations may be performed in different embodiments. Furthermore, the process could be implemented using several sub-processes, or as part of a larger macro process. Thus, one of ordinary skill in the art would understand that the invention is not to be limited by the foregoing illustrative details, but rather is to be defined by the appended claims.

Claims (22)

The invention claimed is:
1. A non-transitory machine readable medium storing an inline service switch for execution on a host computer on which a source virtual machine (SVM) also executes, the inline service switch for seamlessly distributing data messages in a datacenter, the inline service switch comprising sets of instructions for:
identifying, seamlessly at the inline service switch on the host computer without requiring additional configuration at the SVM, a first data message along an egress datapath of the SVM on which data messages transmitted by the SVM are sent out of the host computer;
establishing a connection session with a service node that needs to receive a data message flow associated with the first data message;
extracting and storing a session parameter from a datagram of a second data message that is provided by the SVM or the service node during the connection session; and
using the stored session parameter to relay subsequent data messages from the SVM to the service node.
2. The non-transitory machine readable medium of claim 1, wherein:
the connection session is a Layer 4 (L4) connection session; and
the second data message is a packet comprising a packet header with Layer 3 (L3) and L4 parameters and a payload that is the datagram from which the session parameter is extracted.
3. The non-transitory machine readable medium of claim 2, wherein the session parameter is a session identifier.
4. The non-transitory machine readable medium of claim 3, wherein the session identifier is provided by the service node and the second data message is a message sent by the service node.
5. The non-transitory machine readable medium of claim 3, wherein:
the session identifier is a secure session identifier that is based on a session key generated by the service node; and
the set of instructions for using the stored session parameter comprises a set of instructions for providing the secure session identifier for subsequent data messages in order to allow the service node to forego generating another session key.
6. The non-transitory machine readable medium of claim 2, wherein the session parameter is a filename and the second data message is a message sent by the SVM.
7. The non-transitory machine readable medium of claim 6, wherein the set of instructions for extracting the filename comprises a set of instructions for extracting the filename from a Uniform Resource Identifier (URI) that is specified in the second data message.
8. The non-transitory machine readable medium of claim 1, wherein the first data message is a request from the SVM to establish a connection session with the service node.
9. The non-transitory machine readable medium of claim 1, wherein the set of instructions for establishing a connection session comprises a set of instructions for performing a three-way Transport Control Protocol/Internet Protocol (TCP/IP) handshake with the service node.
10. The non-transitory machine readable medium of claim 9, wherein:
the second data message comprises a header and a payload;
the connection session is established so that for the second data message, the inline service switch extracts the header, examines the payload to extract the session parameter, and re-encapsulates the datagram with another header before relaying the packet to the SVM or the service node; and
the session parameter is extracted from the examined payload.
11. The non-transitory machine readable medium of claim 1, wherein the SVM is a virtual machine or a container.
12. The non-transitory machine readable medium of claim 1, wherein the set of instructions for extracting the session parameter comprises a set of instructions for extracting the session parameter from a plurality of datagrams of a plurality of data messages including the second data message, said plurality of datagrams exchanged between the SVM and the service node during the connection session.
13. The non-transitory machine readable medium of claim 1:
wherein the service node is a first service node and the first data message is a service request for a service action performed by a group of service nodes including the first service node;
wherein the inline service switch further comprises a set of instructions for selecting, for each service request for the service action, a service node in the service node group;
wherein the set of instructions for using the stored session parameter comprises sets of instructions for extracting the session parameter from a datagram of a subsequently received data message, identifying the first service node as a service node that previously processed a similar service request, and forwarding a subsequent service request associated with the subsequently received data message to the first service node.
14. The non-transitory machine readable medium of claim 1, wherein the set of instructions for selecting the service node comprises a set of instructions for selecting a service node in the service node group based on a set of load balancing criteria and based on stored session parameters.
15. The non-transitory machine readable medium of claim 1, wherein the identified first data message is transmitted by a virtual network interface (VNIC) of the SVM.
16. A non-transitory machine readable medium storing a service processing module for execution on a host computer on which a source compute node (SCN) also executes, the service processing module comprising sets of instructions for:
identifying a first data message along an egress datapath of the SCN on which data messages transmitted by the SCN are sent out of the host computer;
establishing a connection session with the SCN by performing a three-way Transport Control Protocol/Internet Protocol (TCP/IP) handshake with the SCN;
establishing a connection session with a service node that needs to receive a data message flow associated with the first data message;
after establishing the connection session with the SCN, extracting and storing a session parameter from a payload of a second data message that is sent by the SCN, said payload being after the Layer 3 and Layer 4 headers in the second data message; and
using the stored session parameter to forward subsequent data messages from the SCN to the service node.
17. A method of performing a service on data messages associated with a source compute node (SCN) executing on a host computer, the method comprising:
at a service processing module executing on the host computer:
identifying a first data message along an egress datapath of the SCN on which data messages transmitted by the SCN are sent out of the host computer, wherein the SCN is the source of the first data message;
establishing a layer-4 connection session with a service node that needs to receive a data message flow associated with the first data message;
extracting and storing a session parameter from a datagram of a second data message that is provided by the SCN or the service node during the connection session; and
using the stored session parameter to relay subsequent data messages to the service node.
18. The method of claim 17, wherein:
the second data message is a data packet with a packet header and a payload;
the connection session is established so that for the data packet, the service processing module extracts the packet header, examines the payload, and re-encapsulates the payload with another packet header before relaying the packet to the SCN or the service node; and
the session parameter is extracted from the examined payload.
19. The method of claim 17, wherein the service processing module identifies the first data message before the data message reaches a software forwarding element on the host computer.
20. A method of forwarding data messages associated with a source compute node (SCN) executing on a host computer, the method comprising:
at a service processing module executing on the host computer:
on the egress datapath of the SCN along which data messages transmitted by the SCN are sent out of the host computer, identifying a first data message transmitted by a virtual network interface of the SCN;
establishing a connection session with a service node that needs to receive a data message flow associated with the first data message;
performing three-way Transport Control Protocol/Internet Protocol (TCP/IP) handshakes with the SCN and the service node to establish connection sessions with the SCN and the service node;
after establishing connection sessions with the SCN and the service node, extracting and storing a session parameter from a payload of a second data message that is provided by the SCN or the service node during the connection sessions, said payload being after the Layer 3 and Layer 4 headers in the second data message; and
using the stored session parameter to forward subsequent data messages associated with the SCN to the service node.
21. The method of claim 20, wherein the subsequent data messages are messages sent by the SCN.
22. The method of claim 20, wherein the subsequent data messages are messages sent to the SCN.
US14/841,654 2014-09-30 2015-08-31 Sticky service sessions in a datacenter Active 2039-07-02 US11496606B2 (en)

Priority Applications (6)

Application Number Priority Date Filing Date Title
US14/841,654 US11496606B2 (en) 2014-09-30 2015-08-31 Sticky service sessions in a datacenter
EP15782148.9A EP3202109B1 (en) 2014-09-30 2015-09-30 Inline service switch
CN202010711875.8A CN112291294A (en) 2014-09-30 2015-09-30 Inline service switch
CN201580057270.9A CN107005584B (en) 2014-09-30 2015-09-30 Method, apparatus, and storage medium for inline service switch
PCT/US2015/053332 WO2016054272A1 (en) 2014-09-30 2015-09-30 Inline service switch
US17/976,783 US20230052818A1 (en) 2014-09-30 2022-10-29 Controller driven reconfiguration of a multi-layered application or service model

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US201462058044P 2014-09-30 2014-09-30
US201462083453P 2014-11-24 2014-11-24
US201462086136P 2014-12-01 2014-12-01
US201562142876P 2015-04-03 2015-04-03
US14/841,654 US11496606B2 (en) 2014-09-30 2015-08-31 Sticky service sessions in a datacenter

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/976,783 Continuation US20230052818A1 (en) 2014-09-30 2022-10-29 Controller driven reconfiguration of a multi-layered application or service model

Publications (2)

Publication Number Publication Date
US20160094661A1 US20160094661A1 (en) 2016-03-31
US11496606B2 true US11496606B2 (en) 2022-11-08

Family

ID=55585627

Family Applications (6)

Application Number Title Priority Date Filing Date
US14/841,648 Active 2036-07-01 US10129077B2 (en) 2014-09-30 2015-08-31 Configuring and operating a XaaS model in a datacenter
US14/841,647 Active 2036-08-11 US10225137B2 (en) 2014-09-30 2015-08-31 Service node selection by an inline service switch
US14/841,654 Active 2039-07-02 US11496606B2 (en) 2014-09-30 2015-08-31 Sticky service sessions in a datacenter
US14/841,649 Active 2039-07-26 US11296930B2 (en) 2014-09-30 2015-08-31 Tunnel-enabled elastic service model
US14/841,659 Active 2037-02-13 US10516568B2 (en) 2014-09-30 2015-08-31 Controller driven reconfiguration of a multi-layered application or service model
US17/976,783 Pending US20230052818A1 (en) 2014-09-30 2022-10-29 Controller driven reconfiguration of a multi-layered application or service model

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US14/841,648 Active 2036-07-01 US10129077B2 (en) 2014-09-30 2015-08-31 Configuring and operating a XaaS model in a datacenter
US14/841,647 Active 2036-08-11 US10225137B2 (en) 2014-09-30 2015-08-31 Service node selection by an inline service switch

Family Applications After (3)

Application Number Title Priority Date Filing Date
US14/841,649 Active 2039-07-26 US11296930B2 (en) 2014-09-30 2015-08-31 Tunnel-enabled elastic service model
US14/841,659 Active 2037-02-13 US10516568B2 (en) 2014-09-30 2015-08-31 Controller driven reconfiguration of a multi-layered application or service model
US17/976,783 Pending US20230052818A1 (en) 2014-09-30 2022-10-29 Controller driven reconfiguration of a multi-layered application or service model

Country Status (4)

Country Link
US (6) US10129077B2 (en)
EP (1) EP3202109B1 (en)
CN (2) CN112291294A (en)
WO (1) WO2016054272A1 (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11609781B2 (en) 2019-02-22 2023-03-21 Vmware, Inc. Providing services with guest VM mobility
US11659061B2 (en) 2020-01-20 2023-05-23 Vmware, Inc. Method of adjusting service function chains to improve network performance
US11722559B2 (en) 2019-10-30 2023-08-08 Vmware, Inc. Distributed service chain across multiple clouds
US11734043B2 (en) 2020-12-15 2023-08-22 Vmware, Inc. Providing stateful services in a scalable manner for machines executing on host computers
US11750476B2 (en) 2017-10-29 2023-09-05 Nicira, Inc. Service operation chaining
US11805036B2 (en) 2018-03-27 2023-10-31 Nicira, Inc. Detecting failure of layer 2 service using broadcast messages
US11805056B2 (en) 2013-05-09 2023-10-31 Nicira, Inc. Method and system for service switching using service tags

Families Citing this family (86)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10129077B2 (en) 2014-09-30 2018-11-13 Nicira, Inc. Configuring and operating a XaaS model in a datacenter
US9531590B2 (en) 2014-09-30 2016-12-27 Nicira, Inc. Load balancing across a group of load balancers
US10135737B2 (en) 2014-09-30 2018-11-20 Nicira, Inc. Distributed load balancing systems
US10320921B2 (en) 2014-12-17 2019-06-11 Vmware, Inc. Specializing virtual network device processing to bypass forwarding elements for high packet rate applications
US9699060B2 (en) * 2014-12-17 2017-07-04 Vmware, Inc. Specializing virtual network device processing to avoid interrupt processing for high packet rate applications
US10594743B2 (en) 2015-04-03 2020-03-17 Nicira, Inc. Method, apparatus, and system for implementing a content switch
US9825851B2 (en) 2015-06-27 2017-11-21 Nicira, Inc. Distributing routing information in a multi-datacenter environment
US10033647B2 (en) 2015-10-13 2018-07-24 Oracle International Corporation System and method for efficient network isolation and load balancing in a multi-tenant cluster environment
US10986039B2 (en) * 2015-11-11 2021-04-20 Gigamon Inc. Traffic broker for routing data packets through sequences of in-line tools
US9716617B1 (en) * 2016-06-14 2017-07-25 ShieldX Networks, Inc. Dynamic, load-based, auto-scaling network security microservices architecture
US10142356B2 (en) 2016-07-29 2018-11-27 ShieldX Networks, Inc. Channel data encapsulation system and method for use with client-server data channels
US10313362B2 (en) * 2016-07-29 2019-06-04 ShieldX Networks, Inc. Systems and methods for real-time configurable load determination
US10320572B2 (en) 2016-08-04 2019-06-11 Microsoft Technology Licensing, Llc Scope-based certificate deployment
US10333959B2 (en) 2016-08-31 2019-06-25 Nicira, Inc. Use of public cloud inventory tags to configure data compute node for logical network
US10397136B2 (en) 2016-08-27 2019-08-27 Nicira, Inc. Managed forwarding element executing in separate namespace of public cloud data compute node than workload application
US10673893B2 (en) * 2016-08-31 2020-06-02 International Business Machines Corporation Isolating a source of an attack that originates from a shared computing environment
US11824863B2 (en) * 2016-11-03 2023-11-21 Nicira, Inc. Performing services on a host
CN108111469B (en) * 2016-11-24 2020-06-02 阿里巴巴集团控股有限公司 Method and device for establishing security channel in cluster
US10523568B2 (en) * 2016-12-09 2019-12-31 Cisco Technology, Inc. Adaptive load balancing for application chains
US10530747B2 (en) * 2017-01-13 2020-01-07 Citrix Systems, Inc. Systems and methods to run user space network stack inside docker container while bypassing container Linux network stack
US20180375762A1 (en) * 2017-06-21 2018-12-27 Microsoft Technology Licensing, Llc System and method for limiting access to cloud-based resources including transmission between l3 and l7 layers using ipv6 packet with embedded ipv4 addresses and metadata
CN109218355B (en) * 2017-06-30 2021-06-15 华为技术有限公司 Load balancing engine, client, distributed computing system and load balancing method
US10567482B2 (en) 2017-08-24 2020-02-18 Nicira, Inc. Accessing endpoints in logical networks and public cloud service providers native networks using a single network interface and a single routing table
US10491516B2 (en) 2017-08-24 2019-11-26 Nicira, Inc. Packet communication between logical networks and public cloud service providers native networks using a single network interface and a single routing table
WO2019046071A1 (en) 2017-08-27 2019-03-07 Nicira, Inc. Performing in-line service in public cloud
CN107666446B (en) * 2017-09-14 2020-06-05 北京京东尚科信息技术有限公司 Method and device for limiting downlink flow, uplink flow and bidirectional flow
US11012420B2 (en) 2017-11-15 2021-05-18 Nicira, Inc. Third-party service chaining using packet encapsulation in a flow-based forwarding element
US10601705B2 (en) * 2017-12-04 2020-03-24 Nicira, Inc. Failover of centralized routers in public cloud logical networks
US10862753B2 (en) 2017-12-04 2020-12-08 Nicira, Inc. High availability for stateful services in public cloud logical networks
CN108199974B (en) * 2017-12-25 2021-09-07 新华三技术有限公司 Service flow forwarding management method, device and network node
US10797910B2 (en) 2018-01-26 2020-10-06 Nicira, Inc. Specifying and utilizing paths through a network
US10659252B2 (en) 2018-01-26 2020-05-19 Nicira, Inc Specifying and utilizing paths through a network
US10970258B2 (en) 2018-02-23 2021-04-06 Red Hat, Inc. Managing container-image layers
US10728174B2 (en) 2018-03-27 2020-07-28 Nicira, Inc. Incorporating layer 2 service between two interfaces of gateway device
US11943248B1 (en) 2018-04-06 2024-03-26 Keysight Technologies, Inc. Methods, systems, and computer readable media for network security testing using at least one emulated server
CN108683720B (en) * 2018-04-28 2021-12-14 金蝶软件(中国)有限公司 Container cluster service configuration method and device
US11283676B2 (en) * 2018-06-11 2022-03-22 Nicira, Inc. Providing shared memory for access by multiple network service containers executing on single service machine
US10897392B2 (en) 2018-06-11 2021-01-19 Nicira, Inc. Configuring a compute node to perform services on a host
US10812337B2 (en) 2018-06-15 2020-10-20 Vmware, Inc. Hierarchical API for a SDDC
US10942788B2 (en) 2018-06-15 2021-03-09 Vmware, Inc. Policy constraint framework for an sddc
US11343229B2 (en) 2018-06-28 2022-05-24 Vmware, Inc. Managed forwarding element detecting invalid packet addresses
US10708163B1 (en) * 2018-07-13 2020-07-07 Keysight Technologies, Inc. Methods, systems, and computer readable media for automatic configuration and control of remote inline network monitoring probe
US11086700B2 (en) 2018-08-24 2021-08-10 Vmware, Inc. Template driven approach to deploy a multi-segmented application in an SDDC
US11196591B2 (en) 2018-08-24 2021-12-07 Vmware, Inc. Centralized overlay gateway in public cloud
US11374794B2 (en) 2018-08-24 2022-06-28 Vmware, Inc. Transitive routing in public cloud
EP3815312A1 (en) * 2018-09-02 2021-05-05 VMware, Inc. Service insertion at logical network gateway
US11595250B2 (en) 2018-09-02 2023-02-28 Vmware, Inc. Service insertion at logical network gateway
US10944673B2 (en) 2018-09-02 2021-03-09 Vmware, Inc. Redirection of data messages at logical network gateway
CN109104377B (en) * 2018-09-21 2022-07-15 深圳前海微众银行股份有限公司 Long connection load balancing method, equipment, system and computer readable storage medium
US11470176B2 (en) * 2019-01-29 2022-10-11 Cisco Technology, Inc. Efficient and flexible load-balancing for clusters of caches under latency constraint
CN110187912B (en) * 2019-05-16 2022-03-29 华为技术有限公司 Node selection method and device
US11290358B2 (en) 2019-05-30 2022-03-29 Vmware, Inc. Partitioning health monitoring in a global server load balancing system
US11171992B2 (en) * 2019-07-29 2021-11-09 Cisco Technology, Inc. System resource management in self-healing networks
US11411843B2 (en) * 2019-08-14 2022-08-09 Verizon Patent And Licensing Inc. Method and system for packet inspection in virtual network service chains
CN110808945B (en) * 2019-09-11 2020-07-28 浙江大学 Network intrusion detection method in small sample scene based on meta-learning
US20220247719A1 (en) * 2019-09-24 2022-08-04 Pribit Technology, Inc. Network Access Control System And Method Therefor
US11283717B2 (en) 2019-10-30 2022-03-22 Vmware, Inc. Distributed fault tolerant service chain
EP3991393A1 (en) * 2019-10-30 2022-05-04 VMware, Inc. Distributed service chain across multiple clouds
US11418584B2 (en) * 2019-11-14 2022-08-16 Vmware, Inc. Inter-service communications
EP3991359A1 (en) * 2019-12-12 2022-05-04 VMware, Inc. Collecting an analyzing data regarding flows associated with dpi parameters
US11223494B2 (en) 2020-01-13 2022-01-11 Vmware, Inc. Service insertion for multicast traffic at boundary
US11153406B2 (en) 2020-01-20 2021-10-19 Vmware, Inc. Method of network performance visualization of service function chains
EP4078901A4 (en) 2020-04-01 2023-10-11 VMWare, Inc. Auto deploying network elements for heterogeneous compute elements
US11088919B1 (en) 2020-04-06 2021-08-10 Vmware, Inc. Data structure for defining multi-site logical network
US11743172B2 (en) 2020-04-06 2023-08-29 Vmware, Inc. Using multiple transport mechanisms to provide services at the edge of a network
US11777793B2 (en) 2020-04-06 2023-10-03 Vmware, Inc. Location criteria for security groups
US11528214B2 (en) * 2020-04-06 2022-12-13 Vmware, Inc. Logical router implementation across multiple datacenters
US11115301B1 (en) 2020-04-06 2021-09-07 Vmware, Inc. Presenting realized state of multi-site logical network
US11088902B1 (en) 2020-04-06 2021-08-10 Vmware, Inc. Synchronization of logical network state between global and local managers
CN111522661A (en) * 2020-04-22 2020-08-11 腾讯科技(深圳)有限公司 Micro-service management system, deployment method and related equipment
US11362863B2 (en) 2020-07-21 2022-06-14 Vmware, Inc. Handling packets travelling from logical service routers (SRs) for active-active stateful service insertion
US11803408B2 (en) 2020-07-29 2023-10-31 Vmware, Inc. Distributed network plugin agents for container networking
US11863352B2 (en) 2020-07-30 2024-01-02 Vmware, Inc. Hierarchical networking for nested container clusters
US11601474B2 (en) 2020-09-28 2023-03-07 Vmware, Inc. Network virtualization infrastructure with divided user responsibilities
CN112256437A (en) * 2020-11-10 2021-01-22 网易(杭州)网络有限公司 Task distribution method and device
US11611625B2 (en) 2020-12-15 2023-03-21 Vmware, Inc. Providing stateful services in a scalable manner for machines executing on host computers
US11882052B2 (en) * 2021-01-20 2024-01-23 Vmware, Inc. Updating flow cache information for packet processing
US11811861B2 (en) * 2021-05-17 2023-11-07 Vmware, Inc. Dynamically updating load balancing criteria
US11606254B2 (en) 2021-06-11 2023-03-14 Vmware, Inc. Automatic configuring of VLAN and overlay logical switches for container secondary interfaces
US11824780B2 (en) * 2021-07-22 2023-11-21 Vmware, Inc. Managing tunnel interface selection between gateways in a computing environment
CN115695561A (en) * 2021-07-26 2023-02-03 华为技术有限公司 Message forwarding method, device and system and computer readable storage medium
US20230036071A1 (en) * 2021-07-27 2023-02-02 Vmware, Inc. Managing edge gateway selection using exchanged hash information
US20230231741A1 (en) 2022-01-14 2023-07-20 Vmware, Inc. Per-namespace ip address management method for container networks
US11652909B1 (en) * 2022-03-10 2023-05-16 International Business Machines Corporation TCP session closure in container orchestration system
US11848910B1 (en) 2022-11-11 2023-12-19 Vmware, Inc. Assigning stateful pods fixed IP addresses depending on unique pod identity
US11831511B1 (en) 2023-01-17 2023-11-28 Vmware, Inc. Enforcing network policies in heterogeneous systems

Citations (612)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1999018534A2 (en) 1997-10-06 1999-04-15 Web Balance, Inc. System for balancing loads among network servers
US6006264A (en) 1997-08-01 1999-12-21 Arrowpoint Communications, Inc. Method and system for directing a flow between a client and a server
US6104700A (en) 1997-08-29 2000-08-15 Extreme Networks Policy based quality of service
US6154448A (en) 1997-06-20 2000-11-28 Telefonaktiebolaget Lm Ericsson (Publ) Next hop loopback
US20020010783A1 (en) 1999-12-06 2002-01-24 Leonard Primak System and method for enhancing operation of a web server cluster
US20020078370A1 (en) 2000-12-18 2002-06-20 Tahan Thomas E. Controlled information flow between communities via a firewall
US20020097724A1 (en) 2001-01-09 2002-07-25 Matti Halme Processing of data packets within a network element cluster
US20020194350A1 (en) 2001-06-18 2002-12-19 Lu Leonard L. Content-aware web switch without delayed binding and methods thereof
US20030065711A1 (en) 2001-10-01 2003-04-03 International Business Machines Corporation Method and apparatus for content-aware web switching
US20030093481A1 (en) 2001-11-09 2003-05-15 Julian Mitchell Middlebox control
US20030097429A1 (en) 2001-11-20 2003-05-22 Wen-Che Wu Method of forming a website server cluster and structure thereof
US20030105812A1 (en) * 2001-08-09 2003-06-05 Gigamedia Access Corporation Hybrid system architecture for secure peer-to-peer-communications
US20030188026A1 (en) 2001-05-18 2003-10-02 Claude Denton Multi-protocol networking processor with data traffic support spanning local, regional and wide area networks
US20030236813A1 (en) 2002-06-24 2003-12-25 Abjanic John B. Method and apparatus for off-load processing of a message stream
US20040066769A1 (en) 2002-10-08 2004-04-08 Kalle Ahmavaara Method and system for establishing a connection via an access network
US6779030B1 (en) 1997-10-06 2004-08-17 Worldcom, Inc. Intelligent network
US20040210670A1 (en) 1999-03-05 2004-10-21 Nikolaos Anerousis System, method and apparatus for network service load and reliability management
US20040215703A1 (en) 2003-02-18 2004-10-28 Xiping Song System supporting concurrent operation of multiple executable application operation sessions
US6826694B1 (en) * 1998-10-22 2004-11-30 At&T Corp. High resolution access control
US6880089B1 (en) 2000-03-31 2005-04-12 Avaya Technology Corp. Firewall clustering for multiple network servers
US20050089327A1 (en) 2003-10-22 2005-04-28 Shlomo Ovadia Dynamic route discovery for optical switched networks
US20050091396A1 (en) 2003-08-05 2005-04-28 Chandrasekharan Nilakantan Method and apparatus for achieving dynamic capacity and high availability in multi-stage data networks using adaptive flow-based routing
US20050114429A1 (en) 2003-11-25 2005-05-26 Caccavale Frank S. Method and apparatus for load balancing of distributed processing units based on performance metrics
US20050114648A1 (en) 2003-11-24 2005-05-26 Cisco Technology, Inc., A Corporation Of California Dual mode firewall
US20050132030A1 (en) 2003-12-10 2005-06-16 Aventail Corporation Network appliance
US20050198200A1 (en) 2004-03-05 2005-09-08 Nortel Networks Limited Method and apparatus for facilitating fulfillment of web-service requests on a communication network
JP2005311863A (en) 2004-04-23 2005-11-04 Hitachi Ltd Traffic distribution control method, controller and network system
US20050249199A1 (en) 1999-07-02 2005-11-10 Cisco Technology, Inc., A California Corporation Load balancing using distributed forwarding agents with application based feedback for different virtual machines
US6985956B2 (en) 2000-11-02 2006-01-10 Sun Microsystems, Inc. Switching system
US7013389B1 (en) 1999-09-29 2006-03-14 Cisco Technology, Inc. Method and apparatus for creating a secure communication channel among multiple event service nodes
US20060069776A1 (en) 2004-09-15 2006-03-30 Shim Choon B System and method for load balancing a communications network
US20060112297A1 (en) 2004-11-17 2006-05-25 Raytheon Company Fault tolerance and recovery in a high-performance computing (HPC) system
US20060130133A1 (en) 2004-12-14 2006-06-15 International Business Machines Corporation Automated generation of configuration elements of an information technology system
US20060155862A1 (en) 2005-01-06 2006-07-13 Hari Kathi Data traffic load balancing based on application layer messages
US20060195896A1 (en) 2004-12-22 2006-08-31 Wake Forest University Method, systems, and computer program products for implementing function-parallel network firewall
US20060233155A1 (en) 2002-03-19 2006-10-19 Srivastava Sunil K Server load balancing using IP option field approach to identify route to selected server
US20070061492A1 (en) 2005-08-05 2007-03-15 Red Hat, Inc. Zero-copy network i/o for virtual hosts
US20070121615A1 (en) 2005-11-28 2007-05-31 Ofer Weill Method and apparatus for self-learning of VPNS from combination of unidirectional tunnels in MPLS/VPN networks
US7239639B2 (en) 2001-12-27 2007-07-03 3Com Corporation System and method for dynamically constructing packet classification rules
US20070153782A1 (en) 2005-12-30 2007-07-05 Gregory Fletcher Reliable, high-throughput, high-performance transport and routing mechanism for arbitrary data flows
US20070214282A1 (en) 2006-03-13 2007-09-13 Microsoft Corporation Load balancing via rotation of cluster identity
US20070248091A1 (en) 2006-04-24 2007-10-25 Mohamed Khalid Methods and apparatus for tunnel stitching in a network
US20070260750A1 (en) 2006-03-09 2007-11-08 Microsoft Corporation Adaptable data connector
US20070288615A1 (en) 2006-06-09 2007-12-13 Cisco Technology, Inc. Technique for dispatching data packets to service control engines
US20070291773A1 (en) * 2005-12-06 2007-12-20 Shabbir Khan Digital object routing based on a service request
US20080005293A1 (en) 2006-06-30 2008-01-03 Telefonaktiebolaget Lm Ericsson (Publ) Router and method for server load balancing
US20080031263A1 (en) 2006-08-07 2008-02-07 Cisco Technology, Inc. Method and apparatus for load balancing over virtual network links
US20080046400A1 (en) 2006-08-04 2008-02-21 Shi Justin Y Apparatus and method of optimizing database clustering with zero transaction loss
US20080049619A1 (en) * 2004-02-09 2008-02-28 Adam Twiss Methods and Apparatus for Routing in a Network
US20080049614A1 (en) 2006-08-23 2008-02-28 Peter John Briscoe Capacity Management for Data Networks
US20080049786A1 (en) 2006-08-22 2008-02-28 Maruthi Ram Systems and Methods for Providing Dynamic Spillover of Virtual Servers Based on Bandwidth
US20080072305A1 (en) 2006-09-14 2008-03-20 Ouova, Inc. System and method of middlebox detection and characterization
US20080084819A1 (en) 2006-10-04 2008-04-10 Vladimir Parizhsky Ip flow-based load balancing over a plurality of wireless network links
US20080095153A1 (en) 2006-10-19 2008-04-24 Fujitsu Limited Apparatus and computer product for collecting packet information
US20080104608A1 (en) 2006-10-27 2008-05-01 Hyser Chris D Starting up at least one virtual machine in a physical machine by a load balancer
US7379465B2 (en) 2001-12-07 2008-05-27 Nortel Networks Limited Tunneling scheme optimized for use in virtual private networks
WO2008095010A1 (en) 2007-02-01 2008-08-07 The Board Of Trustees Of The Leland Stanford Jr. University Secure network switching infrastructure
US20080195755A1 (en) 2007-02-12 2008-08-14 Ying Lu Method and apparatus for load balancing with server state change awareness
US20080225714A1 (en) 2007-03-12 2008-09-18 Telefonaktiebolaget Lm Ericsson (Publ) Dynamic load balancing
US20080239991A1 (en) 2003-03-13 2008-10-02 David Lee Applegate Method and apparatus for efficient routing of variable traffic
US20080247396A1 (en) 2007-04-06 2008-10-09 Ludovic Hazard Method, system and computer processing an ip packet, routing a structured data carrier, preventing broadcast storms, load-balancing and converting a full broadcast ip packet
US7447775B1 (en) 2003-11-07 2008-11-04 Cisco Technology, Inc. Methods and apparatus for supporting transmission of streaming data
US20080276085A1 (en) 2007-05-02 2008-11-06 Cisco Technology, Inc. Allowing differential processing of encrypted tunnels
US20080279196A1 (en) 2004-04-06 2008-11-13 Robert Friskney Differential Forwarding in Address-Based Carrier Networks
US20090003375A1 (en) 2007-06-29 2009-01-01 Martin Havemann Network system having an extensible control plane
US20090003349A1 (en) 2007-06-29 2009-01-01 Martin Havemann Network system having an extensible forwarding plane
US20090003364A1 (en) 2007-06-29 2009-01-01 Kerry Fendick Open platform architecture for integrating multiple heterogeneous network functions
US20090019135A1 (en) 2007-07-09 2009-01-15 Anand Eswaran Method, Network and Computer Program For Processing A Content Request
US7480737B2 (en) 2002-10-25 2009-01-20 International Business Machines Corporation Technique for addressing a cluster of network servers
US7487250B2 (en) 2000-12-19 2009-02-03 Cisco Technology, Inc. Methods and apparatus for directing a flow of data between a client and multiple servers
US20090037713A1 (en) 2007-08-03 2009-02-05 Cisco Technology, Inc. Operation, administration and maintenance (oam) for chains of services
US7499463B1 (en) 2005-04-22 2009-03-03 Sun Microsystems, Inc. Method and apparatus for enforcing bandwidth utilization of a virtual serialization queue
US20090063706A1 (en) 2007-08-30 2009-03-05 International Business Machines Corporation Combined Layer 2 Virtual MAC Address with Layer 3 IP Address Routing
US20090129271A1 (en) 2007-11-19 2009-05-21 Rajesh Ramankutty Providing services to packet flows in a network
US20090172666A1 (en) 2007-12-31 2009-07-02 Netapp, Inc. System and method for automatic storage load balancing in virtual server environments
US20090199268A1 (en) 2008-02-06 2009-08-06 Qualcomm, Incorporated Policy control for encapsulated data flows
US20090235325A1 (en) 2006-03-02 2009-09-17 Theo Dimitrakos Message processing methods and systems
US20090238084A1 (en) 2008-03-18 2009-09-24 Cisco Technology, Inc. Network monitoring using a proxy
US20090249472A1 (en) 2008-03-27 2009-10-01 Moshe Litvin Hierarchical firewalls
US20090265467A1 (en) 2008-04-17 2009-10-22 Radware, Ltd. Method and System for Load Balancing over a Cluster of Authentication, Authorization and Accounting (AAA) Servers
US20090271586A1 (en) 1998-07-31 2009-10-29 Kom Networks Inc. Method and system for providing restricted access to a storage medium
CN101594358A (en) 2009-06-29 2009-12-02 北京航空航天大学 Three layer switching methods, device, system and host
US20090300210A1 (en) 2008-05-28 2009-12-03 James Michael Ferris Methods and systems for load balancing in cloud-based networks
US20090299791A1 (en) 2003-06-25 2009-12-03 Foundry Networks, Inc. Method and system for management of licenses
US20090303880A1 (en) 2008-06-09 2009-12-10 Microsoft Corporation Data center interconnect and traffic engineering
US20090307334A1 (en) 2008-06-09 2009-12-10 Microsoft Corporation Data center without structural bottlenecks
US20090327464A1 (en) 2008-06-26 2009-12-31 International Business Machines Corporation Load Balanced Data Processing Performed On An Application Message Transmitted Between Compute Nodes
US7649890B2 (en) 2005-02-22 2010-01-19 Hitachi Communication Technologies, Ltd. Packet forwarding apparatus and communication bandwidth control method
US20100031360A1 (en) 2008-07-31 2010-02-04 Arvind Seshadri Systems and methods for preventing unauthorized modification of an operating system
US20100036903A1 (en) 2008-08-11 2010-02-11 Microsoft Corporation Distributed load balancer
US7698458B1 (en) 2004-10-29 2010-04-13 Akamai Technologies, Inc. Load balancing network traffic using race methods
US20100100616A1 (en) 2004-09-14 2010-04-22 3Com Corporation Method and apparatus for controlling traffic between different entities on a network
US20100131638A1 (en) 2008-11-25 2010-05-27 Ravi Kondamuru Systems and Methods for GSLB Remote Service Monitoring
CN101729412A (en) 2009-11-05 2010-06-09 北京超图软件股份有限公司 Distributed level cluster method and system of geographic information service
US20100165985A1 (en) 2008-12-29 2010-07-01 Cisco Technology, Inc. Service Selection Mechanism In Service Insertion Architecture Data Plane
US20100223364A1 (en) 2009-02-27 2010-09-02 Yottaa Inc System and method for network traffic management and load balancing
US20100223621A1 (en) 2002-08-01 2010-09-02 Foundry Networks, Inc. Statistical tracking for global server load balancing
US20100235915A1 (en) 2009-03-12 2010-09-16 Nasir Memon Using host symptoms, host roles, and/or host reputation for detection of host infection
US20100254385A1 (en) 2009-04-07 2010-10-07 Cisco Technology, Inc. Service Insertion Architecture (SIA) in a Virtual Private Network (VPN) Aware Network
US20100257278A1 (en) 2003-12-10 2010-10-07 Foundry Networks, Inc. Method and apparatus for load balancing based on packet header content
US7818452B2 (en) 2000-09-13 2010-10-19 Fortinet, Inc. Distributed virtual system to support managed, network-based services
US20100265824A1 (en) 2007-11-09 2010-10-21 Blade Network Technologies, Inc Session-less Load Balancing of Client Traffic Across Servers in a Server Group
US20100281482A1 (en) 2009-04-30 2010-11-04 Microsoft Corporation Application efficiency engine
US20100332595A1 (en) 2008-04-04 2010-12-30 David Fullagar Handling long-tail content in a content delivery network (cdn)
US20110010578A1 (en) 2007-02-22 2011-01-13 Agundez Dominguez Jose Luis Consistent and fault tolerant distributed hash table (dht) overlay network
US20110016348A1 (en) 2000-09-01 2011-01-20 Pace Charles P System and method for bridging assets to network nodes on multi-tiered networks
US20110022812A1 (en) 2009-05-01 2011-01-27 Van Der Linden Rob Systems and methods for establishing a cloud bridge between virtual storage resources
US20110022695A1 (en) 2009-07-27 2011-01-27 Vmware, Inc. Management and Implementation of Enclosed Local Networks in a Virtual Lab
US20110035494A1 (en) * 2008-04-15 2011-02-10 Blade Network Technologies Network virtualization for a virtualized server data center environment
US20110040893A1 (en) 2009-08-14 2011-02-17 Broadcom Corporation Distributed Internet caching via multiple node caching management
US7898959B1 (en) 2007-06-28 2011-03-01 Marvell Israel (Misl) Ltd. Method for weighted load-balancing among network interfaces
US20110055845A1 (en) 2009-08-31 2011-03-03 Thyagarajan Nandagopal Technique for balancing loads in server clusters
US20110058563A1 (en) 2007-12-17 2011-03-10 Girish Prabhakar Saraph Architectural framework of communication network and a method of establishing qos connection
US20110090912A1 (en) 2009-10-15 2011-04-21 International Business Machines Corporation Steering Data Communications Packets Among Service Applications With Server Selection Modulus Values
US7948986B1 (en) 2009-02-02 2011-05-24 Juniper Networks, Inc. Applying services within MPLS networks
US20110164504A1 (en) 2008-09-03 2011-07-07 Nokia Siemens Networks Oy Gateway network element, a method, and a group of load balanced access points configured for load balancing in a communications network
US20110194563A1 (en) 2010-02-11 2011-08-11 Vmware, Inc. Hypervisor Level Distributed Load-Balancing
US20110211463A1 (en) 2010-02-26 2011-09-01 Eldad Matityahu Add-on module and methods thereof
US20110225293A1 (en) 2005-07-22 2011-09-15 Yogesh Chunilal Rathod System and method for service based social network
US20110235508A1 (en) 2010-03-26 2011-09-29 Deepak Goel Systems and methods for link load balancing on a multi-core device
US20110261811A1 (en) 2010-04-26 2011-10-27 International Business Machines Corporation Load-balancing via modulus distribution and tcp flow redirection due to server overload
US20110271007A1 (en) 2010-04-28 2011-11-03 Futurewei Technologies, Inc. System and Method for a Context Layer Switch
US20110268118A1 (en) 2010-04-30 2011-11-03 Michael Schlansker Method for routing data packets using vlans
US20110276695A1 (en) 2010-05-06 2011-11-10 Juliano Maldaner Continuous upgrading of computers in a load balanced environment
US20110283013A1 (en) 2010-05-14 2011-11-17 Grosser Donald B Methods, systems, and computer readable media for stateless load balancing of network traffic flows
US20110295991A1 (en) 2010-02-01 2011-12-01 Nec Corporation Network system, controller, and network control method
US8078903B1 (en) 2008-11-25 2011-12-13 Cisco Technology, Inc. Automatic load-balancing and seamless failover of data flows in storage media encryption (SME)
US20110317708A1 (en) 2010-06-28 2011-12-29 Alcatel-Lucent Usa, Inc. Quality of service control for mpls user access
US20120005265A1 (en) 2010-06-30 2012-01-05 Sony Corporation Information processing device, content providing method and program
US8094575B1 (en) 2009-03-24 2012-01-10 Juniper Networks, Inc. Routing protocol extension for network acceleration service-aware path selection within computer networks
US20120011281A1 (en) 2010-07-07 2012-01-12 Fujitsu Limited Content conversion system and content conversion server
US20120014386A1 (en) 2010-06-29 2012-01-19 Futurewei Technologies, Inc. Delegate Gateways and Proxy for Target Hosts in Large Layer 2 and Address Resolution with Duplicated Internet Protocol Addresses
US20120023231A1 (en) 2009-10-23 2012-01-26 Nec Corporation Network system, control method for the same, and controller
US20120054266A1 (en) 2010-09-01 2012-03-01 Kazerani Alexander A Optimized Content Distribution Based on Metrics Derived from the End User
EP2426956A1 (en) 2009-04-27 2012-03-07 China Mobile Communications Corporation Data transferring method, system and related network device based on proxy mobile (pm) ipv6
US20120089664A1 (en) 2010-10-12 2012-04-12 Sap Portals Israel, Ltd. Optimizing Distributed Computer Networks
US8175863B1 (en) 2008-02-13 2012-05-08 Quest Software, Inc. Systems and methods for analyzing performance of virtual environments
US8190767B1 (en) 2003-06-24 2012-05-29 Nvidia Corporation Data structures and state tracking for network protocol processing
US20120137004A1 (en) 2000-07-17 2012-05-31 Smith Philip S Method and System for Operating a Commissioned E-Commerce Service Prover
US20120144014A1 (en) 2010-12-01 2012-06-07 Cisco Technology, Inc. Directing data flows in data centers with clustering services
US8201219B2 (en) 2007-09-24 2012-06-12 Bridgewater Systems Corp. Systems and methods for server load balancing using authentication, authorization, and accounting protocols
US20120147894A1 (en) 2010-12-08 2012-06-14 Mulligan John T Methods and apparatus to provision cloud computing network elements
EP2466985A1 (en) 2009-09-17 2012-06-20 ZTE Corporation Network based on identity identifier and location separation architecture, backbone network, and network element thereof
US20120155266A1 (en) 2010-12-17 2012-06-21 Microsoft Corporation Synchronizing state among load balancer components
US20120176932A1 (en) 2009-09-17 2012-07-12 Zte Corporation Communication method, method for forwarding data message during the communication process and communication node thereof
US8224885B1 (en) 2009-01-26 2012-07-17 Teradici Corporation Method and system for remote computing session management
US8223634B2 (en) 2004-02-18 2012-07-17 Fortinet, Inc. Mechanism for implementing load balancing in a network
US20120185588A1 (en) 2002-01-30 2012-07-19 Brett Error Distributed Data Collection and Aggregation
US20120195196A1 (en) 2010-08-11 2012-08-02 Rajat Ghai SYSTEM AND METHOD FOR QoS CONTROL OF IP FLOWS IN MOBILE NETWORKS
US20120207174A1 (en) 2011-02-10 2012-08-16 Choung-Yaw Michael Shieh Distributed service processing of network gateways using virtual machines
US20120213074A1 (en) 2011-01-27 2012-08-23 Verint Systems Ltd. System and method for flow table management
US8266261B2 (en) 2009-03-27 2012-09-11 Nec Corporation Server system, collective server apparatus, and MAC address management method
US20120230187A1 (en) 2011-03-09 2012-09-13 Telefonaktiebolaget L M Ericsson (Publ) Load balancing sctp associations using vtag mediation
US20120239804A1 (en) 2009-11-26 2012-09-20 Chengdu Huawei Symantec Technologies Co., Ltd Method, device and system for backup
US20120246637A1 (en) 2011-03-22 2012-09-27 Cisco Technology, Inc. Distributed load balancer in a virtual machine environment
US20120266252A1 (en) 2011-04-18 2012-10-18 Bank Of America Corporation Hardware-based root of trust for cloud environments
US20120281540A1 (en) 2011-05-03 2012-11-08 Cisco Technology, Inc. Mobile service routing in a network environment
US20120287789A1 (en) 2008-10-24 2012-11-15 Juniper Networks, Inc. Flow consistent dynamic load balancing
US20120303809A1 (en) 2011-05-25 2012-11-29 Microsoft Corporation Offloading load balancing packet modification
US20120303784A1 (en) 1998-07-15 2012-11-29 Radware, Ltd. Load balancing
US20120311568A1 (en) 2011-05-31 2012-12-06 Jansen Gerardus T Mechanism for Inter-Cloud Live Migration of Virtualization Systems
US20120317570A1 (en) 2011-06-08 2012-12-13 Dalcher Gregory W System and method for virtual partition monitoring
US20120317260A1 (en) 2011-06-07 2012-12-13 Syed Mohammad Amir Husain Network Controlled Serial and Audio Switch
US8339959B1 (en) 2008-05-20 2012-12-25 Juniper Networks, Inc. Streamlined packet forwarding using dynamic filters for routing and security in a shared forwarding plane
US20120331188A1 (en) 2010-06-29 2012-12-27 Patrick Brian Riordan Techniques for path selection
US20130003735A1 (en) 2011-06-28 2013-01-03 Chao H Jonathan Dynamically provisioning middleboxes
US20130021942A1 (en) 2011-07-18 2013-01-24 Cisco Technology, Inc. Granular Control of Multicast Delivery Services for Layer-2 Interconnect Solutions
US20130031544A1 (en) 2011-07-27 2013-01-31 Microsoft Corporation Virtual machine migration to minimize packet loss in virtualized network
US20130039218A1 (en) 2010-10-25 2013-02-14 Force 10 Networks Limiting mac address learning on access network switches
US20130044636A1 (en) 2011-08-17 2013-02-21 Teemu Koponen Distributed logical l3 routing
US20130058346A1 (en) 2011-09-07 2013-03-07 Microsoft Corporation Distributed Routing Domains in Multi-Tenant Datacenter Virtual Networks
US20130073743A1 (en) * 2011-09-19 2013-03-21 Cisco Technology, Inc. Services controlled session based flow interceptor
US20130100851A1 (en) 2011-10-25 2013-04-25 Cisco Technology, Inc. Multicast Source Move Detection for Layer-2 Interconnect Solutions
US20130125120A1 (en) 2011-11-15 2013-05-16 Nicira, Inc. Migrating middlebox state for distributed middleboxes
US8451735B2 (en) 2009-09-28 2013-05-28 Symbol Technologies, Inc. Systems and methods for dynamic load balancing in a wireless network
US20130136126A1 (en) 2011-11-30 2013-05-30 Industrial Technology Research Institute Data center network system and packet forwarding method thereof
US20130160024A1 (en) 2011-12-20 2013-06-20 Sybase, Inc. Dynamic Load Balancing for Complex Event Processing
US20130159487A1 (en) 2011-12-14 2013-06-20 Microsoft Corporation Migration of Virtual IP Addresses in a Failover Cluster
US20130163594A1 (en) * 2011-12-21 2013-06-27 Cisco Technology, Inc. Overlay-Based Packet Steering
US20130166703A1 (en) 2011-12-27 2013-06-27 Michael P. Hammer System And Method For Management Of Network-Based Services
US20130170501A1 (en) 2011-12-28 2013-07-04 Futurewei Technologies, Inc. Service Router Architecture
US8488577B1 (en) 2012-06-06 2013-07-16 Google Inc. Apparatus for controlling the availability of internet access to applications
US20130201989A1 (en) 2012-02-08 2013-08-08 Radisys Corporation Stateless load balancer in a multi-node system for transparent processing with packet preservation
US8521879B1 (en) 2008-03-11 2013-08-27 United Services Automobile Association (USAA) Systems and methods for a load balanced interior gateway protocol intranet
US20130227097A1 (en) 2010-09-14 2013-08-29 Hitachi, Ltd. Multi-tenancy information processing system, management server, and configuration management method
US20130227550A1 (en) 2012-02-27 2013-08-29 Computer Associates Think, Inc. System and method for isolated virtual image and appliance communication within a cloud environment
US20130291088A1 (en) 2012-04-11 2013-10-31 Choung-Yaw Michael Shieh Cooperative network security inspection
US20130287026A1 (en) 2012-04-13 2013-10-31 Nicira Inc. Extension of logical networks across layer 3 virtual private networks
US20130297798A1 (en) 2012-05-04 2013-11-07 Mustafa Arisoylu Two level packet distribution with stateless first level packet distribution to a group of servers and stateful second level packet distribution to a server within the group
US20130301472A1 (en) 2012-05-10 2013-11-14 David Ian Allan 802.1aq support over ietf evpn
US20130311637A1 (en) 2012-05-15 2013-11-21 International Business Machines Corporation Overlay tunnel information exchange protocol
US20130318219A1 (en) 2012-05-23 2013-11-28 Brocade Communications Systems, Inc Layer-3 overlay gateways
US20130332983A1 (en) 2012-06-12 2013-12-12 TELEFONAKTIEBOLAGET L M ERRICSSON (publ) Elastic Enforcement Layer for Cloud Security Using SDN
US20130336319A1 (en) 2012-06-14 2013-12-19 Liwu Liu Multicast to unicast conversion technique
US8615009B1 (en) 2010-01-25 2013-12-24 Juniper Networks, Inc. Interface for extending service capabilities of a network device
US20130343378A1 (en) 2012-06-21 2013-12-26 Mark Veteikis Virtual data loopback and/or data capture in a computing system
US20130343174A1 (en) 2012-06-26 2013-12-26 Juniper Networks, Inc. Service plane triggered fast reroute protection
US20140003232A1 (en) 2012-06-27 2014-01-02 Juniper Networks, Inc. Feedback loop for service engineered paths
US20140003422A1 (en) 2012-06-29 2014-01-02 Jeffrey C. Mogul Implementing a software defined network using event records that are transmitted from a network switch
US20140010085A1 (en) 2012-07-09 2014-01-09 Arun Kavunder System and method associated with a service flow router
CN103516807A (en) 2013-10-14 2014-01-15 中国联合网络通信集团有限公司 Cloud computing platform server load balancing system and method
US20140029447A1 (en) 2012-07-25 2014-01-30 Qualcomm Atheros, Inc. Forwarding tables for hybrid communication networks
US20140046998A1 (en) 2012-08-09 2014-02-13 International Business Machines Corporation Service management modes of operation in distributed node service management
US20140046997A1 (en) 2012-08-09 2014-02-13 International Business Machines Corporation Service management roles of processor nodes in distributed node service management
US20140052844A1 (en) 2012-08-17 2014-02-20 Vmware, Inc. Management of a virtual machine in a storage area network environment
US20140059204A1 (en) 2012-08-24 2014-02-27 Filip Nguyen Systems and methods for providing message flow analysis for an enterprise service bus
US20140059544A1 (en) 2012-08-27 2014-02-27 Vmware, Inc. Framework for networking and security services in virtual networks
US20140068602A1 (en) 2012-09-04 2014-03-06 Aaron Robert Gember Cloud-Based Middlebox Management System
US20140092738A1 (en) 2012-09-28 2014-04-03 Juniper Networks, Inc. Maintaining load balancing after service application with a network device
US20140096183A1 (en) 2012-10-01 2014-04-03 International Business Machines Corporation Providing services to virtual overlay network traffic
US20140092914A1 (en) 2012-10-02 2014-04-03 Lsi Corporation Method and system for intelligent deep packet buffering
US20140092906A1 (en) * 2012-10-02 2014-04-03 Cisco Technology, Inc. System and method for binding flows in a service cluster deployment in a network environment
US20140101656A1 (en) 2012-10-10 2014-04-10 Zhongwen Zhu Virtual firewall mobility
US20140101226A1 (en) 2012-10-08 2014-04-10 Motorola Mobility Llc Methods and apparatus for performing dynamic load balancing of processing resources
US20140108665A1 (en) 2012-10-16 2014-04-17 Citrix Systems, Inc. Systems and methods for bridging between public and private clouds through multilevel api integration
US8707383B2 (en) 2006-08-16 2014-04-22 International Business Machines Corporation Computer workload management with security policy enforcement
US20140115578A1 (en) 2012-10-21 2014-04-24 Geoffrey Howard Cooper Providing a virtual security appliance architecture to a virtual cloud infrastructure
US20140129715A1 (en) 2012-11-07 2014-05-08 Yahoo! Inc. Method and system for work load balancing
WO2014069978A1 (en) 2012-11-02 2014-05-08 Silverlake Mobility Ecosystem Sdn Bhd Method of processing requests for digital services
CN103795805A (en) 2014-02-27 2014-05-14 中国科学技术大学苏州研究院 Distributed server load balancing method based on SDN
US20140149696A1 (en) 2012-11-28 2014-05-29 Red Hat Israel, Ltd. Virtual machine backup using snapshots and current configuration
US20140164477A1 (en) 2012-12-06 2014-06-12 Gary M. Springer System and method for providing horizontal scaling of stateful applications
US20140169168A1 (en) 2012-12-06 2014-06-19 A10 Networks, Inc. Configuration of a virtual service network
US20140195666A1 (en) 2011-08-04 2014-07-10 Midokura Sarl System and method for implementing and managing virtual networks
US20140207968A1 (en) 2013-01-23 2014-07-24 Cisco Technology, Inc. Server Load Balancer Traffic Steering
US8804720B1 (en) 2010-12-22 2014-08-12 Juniper Networks, Inc. Pass-through multicast admission control signaling
US8832683B2 (en) 2009-11-30 2014-09-09 Red Hat Israel, Ltd. Using memory-related metrics of host machine for triggering load balancing that migrate virtual machine
US20140254591A1 (en) 2013-03-08 2014-09-11 Dell Products L.P. Processing of multicast traffic in computer networks
US20140254374A1 (en) 2013-03-11 2014-09-11 Cisco Technology, Inc. Methods and devices for providing service clustering in a trill network
US20140281029A1 (en) 2013-03-14 2014-09-18 Time Warner Cable Enterprises Llc System and method for automatic routing of dynamic host configuration protocol (dhcp) traffic
US20140282526A1 (en) 2013-03-15 2014-09-18 Avi Networks Managing and controlling a distributed network service platform
US20140269717A1 (en) 2013-03-15 2014-09-18 Cisco Technology, Inc. Ipv6/ipv4 resolution-less forwarding up to a destination
US20140269724A1 (en) 2013-03-04 2014-09-18 Telefonaktiebolaget L M Ericsson (Publ) Method and devices for forwarding ip data packets in an access network
US20140269487A1 (en) 2013-03-15 2014-09-18 Vivint, Inc. Multicast traffic management within a wireless mesh network
US20140280896A1 (en) 2013-03-15 2014-09-18 Achilleas Papakostas Methods and apparatus to credit usage of mobile devices
US8849746B2 (en) * 2006-12-19 2014-09-30 Teradata Us, Inc. High-throughput extract-transform-load (ETL) of program events for subsequent analysis
US20140301388A1 (en) 2013-04-06 2014-10-09 Citrix Systems, Inc. Systems and methods to cache packet steering decisions for a cluster of load balancers
US20140304231A1 (en) 2013-04-06 2014-10-09 Citrix Systems, Inc. Systems and methods for application-state distributed replication table hunting
US8862883B2 (en) 2012-05-16 2014-10-14 Cisco Technology, Inc. System and method for secure cloud service delivery with prioritized services in a network environment
US20140307744A1 (en) 2013-04-12 2014-10-16 Futurewei Technologies, Inc. Service Chain Policy for Distributed Gateways in Virtual Overlay Networks
US20140310391A1 (en) 2013-04-16 2014-10-16 Amazon Technologies, Inc. Multipath routing in a distributed load balancer
US20140310418A1 (en) 2013-04-16 2014-10-16 Amazon Technologies, Inc. Distributed load balancer
US8868711B2 (en) 2012-02-03 2014-10-21 Microsoft Corporation Dynamic load balancing in a scalable environment
US20140317677A1 (en) 2013-04-19 2014-10-23 Vmware, Inc. Framework for coordination between endpoint security and network security services
US8874789B1 (en) 2007-09-28 2014-10-28 Trend Micro Incorporated Application based routing arrangements and method thereof
US20140321459A1 (en) 2013-04-26 2014-10-30 Cisco Technology, Inc. Architecture for agentless service insertion
US20140334485A1 (en) 2013-05-09 2014-11-13 Vmware, Inc. Method and system for service switching using service tags
US20140334488A1 (en) 2013-05-10 2014-11-13 Cisco Technology, Inc. Data Plane Learning of Bi-Directional Service Chains
US8892706B1 (en) 2010-06-21 2014-11-18 Vmware, Inc. Private ethernet overlay networks over a shared ethernet in a virtual environment
US20140341029A1 (en) 2013-05-20 2014-11-20 Telefonaktiebolaget L M Ericsson (Publ) Encoding a payload hash in the da-mac to facilitate elastic chaining of packet processing elements
US20140351452A1 (en) 2013-05-21 2014-11-27 Cisco Technology, Inc. Chaining Service Zones by way of Route Re-Origination
US20140362682A1 (en) 2013-06-07 2014-12-11 Cisco Technology, Inc. Determining the Operations Performed Along a Service Path/Service Chain
US20140362705A1 (en) 2013-06-07 2014-12-11 The Florida International University Board Of Trustees Load-balancing algorithms for data center networks
US8914406B1 (en) 2012-02-01 2014-12-16 Vorstack, Inc. Scalable network security with fast response protocol
US20140372616A1 (en) 2013-06-17 2014-12-18 Telefonaktiebolaget L M Ericsson (Publ) Methods of forwarding/receiving data packets using unicast and/or multicast communications and related load balancers and servers
US20140372567A1 (en) 2013-06-17 2014-12-18 Telefonaktiebolaget L M Ericsson (Publ) Methods of forwarding data packets using transient tables and related load balancers
US20140369204A1 (en) 2013-06-17 2014-12-18 Telefonaktiebolaget L M Ericsson (Publ) Methods of load balancing using primary and stand-by addresses and related load balancers and servers
US20140372702A1 (en) 2013-06-12 2014-12-18 Oracle International Corporation Handling memory pressure in an in-database sharded queue
US20150003453A1 (en) 2013-06-28 2015-01-01 Vmware, Inc. Network service slotting
US20150003455A1 (en) 2012-07-24 2015-01-01 Telefonaktiebolaget L M Ericsson (Publ) System and method for enabling services chaining in a provider network
US20150009995A1 (en) 2013-07-08 2015-01-08 Nicira, Inc. Encapsulating Data Packets Using an Adaptive Tunnelling Protocol
US20150016279A1 (en) 2013-07-09 2015-01-15 Nicira, Inc. Using Headerspace Analysis to Identify Classes of Packets
US20150026345A1 (en) 2013-07-22 2015-01-22 Vmware, Inc. Managing link aggregation traffic in a virtual environment
US20150023354A1 (en) 2012-11-19 2015-01-22 Huawei Technologies Co., Ltd. Method and device for allocating packet switching resource
US20150026362A1 (en) 2013-07-17 2015-01-22 Cisco Technology, Inc. Dynamic Service Path Creation
US20150030024A1 (en) 2013-07-23 2015-01-29 Dell Products L.P. Systems and methods for a data center architecture facilitating layer 2 over layer 3 communication
US20150052262A1 (en) 2013-08-14 2015-02-19 Nicira, Inc. Providing Services for Logical Networks
US20150052522A1 (en) 2013-08-14 2015-02-19 Nicira, Inc. Generation of DHCP Configuration Files
US8971345B1 (en) 2010-03-22 2015-03-03 Riverbed Technology, Inc. Method and apparatus for scheduling a heterogeneous communication flow
US20150063364A1 (en) 2013-09-04 2015-03-05 Nicira, Inc. Multiple Active L3 Gateways for Logical Networks
US20150063102A1 (en) 2013-08-30 2015-03-05 Cisco Technology, Inc. Flow Based Network Service Insertion
US20150073967A1 (en) 2012-09-12 2015-03-12 Iex Group, Inc. Transmission latency leveling apparatuses, methods and systems
US20150078384A1 (en) 2013-09-15 2015-03-19 Nicira, Inc. Tracking Prefixes of Values Associated with Different Rules to Generate Flows
US8989192B2 (en) 2012-08-15 2015-03-24 Futurewei Technologies, Inc. Method and system for creating software defined ordered service patterns in a communications network
US8996610B1 (en) 2010-03-15 2015-03-31 Salesforce.Com, Inc. Proxy system, method and computer program product for utilizing an identifier of a request to route the request to a networked device
US20150092564A1 (en) 2013-09-27 2015-04-02 Futurewei Technologies, Inc. Validation of Chained Network Services
US20150092551A1 (en) 2013-09-30 2015-04-02 Juniper Networks, Inc. Session-aware service chaining within computer networks
US9009289B1 (en) 2014-03-31 2015-04-14 Flexera Software Llc Systems and methods for assessing application usage
US20150103645A1 (en) 2013-10-10 2015-04-16 Vmware, Inc. Controller side method of generating and updating a controller assignment list
US20150103679A1 (en) 2013-10-13 2015-04-16 Vmware, Inc. Tracing Host-Originated Logical Network Packets
US20150103827A1 (en) 2013-10-14 2015-04-16 Cisco Technology, Inc. Configurable Service Proxy Mapping
US20150109901A1 (en) 2012-06-30 2015-04-23 Huawei Technologies Co., Ltd. Method for managing forwarding plane tunnel resource under control and forwarding decoupled architecture
US20150124840A1 (en) * 2013-11-03 2015-05-07 Ixia Packet flow modification
US20150124622A1 (en) 2013-11-01 2015-05-07 Movik Networks, Inc. Multi-Interface, Multi-Layer State-full Load Balancer For RAN-Analytics Deployments In Multi-Chassis, Cloud And Virtual Server Environments
US20150124608A1 (en) 2013-11-05 2015-05-07 International Business Machines Corporation Adaptive Scheduling of Data Flows in Data Center Networks for Efficient Resource Utilization
US20150138973A1 (en) 2013-11-15 2015-05-21 Cisco Technology, Inc. Shortening of service paths in service chains in a communications network
US20150139041A1 (en) 2013-11-21 2015-05-21 Cisco Technology, Inc. Subscriber dependent redirection between a mobile packet core proxy and a cell site proxy in a network environment
US20150146539A1 (en) 2013-11-25 2015-05-28 Versa Networks, Inc. Flow distribution table for packet flow load balancing
US20150188770A1 (en) 2013-12-27 2015-07-02 Big Switch Networks, Inc. Systems and methods for performing network service insertion
US20150195197A1 (en) 2014-01-06 2015-07-09 Futurewei Technologies, Inc. Service Function Chaining in a Packet Network
US9094464B1 (en) 2014-12-18 2015-07-28 Limelight Networks, Inc. Connection digest for accelerating web traffic
US20150215819A1 (en) 2014-01-24 2015-07-30 Cisco Technology, Inc. Method for Providing Sticky Load Balancing
US20150213087A1 (en) 2014-01-28 2015-07-30 Software Ag Scaling framework for querying
US20150222640A1 (en) 2014-02-03 2015-08-06 Cisco Technology, Inc. Elastic Service Chains
US20150237013A1 (en) 2014-02-20 2015-08-20 Nicira, Inc. Specifying point of enforcement in a firewall rule
US20150236948A1 (en) 2014-02-14 2015-08-20 Futurewei Technologies, Inc. Restoring service functions after changing a service chain instance path
US20150242197A1 (en) 2014-02-25 2015-08-27 Red Hat, Inc. Automatic Installing and Scaling of Application Resources in a Multi-Tenant Platform-as-a-Service (PaaS) System
US20150244617A1 (en) * 2012-06-06 2015-08-27 Juniper Networks, Inc. Physical path determination for virtual network packet flows
US20150263901A1 (en) 2014-03-13 2015-09-17 Cisco Technology, Inc. Service node originated service chains in a network environment
US20150263946A1 (en) 2014-03-14 2015-09-17 Nicira, Inc. Route advertisement by managed gateways
US20150271102A1 (en) 2014-03-21 2015-09-24 Juniper Networks, Inc. Selectable service node resources
US20150280959A1 (en) 2014-03-31 2015-10-01 Amazon Technologies, Inc. Session management in distributed storage systems
US20150281180A1 (en) 2014-03-31 2015-10-01 Nicira, Inc. Method and apparatus for integrating a service virtual machine
US20150281089A1 (en) 2014-03-31 2015-10-01 Sandvine Incorporated Ulc System and method for load balancing in computer networks
US20150281098A1 (en) 2014-03-31 2015-10-01 Nicira, Inc. Flow Cache Hierarchy
US20150281179A1 (en) 2014-03-31 2015-10-01 Chids Raman Migrating firewall connection state for a firewall service virtual machine
US20150288679A1 (en) 2014-04-02 2015-10-08 Cisco Technology, Inc. Interposer with Security Assistant Key Escrow
US20150295831A1 (en) 2014-04-10 2015-10-15 Cisco Technology, Inc. Network address translation offload to network infrastructure for service chains in a network environment
US9178709B2 (en) 2004-03-30 2015-11-03 Panasonic Intellectual Property Management Co., Ltd. Communication system and method for distributing content
US20150319078A1 (en) 2014-05-02 2015-11-05 Futurewei Technologies, Inc. Computing Service Chain-Aware Paths
US20150319096A1 (en) 2014-05-05 2015-11-05 Nicira, Inc. Secondary input queues for maintaining a consistent network state
US9191293B2 (en) 2008-12-22 2015-11-17 Telefonaktiebolaget L M Ericsson (Publ) Method and device for handling of connections between a client and a server via a communication network
US9203748B2 (en) 2012-12-24 2015-12-01 Huawei Technologies Co., Ltd. Software defined network-based data processing method, node, and system
US20150358235A1 (en) 2014-06-05 2015-12-10 Futurewei Technologies, Inc. Service Chain Topology Map Construction
US20150358294A1 (en) 2014-06-05 2015-12-10 Cavium, Inc. Systems and methods for secured hardware security module communication with web service hosts
US20150365322A1 (en) 2014-06-13 2015-12-17 Cisco Technology, Inc. Providing virtual private service chains in a network environment
US20150372840A1 (en) 2014-06-23 2015-12-24 International Business Machines Corporation Servicing packets in a virtual network and a software-defined network (sdn)
US20150372911A1 (en) 2013-01-31 2015-12-24 Hitachi, Ltd. Communication path management method
US20150370586A1 (en) 2014-06-23 2015-12-24 Intel Corporation Local service chaining with virtual machines and virtualized containers in software defined networking
US20150370596A1 (en) 2014-06-20 2015-12-24 Google Inc. System and method for live migration of a virtualized networking stack
US20150381493A1 (en) 2014-06-30 2015-12-31 Juniper Networks, Inc. Service chaining across multiple networks
US20150381494A1 (en) 2014-06-30 2015-12-31 Nicira, Inc. Methods and systems to offload overlay network packet encapsulation to hardware
US20150381495A1 (en) 2014-06-30 2015-12-31 Nicira, Inc. Methods and systems for providing multi-tenancy support for single root i/o virtualization
US20150379277A1 (en) 2014-06-30 2015-12-31 Leonard Heyman Encryption Architecture
US9232342B2 (en) 2011-10-24 2016-01-05 Interdigital Patent Holdings, Inc. Methods, systems and apparatuses for application service layer (ASL) inter-networking
US20160006654A1 (en) 2014-07-07 2016-01-07 Cisco Technology, Inc. Bi-directional flow stickiness in a network environment
US20160028640A1 (en) 2014-07-22 2016-01-28 Futurewei Technologies, Inc. Service Chain Header and Metadata Transport
US9256467B1 (en) 2014-11-11 2016-02-09 Amazon Technologies, Inc. System for managing and scheduling containers
US9258742B1 (en) 2013-09-30 2016-02-09 Juniper Networks, Inc. Policy-directed value-added services chaining
US20160043901A1 (en) 2012-09-25 2016-02-11 A10 Networks, Inc. Graceful scaling in software driven networks
US20160043952A1 (en) 2014-08-06 2016-02-11 Futurewei Technologies, Inc. Mechanisms to support service chain graphs in a communication network
US9264313B1 (en) 2013-10-31 2016-02-16 Vmware, Inc. System and method for performing a service discovery for virtual networks
US20160057687A1 (en) 2014-08-19 2016-02-25 Qualcomm Incorporated Inter/intra radio access technology mobility and user-plane split measurement configuration
US20160057050A1 (en) * 2012-10-05 2016-02-25 Stamoulis & Weinblatt LLC Devices, methods, and systems for packet reroute permission based on content parameters embedded in packet header or payload
US9277412B2 (en) 2009-11-16 2016-03-01 Interdigital Patent Holdings, Inc. Coordination of silent periods for dynamic spectrum manager (DSM)
US20160065503A1 (en) 2014-08-29 2016-03-03 Extreme Networks, Inc. Methods, systems, and computer readable media for virtual fabric routing
US20160080253A1 (en) 2013-05-23 2016-03-17 Huawei Technologies Co. Ltd. Service routing system, device, and method
US20160094453A1 (en) 2014-09-30 2016-03-31 Nicira, Inc. Load balancer of load balancers
US20160094633A1 (en) 2014-09-30 2016-03-31 Nicira, Inc. Configuring and Operating a XaaS Model in a Datacenter
US20160094642A1 (en) 2014-09-30 2016-03-31 Nicira, Inc. Dynamically adjusting load balancing
US20160099948A1 (en) 2013-06-14 2016-04-07 Tocario Gmbh Method and system for enabling access of a client device to a remote desktop
US20160105333A1 (en) 2014-10-10 2016-04-14 Nicira, Inc. Logical network traffic analysis
US20160119226A1 (en) 2014-10-24 2016-04-28 Cisco Technology, Inc. Transparent Network Service Header Path Proxies
US20160127306A1 (en) 2013-07-11 2016-05-05 Huawei Technologies Co., Ltd. Packet Transmission Method, Apparatus, and System in Multicast Domain Name System
US20160127564A1 (en) 2014-10-29 2016-05-05 Alcatel-Lucent Usa Inc. Policy decisions based on offline charging rules when service chaining is implemented
US20160134528A1 (en) 2014-11-10 2016-05-12 Juniper Networks, Inc. Signaling aliasing capability in data centers
US20160149816A1 (en) 2013-06-14 2016-05-26 Haitao Wu Fault Tolerant and Load Balanced Routing
US20160149784A1 (en) 2014-11-20 2016-05-26 Telefonaktiebolaget L M Ericsson (Publ) Passive Performance Measurement for Inline Service Chaining
US20160149828A1 (en) 2014-11-25 2016-05-26 Netapp, Inc. Clustered storage system path quiescence analysis
US20160164826A1 (en) 2014-12-04 2016-06-09 Cisco Technology, Inc. Policy Implementation at a Network Element based on Data from an Authoritative Source
US20160164776A1 (en) 2014-12-09 2016-06-09 Aol Inc. Systems and methods for software defined networking service function chaining
US20160164787A1 (en) 2014-06-05 2016-06-09 KEMP Technologies Inc. Methods for intelligent data traffic steering
US20160173373A1 (en) 2014-12-11 2016-06-16 Cisco Technology, Inc. Network service header metadata for load balancing
US20160182684A1 (en) 2014-12-23 2016-06-23 Patrick Connor Parallel processing of service functions in service function chains
US20160197831A1 (en) 2013-08-16 2016-07-07 Interdigital Patent Holdings, Inc. Method and apparatus for name resolution in software defined networking
US20160197839A1 (en) 2015-01-05 2016-07-07 Futurewei Technologies, Inc. Method and system for providing qos for in-band control traffic in an openflow network
US20160205015A1 (en) 2015-01-08 2016-07-14 Openwave Mobility Inc. Software defined network and a communication network comprising the same
US9397946B1 (en) 2013-11-05 2016-07-19 Cisco Technology, Inc. Forwarding to clusters of service nodes
US20160212237A1 (en) 2015-01-16 2016-07-21 Fujitsu Limited Management server, communication system and path management method
US20160212048A1 (en) 2015-01-15 2016-07-21 Hewlett Packard Enterprise Development Lp Openflow service chain data packet routing using tables
US20160218918A1 (en) 2015-01-27 2016-07-28 Xingjun Chu Network virtualization for network infrastructure
US9407540B2 (en) 2013-09-06 2016-08-02 Cisco Technology, Inc. Distributed service chaining in a network environment
US20160226762A1 (en) 2015-01-30 2016-08-04 Nicira, Inc. Implementing logical router uplinks
US20160248685A1 (en) 2015-02-25 2016-08-25 Cisco Technology, Inc. Metadata augmentation in a service function chain
US9442752B1 (en) 2014-09-03 2016-09-13 Amazon Technologies, Inc. Virtual secure execution environments
US20160277294A1 (en) 2013-08-26 2016-09-22 Nec Corporation Communication apparatus, communication method, control apparatus, and management apparatus in a communication system
US20160277210A1 (en) 2015-03-18 2016-09-22 Juniper Networks, Inc. Evpn inter-subnet multicast forwarding
US20160294935A1 (en) 2015-04-03 2016-10-06 Nicira, Inc. Method, apparatus, and system for implementing a content switch
US20160294612A1 (en) 2015-04-04 2016-10-06 Nicira, Inc. Route Server Mode for Dynamic Routing Between Logical and Physical Networks
US20160308961A1 (en) 2014-01-06 2016-10-20 Tencent Technology (Shenzhen) Company Limited Methods, Devices, and Systems for Allocating Service Nodes in a Network
US20160308758A1 (en) 2015-04-17 2016-10-20 Huawei Technologies Co., Ltd Software Defined Network (SDN) Control Signaling for Traffic Engineering to Enable Multi-type Transport in a Data Plane
US9479358B2 (en) 2009-05-13 2016-10-25 International Business Machines Corporation Managing graphics load balancing strategies
US20160337189A1 (en) 2013-12-19 2016-11-17 Rainer Liebhart A method and apparatus for performing flexible service chaining
US20160337249A1 (en) 2014-01-29 2016-11-17 Huawei Technologies Co., Ltd. Communications network, device, and control method
US9503530B1 (en) 2008-08-21 2016-11-22 United Services Automobile Association (Usaa) Preferential loading in data centers
US20160344565A1 (en) 2015-05-20 2016-11-24 Cisco Technology, Inc. System and method to facilitate the assignment of service functions for service chains in a network environment
US20160344621A1 (en) 2014-12-17 2016-11-24 Telefonaktiebolaget L M Ericsson (Publ) Method and arrangement for relocating packet processing functions
US20160352866A1 (en) 2015-05-25 2016-12-01 Juniper Networks, Inc. Selecting and monitoring a plurality of services key performance indicators using twamp
US20160366046A1 (en) 2015-06-09 2016-12-15 International Business Machines Corporation Support for high availability of service appliances in a software-defined network (sdn) service chaining infrastructure
US20160373364A1 (en) 2014-03-04 2016-12-22 Nec Corporation Packet processing device, packet processing method and program
US20160378537A1 (en) 2014-03-12 2016-12-29 Huawei Technologies Co., Ltd. Method and Apparatus for Controlling Virtual Machine Migration
US20170005923A1 (en) 2015-06-30 2017-01-05 Vmware, Inc. Dynamic virtual machine network policy for ingress optimization
US20170005920A1 (en) 2015-07-01 2017-01-05 Cisco Technology, Inc. Forwarding packets with encapsulated service chain headers
US20170005988A1 (en) 2015-06-30 2017-01-05 Nicira, Inc. Global objects for federated firewall rule management
US20170019329A1 (en) 2015-07-15 2017-01-19 Argela-USA, Inc. Method for forwarding rule hopping based secure communication
US20170019341A1 (en) 2014-04-01 2017-01-19 Huawei Technologies Co., Ltd. Service link selection control method and device
US20170019331A1 (en) 2015-07-13 2017-01-19 Futurewei Technologies, Inc. Internet Control Message Protocol Enhancement for Traffic Carried by a Tunnel over Internet Protocol Networks
US20170026417A1 (en) 2015-07-23 2017-01-26 Cisco Technology, Inc. Systems, methods, and devices for smart mapping and vpn policy enforcement
US20170033939A1 (en) 2015-07-28 2017-02-02 Ciena Corporation Multicast systems and methods for segment routing
US20170063683A1 (en) 2015-08-28 2017-03-02 Nicira, Inc. Traffic forwarding between geographically dispersed sites
US20170063928A1 (en) 2015-08-28 2017-03-02 Nicira, Inc. Defining Network Rules Based on Remote Device Management Attributes
US20170064048A1 (en) 2015-08-28 2017-03-02 Nicira, Inc. Packet Data Restoration for Flow-Based Forwarding Element
US20170078961A1 (en) 2015-09-10 2017-03-16 Qualcomm Incorporated Smart co-processor for optimizing service discovery power consumption in wireless service platforms
US20170078176A1 (en) 2015-09-11 2017-03-16 Telefonaktiebolaget L M Ericsson (Publ) Method and system for delay measurement of a traffic flow in a software-defined networking (sdn) system
US9602380B2 (en) 2014-03-28 2017-03-21 Futurewei Technologies, Inc. Context-aware dynamic policy selection for load balancing behavior
US20170093758A1 (en) 2015-09-30 2017-03-30 Nicira, Inc. Ip aliases in logical networks with hardware switches
US20170093698A1 (en) 2015-09-30 2017-03-30 Huawei Technologies Co., Ltd. Method and apparatus for supporting service function chaining in a communication network
US20170099194A1 (en) 2014-06-17 2017-04-06 Huawei Technologies Co., Ltd. Service flow processing method, apparatus, and device
US20170126497A1 (en) 2015-10-31 2017-05-04 Nicira, Inc. Static Route Types for Logical Routers
US20170126522A1 (en) 2015-10-30 2017-05-04 Oracle International Corporation Methods, systems, and computer readable media for remote authentication dial in user service (radius) message loop detection and mitigation
US20170126726A1 (en) 2015-11-01 2017-05-04 Nicira, Inc. Securing a managed forwarding element that operates within a data compute node
US20170134538A1 (en) 2015-11-10 2017-05-11 Telefonaktiebolaget L M Ericsson (Publ) Systems and methods of an enhanced state-aware proxy device
US20170147399A1 (en) 2015-11-25 2017-05-25 International Business Machines Corporation Policy-based virtual machine selection during an optimization cycle
US20170149675A1 (en) 2015-11-25 2017-05-25 Huawei Technologies Co., Ltd. Packet retransmission method and apparatus
US20170149582A1 (en) 2015-11-20 2017-05-25 Oracle International Corporation Redirecting packets for egress from an autonomous system using tenant specific routing and forwarding tables
US20170163724A1 (en) 2015-12-04 2017-06-08 Microsoft Technology Licensing, Llc State-Aware Load Balancing
US20170163531A1 (en) 2015-12-04 2017-06-08 Cisco Technology, Inc. Infrastructure-exclusive service forwarding
US20170171159A1 (en) 2015-12-14 2017-06-15 Nicira, Inc. Packet tagging for improved guest system security
US20170180240A1 (en) 2015-12-16 2017-06-22 Telefonaktiebolaget Lm Ericsson (Publ) Openflow configured horizontally split hybrid sdn nodes
US20170195255A1 (en) 2015-12-31 2017-07-06 Fortinet, Inc. Packet routing using a software-defined networking (sdn) switch
US9705775B2 (en) 2014-11-20 2017-07-11 Telefonaktiebolaget Lm Ericsson (Publ) Passive performance measurement for inline service chaining
US20170208011A1 (en) 2016-01-19 2017-07-20 Cisco Technology, Inc. System and method for hosting mobile packet core and value-added services using a software defined network and service chains
US20170208532A1 (en) 2014-09-30 2017-07-20 Huawei Technologies Co., Ltd. Service path generation method and apparatus
US20170208000A1 (en) 2016-01-15 2017-07-20 Cisco Technology, Inc. Leaking routes in a service chain
US20170214627A1 (en) 2016-01-21 2017-07-27 Futurewei Technologies, Inc. Distributed Load Balancing for Network Service Function Chaining
US20170220306A1 (en) 2016-02-03 2017-08-03 Google Inc. Systems and methods for automatic content verification
US20170230467A1 (en) 2016-02-09 2017-08-10 Cisco Technology, Inc. Adding cloud service provider, cloud service, and cloud tenant awareness to network service chains
US20170230333A1 (en) 2016-02-08 2017-08-10 Cryptzone North America, Inc. Protecting network devices by a firewall
US20170237656A1 (en) 2016-02-12 2017-08-17 Huawei Technologies Co., Ltd. Method and apparatus for service function forwarding in a service domain
US20170250869A1 (en) 2014-09-12 2017-08-31 Andreas Richard Voellmy Managing network forwarding configurations using algorithmic policies
US20170250917A1 (en) 2014-09-19 2017-08-31 Nokia Solutions And Networks Oy Chaining of network service functions in a communication network
US20170251065A1 (en) 2016-02-29 2017-08-31 Cisco Technology, Inc. System and Method for Data Plane Signaled Packet Capture in a Service Function Chaining Network
US20170250902A1 (en) 2014-09-23 2017-08-31 Nokia Solutions And Networks Oy Control of communication using service function chaining
US9755971B2 (en) 2013-08-12 2017-09-05 Cisco Technology, Inc. Traffic flow redirection between border routers using routing encapsulation
US20170257432A1 (en) 2011-02-09 2017-09-07 Cliqr Technologies Inc. Apparatus, systems and methods for container based service deployment
US20170264677A1 (en) 2014-11-28 2017-09-14 Huawei Technologies Co., Ltd. Service Processing Apparatus and Method
US20170273099A1 (en) 2014-12-09 2017-09-21 Huawei Technologies Co., Ltd. Method and apparatus for processing adaptive flow table
CN107204941A (en) 2016-03-18 2017-09-26 ZTE Corporation Method and apparatus for establishing a flexible Ethernet path
US20170279938A1 (en) 2014-12-11 2017-09-28 Huawei Technologies Co., Ltd. Packet processing method and apparatus
US9787559B1 (en) 2014-03-28 2017-10-10 Juniper Networks, Inc. End-to-end monitoring of overlay networks providing virtualized network services
US20170295021A1 (en) 2016-04-07 2017-10-12 Telefonica, S.A. Method to assure correct data packet traversal through a particular path of a network
US20170295100A1 (en) 2016-04-12 2017-10-12 Nicira, Inc. Virtual tunnel endpoints for congestion-aware load balancing
US20170310588A1 (en) 2014-12-17 2017-10-26 Huawei Technologies Co., Ltd. Data forwarding method, device, and system in software-defined networking
US20170310611A1 (en) 2016-04-26 2017-10-26 Cisco Technology, Inc. System and method for automated rendering of service chaining
US9804797B1 (en) 2014-09-29 2017-10-31 EMC IP Holding Company LLC Using dynamic I/O load differential for load balancing
US20170317954A1 (en) 2016-04-28 2017-11-02 Nicira, Inc. Automatic configuration of logical routers on edge nodes
US20170317887A1 (en) 2016-04-29 2017-11-02 Deutsche Telekom Ag Versioning system for network states in a software-defined network
US20170317926A1 (en) 2016-04-27 2017-11-02 Cisco Technology, Inc. Generating packets in a reverse direction of a service function chain
US20170317936A1 (en) 2016-04-28 2017-11-02 Cisco Technology, Inc. Selective steering network traffic to virtual service(s) using policy
US20170318097A1 (en) 2016-04-29 2017-11-02 Hewlett Packard Enterprise Development Lp Virtualized network function placements
US20170324651A1 (en) 2016-05-09 2017-11-09 Cisco Technology, Inc. Traceroute to return aggregated statistics in service chains
US20170331672A1 (en) 2016-05-11 2017-11-16 Hewlett Packard Enterprise Development Lp Filter tables for management functions
US20170339600A1 (en) 2014-12-19 2017-11-23 Telefonaktiebolaget Lm Ericsson (Publ) Method and appratus for relocating packet processing functions
US20170339110A1 (en) 2015-02-13 2017-11-23 Huawei Technologies Co., Ltd. Access Control Apparatus, System, and Method
US20170346764A1 (en) 2012-06-29 2017-11-30 Huawei Technologies Co., Ltd. Method for Processing Information, Forwarding Plane Device and Control Plane Device
US20170353387A1 (en) 2016-06-07 2017-12-07 Electronics And Telecommunications Research Institute Distributed service function forwarding system
US20170364794A1 (en) 2016-06-20 2017-12-21 Telefonaktiebolaget Lm Ericsson (Publ) Method for classifying the payload of encrypted traffic flows
US20170366605A1 (en) 2016-06-16 2017-12-21 Alcatel-Lucent Usa Inc. Providing data plane services for applications
US20170373990A1 (en) 2016-06-23 2017-12-28 Cisco Technology, Inc. Transmitting network overlay information in a service function chain
US20180004954A1 (en) 2016-06-30 2018-01-04 Amazon Technologies, Inc. Secure booting of virtualization managers
US20180006935A1 (en) 2016-06-30 2018-01-04 Juniper Networks, Inc. Auto discovery and auto scaling of services in software-defined network environment
US20180027101A1 (en) 2013-04-26 2018-01-25 Cisco Technology, Inc. High-efficiency service chaining with agentless service nodes
US20180026911A1 (en) 2016-07-25 2018-01-25 Cisco Technology, Inc. System and method for providing a resource usage advertising framework for sfc-based workloads
US20180041470A1 (en) 2016-08-08 2018-02-08 Talari Networks Incorporated Applications and integrated firewall design in an adaptive private network (apn)
US20180041524A1 (en) 2016-08-02 2018-02-08 Cisco Technology, Inc. Steering of cloned traffic in a service function chain
US20180041425A1 (en) 2016-08-05 2018-02-08 Huawei Technologies Co., Ltd. Service-based traffic forwarding in virtual networks
US20180063087A1 (en) 2016-08-27 2018-03-01 Nicira, Inc. Managed forwarding element executing in separate namespace of public cloud data compute node than workload application
US20180063018A1 (en) 2016-08-30 2018-03-01 Cisco Technology, Inc. System and method for managing chained services in a network environment
CA3034809A1 (en) 2016-08-27 2018-03-08 Nicira, Inc. Extension of network control system into public cloud
EP3300319A1 (en) 2016-09-26 2018-03-28 Juniper Networks, Inc. Distributing service function chain data and service function instance data in a network
US20180102919A1 (en) 2015-06-10 2018-04-12 Huawei Technologies Co., Ltd. Method for implementing service chain, device, and system
US20180102965A1 (en) 2016-10-07 2018-04-12 Alcatel-Lucent Usa Inc. Unicast branching based multicast
US20180115471A1 (en) 2015-04-23 2018-04-26 Hewlett Packard Enterprise Development Lp Network infrastructure device to implement pre-filter rules
US20180123950A1 (en) 2016-11-03 2018-05-03 Parallel Wireless, Inc. Traffic Shaping and End-to-End Prioritization
US20180124061A1 (en) 2016-11-03 2018-05-03 Nicira, Inc. Performing services on a host
US20180139098A1 (en) 2016-11-14 2018-05-17 Futurewei Technologies, Inc. Integrating physical and virtual network functions in a service-chained network environment
US20180145899A1 (en) 2016-11-22 2018-05-24 Gigamon Inc. Dynamic Service Chaining and Late Binding
US20180159943A1 (en) 2016-12-06 2018-06-07 Nicira, Inc. Performing context-rich attribute-based services on a host
US20180159801A1 (en) 2016-12-07 2018-06-07 Nicira, Inc. Service function chain (sfc) data communications with sfc data in virtual local area network identifier (vlan id) data fields
US20180176294A1 (en) 2015-06-26 2018-06-21 Hewlett Packard Enterprise Development Lp Server load balancing
US20180176177A1 (en) 2016-12-20 2018-06-21 Thomson Licensing Method for managing service chaining at a network equipment, corresponding network equipment
US20180184281A1 (en) 2015-06-10 2018-06-28 Soracom, Inc. Communication System And Communication Method For Providing IP Network Access To Wireless Terminals
US20180183764A1 (en) 2016-12-22 2018-06-28 Nicira, Inc. Collecting and processing contextual attributes on a host
US20180191600A1 (en) 2015-08-31 2018-07-05 Huawei Technologies Co., Ltd. Redirection of service or device discovery messages in software-defined networks
US20180198705A1 (en) 2015-07-02 2018-07-12 Zte Corporation Method and apparatus for implementing service function chain
US20180198791A1 (en) 2017-01-12 2018-07-12 Zscaler, Inc. Systems and methods for cloud-based service function chaining using security assertion markup language (saml) assertion
US20180198692A1 (en) 2006-12-29 2018-07-12 Kip Prod P1 Lp Multi-services application gateway and system employing the same
US20180205637A1 (en) 2015-09-14 2018-07-19 Huawei Technologies Co., Ltd. Method for obtaining information about service chain in cloud computing system and apparatus
US20180203736A1 (en) 2017-01-13 2018-07-19 Red Hat, Inc. Affinity based hierarchical container scheduling
US20180213040A1 (en) 2016-12-15 2018-07-26 Arm Ip Limited Enabling Communications Between Devices
US20180219762A1 (en) 2017-02-02 2018-08-02 Fujitsu Limited Seamless service function chaining across domains
US10042722B1 (en) 2015-06-23 2018-08-07 Juniper Networks, Inc. Service-chain fault tolerance in service virtualized environments
US20180227216A1 (en) 2017-02-06 2018-08-09 Silver Peak Systems, Inc. Multi-level Learning For Classifying Traffic Flows From First Packet Data
US20180234360A1 (en) 2017-02-16 2018-08-16 Netscout Systems, Inc Flow and time based reassembly of fragmented packets by ip protocol analyzers
US20180248755A1 (en) 2015-10-28 2018-08-30 Huawei Technologies Co., Ltd. Control traffic in software defined networks
US20180247082A1 (en) 2016-08-11 2018-08-30 Intel Corporation Secure Public Cloud with Protected Guest-Verified Host Control
US20180248713A1 (en) 2015-02-24 2018-08-30 Nokia Solutions And Networks Oy Integrated services processing for mobile networks
US20180278530A1 (en) 2017-03-24 2018-09-27 Intel Corporation Load balancing systems, devices, and methods
US10091276B2 (en) 2013-09-27 2018-10-02 Transvoyant, Inc. Computer-implemented systems and methods of analyzing data in an ad-hoc network for predictive decision-making
US20180288129A1 (en) 2017-03-29 2018-10-04 Ca, Inc. Introspection driven monitoring of multi-container applications
US20180295053A1 (en) 2017-04-10 2018-10-11 Cisco Technology, Inc. Service-function chaining using extended service-function chain proxy for service-function offload
US20180295036A1 (en) 2017-04-07 2018-10-11 Nicira, Inc. Application/context-based management of virtual networks using customizable workflows
US10104169B1 (en) 2013-12-18 2018-10-16 Amazon Technologies, Inc. Optimizing a load balancer configuration
US20180302242A1 (en) 2015-12-31 2018-10-18 Huawei Technologies Co., Ltd. Packet processing method, related apparatus, and nvo3 network system
US10135636B2 (en) 2014-03-25 2018-11-20 Huawei Technologies Co., Ltd. Method for generating forwarding information, controller, and service forwarding entity
US20180337849A1 (en) 2017-05-16 2018-11-22 Sonus Networks, Inc. Communications methods, apparatus and systems for providing scalable media services in sdn systems
US20180349212A1 (en) 2017-06-06 2018-12-06 Shuhao Liu System and method for inter-datacenter communication
US20180351874A1 (en) 2017-05-30 2018-12-06 At&T Intellectual Property I, L.P. Creating Cross-Service Chains of Virtual Network Functions in a Wide Area Network
US10158573B1 (en) 2017-05-01 2018-12-18 Barefoot Networks, Inc. Forwarding element with a data plane load balancer
US20190007382A1 (en) 2017-06-29 2019-01-03 Vmware, Inc. Ssh key validation in a hyper-converged computing environment
CN109213573A (en) 2018-09-14 2019-01-15 Zhuhai Guoxin Cloud Technology Co., Ltd. Device blocking method and apparatus for a container-based virtual desktop
US20190020684A1 (en) 2017-07-13 2019-01-17 Nicira, Inc. Systems and methods for storing a security parameter index in an options field of an encapsulation header
US20190020580A1 (en) 2017-07-14 2019-01-17 Nicira, Inc. Asymmetric network elements sharing an anycast address
US10187306B2 (en) 2016-03-24 2019-01-22 Cisco Technology, Inc. System and method for improved service chaining
US20190028384A1 (en) 2015-10-15 2019-01-24 Cisco Technology, Inc. Application identifier in service function chain metadata
US20190028577A1 (en) 2016-02-26 2019-01-24 Telefonaktiebolaget Lm Ericsson (Publ) Dynamic re-route in a redundant system of a packet network
US20190036819A1 (en) 2017-07-31 2019-01-31 Nicira, Inc. Use of hypervisor for active-active stateful network service cluster
US10200493B2 (en) 2011-10-17 2019-02-05 Microsoft Technology Licensing, Llc High-density multi-tenant distributed cache as a service
US10212071B2 (en) * 2016-12-21 2019-02-19 Nicira, Inc. Bypassing a load balancer in a return path of network traffic
US20190068500A1 (en) 2017-08-27 2019-02-28 Nicira, Inc. Performing in-line service in public cloud
US20190089679A1 (en) 2017-09-17 2019-03-21 Mellanox Technologies, Ltd. NIC with stateful connection tracking
US20190097838A1 (en) 2017-09-26 2019-03-28 Oracle International Corporation Virtual interface system and method for multi-tenant cloud networking
US10250501B2 (en) 2014-07-23 2019-04-02 Huawei Technologies Co., Ltd. Service packet forwarding method and apparatus
US20190102280A1 (en) 2017-09-30 2019-04-04 Oracle International Corporation Real-time debugging instances in a deployed container platform
US20190121961A1 (en) 2017-10-23 2019-04-25 L3 Technologies, Inc. Configurable internet isolation and security for laptops and similar devices
US20190124096A1 (en) 2016-07-29 2019-04-25 ShieldX Networks, Inc. Channel data encapsulation system and method for use with client-server data channels
US20190132220A1 (en) 2017-10-29 2019-05-02 Nicira, Inc. Service operation chaining
US10284390B2 (en) 2016-06-08 2019-05-07 Cisco Technology, Inc. Techniques for efficient service chain analytics
US20190140950A1 (en) 2016-07-01 2019-05-09 Huawei Technologies Co., Ltd. Method, apparatus, and system for forwarding packet in service function chaining sfc
US20190140863A1 (en) 2017-11-06 2019-05-09 Cisco Technology, Inc. Dataplane signaled bidirectional/symmetric service chain instantiation for efficient load balancing
US20190140947A1 (en) 2016-07-01 2019-05-09 Huawei Technologies Co., Ltd. Service Function Chaining SFC-Based Packet Forwarding Method, Apparatus, and System
US20190149518A1 (en) 2017-11-15 2019-05-16 Nicira, Inc. Packet induced revalidation of connection tracker
US20190149516A1 (en) 2017-11-15 2019-05-16 Nicira, Inc. Stateful connection policy filtering
US20190149512A1 (en) 2017-11-15 2019-05-16 Nicira, Inc. Third-party service chaining using packet encapsulation in a flow-based forwarding element
US20190166045A1 (en) 2016-07-27 2019-05-30 Zte Corporation Packet forwarding method and device
US20190173851A1 (en) 2017-12-04 2019-06-06 Nicira, Inc. Scaling gateway to gateway traffic using flow hash
US20190173778A1 (en) 2016-08-26 2019-06-06 Telefonaktiebolaget Lm Ericsson (Publ) Improving sf proxy performance in sdn networks
US20190173850A1 (en) 2017-12-04 2019-06-06 Nicira, Inc. Scaling gateway to gateway traffic using flow hash
US10333822B1 (en) 2017-05-23 2019-06-25 Cisco Technology, Inc. Techniques for implementing loose hop service function chains
US10341427B2 (en) * 2012-12-06 2019-07-02 A10 Networks, Inc. Forwarding policies on a virtual service network
US20190230126A1 (en) 2018-01-24 2019-07-25 Nicira, Inc. Flow-based forwarding element configuration
US20190229937A1 (en) 2018-01-25 2019-07-25 Juniper Networks, Inc. Multicast join message processing by multi-homing devices in an ethernet vpn
US20190238364A1 (en) 2018-01-26 2019-08-01 Nicira, Inc. Specifying and utilizing paths through a network
US20190238363A1 (en) 2018-01-26 2019-08-01 Nicira, Inc. Specifying and utilizing paths through a network
WO2019147316A1 (en) 2018-01-26 2019-08-01 Nicira, Inc. Specifying and utilizing paths through a network
US10375155B1 (en) 2013-02-19 2019-08-06 F5 Networks, Inc. System and method for achieving hardware acceleration for asymmetric flow connections
WO2019157955A1 (en) 2018-02-13 2019-08-22 Huawei Technologies Co., Ltd. Device access method, related platform and computer storage medium
US20190268384A1 (en) 2016-08-05 2019-08-29 Alcatel Lucent Security-on-demand architecture
WO2019168532A1 (en) 2018-03-01 2019-09-06 Google Llc High availability multi-single-tenant services
US20190286475A1 (en) 2018-03-14 2019-09-19 Microsoft Technology Licensing, Llc Opportunistic virtual machine migration
US20190306036A1 (en) 2018-03-27 2019-10-03 Nicira, Inc. Detecting failure of layer 2 service using broadcast messages
US20190306086A1 (en) 2018-03-27 2019-10-03 Nicira, Inc. Incorporating layer 2 service between two interfaces of gateway device
US20190306063A1 (en) 2018-03-30 2019-10-03 Yuuta Hamada Communication system and upload method
US20190342175A1 (en) 2018-05-02 2019-11-07 Nicira, Inc. Application of profile setting groups to logical network entities
US10484334B1 (en) 2013-02-26 2019-11-19 Zentera Systems, Inc. Distributed firewall security system that extends across different cloud computing networks
WO2019226327A1 (en) 2018-05-23 2019-11-28 Microsoft Technology Licensing, Llc Data platform fabric
US20190379578A1 (en) 2018-06-11 2019-12-12 Nicira, Inc. Configuring a compute node to perform services on a host
US20190379579A1 (en) 2018-06-11 2019-12-12 Nicira, Inc. Providing shared memory for access by multiple network service containers executing on single service machine
US20190377604A1 (en) 2018-06-11 2019-12-12 Nuweba Labs Ltd. Scalable function as a service platform
US20200007388A1 (en) 2018-06-29 2020-01-02 Cisco Technology, Inc. Network traffic optimization using in-situ notification system
US10547508B1 (en) 2016-06-29 2020-01-28 Juniper Networks, Inc. Network services using pools of pre-configured virtualized network functions and service chains
US20200036629A1 (en) 2015-06-15 2020-01-30 Telefonaktiebolaget Lm Ericsson (Publ) Method and network nodes for scalable mapping of tags to service function chain encapsulation headers
US10554484B2 (en) 2015-06-26 2020-02-04 Nicira, Inc. Control plane integration with hardware switches
US20200059761A1 (en) 2018-08-17 2020-02-20 Huawei Technologies Co., Ltd. Systems and methods for enabling private communication within a user equipment group
US20200067828A1 (en) 2018-08-23 2020-02-27 Agora Lab, Inc. Large-Scale Real-Time Multimedia Communications
US20200073739A1 (en) 2018-08-28 2020-03-05 Amazon Technologies, Inc. Constraint solver execution service and infrastructure therefor
WO2020046686A1 (en) 2018-09-02 2020-03-05 Vmware, Inc. Service insertion at logical network gateway
US20200076734A1 (en) 2018-09-02 2020-03-05 Vmware, Inc. Redirection of data messages at logical network gateway
US20200076684A1 (en) 2018-09-02 2020-03-05 Vmware, Inc. Service insertion at logical network gateway
US20200084141A1 (en) 2018-09-12 2020-03-12 Corsa Technology Inc. Methods and systems for network security universal control point
US10609122B1 (en) 2015-06-29 2020-03-31 Amazon Technologies, Inc. Instance backed building or place
US10623309B1 (en) 2016-12-19 2020-04-14 International Business Machines Corporation Rule processing of packets
US10637750B1 (en) 2017-10-18 2020-04-28 Juniper Networks, Inc. Dynamically modifying a service chain based on network traffic information
US20200136960A1 (en) 2018-10-27 2020-04-30 Cisco Technology, Inc. Software version aware networking
US10645060B2 (en) 2015-05-28 2020-05-05 Xi'an Zhongxing New Software Co., Ltd Method, device and system for forwarding message
US10645201B2 (en) 2018-07-31 2020-05-05 Vmware, Inc. Packet handling during service virtualized computing instance migration
US20200145331A1 (en) 2018-11-02 2020-05-07 Cisco Technology, Inc., A California Corporation Using In-Band Operations Data to Signal Packet Processing Departures in a Network
US20200162352A1 (en) 2011-07-15 2020-05-21 Inetco Systems Limited Method and system for monitoring performance of an application system
US20200162318A1 (en) 2018-11-20 2020-05-21 Cisco Technology, Inc. Seamless automation of network device migration to and from cloud managed systems
US20200195711A1 (en) 2018-12-17 2020-06-18 At&T Intellectual Property I, L.P. Model-based load balancing for network data plane
US20200204492A1 (en) 2018-12-21 2020-06-25 Juniper Networks, Inc. Facilitating flow symmetry for service chains in a computer network
US20200220805A1 (en) 2019-01-03 2020-07-09 Citrix Systems, Inc. Method for optimal path selection for data traffic undergoing high processing or queuing delay
US20200272501A1 (en) 2019-02-22 2020-08-27 Vmware, Inc. Specifying service chains
US20200274801A1 (en) 2019-02-22 2020-08-27 Vmware, Inc. Service path computation for service insertion
US20200287962A1 (en) 2019-03-05 2020-09-10 Cisco Technology, Inc. Load balancing in a distributed system
US20200344088A1 (en) 2019-04-29 2020-10-29 Vmware, Inc. Network interoperability support for non-virtualized entities
US10834004B2 (en) 2018-09-24 2020-11-10 Netsia, Inc. Path determination method and system for delay-optimized service function chaining
US20200358696A1 (en) 2018-02-01 2020-11-12 Nokia Solutions And Networks Oy Method and device for interworking between service function chain domains
US10853111B1 (en) 2015-09-30 2020-12-01 Amazon Technologies, Inc. Virtual machine instance migration feedback
US20200382412A1 (en) 2019-05-31 2020-12-03 Microsoft Technology Licensing, Llc Multi-Cast Support for a Virtual Network
US20200382420A1 (en) 2019-05-31 2020-12-03 Juniper Networks, Inc. Inter-network service chaining
US20200389401A1 (en) 2019-06-06 2020-12-10 Cisco Technology, Inc. Conditional composition of serverless network functions using segment routing
CN112181632A (en) 2019-07-02 2021-01-05 Hewlett Packard Enterprise Development LP Deploying service containers in an adapter device
US20210011812A1 (en) 2019-07-10 2021-01-14 Commvault Systems, Inc. Preparing containerized applications for backup using a backup services container and a backup services container-orchestration pod
US20210029088A1 (en) 2015-04-13 2021-01-28 Nicira, Inc. Method and system of establishing a virtual private network in a cloud service for branch networking
US10938668B1 (en) 2016-09-30 2021-03-02 Amazon Technologies, Inc. Safe deployment using versioned hash rings
US10938716B1 (en) 2017-11-29 2021-03-02 Riverbed Technology, Inc. Preserving policy with path selection
WO2021041440A1 (en) 2019-08-26 2021-03-04 Microsoft Technology Licensing, Llc Computer device including nested network interface controller switches
US20210073736A1 (en) 2019-09-10 2021-03-11 Alawi Holdings LLC Computer implemented system and associated methods for management of workplace incident reporting
US20210120080A1 (en) 2019-10-16 2021-04-22 Vmware, Inc. Load balancing for third party services
US10997177B1 (en) 2018-07-27 2021-05-04 Workday, Inc. Distributed real-time partitioned MapReduce for a data fabric
US20210136141A1 (en) 2019-10-30 2021-05-06 Vmware, Inc. Distributed service chain across multiple clouds
US20210136147A1 (en) 2019-10-31 2021-05-06 Keysight Technologies, Inc. Methods, systems and computer readable media for self-replicating cluster appliances
WO2021086462A1 (en) 2019-10-30 2021-05-06 Vmware, Inc. Distributed service chain across multiple clouds
US20210136140A1 (en) 2019-10-30 2021-05-06 Vmware, Inc. Using service containers to implement service chains
US20210135992A1 (en) 2019-10-30 2021-05-06 Vmware, Inc. Distributed fault tolerant service chain
US11055273B1 (en) 2016-11-04 2021-07-06 Amazon Technologies, Inc. Software container event monitoring systems
US20210218587A1 (en) 2020-01-13 2021-07-15 Vmware, Inc. Service insertion for multicast traffic at boundary
US20210227042A1 (en) 2020-01-20 2021-07-22 Vmware, Inc. Method of adjusting service function chains to improve network performance
US20210227041A1 (en) 2020-01-20 2021-07-22 Vmware, Inc. Method of network performance visualization of service function chains
US20210240734A1 (en) 2020-02-03 2021-08-05 Microstrategy Incorporated Deployment of container-based computer environments
US20210266295A1 (en) 2020-02-25 2021-08-26 Uatc, Llc Deterministic Container-Based Network Configurations for Autonomous Vehicles
US20210271565A1 (en) 2020-03-02 2021-09-02 Commvault Systems, Inc. Platform-agnostic containerized application data protection
US20210314310A1 (en) 2020-04-02 2021-10-07 Vmware, Inc. Secured login management to container image registry in a virtualized computer system
US20210311758A1 (en) 2020-04-02 2021-10-07 Vmware, Inc. Management of a container image registry in a virtualized computer system
US11153190B1 (en) 2021-01-21 2021-10-19 Zscaler, Inc. Metric computation for traceroute probes using cached data to prevent a surge on destination servers
US11157304B2 (en) 2019-11-01 2021-10-26 Dell Products L.P. System for peering container clusters running on different container orchestration systems
US20210349767A1 (en) 2020-05-05 2021-11-11 Red Hat, Inc. Migrating virtual machines between computing environments
US11184397B2 (en) 2018-08-20 2021-11-23 Vmware, Inc. Network policy migration to a public cloud
US20210377160A1 (en) 2018-01-12 2021-12-02 Telefonaktiebolaget Lm Ericsson (Publ) Mechanism for control message redirection for sdn control channel failures
US20220019698A1 (en) 2016-08-11 2022-01-20 Intel Corporation Secure Public Cloud with Protected Guest-Verified Host Control
US20220060467A1 (en) 2020-08-24 2022-02-24 Just One Technologies LLC Systems and methods for phone number certification and verification

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1538777A1 (en) * 2003-12-01 2005-06-08 Alcatel Configuration management device for a self-configurable network equipment of a communication network provided with equipment configuration parameter consistency analysis module
GB0508350D0 (en) 2005-04-26 2005-06-01 Great Lakes Chemical Europ Stabilized crosslinked polyolefin compositions
WO2011140028A1 (en) * 2010-05-03 2011-11-10 Brocade Communications Systems, Inc. Virtual cluster switching
JP2012129648A (en) * 2010-12-13 2012-07-05 Fujitsu Ltd Server device, management device, transfer destination address setting program, and virtual network system
US11743172B2 (en) 2020-04-06 2023-08-29 Vmware, Inc. Using multiple transport mechanisms to provide services at the edge of a network

Patent Citations (769)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6154448A (en) 1997-06-20 2000-11-28 Telefonaktiebolaget Lm Ericsson (Publ) Next hop loopback
US6006264A (en) 1997-08-01 1999-12-21 Arrowpoint Communications, Inc. Method and system for directing a flow between a client and a server
US6104700A (en) 1997-08-29 2000-08-15 Extreme Networks Policy based quality of service
US6779030B1 (en) 1997-10-06 2004-08-17 Worldcom, Inc. Intelligent network
WO1999018534A2 (en) 1997-10-06 1999-04-15 Web Balance, Inc. System for balancing loads among network servers
US20050021713A1 (en) 1997-10-06 2005-01-27 Andrew Dugan Intelligent network
US20140330983A1 (en) 1998-07-15 2014-11-06 Radware Ltd. Load balancing
US20120303784A1 (en) 1998-07-15 2012-11-29 Radware, Ltd. Load balancing
US20090271586A1 (en) 1998-07-31 2009-10-29 Kom Networks Inc. Method and system for providing restricted access to a storage medium
US6826694B1 (en) * 1998-10-22 2004-11-30 At&T Corp. High resolution access control
US20040210670A1 (en) 1999-03-05 2004-10-21 Nikolaos Anerousis System, method and apparatus for network service load and reliability management
US20050249199A1 (en) 1999-07-02 2005-11-10 Cisco Technology, Inc., A California Corporation Load balancing using distributed forwarding agents with application based feedback for different virtual machines
US7013389B1 (en) 1999-09-29 2006-03-14 Cisco Technology, Inc. Method and apparatus for creating a secure communication channel among multiple event service nodes
US20020010783A1 (en) 1999-12-06 2002-01-24 Leonard Primak System and method for enhancing operation of a web server cluster
US6880089B1 (en) 2000-03-31 2005-04-12 Avaya Technology Corp. Firewall clustering for multiple network servers
US20120137004A1 (en) 2000-07-17 2012-05-31 Smith Philip S Method and System for Operating a Commissioned E-Commerce Service Provider
US20110016348A1 (en) 2000-09-01 2011-01-20 Pace Charles P System and method for bridging assets to network nodes on multi-tiered networks
US7818452B2 (en) 2000-09-13 2010-10-19 Fortinet, Inc. Distributed virtual system to support managed, network-based services
US6985956B2 (en) 2000-11-02 2006-01-10 Sun Microsystems, Inc. Switching system
US20020078370A1 (en) 2000-12-18 2002-06-20 Tahan Thomas E. Controlled information flow between communities via a firewall
US7487250B2 (en) 2000-12-19 2009-02-03 Cisco Technology, Inc. Methods and apparatus for directing a flow of data between a client and multiple servers
US20020097724A1 (en) 2001-01-09 2002-07-25 Matti Halme Processing of data packets within a network element cluster
US20030188026A1 (en) 2001-05-18 2003-10-02 Claude Denton Multi-protocol networking processor with data traffic support spanning local, regional and wide area networks
US6772211B2 (en) 2001-06-18 2004-08-03 Transtech Networks Usa, Inc. Content-aware web switch without delayed binding and methods thereof
US20020194350A1 (en) 2001-06-18 2002-12-19 Lu Leonard L. Content-aware web switch without delayed binding and methods thereof
US20030105812A1 (en) * 2001-08-09 2003-06-05 Gigamedia Access Corporation Hybrid system architecture for secure peer-to-peer-communications
US7406540B2 (en) 2001-10-01 2008-07-29 International Business Machines Corporation Method and apparatus for content-aware web switching
US20030065711A1 (en) 2001-10-01 2003-04-03 International Business Machines Corporation Method and apparatus for content-aware web switching
US7209977B2 (en) 2001-10-01 2007-04-24 International Business Machines Corporation Method and apparatus for content-aware web switching
US20030093481A1 (en) 2001-11-09 2003-05-15 Julian Mitchell Middlebox control
US20030097429A1 (en) 2001-11-20 2003-05-22 Wen-Che Wu Method of forming a website server cluster and structure thereof
US7379465B2 (en) 2001-12-07 2008-05-27 Nortel Networks Limited Tunneling scheme optimized for use in virtual private networks
US7239639B2 (en) 2001-12-27 2007-07-03 3Com Corporation System and method for dynamically constructing packet classification rules
US20120185588A1 (en) 2002-01-30 2012-07-19 Brett Error Distributed Data Collection and Aggregation
US20060233155A1 (en) 2002-03-19 2006-10-19 Srivastava Sunil K Server load balancing using IP option field approach to identify route to selected server
US20030236813A1 (en) 2002-06-24 2003-12-25 Abjanic John B. Method and apparatus for off-load processing of a message stream
US20100223621A1 (en) 2002-08-01 2010-09-02 Foundry Networks, Inc. Statistical tracking for global server load balancing
US20040066769A1 (en) 2002-10-08 2004-04-08 Kalle Ahmavaara Method and system for establishing a connection via an access network
CN1689369A (en) 2002-10-08 2005-10-26 Nokia Corporation Method and system for establishing a connection via an access network
US7480737B2 (en) 2002-10-25 2009-01-20 International Business Machines Corporation Technique for addressing a cluster of network servers
US20040215703A1 (en) 2003-02-18 2004-10-28 Xiping Song System supporting concurrent operation of multiple executable application operation sessions
US20080239991A1 (en) 2003-03-13 2008-10-02 David Lee Applegate Method and apparatus for efficient routing of variable traffic
US8190767B1 (en) 2003-06-24 2012-05-29 Nvidia Corporation Data structures and state tracking for network protocol processing
US20090299791A1 (en) 2003-06-25 2009-12-03 Foundry Networks, Inc. Method and system for management of licenses
US20050091396A1 (en) 2003-08-05 2005-04-28 Chandrasekharan Nilakantan Method and apparatus for achieving dynamic capacity and high availability in multi-stage data networks using adaptive flow-based routing
US20050089327A1 (en) 2003-10-22 2005-04-28 Shlomo Ovadia Dynamic route discovery for optical switched networks
US7447775B1 (en) 2003-11-07 2008-11-04 Cisco Technology, Inc. Methods and apparatus for supporting transmission of streaming data
US20050114648A1 (en) 2003-11-24 2005-05-26 Cisco Technology, Inc., A Corporation Of California Dual mode firewall
US20050114429A1 (en) 2003-11-25 2005-05-26 Caccavale Frank S. Method and apparatus for load balancing of distributed processing units based on performance metrics
US20100257278A1 (en) 2003-12-10 2010-10-07 Foundry Networks, Inc. Method and apparatus for load balancing based on packet header content
US20050132030A1 (en) 2003-12-10 2005-06-16 Aventail Corporation Network appliance
US20080049619A1 (en) * 2004-02-09 2008-02-28 Adam Twiss Methods and Apparatus for Routing in a Network
US8223634B2 (en) 2004-02-18 2012-07-17 Fortinet, Inc. Mechanism for implementing load balancing in a network
US20050198200A1 (en) 2004-03-05 2005-09-08 Nortel Networks Limited Method and apparatus for facilitating fulfillment of web-service requests on a communication network
US8484348B2 (en) 2004-03-05 2013-07-09 Rockstar Consortium Us Lp Method and apparatus for facilitating fulfillment of web-service requests on a communication network
US9178709B2 (en) 2004-03-30 2015-11-03 Panasonic Intellectual Property Management Co., Ltd. Communication system and method for distributing content
US20080279196A1 (en) 2004-04-06 2008-11-13 Robert Friskney Differential Forwarding in Address-Based Carrier Networks
JP2005311863A (en) 2004-04-23 2005-11-04 Hitachi Ltd Traffic distribution control method, controller and network system
US20100100616A1 (en) 2004-09-14 2010-04-22 3Com Corporation Method and apparatus for controlling traffic between different entities on a network
US20060069776A1 (en) 2004-09-15 2006-03-30 Shim Choon B System and method for load balancing a communications network
US7698458B1 (en) 2004-10-29 2010-04-13 Akamai Technologies, Inc. Load balancing network traffic using race methods
US20060112297A1 (en) 2004-11-17 2006-05-25 Raytheon Company Fault tolerance and recovery in a high-performance computing (HPC) system
US20060130133A1 (en) 2004-12-14 2006-06-15 International Business Machines Corporation Automated generation of configuration elements of an information technology system
US20060195896A1 (en) 2004-12-22 2006-08-31 Wake Forest University Method, systems, and computer program products for implementing function-parallel network firewall
US20060155862A1 (en) 2005-01-06 2006-07-13 Hari Kathi Data traffic load balancing based on application layer messages
US7649890B2 (en) 2005-02-22 2010-01-19 Hitachi Communication Technologies, Ltd. Packet forwarding apparatus and communication bandwidth control method
US7499463B1 (en) 2005-04-22 2009-03-03 Sun Microsystems, Inc. Method and apparatus for enforcing bandwidth utilization of a virtual serialization queue
US20110225293A1 (en) 2005-07-22 2011-09-15 Yogesh Chunilal Rathod System and method for service based social network
US20070061492A1 (en) 2005-08-05 2007-03-15 Red Hat, Inc. Zero-copy network i/o for virtual hosts
US20070121615A1 (en) 2005-11-28 2007-05-31 Ofer Weill Method and apparatus for self-learning of VPNS from combination of unidirectional tunnels in MPLS/VPN networks
US20070291773A1 (en) * 2005-12-06 2007-12-20 Shabbir Khan Digital object routing based on a service request
US20070153782A1 (en) 2005-12-30 2007-07-05 Gregory Fletcher Reliable, high-throughput, high-performance transport and routing mechanism for arbitrary data flows
US20090235325A1 (en) 2006-03-02 2009-09-17 Theo Dimitrakos Message processing methods and systems
US20070260750A1 (en) 2006-03-09 2007-11-08 Microsoft Corporation Adaptable data connector
US20070214282A1 (en) 2006-03-13 2007-09-13 Microsoft Corporation Load balancing via rotation of cluster identity
US20070248091A1 (en) 2006-04-24 2007-10-25 Mohamed Khalid Methods and apparatus for tunnel stitching in a network
US9900410B2 (en) 2006-05-01 2018-02-20 Nicira, Inc. Private ethernet overlay networks over a shared ethernet in a virtual environment
US20150071301A1 (en) 2006-05-01 2015-03-12 Vmware, Inc. Private ethernet overlay networks over a shared ethernet in a virtual environment
US20070288615A1 (en) 2006-06-09 2007-12-13 Cisco Technology, Inc. Technique for dispatching data packets to service control engines
US20080005293A1 (en) 2006-06-30 2008-01-03 Telefonaktiebolaget Lm Ericsson (Publ) Router and method for server load balancing
US20080046400A1 (en) 2006-08-04 2008-02-21 Shi Justin Y Apparatus and method of optimizing database clustering with zero transaction loss
US20080031263A1 (en) 2006-08-07 2008-02-07 Cisco Technology, Inc. Method and apparatus for load balancing over virtual network links
US8707383B2 (en) 2006-08-16 2014-04-22 International Business Machines Corporation Computer workload management with security policy enforcement
US20080049786A1 (en) 2006-08-22 2008-02-28 Maruthi Ram Systems and Methods for Providing Dynamic Spillover of Virtual Servers Based on Bandwidth
US20080049614A1 (en) 2006-08-23 2008-02-28 Peter John Briscoe Capacity Management for Data Networks
US20080072305A1 (en) 2006-09-14 2008-03-20 Quova, Inc. System and method of middlebox detection and characterization
US20080084819A1 (en) 2006-10-04 2008-04-10 Vladimir Parizhsky Ip flow-based load balancing over a plurality of wireless network links
US20080095153A1 (en) 2006-10-19 2008-04-24 Fujitsu Limited Apparatus and computer product for collecting packet information
US20080104608A1 (en) 2006-10-27 2008-05-01 Hyser Chris D Starting up at least one virtual machine in a physical machine by a load balancer
US8849746B2 (en) * 2006-12-19 2014-09-30 Teradata Us, Inc. High-throughput extract-transform-load (ETL) of program events for subsequent analysis
US20180198692A1 (en) 2006-12-29 2018-07-12 Kip Prod P1 Lp Multi-services application gateway and system employing the same
WO2008095010A1 (en) 2007-02-01 2008-08-07 The Board Of Trustees Of The Leland Stanford Jr. University Secure network switching infrastructure
US20080195755A1 (en) 2007-02-12 2008-08-14 Ying Lu Method and apparatus for load balancing with server state change awareness
US20110010578A1 (en) 2007-02-22 2011-01-13 Agundez Dominguez Jose Luis Consistent and fault tolerant distributed hash table (dht) overlay network
US20080225714A1 (en) 2007-03-12 2008-09-18 Telefonaktiebolaget Lm Ericsson (Publ) Dynamic load balancing
US20080247396A1 (en) 2007-04-06 2008-10-09 Ludovic Hazard Method, system and computer processing an ip packet, routing a structured data carrier, preventing broadcast storms, load-balancing and converting a full broadcast ip packet
US20080276085A1 (en) 2007-05-02 2008-11-06 Cisco Technology, Inc. Allowing differential processing of encrypted tunnels
US8230493B2 (en) 2007-05-02 2012-07-24 Cisco Technology, Inc. Allowing differential processing of encrypted tunnels
US7898959B1 (en) 2007-06-28 2011-03-01 Marvell Israel (Misl) Ltd. Method for weighted load-balancing among network interfaces
US20090003349A1 (en) 2007-06-29 2009-01-01 Martin Havemann Network system having an extensible forwarding plane
US20090003375A1 (en) 2007-06-29 2009-01-01 Martin Havemann Network system having an extensible control plane
US20090003364A1 (en) 2007-06-29 2009-01-01 Kerry Fendick Open platform architecture for integrating multiple heterogeneous network functions
US20090019135A1 (en) 2007-07-09 2009-01-15 Anand Eswaran Method, Network and Computer Program For Processing A Content Request
US20090037713A1 (en) 2007-08-03 2009-02-05 Cisco Technology, Inc. Operation, administration and maintenance (oam) for chains of services
US20090063706A1 (en) 2007-08-30 2009-03-05 International Business Machines Corporation Combined Layer 2 Virtual MAC Address with Layer 3 IP Address Routing
US8201219B2 (en) 2007-09-24 2012-06-12 Bridgewater Systems Corp. Systems and methods for server load balancing using authentication, authorization, and accounting protocols
US8874789B1 (en) 2007-09-28 2014-10-28 Trend Micro Incorporated Application based routing arrangements and method thereof
US20100265824A1 (en) 2007-11-09 2010-10-21 Blade Network Technologies, Inc Session-less Load Balancing of Client Traffic Across Servers in a Server Group
US20090129271A1 (en) 2007-11-19 2009-05-21 Rajesh Ramankutty Providing services to packet flows in a network
US20110058563A1 (en) 2007-12-17 2011-03-10 Girish Prabhakar Saraph Architectural framework of communication network and a method of establishing qos connection
US20090172666A1 (en) 2007-12-31 2009-07-02 Netapp, Inc. System and method for automatic storage load balancing in virtual server environments
US20090199268A1 (en) 2008-02-06 2009-08-06 Qualcomm, Incorporated Policy control for encapsulated data flows
US8175863B1 (en) 2008-02-13 2012-05-08 Quest Software, Inc. Systems and methods for analyzing performance of virtual environments
US8521879B1 (en) 2008-03-11 2013-08-27 United Services Automobile Association (USAA) Systems and methods for a load balanced interior gateway protocol intranet
US20090238084A1 (en) 2008-03-18 2009-09-24 Cisco Technology, Inc. Network monitoring using a proxy
US20090249472A1 (en) 2008-03-27 2009-10-01 Moshe Litvin Hierarchical firewalls
US20100332595A1 (en) 2008-04-04 2010-12-30 David Fullagar Handling long-tail content in a content delivery network (cdn)
US20110035494A1 (en) * 2008-04-15 2011-02-10 Blade Network Technologies Network virtualization for a virtualized server data center environment
US20090265467A1 (en) 2008-04-17 2009-10-22 Radware, Ltd. Method and System for Load Balancing over a Cluster of Authentication, Authorization and Accounting (AAA) Servers
US8339959B1 (en) 2008-05-20 2012-12-25 Juniper Networks, Inc. Streamlined packet forwarding using dynamic filters for routing and security in a shared forwarding plane
US20090300210A1 (en) 2008-05-28 2009-12-03 James Michael Ferris Methods and systems for load balancing in cloud-based networks
US20090307334A1 (en) 2008-06-09 2009-12-10 Microsoft Corporation Data center without structural bottlenecks
US20090303880A1 (en) 2008-06-09 2009-12-10 Microsoft Corporation Data center interconnect and traffic engineering
US20090327464A1 (en) 2008-06-26 2009-12-31 International Business Machines Corporation Load Balanced Data Processing Performed On An Application Message Transmitted Between Compute Nodes
US20100031360A1 (en) 2008-07-31 2010-02-04 Arvind Seshadri Systems and methods for preventing unauthorized modification of an operating system
US20100036903A1 (en) 2008-08-11 2010-02-11 Microsoft Corporation Distributed load balancer
US9503530B1 (en) 2008-08-21 2016-11-22 United Services Automobile Association (Usaa) Preferential loading in data centers
US8873399B2 (en) 2008-09-03 2014-10-28 Nokia Siemens Networks Oy Gateway network element, a method, and a group of load balanced access points configured for load balancing in a communications network
US20110164504A1 (en) 2008-09-03 2011-07-07 Nokia Siemens Networks Oy Gateway network element, a method, and a group of load balanced access points configured for load balancing in a communications network
US20120287789A1 (en) 2008-10-24 2012-11-15 Juniper Networks, Inc. Flow consistent dynamic load balancing
US20100131638A1 (en) 2008-11-25 2010-05-27 Ravi Kondamuru Systems and Methods for GSLB Remote Service Monitoring
US8078903B1 (en) 2008-11-25 2011-12-13 Cisco Technology, Inc. Automatic load-balancing and seamless failover of data flows in storage media encryption (SME)
US9191293B2 (en) 2008-12-22 2015-11-17 Telefonaktiebolaget L M Ericsson (Publ) Method and device for handling of connections between a client and a server via a communication network
US20100165985A1 (en) 2008-12-29 2010-07-01 Cisco Technology, Inc. Service Selection Mechanism In Service Insertion Architecture Data Plane
US8224885B1 (en) 2009-01-26 2012-07-17 Teradici Corporation Method and system for remote computing session management
US7948986B1 (en) 2009-02-02 2011-05-24 Juniper Networks, Inc. Applying services within MPLS networks
US20100223364A1 (en) 2009-02-27 2010-09-02 Yottaa Inc System and method for network traffic management and load balancing
US20100235915A1 (en) 2009-03-12 2010-09-16 Nasir Memon Using host symptoms, host roles, and/or host reputation for detection of host infection
US8094575B1 (en) 2009-03-24 2012-01-10 Juniper Networks, Inc. Routing protocol extension for network acceleration service-aware path selection within computer networks
US8266261B2 (en) 2009-03-27 2012-09-11 Nec Corporation Server system, collective server apparatus, and MAC address management method
US20100254385A1 (en) 2009-04-07 2010-10-07 Cisco Technology, Inc. Service Insertion Architecture (SIA) in a Virtual Private Network (VPN) Aware Network
EP2426956A1 (en) 2009-04-27 2012-03-07 China Mobile Communications Corporation Data transferring method, system and related network device based on proxy mobile (pm) ipv6
US20120140719A1 (en) 2009-04-27 2012-06-07 Min Hui Data transmission method, system and related network device based on proxy mobile (pm) ipv6
US20100281482A1 (en) 2009-04-30 2010-11-04 Microsoft Corporation Application efficiency engine
US20110022812A1 (en) 2009-05-01 2011-01-27 Van Der Linden Rob Systems and methods for establishing a cloud bridge between virtual storage resources
US9479358B2 (en) 2009-05-13 2016-10-25 International Business Machines Corporation Managing graphics load balancing strategies
CN101594358A (en) 2009-06-29 2009-12-02 Beihang University Layer-3 switching method, apparatus, system and host
US20110022695A1 (en) 2009-07-27 2011-01-27 Vmware, Inc. Management and Implementation of Enclosed Local Networks in a Virtual Lab
US20110040893A1 (en) 2009-08-14 2011-02-17 Broadcom Corporation Distributed Internet caching via multiple node caching management
US20110055845A1 (en) 2009-08-31 2011-03-03 Thyagarajan Nandagopal Technique for balancing loads in server clusters
US8804746B2 (en) 2009-09-17 2014-08-12 Zte Corporation Network based on identity identifier and location separation architecture backbone network, and network element thereof
EP2466985A1 (en) 2009-09-17 2012-06-20 ZTE Corporation Network based on identity identifier and location separation architecture, backbone network, and network element thereof
US20120176932A1 (en) 2009-09-17 2012-07-12 Zte Corporation Communication method, method for forwarding data message during the communication process and communication node thereof
US8451735B2 (en) 2009-09-28 2013-05-28 Symbol Technologies, Inc. Systems and methods for dynamic load balancing in a wireless network
US20110090912A1 (en) 2009-10-15 2011-04-21 International Business Machines Corporation Steering Data Communications Packets Among Service Applications With Server Selection Modulus Values
US8811412B2 (en) 2009-10-15 2014-08-19 International Business Machines Corporation Steering data communications packets among service applications with server selection modulus values
US20120023231A1 (en) 2009-10-23 2012-01-26 Nec Corporation Network system, control method for the same, and controller
CN101729412A (en) 2009-11-05 2010-06-09 北京超图软件股份有限公司 Distributed hierarchical cluster method and system for geographic information services
US9277412B2 (en) 2009-11-16 2016-03-01 Interdigital Patent Holdings, Inc. Coordination of silent periods for dynamic spectrum manager (DSM)
US20120239804A1 (en) 2009-11-26 2012-09-20 Chengdu Huawei Symantec Technologies Co., Ltd Method, device and system for backup
US8832683B2 (en) 2009-11-30 2014-09-09 Red Hat Israel, Ltd. Using memory-related metrics of host machine for triggering load balancing that migrate virtual machine
US8615009B1 (en) 2010-01-25 2013-12-24 Juniper Networks, Inc. Interface for extending service capabilities of a network device
US20110295991A1 (en) 2010-02-01 2011-12-01 Nec Corporation Network system, controller, and network control method
US20110194563A1 (en) 2010-02-11 2011-08-11 Vmware, Inc. Hypervisor Level Distributed Load-Balancing
US20110211463A1 (en) 2010-02-26 2011-09-01 Eldad Matityahu Add-on module and methods thereof
US8996610B1 (en) 2010-03-15 2015-03-31 Salesforce.Com, Inc. Proxy system, method and computer program product for utilizing an identifier of a request to route the request to a networked device
US8971345B1 (en) 2010-03-22 2015-03-03 Riverbed Technology, Inc. Method and apparatus for scheduling a heterogeneous communication flow
US9225659B2 (en) 2010-03-22 2015-12-29 Riverbed Technology, Inc. Method and apparatus for scheduling a heterogeneous communication flow
US20110235508A1 (en) 2010-03-26 2011-09-29 Deepak Goel Systems and methods for link load balancing on a multi-core device
US20110261811A1 (en) 2010-04-26 2011-10-27 International Business Machines Corporation Load-balancing via modulus distribution and tcp flow redirection due to server overload
US20110271007A1 (en) 2010-04-28 2011-11-03 Futurewei Technologies, Inc. System and Method for a Context Layer Switch
US20110268118A1 (en) 2010-04-30 2011-11-03 Michael Schlansker Method for routing data packets using vlans
US20110276695A1 (en) 2010-05-06 2011-11-10 Juliano Maldaner Continuous upgrading of computers in a load balanced environment
US20110283013A1 (en) 2010-05-14 2011-11-17 Grosser Donald B Methods, systems, and computer readable media for stateless load balancing of network traffic flows
US8892706B1 (en) 2010-06-21 2014-11-18 Vmware, Inc. Private ethernet overlay networks over a shared ethernet in a virtual environment
US20180248986A1 (en) 2010-06-21 2018-08-30 Nicira, Inc. Private ethernet overlay networks over a shared ethernet in a virtual environment
US20110317708A1 (en) 2010-06-28 2011-12-29 Alcatel-Lucent Usa, Inc. Quality of service control for mpls user access
US20120331188A1 (en) 2010-06-29 2012-12-27 Patrick Brian Riordan Techniques for path selection
US20120014386A1 (en) 2010-06-29 2012-01-19 Futurewei Technologies, Inc. Delegate Gateways and Proxy for Target Hosts in Large Layer 2 and Address Resolution with Duplicated Internet Protocol Addresses
US20120005265A1 (en) 2010-06-30 2012-01-05 Sony Corporation Information processing device, content providing method and program
US20120011281A1 (en) 2010-07-07 2012-01-12 Fujitsu Limited Content conversion system and content conversion server
US20120195196A1 (en) 2010-08-11 2012-08-02 Rajat Ghai System and method for QoS control of IP flows in mobile networks
US20120054266A1 (en) 2010-09-01 2012-03-01 Kazerani Alexander A Optimized Content Distribution Based on Metrics Derived from the End User
US20130227097A1 (en) 2010-09-14 2013-08-29 Hitachi, Ltd. Multi-tenancy information processing system, management server, and configuration management method
US20120089664A1 (en) 2010-10-12 2012-04-12 Sap Portals Israel, Ltd. Optimizing Distributed Computer Networks
US20130039218A1 (en) 2010-10-25 2013-02-14 Force 10 Networks Limiting mac address learning on access network switches
US20120144014A1 (en) 2010-12-01 2012-06-07 Cisco Technology, Inc. Directing data flows in data centers with clustering services
US20120147894A1 (en) 2010-12-08 2012-06-14 Mulligan John T Methods and apparatus to provision cloud computing network elements
US20120155266A1 (en) 2010-12-17 2012-06-21 Microsoft Corporation Synchronizing state among load balancer components
US8804720B1 (en) 2010-12-22 2014-08-12 Juniper Networks, Inc. Pass-through multicast admission control signaling
US20120213074A1 (en) 2011-01-27 2012-08-23 Verint Systems Ltd. System and method for flow table management
US20170257432A1 (en) 2011-02-09 2017-09-07 Cliqr Technologies Inc. Apparatus, systems and methods for container based service deployment
US20120207174A1 (en) 2011-02-10 2012-08-16 Choung-Yaw Michael Shieh Distributed service processing of network gateways using virtual machines
US20120230187A1 (en) 2011-03-09 2012-09-13 Telefonaktiebolaget L M Ericsson (Publ) Load balancing sctp associations using vtag mediation
US20120246637A1 (en) 2011-03-22 2012-09-27 Cisco Technology, Inc. Distributed load balancer in a virtual machine environment
US20120266252A1 (en) 2011-04-18 2012-10-18 Bank Of America Corporation Hardware-based root of trust for cloud environments
US8743885B2 (en) 2011-05-03 2014-06-03 Cisco Technology, Inc. Mobile service routing in a network environment
US20120281540A1 (en) 2011-05-03 2012-11-08 Cisco Technology, Inc. Mobile service routing in a network environment
US20140169375A1 (en) 2011-05-03 2014-06-19 Cisco Technology, Inc. Mobile service routing in a network environment
US20120303809A1 (en) 2011-05-25 2012-11-29 Microsoft Corporation Offloading load balancing packet modification
US20120311568A1 (en) 2011-05-31 2012-12-06 Jansen Gerardus T Mechanism for Inter-Cloud Live Migration of Virtualization Systems
US20120317260A1 (en) 2011-06-07 2012-12-13 Syed Mohammad Amir Husain Network Controlled Serial and Audio Switch
US20120317570A1 (en) 2011-06-08 2012-12-13 Dalcher Gregory W System and method for virtual partition monitoring
US20130003735A1 (en) 2011-06-28 2013-01-03 Chao H Jonathan Dynamically provisioning middleboxes
US20200162352A1 (en) 2011-07-15 2020-05-21 Inetco Systems Limited Method and system for monitoring performance of an application system
US20130021942A1 (en) 2011-07-18 2013-01-24 Cisco Technology, Inc. Granular Control of Multicast Delivery Services for Layer-2 Interconnect Solutions
US20130031544A1 (en) 2011-07-27 2013-01-31 Microsoft Corporation Virtual machine migration to minimize packet loss in virtualized network
US20140195666A1 (en) 2011-08-04 2014-07-10 Midokura Sarl System and method for implementing and managing virtual networks
US20130151661A1 (en) 2011-08-17 2013-06-13 Nicira, Inc. Handling nat migration in logical l3 routing
US20130148505A1 (en) 2011-08-17 2013-06-13 Nicira, Inc. Load balancing in a logical pipeline
US20130142048A1 (en) 2011-08-17 2013-06-06 Nicira, Inc. Flow templating in logical l3 routing
US20130044636A1 (en) 2011-08-17 2013-02-21 Teemu Koponen Distributed logical l3 routing
US9407599B2 (en) 2011-08-17 2016-08-02 Nicira, Inc. Handling NAT migration in logical L3 routing
US20130058346A1 (en) 2011-09-07 2013-03-07 Microsoft Corporation Distributed Routing Domains in Multi-Tenant Datacenter Virtual Networks
US8856518B2 (en) 2011-09-07 2014-10-07 Microsoft Corporation Secure and efficient offloading of network policies to network interface cards
US20130073743A1 (en) * 2011-09-19 2013-03-21 Cisco Technology, Inc. Services controlled session based flow interceptor
US10200493B2 (en) 2011-10-17 2019-02-05 Microsoft Technology Licensing, Llc High-density multi-tenant distributed cache as a service
US9232342B2 (en) 2011-10-24 2016-01-05 Interdigital Patent Holdings, Inc. Methods, systems and apparatuses for application service layer (ASL) inter-networking
US20130100851A1 (en) 2011-10-25 2013-04-25 Cisco Technology, Inc. Multicast Source Move Detection for Layer-2 Interconnect Solutions
US9015823B2 (en) 2011-11-15 2015-04-21 Nicira, Inc. Firewalls in logical networks
US10089127B2 (en) 2011-11-15 2018-10-02 Nicira, Inc. Control plane interface for logical middlebox services
US8966024B2 (en) 2011-11-15 2015-02-24 Nicira, Inc. Architecture of networks with middleboxes
US10514941B2 (en) 2011-11-15 2019-12-24 Nicira, Inc. Load balancing and destination network address translation middleboxes
US8913611B2 (en) 2011-11-15 2014-12-16 Nicira, Inc. Connection identifier assignment and source network address translation
US9172603B2 (en) 2011-11-15 2015-10-27 Nicira, Inc. WAN optimizer for logical networks
US20130125120A1 (en) 2011-11-15 2013-05-16 Nicira, Inc. Migrating middlebox state for distributed middleboxes
US8966029B2 (en) 2011-11-15 2015-02-24 Nicira, Inc. Network control system for configuring middleboxes
US9195491B2 (en) 2011-11-15 2015-11-24 Nicira, Inc. Migrating middlebox state for distributed middleboxes
US20130136126A1 (en) 2011-11-30 2013-05-30 Industrial Technology Research Institute Data center network system and packet forwarding method thereof
US20130159487A1 (en) 2011-12-14 2013-06-20 Microsoft Corporation Migration of Virtual IP Addresses in a Failover Cluster
US20130160024A1 (en) 2011-12-20 2013-06-20 Sybase, Inc. Dynamic Load Balancing for Complex Event Processing
US20130163594A1 (en) * 2011-12-21 2013-06-27 Cisco Technology, Inc. Overlay-Based Packet Steering
US8830834B2 (en) 2011-12-21 2014-09-09 Cisco Technology, Inc. Overlay-based packet steering
US20130166703A1 (en) 2011-12-27 2013-06-27 Michael P. Hammer System And Method For Management Of Network-Based Services
US20130170501A1 (en) 2011-12-28 2013-07-04 Futurewei Technologies, Inc. Service Router Architecture
US8914406B1 (en) 2012-02-01 2014-12-16 Vorstack, Inc. Scalable network security with fast response protocol
US8868711B2 (en) 2012-02-03 2014-10-21 Microsoft Corporation Dynamic load balancing in a scalable environment
US20130201989A1 (en) 2012-02-08 2013-08-08 Radisys Corporation Stateless load balancer in a multi-node system for transparent processing with packet preservation
US20130227550A1 (en) 2012-02-27 2013-08-29 Computer Associates Think, Inc. System and method for isolated virtual image and appliance communication within a cloud environment
US20130291088A1 (en) 2012-04-11 2013-10-31 Choung-Yaw Michael Shieh Cooperative network security inspection
US20130287026A1 (en) 2012-04-13 2013-10-31 Nicira, Inc. Extension of logical networks across layer 3 virtual private networks
US20130297798A1 (en) 2012-05-04 2013-11-07 Mustafa Arisoylu Two level packet distribution with stateless first level packet distribution to a group of servers and stateful second level packet distribution to a server within the group
CN104471899A (en) 2012-05-10 2015-03-25 瑞典爱立信有限公司 802.1AQ support over IETF EVPN
US20130301472A1 (en) 2012-05-10 2013-11-14 David Ian Allan 802.1aq support over ietf evpn
US20130311637A1 (en) 2012-05-15 2013-11-21 International Business Machines Corporation Overlay tunnel information exchange protocol
US8862883B2 (en) 2012-05-16 2014-10-14 Cisco Technology, Inc. System and method for secure cloud service delivery with prioritized services in a network environment
US20130318219A1 (en) 2012-05-23 2013-11-28 Brocade Communications Systems, Inc Layer-3 overlay gateways
US8488577B1 (en) 2012-06-06 2013-07-16 Google Inc. Apparatus for controlling the availability of internet access to applications
US20150244617A1 (en) * 2012-06-06 2015-08-27 Juniper Networks, Inc. Physical path determination for virtual network packet flows
US20130332983A1 (en) 2012-06-12 2013-12-12 TELEFONAKTIEBOLAGET L M ERRICSSON (publ) Elastic Enforcement Layer for Cloud Security Using SDN
US20130336319A1 (en) 2012-06-14 2013-12-19 Liwu Liu Multicast to unicast conversion technique
US20130343378A1 (en) 2012-06-21 2013-12-26 Mark Veteikis Virtual data loopback and/or data capture in a computing system
US20130343174A1 (en) 2012-06-26 2013-12-26 Juniper Networks, Inc. Service plane triggered fast reroute protection
US20140003232A1 (en) 2012-06-27 2014-01-02 Juniper Networks, Inc. Feedback loop for service engineered paths
US20170346764A1 (en) 2012-06-29 2017-11-30 Huawei Technologies Co., Ltd. Method for Processing Information, Forwarding Plane Device and Control Plane Device
US20140003422A1 (en) 2012-06-29 2014-01-02 Jeffrey C. Mogul Implementing a software defined network using event records that are transmitted from a network switch
US20150109901A1 (en) 2012-06-30 2015-04-23 Huawei Technologies Co., Ltd. Method for managing forwarding plane tunnel resource under control and forwarding decoupled architecture
US20140010085A1 (en) 2012-07-09 2014-01-09 Arun Kavunder System and method associated with a service flow router
US20150003455A1 (en) 2012-07-24 2015-01-01 Telefonaktiebolaget L M Ericsson (Publ) System and method for enabling services chaining in a provider network
US20140029447A1 (en) 2012-07-25 2014-01-30 Qualcomm Atheros, Inc. Forwarding tables for hybrid communication networks
US20140046997A1 (en) 2012-08-09 2014-02-13 International Business Machines Corporation Service management roles of processor nodes in distributed node service management
US20140046998A1 (en) 2012-08-09 2014-02-13 International Business Machines Corporation Service management modes of operation in distributed node service management
US20150156035A1 (en) 2012-08-15 2015-06-04 Futurewei Technologies, Inc. Method and System for Creating Software Defined Ordered Service Patterns in a Communications Network
US8989192B2 (en) 2012-08-15 2015-03-24 Futurewei Technologies, Inc. Method and system for creating software defined ordered service patterns in a communications network
US9705702B2 (en) 2012-08-15 2017-07-11 Futurewei Technologies, Inc. Method and system for creating software defined ordered service patterns in a communications network
US20140052844A1 (en) 2012-08-17 2014-02-20 Vmware, Inc. Management of a virtual machine in a storage area network environment
US20140059204A1 (en) 2012-08-24 2014-02-27 Filip Nguyen Systems and methods for providing message flow analysis for an enterprise service bus
US20140059544A1 (en) 2012-08-27 2014-02-27 Vmware, Inc. Framework for networking and security services in virtual networks
US20140068602A1 (en) 2012-09-04 2014-03-06 Aaron Robert Gember Cloud-Based Middlebox Management System
US20150073967A1 (en) 2012-09-12 2015-03-12 Iex Group, Inc. Transmission latency leveling apparatuses, methods and systems
US20160043901A1 (en) 2012-09-25 2016-02-11 A10 Networks, Inc. Graceful scaling in software driven networks
US20140092738A1 (en) 2012-09-28 2014-04-03 Juniper Networks, Inc. Maintaining load balancing after service application with a network device
US20140096183A1 (en) 2012-10-01 2014-04-03 International Business Machines Corporation Providing services to virtual overlay network traffic
US9148367B2 (en) 2012-10-02 2015-09-29 Cisco Technology, Inc. System and method for binding flows in a service cluster deployment in a network environment
US20140092914A1 (en) 2012-10-02 2014-04-03 Lsi Corporation Method and system for intelligent deep packet buffering
US20140092906A1 (en) * 2012-10-02 2014-04-03 Cisco Technology, Inc. System and method for binding flows in a service cluster deployment in a network environment
US20160057050A1 (en) * 2012-10-05 2016-02-25 Stamoulis & Weinblatt LLC Devices, methods, and systems for packet reroute permission based on content parameters embedded in packet header or payload
US20140101226A1 (en) 2012-10-08 2014-04-10 Motorola Mobility Llc Methods and apparatus for performing dynamic load balancing of processing resources
US20140101656A1 (en) 2012-10-10 2014-04-10 Zhongwen Zhu Virtual firewall mobility
US20140108665A1 (en) 2012-10-16 2014-04-17 Citrix Systems, Inc. Systems and methods for bridging between public and private clouds through multilevel api integration
US20140115578A1 (en) 2012-10-21 2014-04-24 Geoffrey Howard Cooper Providing a virtual security appliance architecture to a virtual cloud infrastructure
WO2014069978A1 (en) 2012-11-02 2014-05-08 Silverlake Mobility Ecosystem Sdn Bhd Method of processing requests for digital services
US20150288671A1 (en) 2012-11-02 2015-10-08 Silverlake Mobility Ecosystem Sdn Bhd Method of processing requests for digital services
US9104497B2 (en) 2012-11-07 2015-08-11 Yahoo! Inc. Method and system for work load balancing
US20140129715A1 (en) 2012-11-07 2014-05-08 Yahoo! Inc. Method and system for work load balancing
US20150023354A1 (en) 2012-11-19 2015-01-22 Huawei Technologies Co., Ltd. Method and device for allocating packet switching resource
US20140149696A1 (en) 2012-11-28 2014-05-29 Red Hat Israel, Ltd. Virtual machine backup using snapshots and current configuration
US20140169168A1 (en) 2012-12-06 2014-06-19 A10 Networks, Inc. Configuration of a virtual service network
US10341427B2 (en) * 2012-12-06 2019-07-02 A10 Networks, Inc. Forwarding policies on a virtual service network
US20140164477A1 (en) 2012-12-06 2014-06-12 Gary M. Springer System and method for providing horizontal scaling of stateful applications
US9203748B2 (en) 2012-12-24 2015-12-01 Huawei Technologies Co., Ltd. Software defined network-based data processing method, node, and system
US20140207968A1 (en) 2013-01-23 2014-07-24 Cisco Technology, Inc. Server Load Balancer Traffic Steering
US20150372911A1 (en) 2013-01-31 2015-12-24 Hitachi, Ltd. Communication path management method
US10375155B1 (en) 2013-02-19 2019-08-06 F5 Networks, Inc. System and method for achieving hardware acceleration for asymmetric flow connections
US10484334B1 (en) 2013-02-26 2019-11-19 Zentera Systems, Inc. Distributed firewall security system that extends across different cloud computing networks
US20140269724A1 (en) 2013-03-04 2014-09-18 Telefonaktiebolaget L M Ericsson (Publ) Method and devices for forwarding ip data packets in an access network
US20140254591A1 (en) 2013-03-08 2014-09-11 Dell Products L.P. Processing of multicast traffic in computer networks
US20140254374A1 (en) 2013-03-11 2014-09-11 Cisco Technology, Inc. Methods and devices for providing service clustering in a trill network
US20140281029A1 (en) 2013-03-14 2014-09-18 Time Warner Cable Enterprises Llc System and method for automatic routing of dynamic host configuration protocol (dhcp) traffic
US20140269717A1 (en) 2013-03-15 2014-09-18 Cisco Technology, Inc. Ipv6/ipv4 resolution-less forwarding up to a destination
US20140269487A1 (en) 2013-03-15 2014-09-18 Vivint, Inc. Multicast traffic management within a wireless mesh network
US20140280896A1 (en) 2013-03-15 2014-09-18 Achilleas Papakostas Methods and apparatus to credit usage of mobile devices
US20140282526A1 (en) 2013-03-15 2014-09-18 Avi Networks Managing and controlling a distributed network service platform
US20140301388A1 (en) 2013-04-06 2014-10-09 Citrix Systems, Inc. Systems and methods to cache packet steering decisions for a cluster of load balancers
US20140304231A1 (en) 2013-04-06 2014-10-09 Citrix Systems, Inc. Systems and methods for application-state distributed replication table hunting
US20140307744A1 (en) 2013-04-12 2014-10-16 Futurewei Technologies, Inc. Service Chain Policy for Distributed Gateways in Virtual Overlay Networks
US9660905B2 (en) 2013-04-12 2017-05-23 Futurewei Technologies, Inc. Service chain policy for distributed gateways in virtual overlay networks
US20140310418A1 (en) 2013-04-16 2014-10-16 Amazon Technologies, Inc. Distributed load balancer
US20140310391A1 (en) 2013-04-16 2014-10-16 Amazon Technologies, Inc. Multipath routing in a distributed load balancer
US20140317677A1 (en) 2013-04-19 2014-10-23 Vmware, Inc. Framework for coordination between endpoint security and network security services
US10075470B2 (en) 2013-04-19 2018-09-11 Nicira, Inc. Framework for coordination between endpoint security and network security services
US20140321459A1 (en) 2013-04-26 2014-10-30 Cisco Technology, Inc. Architecture for agentless service insertion
US10237379B2 (en) 2013-04-26 2019-03-19 Cisco Technology, Inc. High-efficiency service chaining with agentless service nodes
US20180027101A1 (en) 2013-04-26 2018-01-25 Cisco Technology, Inc. High-efficiency service chaining with agentless service nodes
US9979641B2 (en) 2013-05-09 2018-05-22 Nicira, Inc. Method and system for service switching using service tags
US20160087888A1 (en) 2013-05-09 2016-03-24 Vmware, Inc. Method and system for service switching using service tags
US20140334485A1 (en) 2013-05-09 2014-11-13 Vmware, Inc. Method and system for service switching using service tags
US10693782B2 (en) 2013-05-09 2020-06-23 Nicira, Inc. Method and system for service switching using service tags
US20200322271A1 (en) 2013-05-09 2020-10-08 Nicira, Inc. Method and system for service switching using service tags
US20180262427A1 (en) 2013-05-09 2018-09-13 Nicira, Inc. Method and system for service switching using service tags
US9225638B2 (en) 2013-05-09 2015-12-29 Vmware, Inc. Method and system for service switching using service tags
WO2014182529A1 (en) 2013-05-09 2014-11-13 Vmware, Inc. Method and system for service switching using service tags
US20140334488A1 (en) 2013-05-10 2014-11-13 Cisco Technology, Inc. Data Plane Learning of Bi-Directional Service Chains
US20140341029A1 (en) 2013-05-20 2014-11-20 Telefonaktiebolaget L M Ericsson (Publ) Encoding a payload hash in the da-mac to facilitate elastic chaining of packet processing elements
US20140351452A1 (en) 2013-05-21 2014-11-27 Cisco Technology, Inc. Chaining Service Zones by way of Route Re-Origination
US20160080253A1 (en) 2013-05-23 2016-03-17 Huawei Technologies Co. Ltd. Service routing system, device, and method
US20140362682A1 (en) 2013-06-07 2014-12-11 Cisco Technology, Inc. Determining the Operations Performed Along a Service Path/Service Chain
US20140362705A1 (en) 2013-06-07 2014-12-11 The Florida International University Board Of Trustees Load-balancing algorithms for data center networks
US20140372702A1 (en) 2013-06-12 2014-12-18 Oracle International Corporation Handling memory pressure in an in-database sharded queue
US20160099948A1 (en) 2013-06-14 2016-04-07 Tocario Gmbh Method and system for enabling access of a client device to a remote desktop
US20160149816A1 (en) 2013-06-14 2016-05-26 Haitao Wu Fault Tolerant and Load Balanced Routing
US20140369204A1 (en) 2013-06-17 2014-12-18 Telefonaktiebolaget L M Ericsson (Publ) Methods of load balancing using primary and stand-by addresses and related load balancers and servers
US20140372567A1 (en) 2013-06-17 2014-12-18 Telefonaktiebolaget L M Ericsson (Publ) Methods of forwarding data packets using transient tables and related load balancers
US20140372616A1 (en) 2013-06-17 2014-12-18 Telefonaktiebolaget L M Ericsson (Publ) Methods of forwarding/receiving data packets using unicast and/or multicast communications and related load balancers and servers
US9686192B2 (en) 2013-06-28 2017-06-20 Nicira, Inc. Network service slotting
US20150003453A1 (en) 2013-06-28 2015-01-01 Vmware, Inc. Network service slotting
US20150009995A1 (en) 2013-07-08 2015-01-08 Nicira, Inc. Encapsulating Data Packets Using an Adaptive Tunnelling Protocol
US20150016279A1 (en) 2013-07-09 2015-01-15 Nicira, Inc. Using Headerspace Analysis to Identify Classes of Packets
US20160127306A1 (en) 2013-07-11 2016-05-05 Huawei Technologies Co., Ltd. Packet Transmission Method, Apparatus, and System in Multicast Domain Name System
US20150026362A1 (en) 2013-07-17 2015-01-22 Cisco Technology, Inc. Dynamic Service Path Creation
US20150026345A1 (en) 2013-07-22 2015-01-22 Vmware, Inc. Managing link aggregation traffic in a virtual environment
US20150030024A1 (en) 2013-07-23 2015-01-29 Dell Products L.P. Systems and methods for a data center architecture facilitating layer 2 over layer 3 communication
US9755971B2 (en) 2013-08-12 2017-09-05 Cisco Technology, Inc. Traffic flow redirection between border routers using routing encapsulation
US20150052522A1 (en) 2013-08-14 2015-02-19 Nicira, Inc. Generation of DHCP Configuration Files
US20150052262A1 (en) 2013-08-14 2015-02-19 Nicira, Inc. Providing Services for Logical Networks
US20160197831A1 (en) 2013-08-16 2016-07-07 Interdigital Patent Holdings, Inc. Method and apparatus for name resolution in software defined networking
US20160277294A1 (en) 2013-08-26 2016-09-22 Nec Corporation Communication apparatus, communication method, control apparatus, and management apparatus in a communication system
US20150063102A1 (en) 2013-08-30 2015-03-05 Cisco Technology, Inc. Flow Based Network Service Insertion
US20170142012A1 (en) 2013-09-04 2017-05-18 Nicira, Inc. Multiple Active L3 Gateways for Logical Networks
US9577845B2 (en) 2013-09-04 2017-02-21 Nicira, Inc. Multiple active L3 gateways for logical networks
US20150063364A1 (en) 2013-09-04 2015-03-05 Nicira, Inc. Multiple Active L3 Gateways for Logical Networks
US9407540B2 (en) 2013-09-06 2016-08-02 Cisco Technology, Inc. Distributed service chaining in a network environment
US20150078384A1 (en) 2013-09-15 2015-03-19 Nicira, Inc. Tracking Prefixes of Values Associated with Different Rules to Generate Flows
US20150092564A1 (en) 2013-09-27 2015-04-02 Futurewei Technologies, Inc. Validation of Chained Network Services
US10091276B2 (en) 2013-09-27 2018-10-02 Transvoyant, Inc. Computer-implemented systems and methods of analyzing data in an ad-hoc network for predictive decision-making
US9258742B1 (en) 2013-09-30 2016-02-09 Juniper Networks, Inc. Policy-directed value-added services chaining
US20150092551A1 (en) 2013-09-30 2015-04-02 Juniper Networks, Inc. Session-aware service chaining within computer networks
US20150103645A1 (en) 2013-10-10 2015-04-16 Vmware, Inc. Controller side method of generating and updating a controller assignment list
US20150103679A1 (en) 2013-10-13 2015-04-16 Vmware, Inc. Tracing Host-Originated Logical Network Packets
US20150103827A1 (en) 2013-10-14 2015-04-16 Cisco Technology, Inc. Configurable Service Proxy Mapping
CN103516807A (en) 2013-10-14 2014-01-15 中国联合网络通信集团有限公司 Cloud computing platform server load balancing system and method
US9264313B1 (en) 2013-10-31 2016-02-16 Vmware, Inc. System and method for performing a service discovery for virtual networks
US20150124622A1 (en) 2013-11-01 2015-05-07 Movik Networks, Inc. Multi-Interface, Multi-Layer State-full Load Balancer For RAN-Analytics Deployments In Multi-Chassis, Cloud And Virtual Server Environments
US20150124840A1 (en) * 2013-11-03 2015-05-07 Ixia Packet flow modification
US9397946B1 (en) 2013-11-05 2016-07-19 Cisco Technology, Inc. Forwarding to clusters of service nodes
US20150124608A1 (en) 2013-11-05 2015-05-07 International Business Machines Corporation Adaptive Scheduling of Data Flows in Data Center Networks for Efficient Resource Utilization
US20150138973A1 (en) 2013-11-15 2015-05-21 Cisco Technology, Inc. Shortening of service paths in service chains in a communications network
US20150139041A1 (en) 2013-11-21 2015-05-21 Cisco Technology, Inc. Subscriber dependent redirection between a mobile packet core proxy and a cell site proxy in a network environment
US20150146539A1 (en) 2013-11-25 2015-05-28 Versa Networks, Inc. Flow distribution table for packet flow load balancing
US10104169B1 (en) 2013-12-18 2018-10-16 Amazon Technologies, Inc. Optimizing a load balancer configuration
US20160337189A1 (en) 2013-12-19 2016-11-17 Rainer Liebhart A method and apparatus for performing flexible service chaining
US20150188770A1 (en) 2013-12-27 2015-07-02 Big Switch Networks, Inc. Systems and methods for performing network service insertion
US20160308961A1 (en) 2014-01-06 2016-10-20 Tencent Technology (Shenzhen) Company Limited Methods, Devices, and Systems for Allocating Service Nodes in a Network
US20150195197A1 (en) 2014-01-06 2015-07-09 Futurewei Technologies, Inc. Service Function Chaining in a Packet Network
US20150215819A1 (en) 2014-01-24 2015-07-30 Cisco Technology, Inc. Method for Providing Sticky Load Balancing
US20150213087A1 (en) 2014-01-28 2015-07-30 Software Ag Scaling framework for querying
US20160337249A1 (en) 2014-01-29 2016-11-17 Huawei Technologies Co., Ltd. Communications network, device, and control method
US20150222640A1 (en) 2014-02-03 2015-08-06 Cisco Technology, Inc. Elastic Service Chains
US20150236948A1 (en) 2014-02-14 2015-08-20 Futurewei Technologies, Inc. Restoring service functions after changing a service chain instance path
US20150237013A1 (en) 2014-02-20 2015-08-20 Nicira, Inc. Specifying point of enforcement in a firewall rule
US20150242197A1 (en) 2014-02-25 2015-08-27 Red Hat, Inc. Automatic Installing and Scaling of Application Resources in a Multi-Tenant Platform-as-a-Service (PaaS) System
CN103795805A (en) 2014-02-27 2014-05-14 中国科学技术大学苏州研究院 Distributed server load balancing method based on SDN
US20160373364A1 (en) 2014-03-04 2016-12-22 Nec Corporation Packet processing device, packet processing method and program
US20160378537A1 (en) 2014-03-12 2016-12-29 Huawei Technologies Co., Ltd. Method and Apparatus for Controlling Virtual Machine Migration
US20150263901A1 (en) 2014-03-13 2015-09-17 Cisco Technology, Inc. Service node originated service chains in a network environment
US20150263946A1 (en) 2014-03-14 2015-09-17 Nicira, Inc. Route advertisement by managed gateways
US20150271102A1 (en) 2014-03-21 2015-09-24 Juniper Networks, Inc. Selectable service node resources
US10135636B2 (en) 2014-03-25 2018-11-20 Huawei Technologies Co., Ltd. Method for generating forwarding information, controller, and service forwarding entity
US9602380B2 (en) 2014-03-28 2017-03-21 Futurewei Technologies, Inc. Context-aware dynamic policy selection for load balancing behavior
US9787559B1 (en) 2014-03-28 2017-10-10 Juniper Networks, Inc. End-to-end monitoring of overlay networks providing virtualized network services
US9009289B1 (en) 2014-03-31 2015-04-14 Flexera Software Llc Systems and methods for assessing application usage
US20180262434A1 (en) 2014-03-31 2018-09-13 Nicira, Inc. Processing packets according to hierarchy of flow entry storages
US20150280959A1 (en) 2014-03-31 2015-10-01 Amazon Technologies, Inc. Session management in distributed storage systems
US20150281180A1 (en) 2014-03-31 2015-10-01 Nicira, Inc. Method and apparatus for integrating a service virtual machine
US20150281089A1 (en) 2014-03-31 2015-10-01 Sandvine Incorporated Ulc System and method for load balancing in computer networks
US20150281098A1 (en) 2014-03-31 2015-10-01 Nicira, Inc. Flow Cache Hierarchy
US9985896B2 (en) 2014-03-31 2018-05-29 Nicira, Inc. Caching of service decisions
US20150281179A1 (en) 2014-03-31 2015-10-01 Chids Raman Migrating firewall connection state for a firewall service virtual machine
US20150281125A1 (en) 2014-03-31 2015-10-01 Nicira, Inc. Caching of service decisions
US9686200B2 (en) 2014-03-31 2017-06-20 Nicira, Inc. Flow cache hierarchy
US20170019341A1 (en) 2014-04-01 2017-01-19 Huawei Technologies Co., Ltd. Service link selection control method and device
US20150288679A1 (en) 2014-04-02 2015-10-08 Cisco Technology, Inc. Interposer with Security Assistant Key Escrow
US20150295831A1 (en) 2014-04-10 2015-10-15 Cisco Technology, Inc. Network address translation offload to network infrastructure for service chains in a network environment
US20150319078A1 (en) 2014-05-02 2015-11-05 Futurewei Technologies, Inc. Computing Service Chain-Aware Paths
US20150319096A1 (en) 2014-05-05 2015-11-05 Nicira, Inc. Secondary input queues for maintaining a consistent network state
US20160164787A1 (en) 2014-06-05 2016-06-09 KEMP Technologies Inc. Methods for intelligent data traffic steering
US20150358235A1 (en) 2014-06-05 2015-12-10 Futurewei Technologies, Inc. Service Chain Topology Map Construction
US20150358294A1 (en) 2014-06-05 2015-12-10 Cavium, Inc. Systems and methods for secured hardware security module communication with web service hosts
US20150365322A1 (en) 2014-06-13 2015-12-17 Cisco Technology, Inc. Providing virtual private service chains in a network environment
US20170099194A1 (en) 2014-06-17 2017-04-06 Huawei Technologies Co., Ltd. Service flow processing method, apparatus, and device
US10013276B2 (en) 2014-06-20 2018-07-03 Google Llc System and method for live migration of a virtualized networking stack
US20150370596A1 (en) 2014-06-20 2015-12-24 Google Inc. System and method for live migration of a virtualized networking stack
US20150370586A1 (en) 2014-06-23 2015-12-24 Intel Corporation Local service chaining with virtual machines and virtualized containers in software defined networking
US20150372840A1 (en) 2014-06-23 2015-12-24 International Business Machines Corporation Servicing packets in a virtual network and a software-defined network (sdn)
US9419897B2 (en) 2014-06-30 2016-08-16 Nicira, Inc. Methods and systems for providing multi-tenancy support for Single Root I/O Virtualization
US20150381493A1 (en) 2014-06-30 2015-12-31 Juniper Networks, Inc. Service chaining across multiple networks
US10445509B2 (en) 2014-06-30 2019-10-15 Nicira, Inc. Encryption architecture
US20150381494A1 (en) 2014-06-30 2015-12-31 Nicira, Inc. Methods and systems to offload overlay network packet encapsulation to hardware
US20150381495A1 (en) 2014-06-30 2015-12-31 Nicira, Inc. Methods and systems for providing multi-tenancy support for single root i/o virtualization
US20150379277A1 (en) 2014-06-30 2015-12-31 Leonard Heyman Encryption Architecture
US20160006654A1 (en) 2014-07-07 2016-01-07 Cisco Technology, Inc. Bi-directional flow stickiness in a network environment
US20160028640A1 (en) 2014-07-22 2016-01-28 Futurewei Technologies, Inc. Service Chain Header and Metadata Transport
US10250501B2 (en) 2014-07-23 2019-04-02 Huawei Technologies Co., Ltd. Service packet forwarding method and apparatus
US20160043952A1 (en) 2014-08-06 2016-02-11 Futurewei Technologies, Inc. Mechanisms to support service chain graphs in a communication network
US20160057687A1 (en) 2014-08-19 2016-02-25 Qualcomm Incorporated Inter/intra radio access technology mobility and user-plane split measurement configuration
US20160065503A1 (en) 2014-08-29 2016-03-03 Extreme Networks, Inc. Methods, systems, and computer readable media for virtual fabric routing
US9442752B1 (en) 2014-09-03 2016-09-13 Amazon Technologies, Inc. Virtual secure execution environments
US20170250869A1 (en) 2014-09-12 2017-08-31 Andreas Richard Voellmy Managing network forwarding configurations using algorithmic policies
US20170250917A1 (en) 2014-09-19 2017-08-31 Nokia Solutions And Networks Oy Chaining of network service functions in a communication network
US20170250902A1 (en) 2014-09-23 2017-08-31 Nokia Solutions And Networks Oy Control of communication using service function chaining
US9804797B1 (en) 2014-09-29 2017-10-31 EMC IP Holding Company LLC Using dynamic I/O load differential for load balancing
US20160094389A1 (en) 2014-09-30 2016-03-31 Nicira, Inc. Elastically managing a service node group
WO2016054272A1 (en) 2014-09-30 2016-04-07 Nicira, Inc. Inline service switch
US20160094456A1 (en) 2014-09-30 2016-03-31 Nicira, Inc. Method and apparatus for distributing load among a plurality of service nodes
US9755898B2 (en) 2014-09-30 2017-09-05 Nicira, Inc. Elastically managing a service node group
US9531590B2 (en) 2014-09-30 2016-12-27 Nicira, Inc. Load balancing across a group of load balancers
US10341233B2 (en) 2014-09-30 2019-07-02 Nicira, Inc. Dynamically adjusting a data compute node group
US20160094452A1 (en) 2014-09-30 2016-03-31 Nicira, Inc. Distributed load balancing systems
US20190288947A1 (en) 2014-09-30 2019-09-19 Nicira, Inc. Inline load balancing
US20160094643A1 (en) 2014-09-30 2016-03-31 Nicira, Inc. Dynamically adjusting load balancing
US20160094642A1 (en) 2014-09-30 2016-03-31 Nicira, Inc. Dynamically adjusting load balancing
US10516568B2 (en) 2014-09-30 2019-12-24 Nicira, Inc. Controller driven reconfiguration of a multi-layered application or service model
EP3201761A1 (en) 2014-09-30 2017-08-09 Nicira, Inc. Load balancing
US10257095B2 (en) 2014-09-30 2019-04-09 Nicira, Inc. Dynamically adjusting load balancing
US20160094633A1 (en) 2014-09-30 2016-03-31 Nicira, Inc. Configuring and Operating a XaaS Model in a Datacenter
US20160094454A1 (en) 2014-09-30 2016-03-31 Nicira, Inc. Method and apparatus for providing a service with a plurality of service nodes
US20160094631A1 (en) 2014-09-30 2016-03-31 Nicira, Inc. Dynamically adjusting a data compute node group
US10225137B2 (en) 2014-09-30 2019-03-05 Nicira, Inc. Service node selection by an inline service switch
US20160094632A1 (en) 2014-09-30 2016-03-31 Nicira, Inc. Inline Service Switch
US20160094453A1 (en) 2014-09-30 2016-03-31 Nicira, Inc. Load balancer of load balancers
EP3202109A1 (en) 2014-09-30 2017-08-09 Nicira, Inc. Inline service switch
US20160094457A1 (en) 2014-09-30 2016-03-31 Nicira, Inc. Tunnel-Enabled Elastic Service Model
US10135737B2 (en) 2014-09-30 2018-11-20 Nicira, Inc. Distributed load balancing systems
US10129077B2 (en) 2014-09-30 2018-11-13 Nicira, Inc. Configuring and operating a XaaS model in a datacenter
US20160094455A1 (en) 2014-09-30 2016-03-31 Nicira, Inc. Method and apparatus for distributing load among a plurality of service nodes
US10320679B2 (en) 2014-09-30 2019-06-11 Nicira, Inc. Inline load balancing
US20160094451A1 (en) 2014-09-30 2016-03-31 Nicira, Inc Inline load balancing
US20160094384A1 (en) 2014-09-30 2016-03-31 Nicira, Inc. Controller Driven Reconfiguration of a Multi-Layered Application or Service Model
US9774537B2 (en) 2014-09-30 2017-09-26 Nicira, Inc. Dynamically adjusting load balancing
WO2016053373A1 (en) 2014-09-30 2016-04-07 Nicira, Inc. Load balancing
US20170208532A1 (en) 2014-09-30 2017-07-20 Huawei Technologies Co., Ltd. Service path generation method and apparatus
US9825810B2 (en) 2014-09-30 2017-11-21 Nicira, Inc. Method and apparatus for distributing load among a plurality of service nodes
US9935827B2 (en) 2014-09-30 2018-04-03 Nicira, Inc. Method and apparatus for distributing load among a plurality of service nodes
US11075842B2 (en) 2014-09-30 2021-07-27 Nicira, Inc. Inline load balancing
US20160105333A1 (en) 2014-10-10 2016-04-14 Nicira, Inc. Logical network traffic analysis
EP3210345A1 (en) 2014-10-24 2017-08-30 Cisco Technology, Inc. Transparent network service header path proxies
US20160119226A1 (en) 2014-10-24 2016-04-28 Cisco Technology, Inc. Transparent Network Service Header Path Proxies
US20160127564A1 (en) 2014-10-29 2016-05-05 Alcatel-Lucent Usa Inc. Policy decisions based on offline charging rules when service chaining is implemented
US20160134528A1 (en) 2014-11-10 2016-05-12 Juniper Networks, Inc. Signaling aliasing capability in data centers
US9996380B2 (en) 2014-11-11 2018-06-12 Amazon Technologies, Inc. System for managing and scheduling containers
US9256467B1 (en) 2014-11-11 2016-02-09 Amazon Technologies, Inc. System for managing and scheduling containers
US20160162320A1 (en) 2014-11-11 2016-06-09 Amazon Technologies, Inc. System for managing and scheduling containers
US20190108049A1 (en) 2014-11-11 2019-04-11 Amazon Technologies, Inc. System for managing and scheduling containers
US9705775B2 (en) 2014-11-20 2017-07-11 Telefonaktiebolaget Lm Ericsson (Publ) Passive performance measurement for inline service chaining
US20160149784A1 (en) 2014-11-20 2016-05-26 Telefonaktiebolaget L M Ericsson (Publ) Passive Performance Measurement for Inline Service Chaining
US20160149828A1 (en) 2014-11-25 2016-05-26 Netapp, Inc. Clustered storage system path quiescence analysis
US20170264677A1 (en) 2014-11-28 2017-09-14 Huawei Technologies Co., Ltd. Service Processing Apparatus and Method
US20160164826A1 (en) 2014-12-04 2016-06-09 Cisco Technology, Inc. Policy Implementation at a Network Element based on Data from an Authoritative Source
US20170273099A1 (en) 2014-12-09 2017-09-21 Huawei Technologies Co., Ltd. Method and apparatus for processing adaptive flow table
US20160164776A1 (en) 2014-12-09 2016-06-09 Aol Inc. Systems and methods for software defined networking service function chaining
US20170279938A1 (en) 2014-12-11 2017-09-28 Huawei Technologies Co., Ltd. Packet processing method and apparatus
US20160173373A1 (en) 2014-12-11 2016-06-16 Cisco Technology, Inc. Network service header metadata for load balancing
US20170310588A1 (en) 2014-12-17 2017-10-26 Huawei Technologies Co., Ltd. Data forwarding method, device, and system in software-defined networking
US20160344621A1 (en) 2014-12-17 2016-11-24 Telefonaktiebolaget L M Ericsson (Publ) Method and arrangement for relocating packet processing functions
US9094464B1 (en) 2014-12-18 2015-07-28 Limelight Networks, Inc. Connection digest for accelerating web traffic
US20170339600A1 (en) 2014-12-19 2017-11-23 Telefonaktiebolaget Lm Ericsson (Publ) Method and appratus for relocating packet processing functions
US20160182684A1 (en) 2014-12-23 2016-06-23 Patrick Connor Parallel processing of service functions in service function chains
US20160197839A1 (en) 2015-01-05 2016-07-07 Futurewei Technologies, Inc. Method and system for providing qos for in-band control traffic in an openflow network
US20160205015A1 (en) 2015-01-08 2016-07-14 Openwave Mobility Inc. Software defined network and a communication network comprising the same
US20160212048A1 (en) 2015-01-15 2016-07-21 Hewlett Packard Enterprise Development Lp Openflow service chain data packet routing using tables
US20160212237A1 (en) 2015-01-16 2016-07-21 Fujitsu Limited Management server, communication system and path management method
US20160218918A1 (en) 2015-01-27 2016-07-28 Xingjun Chu Network virtualization for network infrastructure
US20160226762A1 (en) 2015-01-30 2016-08-04 Nicira, Inc. Implementing logical router uplinks
US20190020600A1 (en) 2015-01-30 2019-01-17 Nicira, Inc. Logical router with multiple routing components
US20160226700A1 (en) 2015-01-30 2016-08-04 Nicira, Inc. Transit logical switch within logical router
US20160226754A1 (en) 2015-01-30 2016-08-04 Nicira, Inc. Logical router with multiple routing components
US9787605B2 (en) 2015-01-30 2017-10-10 Nicira, Inc. Logical router with multiple routing components
US10079779B2 (en) 2015-01-30 2018-09-18 Nicira, Inc. Implementing logical router uplinks
US10129180B2 (en) 2015-01-30 2018-11-13 Nicira, Inc. Transit logical switch within logical router
US20170339110A1 (en) 2015-02-13 2017-11-23 Huawei Technologies Co., Ltd. Access Control Apparatus, System, and Method
US20180248713A1 (en) 2015-02-24 2018-08-30 Nokia Solutions And Networks Oy Integrated services processing for mobile networks
US20160248685A1 (en) 2015-02-25 2016-08-25 Cisco Technology, Inc. Metadata augmentation in a service function chain
US20160277210A1 (en) 2015-03-18 2016-09-22 Juniper Networks, Inc. Evpn inter-subnet multicast forwarding
US10594743B2 (en) 2015-04-03 2020-03-17 Nicira, Inc. Method, apparatus, and system for implementing a content switch
US20160294933A1 (en) 2015-04-03 2016-10-06 Nicira, Inc. Method, apparatus, and system for implementing a content switch
US20200213366A1 (en) 2015-04-03 2020-07-02 Nicira, Inc. Method, apparatus, and system for implementing a content switch
US20160294935A1 (en) 2015-04-03 2016-10-06 Nicira, Inc. Method, apparatus, and system for implementing a content switch
US10609091B2 (en) 2015-04-03 2020-03-31 Nicira, Inc. Method, apparatus, and system for implementing a content switch
US20160294612A1 (en) 2015-04-04 2016-10-06 Nicira, Inc. Route Server Mode for Dynamic Routing Between Logical and Physical Networks
US20210029088A1 (en) 2015-04-13 2021-01-28 Nicira, Inc. Method and system of establishing a virtual private network in a cloud service for branch networking
US20160308758A1 (en) 2015-04-17 2016-10-20 Huawei Technologies Co., Ltd Software Defined Network (SDN) Control Signaling for Traffic Engineering to Enable Multi-type Transport in a Data Plane
US20180115471A1 (en) 2015-04-23 2018-04-26 Hewlett Packard Enterprise Development Lp Network infrastructure device to implement pre-filter rules
US20160344565A1 (en) 2015-05-20 2016-11-24 Cisco Technology, Inc. System and method to facilitate the assignment of service functions for service chains in a network environment
US20160352866A1 (en) 2015-05-25 2016-12-01 Juniper Networks, Inc. Selecting and monitoring a plurality of services key performance indicators using twamp
US10645060B2 (en) 2015-05-28 2020-05-05 Xi'an Zhongxing New Software Co., Ltd Method, device and system for forwarding message
US20160366046A1 (en) 2015-06-09 2016-12-15 International Business Machines Corporation Support for high availability of service appliances in a software-defined network (sdn) service chaining infrastructure
US20180102919A1 (en) 2015-06-10 2018-04-12 Huawei Technologies Co., Ltd. Method, device, and system for implementing a service chain
US20180184281A1 (en) 2015-06-10 2018-06-28 Soracom, Inc. Communication System And Communication Method For Providing IP Network Access To Wireless Terminals
US20200036629A1 (en) 2015-06-15 2020-01-30 Telefonaktiebolaget Lm Ericsson (Publ) Method and network nodes for scalable mapping of tags to service function chain encapsulation headers
US10742544B2 (en) 2015-06-15 2020-08-11 Telefonaktiebolaget Lm Ericsson (Publ) Method and network nodes for scalable mapping of tags to service function chain encapsulation headers
US10042722B1 (en) 2015-06-23 2018-08-07 Juniper Networks, Inc. Service-chain fault tolerance in service virtualized environments
US20180176294A1 (en) 2015-06-26 2018-06-21 Hewlett Packard Enterprise Development Lp Server load balancing
US10554484B2 (en) 2015-06-26 2020-02-04 Nicira, Inc. Control plane integration with hardware switches
US10609122B1 (en) 2015-06-29 2020-03-31 Amazon Technologies, Inc. Instance backed building or place
US20170005923A1 (en) 2015-06-30 2017-01-05 Vmware, Inc. Dynamic virtual machine network policy for ingress optimization
US20170005988A1 (en) 2015-06-30 2017-01-05 Nicira, Inc. Global objects for federated firewall rule management
US9749229B2 (en) 2015-07-01 2017-08-29 Cisco Technology, Inc. Forwarding packets with encapsulated service chain headers
US20170005920A1 (en) 2015-07-01 2017-01-05 Cisco Technology, Inc. Forwarding packets with encapsulated service chain headers
US20180198705A1 (en) 2015-07-02 2018-07-12 Zte Corporation Method and apparatus for implementing service function chain
US20170019331A1 (en) 2015-07-13 2017-01-19 Futurewei Technologies, Inc. Internet Control Message Protocol Enhancement for Traffic Carried by a Tunnel over Internet Protocol Networks
US20170019329A1 (en) 2015-07-15 2017-01-19 Argela-USA, Inc. Method for forwarding rule hopping based secure communication
US20170026417A1 (en) 2015-07-23 2017-01-26 Cisco Technology, Inc. Systems, methods, and devices for smart mapping and vpn policy enforcement
US20170033939A1 (en) 2015-07-28 2017-02-02 Ciena Corporation Multicast systems and methods for segment routing
US20170064048A1 (en) 2015-08-28 2017-03-02 Nicira, Inc. Packet Data Restoration for Flow-Based Forwarding Element
US10397275B2 (en) 2015-08-28 2019-08-27 Nicira, Inc. Creating and using remote device management attribute rule data store
US20170064749A1 (en) 2015-08-28 2017-03-02 Nicira, Inc. Associating Service Tags with Remote Data Message Flows Based on Remote Device Management Attributes
US20170063928A1 (en) 2015-08-28 2017-03-02 Nicira, Inc. Defining Network Rules Based on Remote Device Management Attributes
US20170063683A1 (en) 2015-08-28 2017-03-02 Nicira, Inc. Traffic forwarding between geographically dispersed sites
US20180191600A1 (en) 2015-08-31 2018-07-05 Huawei Technologies Co., Ltd. Redirection of service or device discovery messages in software-defined networks
US20170078961A1 (en) 2015-09-10 2017-03-16 Qualcomm Incorporated Smart co-processor for optimizing service discovery power consumption in wireless service platforms
US20170078176A1 (en) 2015-09-11 2017-03-16 Telefonaktiebolaget L M Ericsson (Publ) Method and system for delay measurement of a traffic flow in a software-defined networking (sdn) system
US20180205637A1 (en) 2015-09-14 2018-07-19 Huawei Technologies Co., Ltd. Method and apparatus for obtaining information about a service chain in a cloud computing system
US20170093698A1 (en) 2015-09-30 2017-03-30 Huawei Technologies Co., Ltd. Method and apparatus for supporting service function chaining in a communication network
US20170093758A1 (en) 2015-09-30 2017-03-30 Nicira, Inc. Ip aliases in logical networks with hardware switches
US10853111B1 (en) 2015-09-30 2020-12-01 Amazon Technologies, Inc. Virtual machine instance migration feedback
US20190028384A1 (en) 2015-10-15 2019-01-24 Cisco Technology, Inc. Application identifier in service function chain metadata
US20180248755A1 (en) 2015-10-28 2018-08-30 Huawei Technologies Co., Ltd. Control traffic in software defined networks
US20170126522A1 (en) 2015-10-30 2017-05-04 Oracle International Corporation Methods, systems, and computer readable media for remote authentication dial in user service (radius) message loop detection and mitigation
US20170126497A1 (en) 2015-10-31 2017-05-04 Nicira, Inc. Static Route Types for Logical Routers
US20170126726A1 (en) 2015-11-01 2017-05-04 Nicira, Inc. Securing a managed forwarding element that operates within a data compute node
US20170134538A1 (en) 2015-11-10 2017-05-11 Telefonaktiebolaget L M Ericsson (Publ) Systems and methods of an enhanced state-aware proxy device
US9860079B2 (en) 2015-11-20 2018-01-02 Oracle International Corporation Redirecting packets for egress from an autonomous system using tenant specific routing and forwarding tables
US20170149582A1 (en) 2015-11-20 2017-05-25 Oracle International Corporation Redirecting packets for egress from an autonomous system using tenant specific routing and forwarding tables
US20170149675A1 (en) 2015-11-25 2017-05-25 Huawei Technologies Co., Ltd. Packet retransmission method and apparatus
US20170147399A1 (en) 2015-11-25 2017-05-25 International Business Machines Corporation Policy-based virtual machine selection during an optimization cycle
US20170163724A1 (en) 2015-12-04 2017-06-08 Microsoft Technology Licensing, Llc State-Aware Load Balancing
US20170163531A1 (en) 2015-12-04 2017-06-08 Cisco Technology, Inc. Infrastructure-exclusive service forwarding
US10084703B2 (en) 2015-12-04 2018-09-25 Cisco Technology, Inc. Infrastructure-exclusive service forwarding
US20170171159A1 (en) 2015-12-14 2017-06-15 Nicira, Inc. Packet tagging for improved guest system security
US20170180240A1 (en) 2015-12-16 2017-06-22 Telefonaktiebolaget Lm Ericsson (Publ) Openflow configured horizontally split hybrid sdn nodes
US20180302242A1 (en) 2015-12-31 2018-10-18 Huawei Technologies Co., Ltd. Packet processing method, related apparatus, and nvo3 network system
US20170195255A1 (en) 2015-12-31 2017-07-06 Fortinet, Inc. Packet routing using a software-defined networking (sdn) switch
US20170208000A1 (en) 2016-01-15 2017-07-20 Cisco Technology, Inc. Leaking routes in a service chain
US20170208011A1 (en) 2016-01-19 2017-07-20 Cisco Technology, Inc. System and method for hosting mobile packet core and value-added services using a software defined network and service chains
US20170214627A1 (en) 2016-01-21 2017-07-27 Futurewei Technologies, Inc. Distributed Load Balancing for Network Service Function Chaining
US20170220306A1 (en) 2016-02-03 2017-08-03 Google Inc. Systems and methods for automatic content verification
US20170230333A1 (en) 2016-02-08 2017-08-10 Cryptzone North America, Inc. Protecting network devices by a firewall
US20170230467A1 (en) 2016-02-09 2017-08-10 Cisco Technology, Inc. Adding cloud service provider, cloud service, and cloud tenant awareness to network service chains
US10547692B2 (en) 2016-02-09 2020-01-28 Cisco Technology, Inc. Adding cloud service provider, cloud service, and cloud tenant awareness to network service chains
US20170237656A1 (en) 2016-02-12 2017-08-17 Huawei Technologies Co., Ltd. Method and apparatus for service function forwarding in a service domain
US20190028577A1 (en) 2016-02-26 2019-01-24 Telefonaktiebolaget Lm Ericsson (Publ) Dynamic re-route in a redundant system of a packet network
US20170251065A1 (en) 2016-02-29 2017-08-31 Cisco Technology, Inc. System and Method for Data Plane Signaled Packet Capture in a Service Function Chaining Network
CN107204941A (en) 2016-03-18 2017-09-26 中兴通讯股份有限公司 Method and apparatus for establishing a flexible Ethernet path
US10187306B2 (en) 2016-03-24 2019-01-22 Cisco Technology, Inc. System and method for improved service chaining
US10812378B2 (en) 2016-03-24 2020-10-20 Cisco Technology, Inc. System and method for improved service chaining
US20170295021A1 (en) 2016-04-07 2017-10-12 Telefonica, S.A. Method to assure correct data packet traversal through a particular path of a network
US20170295100A1 (en) 2016-04-12 2017-10-12 Nicira, Inc. Virtual tunnel endpoints for congestion-aware load balancing
US20170310611A1 (en) 2016-04-26 2017-10-26 Cisco Technology, Inc. System and method for automated rendering of service chaining
US20170317926A1 (en) 2016-04-27 2017-11-02 Cisco Technology, Inc. Generating packets in a reverse direction of a service function chain
US20170317954A1 (en) 2016-04-28 2017-11-02 Nicira, Inc. Automatic configuration of logical routers on edge nodes
US20170317936A1 (en) 2016-04-28 2017-11-02 Cisco Technology, Inc. Selective steering network traffic to virtual service(s) using policy
US20170317887A1 (en) 2016-04-29 2017-11-02 Deutsche Telekom Ag Versioning system for network states in a software-defined network
US20170318097A1 (en) 2016-04-29 2017-11-02 Hewlett Packard Enterprise Development Lp Virtualized network function placements
US20170324651A1 (en) 2016-05-09 2017-11-09 Cisco Technology, Inc. Traceroute to return aggregated statistics in service chains
US20170331672A1 (en) 2016-05-11 2017-11-16 Hewlett Packard Enterprise Development Lp Filter tables for management functions
US20170353387A1 (en) 2016-06-07 2017-12-07 Electronics And Telecommunications Research Institute Distributed service function forwarding system
US10284390B2 (en) 2016-06-08 2019-05-07 Cisco Technology, Inc. Techniques for efficient service chain analytics
US20170366605A1 (en) 2016-06-16 2017-12-21 Alcatel-Lucent Usa Inc. Providing data plane services for applications
US20170364794A1 (en) 2016-06-20 2017-12-21 Telefonaktiebolaget Lm Ericsson (Publ) Method for classifying the payload of encrypted traffic flows
US20170373990A1 (en) 2016-06-23 2017-12-28 Cisco Technology, Inc. Transmitting network overlay information in a service function chain
US10547508B1 (en) 2016-06-29 2020-01-28 Juniper Networks, Inc. Network services using pools of pre-configured virtualized network functions and service chains
US20180006935A1 (en) 2016-06-30 2018-01-04 Juniper Networks, Inc. Auto discovery and auto scaling of services in software-defined network environment
US20180004954A1 (en) 2016-06-30 2018-01-04 Amazon Technologies, Inc. Secure booting of virtualization managers
US20190140950A1 (en) 2016-07-01 2019-05-09 Huawei Technologies Co., Ltd. Method, apparatus, and system for forwarding packet in service function chaining SFC
US11075839B2 (en) 2016-07-01 2021-07-27 Huawei Technologies Co., Ltd. Method, apparatus, and system for forwarding packet in service function chaining SFC
US20190140947A1 (en) 2016-07-01 2019-05-09 Huawei Technologies Co., Ltd. Service Function Chaining SFC-Based Packet Forwarding Method, Apparatus, and System
US20180026911A1 (en) 2016-07-25 2018-01-25 Cisco Technology, Inc. System and method for providing a resource usage advertising framework for sfc-based workloads
US20190166045A1 (en) 2016-07-27 2019-05-30 Zte Corporation Packet forwarding method and device
US20190124096A1 (en) 2016-07-29 2019-04-25 ShieldX Networks, Inc. Channel data encapsulation system and method for use with client-server data channels
US20180041524A1 (en) 2016-08-02 2018-02-08 Cisco Technology, Inc. Steering of cloned traffic in a service function chain
US20190268384A1 (en) 2016-08-05 2019-08-29 Alcatel Lucent Security-on-demand architecture
US20180041425A1 (en) 2016-08-05 2018-02-08 Huawei Technologies Co., Ltd. Service-based traffic forwarding in virtual networks
US20180041470A1 (en) 2016-08-08 2018-02-08 Talari Networks Incorporated Applications and integrated firewall design in an adaptive private network (apn)
US20220019698A1 (en) 2016-08-11 2022-01-20 Intel Corporation Secure Public Cloud with Protected Guest-Verified Host Control
US20180247082A1 (en) 2016-08-11 2018-08-30 Intel Corporation Secure Public Cloud with Protected Guest-Verified Host Control
US20190173778A1 (en) 2016-08-26 2019-06-06 Telefonaktiebolaget Lm Ericsson (Publ) Improving sf proxy performance in sdn networks
CA3034809A1 (en) 2016-08-27 2018-03-08 Nicira, Inc. Extension of network control system into public cloud
US20180063087A1 (en) 2016-08-27 2018-03-01 Nicira, Inc. Managed forwarding element executing in separate namespace of public cloud data compute node than workload application
US20180063018A1 (en) 2016-08-30 2018-03-01 Cisco Technology, Inc. System and method for managing chained services in a network environment
US20180091420A1 (en) 2016-09-26 2018-03-29 Juniper Networks, Inc. Distributing service function chain data and service function instance data in a network
EP3300319A1 (en) 2016-09-26 2018-03-28 Juniper Networks, Inc. Distributing service function chain data and service function instance data in a network
US10938668B1 (en) 2016-09-30 2021-03-02 Amazon Technologies, Inc. Safe deployment using versioned hash rings
US20180102965A1 (en) 2016-10-07 2018-04-12 Alcatel-Lucent Usa Inc. Unicast branching based multicast
US20180124061A1 (en) 2016-11-03 2018-05-03 Nicira, Inc. Performing services on a host
US20180123950A1 (en) 2016-11-03 2018-05-03 Parallel Wireless, Inc. Traffic Shaping and End-to-End Prioritization
US11055273B1 (en) 2016-11-04 2021-07-06 Amazon Technologies, Inc. Software container event monitoring systems
US20180139098A1 (en) 2016-11-14 2018-05-17 Futurewei Technologies, Inc. Integrating physical and virtual network functions in a service-chained network environment
US20180145899A1 (en) 2016-11-22 2018-05-24 Gigamon Inc. Dynamic Service Chaining and Late Binding
US20180159733A1 (en) 2016-12-06 2018-06-07 Nicira, Inc. Performing context-rich attribute-based services on a host
US20180159943A1 (en) 2016-12-06 2018-06-07 Nicira, Inc. Performing context-rich attribute-based services on a host
US20180159801A1 (en) 2016-12-07 2018-06-07 Nicira, Inc. Service function chain (sfc) data communications with sfc data in virtual local area network identifier (vlan id) data fields
US20180213040A1 (en) 2016-12-15 2018-07-26 Arm Ip Limited Enabling Communications Between Devices
US10623309B1 (en) 2016-12-19 2020-04-14 International Business Machines Corporation Rule processing of packets
US20180176177A1 (en) 2016-12-20 2018-06-21 Thomson Licensing Method for managing service chaining at a network equipment, corresponding network equipment
US10212071B2 (en) * 2016-12-21 2019-02-19 Nicira, Inc. Bypassing a load balancer in a return path of network traffic
US20200364074A1 (en) 2016-12-22 2020-11-19 Nicira, Inc. Collecting and processing contextual attributes on a host
US20180183764A1 (en) 2016-12-22 2018-06-28 Nicira, Inc. Collecting and processing contextual attributes on a host
US20180198791A1 (en) 2017-01-12 2018-07-12 Zscaler, Inc. Systems and methods for cloud-based service function chaining using security assertion markup language (saml) assertion
US20180203736A1 (en) 2017-01-13 2018-07-19 Red Hat, Inc. Affinity based hierarchical container scheduling
US20180219762A1 (en) 2017-02-02 2018-08-02 Fujitsu Limited Seamless service function chaining across domains
US20180227216A1 (en) 2017-02-06 2018-08-09 Silver Peak Systems, Inc. Multi-level Learning For Classifying Traffic Flows From First Packet Data
US20180234360A1 (en) 2017-02-16 2018-08-16 Netscout Systems, Inc Flow and time based reassembly of fragmented packets by ip protocol analyzers
US20180278530A1 (en) 2017-03-24 2018-09-27 Intel Corporation Load balancing systems, devices, and methods
US20180288129A1 (en) 2017-03-29 2018-10-04 Ca, Inc. Introspection driven monitoring of multi-container applications
US20180295036A1 (en) 2017-04-07 2018-10-11 Nicira, Inc. Application/context-based management of virtual networks using customizable workflows
US20180295053A1 (en) 2017-04-10 2018-10-11 Cisco Technology, Inc. Service-function chaining using extended service-function chain proxy for service-function offload
US10158573B1 (en) 2017-05-01 2018-12-18 Barefoot Networks, Inc. Forwarding element with a data plane load balancer
US20180337849A1 (en) 2017-05-16 2018-11-22 Sonus Networks, Inc. Communications methods, apparatus and systems for providing scalable media services in sdn systems
US10333822B1 (en) 2017-05-23 2019-06-25 Cisco Technology, Inc. Techniques for implementing loose hop service function chains
US20180351874A1 (en) 2017-05-30 2018-12-06 At&T Intellectual Property I, L.P. Creating Cross-Service Chains of Virtual Network Functions in a Wide Area Network
US20180349212A1 (en) 2017-06-06 2018-12-06 Shuhao Liu System and method for inter-datacenter communication
US20190007382A1 (en) 2017-06-29 2019-01-03 Vmware, Inc. Ssh key validation in a hyper-converged computing environment
US20190020684A1 (en) 2017-07-13 2019-01-17 Nicira, Inc. Systems and methods for storing a security parameter index in an options field of an encapsulation header
US20190020580A1 (en) 2017-07-14 2019-01-17 Nicira, Inc. Asymmetric network elements sharing an anycast address
US20190036819A1 (en) 2017-07-31 2019-01-31 Nicira, Inc. Use of hypervisor for active-active stateful network service cluster
US20190068500A1 (en) 2017-08-27 2019-02-28 Nicira, Inc. Performing in-line service in public cloud
US20190089679A1 (en) 2017-09-17 2019-03-21 Mellanox Technologies, Ltd. NIC with stateful connection tracking
US20190097838A1 (en) 2017-09-26 2019-03-28 Oracle International Corporation Virtual interface system and method for multi-tenant cloud networking
US20190102280A1 (en) 2017-09-30 2019-04-04 Oracle International Corporation Real-time debugging instances in a deployed container platform
US10637750B1 (en) 2017-10-18 2020-04-28 Juniper Networks, Inc. Dynamically modifying a service chain based on network traffic information
US20190121961A1 (en) 2017-10-23 2019-04-25 L3 Technologies, Inc. Configurable internet isolation and security for laptops and similar devices
US20210044502A1 (en) 2017-10-29 2021-02-11 Nicira, Inc. Service operation chaining
US10797966B2 (en) 2017-10-29 2020-10-06 Nicira, Inc. Service operation chaining
US10805181B2 (en) 2017-10-29 2020-10-13 Nicira, Inc. Service operation chaining
US20190132221A1 (en) 2017-10-29 2019-05-02 Nicira, Inc. Service operation chaining
WO2019084066A1 (en) 2017-10-29 2019-05-02 Nicira, Inc. Service operation chaining methods and computer programs
US20190132220A1 (en) 2017-10-29 2019-05-02 Nicira, Inc. Service operation chaining
US20190140863A1 (en) 2017-11-06 2019-05-09 Cisco Technology, Inc. Dataplane signaled bidirectional/symmetric service chain instantiation for efficient load balancing
US20190149518A1 (en) 2017-11-15 2019-05-16 Nicira, Inc. Packet induced revalidation of connection tracker
US10708229B2 (en) 2017-11-15 2020-07-07 Nicira, Inc. Packet induced revalidation of connection tracker
US10757077B2 (en) 2017-11-15 2020-08-25 Nicira, Inc. Stateful connection policy filtering
US11012420B2 (en) 2017-11-15 2021-05-18 Nicira, Inc. Third-party service chaining using packet encapsulation in a flow-based forwarding element
US20190149516A1 (en) 2017-11-15 2019-05-16 Nicira, Inc. Stateful connection policy filtering
US20190149512A1 (en) 2017-11-15 2019-05-16 Nicira, Inc. Third-party service chaining using packet encapsulation in a flow-based forwarding element
US10938716B1 (en) 2017-11-29 2021-03-02 Riverbed Technology, Inc. Preserving policy with path selection
US20190173850A1 (en) 2017-12-04 2019-06-06 Nicira, Inc. Scaling gateway to gateway traffic using flow hash
US20190173851A1 (en) 2017-12-04 2019-06-06 Nicira, Inc. Scaling gateway to gateway traffic using flow hash
US20210377160A1 (en) 2018-01-12 2021-12-02 Telefonaktiebolaget Lm Ericsson (Publ) Mechanism for control message redirection for sdn control channel failures
US20190230126A1 (en) 2018-01-24 2019-07-25 Nicira, Inc. Flow-based forwarding element configuration
US20190229937A1 (en) 2018-01-25 2019-07-25 Juniper Networks, Inc. Multicast join message processing by multi-homing devices in an ethernet vpn
US20200366526A1 (en) 2018-01-26 2020-11-19 Nicira, Inc. Specifying and utilizing paths through a network
US10797910B2 (en) 2018-01-26 2020-10-06 Nicira, Inc. Specifying and utilizing paths through a network
US10659252B2 (en) 2018-01-26 2020-05-19 Nicira, Inc. Specifying and utilizing paths through a network
US20190238364A1 (en) 2018-01-26 2019-08-01 Nicira, Inc. Specifying and utilizing paths through a network
US20190238363A1 (en) 2018-01-26 2019-08-01 Nicira, Inc. Specifying and utilizing paths through a network
WO2019147316A1 (en) 2018-01-26 2019-08-01 Nicira, Inc. Specifying and utilizing paths through a network
US20200358696A1 (en) 2018-02-01 2020-11-12 Nokia Solutions And Networks Oy Method and device for interworking between service function chain domains
WO2019157955A1 (en) 2018-02-13 2019-08-22 华为技术有限公司 Device access method, related platform and computer storage medium
WO2019168532A1 (en) 2018-03-01 2019-09-06 Google Llc High availability multi-single-tenant services
US20190286475A1 (en) 2018-03-14 2019-09-19 Microsoft Technology Licensing, Llc Opportunistic virtual machine migration
US11038782B2 (en) 2018-03-27 2021-06-15 Nicira, Inc. Detecting failure of layer 2 service using broadcast messages
US20190306086A1 (en) 2018-03-27 2019-10-03 Nicira, Inc. Incorporating layer 2 service between two interfaces of gateway device
US10728174B2 (en) 2018-03-27 2020-07-28 Nicira, Inc. Incorporating layer 2 service between two interfaces of gateway device
US20190306036A1 (en) 2018-03-27 2019-10-03 Nicira, Inc. Detecting failure of layer 2 service using broadcast messages
US10805192B2 (en) 2018-03-27 2020-10-13 Nicira, Inc. Detecting failure of layer 2 service using broadcast messages
US20200366584A1 (en) 2018-03-27 2020-11-19 Nicira, Inc. Detecting failure of layer 2 service using broadcast messages
US20190306063A1 (en) 2018-03-30 2019-10-03 Yuuta Hamada Communication system and upload method
US20190342175A1 (en) 2018-05-02 2019-11-07 Nicira, Inc. Application of profile setting groups to logical network entities
WO2019226327A1 (en) 2018-05-23 2019-11-28 Microsoft Technology Licensing, Llc Data platform fabric
US20190379578A1 (en) 2018-06-11 2019-12-12 Nicira, Inc. Configuring a compute node to perform services on a host
US20190379579A1 (en) 2018-06-11 2019-12-12 Nicira, Inc. Providing shared memory for access by multiple network service containers executing on single service machine
US20190377604A1 (en) 2018-06-11 2019-12-12 Nuweba Labs Ltd. Scalable function as a service platform
US20200007388A1 (en) 2018-06-29 2020-01-02 Cisco Technology, Inc. Network traffic optimization using in-situ notification system
US10997177B1 (en) 2018-07-27 2021-05-04 Workday, Inc. Distributed real-time partitioned MapReduce for a data fabric
US10645201B2 (en) 2018-07-31 2020-05-05 Vmware, Inc. Packet handling during service virtualized computing instance migration
US20200059761A1 (en) 2018-08-17 2020-02-20 Huawei Technologies Co., Ltd. Systems and methods for enabling private communication within a user equipment group
US11184397B2 (en) 2018-08-20 2021-11-23 Vmware, Inc. Network policy migration to a public cloud
US20200067828A1 (en) 2018-08-23 2020-02-27 Agora Lab, Inc. Large-Scale Real-Time Multimedia Communications
US20200073739A1 (en) 2018-08-28 2020-03-05 Amazon Technologies, Inc. Constraint solver execution service and infrastructure therefor
WO2020046686A1 (en) 2018-09-02 2020-03-05 Vmware, Inc. Service insertion at logical network gateway
US20200076734A1 (en) 2018-09-02 2020-03-05 Vmware, Inc. Redirection of data messages at logical network gateway
US20200076684A1 (en) 2018-09-02 2020-03-05 Vmware, Inc. Service insertion at logical network gateway
US10944673B2 (en) 2018-09-02 2021-03-09 Vmware, Inc. Redirection of data messages at logical network gateway
US20200084141A1 (en) 2018-09-12 2020-03-12 Corsa Technology Inc. Methods and systems for network security universal control point
CN109213573A (en) 2018-09-14 2019-01-15 珠海国芯云科技有限公司 Device blocking method and apparatus for a container-based virtual desktop
US10834004B2 (en) 2018-09-24 2020-11-10 Netsia, Inc. Path determination method and system for delay-optimized service function chaining
US20200136960A1 (en) 2018-10-27 2020-04-30 Cisco Technology, Inc. Software version aware networking
US20200145331A1 (en) 2018-11-02 2020-05-07 Cisco Technology, Inc., A California Corporation Using In-Band Operations Data to Signal Packet Processing Departures in a Network
US20200162318A1 (en) 2018-11-20 2020-05-21 Cisco Technology, Inc. Seamless automation of network device migration to and from cloud managed systems
US20200195711A1 (en) 2018-12-17 2020-06-18 At&T Intellectual Property I, L.P. Model-based load balancing for network data plane
US20200204492A1 (en) 2018-12-21 2020-06-25 Juniper Networks, Inc. Facilitating flow symmetry for service chains in a computer network
US20200220805A1 (en) 2019-01-03 2020-07-09 Citrix Systems, Inc. Method for optimal path selection for data traffic undergoing high processing or queuing delay
US20200274808A1 (en) 2019-02-22 2020-08-27 Vmware, Inc. Service path selection in load balanced manner
US20200274945A1 (en) 2019-02-22 2020-08-27 Vmware, Inc. Service control plane messaging in service data plane
US20200274944A1 (en) 2019-02-22 2020-08-27 Vmware, Inc. Specifying service chains
US11003482B2 (en) 2019-02-22 2021-05-11 Vmware, Inc. Service proxy operations
US20200274826A1 (en) 2019-02-22 2020-08-27 Vmware, Inc. Providing services with guest vm mobility
US20200272495A1 (en) 2019-02-22 2020-08-27 Vmware, Inc. Using service data plane for service control plane messaging
US20200272494A1 (en) 2019-02-22 2020-08-27 Vmware, Inc. Segregated service and forwarding planes
US20200272496A1 (en) 2019-02-22 2020-08-27 Vmware, Inc. Service rule processing and path selection at the source
WO2020171937A1 (en) 2019-02-22 2020-08-27 Vmware, Inc. Providing services with guest vm mobility
US20200274779A1 (en) 2019-02-22 2020-08-27 Vmware, Inc. Service proxy operations
US20200272493A1 (en) 2019-02-22 2020-08-27 Vmware, Inc. Providing services with service vm mobility
US11042397B2 (en) 2019-02-22 2021-06-22 Vmware, Inc. Providing services with guest VM mobility
US20200274810A1 (en) 2019-02-22 2020-08-27 Vmware, Inc. Distributed forwarding for performing service chain operations
US11074097B2 (en) 2019-02-22 2021-07-27 Vmware, Inc. Specifying service chains
US20200272499A1 (en) 2019-02-22 2020-08-27 Vmware, Inc. Creating and distributing service chain descriptions
US20200274809A1 (en) 2019-02-22 2020-08-27 Vmware, Inc. Providing services by using service insertion and service transport layers
US20200274769A1 (en) 2019-02-22 2020-08-27 Vmware, Inc. Specifying and distributing service chains
US11036538B2 (en) 2019-02-22 2021-06-15 Vmware, Inc. Providing services with service VM mobility
US20200272497A1 (en) 2019-02-22 2020-08-27 Vmware, Inc. Configuring distributed forwarding for performing service chain operations
US10929171B2 (en) 2019-02-22 2021-02-23 Vmware, Inc. Distributed forwarding for performing service chain operations
US20200272501A1 (en) 2019-02-22 2020-08-27 Vmware, Inc. Specifying service chains
US20200272498A1 (en) 2019-02-22 2020-08-27 Vmware, Inc. Distributed forwarding for performing service chain operations
US20200272500A1 (en) 2019-02-22 2020-08-27 Vmware, Inc. Service path generation in load balanced manner
US20200274801A1 (en) 2019-02-22 2020-08-27 Vmware, Inc. Service path computation for service insertion
US20200274778A1 (en) 2019-02-22 2020-08-27 Vmware, Inc. Providing services by using service insertion and service transport layers
US10949244B2 (en) 2019-02-22 2021-03-16 Vmware, Inc. Specifying and distributing service chains
US20200274757A1 (en) 2019-02-22 2020-08-27 Vmware, Inc. Providing services by using multiple service planes
US20200274795A1 (en) 2019-02-22 2020-08-27 Vmware, Inc. Service control plane messaging in service data plane
US20200287962A1 (en) 2019-03-05 2020-09-10 Cisco Technology, Inc. Load balancing in a distributed system
US20200344088A1 (en) 2019-04-29 2020-10-29 Vmware, Inc. Network interoperability support for non-virtualized entities
US20200382420A1 (en) 2019-05-31 2020-12-03 Juniper Networks, Inc. Inter-network service chaining
US20200382412A1 (en) 2019-05-31 2020-12-03 Microsoft Technology Licensing, Llc Multi-Cast Support for a Virtual Network
US20200389401A1 (en) 2019-06-06 2020-12-10 Cisco Technology, Inc. Conditional composition of serverless network functions using segment routing
US20210004245A1 (en) 2019-07-02 2021-01-07 Hewlett Packard Enterprise Development Lp Deploying service containers in an adapter device
CN112181632A (en) 2019-07-02 2021-01-05 慧与发展有限责任合伙企业 Deploying service containers in an adapter device
US20210011812A1 (en) 2019-07-10 2021-01-14 Commvault Systems, Inc. Preparing containerized applications for backup using a backup services container and a backup services container-orchestration pod
US20210011816A1 (en) 2019-07-10 2021-01-14 Commvault Systems, Inc. Preparing containerized applications for backup using a backup services container in a container-orchestration pod
WO2021041440A1 (en) 2019-08-26 2021-03-04 Microsoft Technology Licensing, Llc Computer device including nested network interface controller switches
US20210073736A1 (en) 2019-09-10 2021-03-11 Alawi Holdings LLC Computer implemented system and associated methods for management of workplace incident reporting
US20210120080A1 (en) 2019-10-16 2021-04-22 Vmware, Inc. Load balancing for third party services
US20210136140A1 (en) 2019-10-30 2021-05-06 Vmware, Inc. Using service containers to implement service chains
WO2021086462A1 (en) 2019-10-30 2021-05-06 Vmware, Inc. Distributed service chain across multiple clouds
US20210135992A1 (en) 2019-10-30 2021-05-06 Vmware, Inc. Distributed fault tolerant service chain
US20210136141A1 (en) 2019-10-30 2021-05-06 Vmware, Inc. Distributed service chain across multiple clouds
US20210136147A1 (en) 2019-10-31 2021-05-06 Keysight Technologies, Inc. Methods, systems and computer readable media for self-replicating cluster appliances
US11157304B2 (en) 2019-11-01 2021-10-26 Dell Products L.P. System for peering container clusters running on different container orchestration systems
US20210218587A1 (en) 2020-01-13 2021-07-15 Vmware, Inc. Service insertion for multicast traffic at boundary
US20210227041A1 (en) 2020-01-20 2021-07-22 Vmware, Inc. Method of network performance visualization of service function chains
US20210227042A1 (en) 2020-01-20 2021-07-22 Vmware, Inc. Method of adjusting service function chains to improve network performance
US20210240734A1 (en) 2020-02-03 2021-08-05 Microstrategy Incorporated Deployment of container-based computer environments
US20210266295A1 (en) 2020-02-25 2021-08-26 Uatc, Llc Deterministic Container-Based Network Configurations for Autonomous Vehicles
US20210271565A1 (en) 2020-03-02 2021-09-02 Commvault Systems, Inc. Platform-agnostic containerized application data protection
US20210314310A1 (en) 2020-04-02 2021-10-07 Vmware, Inc. Secured login management to container image registry in a virtualized computer system
US20210311758A1 (en) 2020-04-02 2021-10-07 Vmware, Inc. Management of a container image registry in a virtualized computer system
US20210349767A1 (en) 2020-05-05 2021-11-11 Red Hat, Inc. Migrating virtual machines between computing environments
US20220060467A1 (en) 2020-08-24 2022-02-24 Just One Technologies LLC Systems and methods for phone number certification and verification
US11153190B1 (en) 2021-01-21 2021-10-19 Zscaler, Inc. Metric computation for traceroute probes using cached data to prevent a surge on destination servers

Non-Patent Citations (27)

* Cited by examiner, † Cited by third party
Title
Author Unknown, "Enabling Service Chaining on Cisco Nexus 1000V Series," Month Unknown, 2012, 25 pages, CISCO.
Author Unknown. "AppLogic Features," Jul. 2007, 2 pages, 3TERA, Inc., available at http:web.archive.orgweb2000630051607www.3tera.comapplogic-features.html.
Casado, Martin, et al., "Virtualizing the Network Forwarding Plane," Dec. 2010, 6 pages.
Datagram, Jun. 22, 2012, Wikipedia entry (Year: 2012). *
Dixon, Colin, et al., "An End to the Middle," Proceedings of the 12th conference on Hot topics in operating systems USENIX Association, May 2009, 5 pages, Berkeley, CA, USA.
Dumitriu, Dan Mihai, et al. (U.S. Appl. No. 61/514,990), filed Aug. 4, 2011.
Greenberg, Albert, et al., "VL2: A Scalable and Flexible Data Center Network," SIGCOMM'09, Aug. 17-21, 2009, 12 pages, ACM, Barcelona, Spain.
Guichard, J., et al., "Network Service Chaining Problem Statement," Network Working Group, Jun. 13, 2013, 14 pages, Cisco Systems, Inc.
Halpern, J., et al., "Service Function Chaining (SFC) Architecture," draft-ietf-sfc-architecture-02, Sep. 20, 2014, 26 pages, IETF.
International Search Report and Written Opinion of PCT/US2014/072897, dated Aug. 4, 2015, Nicira, Inc.
International Search Report and Written Opinion of PCT/US2015/053332, dated Dec. 17, 2015, Nicira, Inc.
Invitation to Pay Additional Fees of PCT/US2014/072897, dated May 29, 2015, Nicira, Inc.
Joseph, Dilip, et al., "A Policy-aware Switching Layer for Data Centers," Jun. 24, 2008, 26 pages, Electrical Engineering and Computer Sciences, University of California, Berkeley, CA, USA.
Karakus, Murat, et al., "Quality of Service (QoS) in Software Defined Networking (SDN): A Survey," Journal of Network and Computer Applications, Dec. 9, 2016, 19 pages, vol. 80, Elsevier, Ltd.
Kumar, S., et al., "Service Function Chaining Use Cases in Data Centers," draft-ietf-sfc-dc-use-cases-01, Jul. 21, 2014, 23 pages, IETF.
Lin, Po-Ching, et al., "Balanced Service Chaining in Software-Defined Networks with Network Function Virtualization," Computer: Research Feature, Nov. 2016, 9 pages, vol. 49, No. 11, IEEE.
Liu, W., et al., "Service Function Chaining (SFC) Use Cases," draft-liu-sfc-use-cases-02, Feb. 13, 2014, 17 pages, IETF.
Non-Published Commonly Owned U.S. Appl. No. 16/005,628, filed Jun. 11, 2018, 44 pages, Nicira, Inc.
Non-Published Commonly Owned U.S. Appl. No. 16/005,636, filed Jun. 11, 2018, 45 pages, Nicira, Inc.
Non-Published Commonly Owned U.S. Appl. No. 16/427,294, filed May 30, 2019, 73 pages, Nicira, Inc.
Non-Published Commonly Owned U.S. Appl. No. 16/816,067, filed Mar. 11, 2020, 55 pages, Nicira, Inc.
Non-Published Commonly Owned U.S. Appl. No. 17/385,809, filed Jul. 26, 2021, 74 pages, Nicira, Inc.
Salsano, Stefano, et al., "Generalized Virtual Networking: An Enabler for Service Centric Networking and Network Function Virtualization," 2014 16th International Telecommunications Network Strategy and Planning Symposium, Sep. 17-19, 2014, 7 pages, IEEE, Funchal, Portugal.
Sekar, Vyas, et al., "Design and Implementation of a Consolidated Middlebox Architecture," 9th USENIX conference on Networked System Design and Implementation, Apr. 25-27, 2012, 14 pages.
Sherry, Justine, et al., "Making Middleboxes Someone Else's Problem: Network Processing as a Cloud Service," SIGCOMM, Aug. 13-17, 2012, 12 pages, ACM, Helsinki, Finland.
Siasi, N., et al., "Container-Based Service Function Chain Mapping," 2019 SoutheastCon, Apr. 11-14, 2019, 6 pages, IEEE, Huntsville, AL, USA.
Xiong, Gang, et al., "A Mechanism for Configurable Network Service Chaining and Its Implementation," KSII Transactions on Internet and Information Systems, Aug. 2016, 27 pages, vol. 10, No. 8, KSII.

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11805056B2 (en) 2013-05-09 2023-10-31 Nicira, Inc. Method and system for service switching using service tags
US11750476B2 (en) 2017-10-29 2023-09-05 Nicira, Inc. Service operation chaining
US11805036B2 (en) 2018-03-27 2023-10-31 Nicira, Inc. Detecting failure of layer 2 service using broadcast messages
US11609781B2 (en) 2019-02-22 2023-03-21 Vmware, Inc. Providing services with guest VM mobility
US11722559B2 (en) 2019-10-30 2023-08-08 Vmware, Inc. Distributed service chain across multiple clouds
US11659061B2 (en) 2020-01-20 2023-05-23 Vmware, Inc. Method of adjusting service function chains to improve network performance
US11734043B2 (en) 2020-12-15 2023-08-22 Vmware, Inc. Providing stateful services in a scalable manner for machines executing on host computers

Also Published As

Publication number Publication date
US20160094661A1 (en) 2016-03-31
US20160094384A1 (en) 2016-03-31
CN107005584B (en) 2020-08-11
EP3202109A1 (en) 2017-08-09
EP3202109B1 (en) 2021-03-10
WO2016054272A1 (en) 2016-04-07
US20160094633A1 (en) 2016-03-31
US10129077B2 (en) 2018-11-13
US20160094457A1 (en) 2016-03-31
US20230052818A1 (en) 2023-02-16
CN107005584A (en) 2017-08-01
US10516568B2 (en) 2019-12-24
US11296930B2 (en) 2022-04-05
CN112291294A (en) 2021-01-29
US20160094632A1 (en) 2016-03-31
US10225137B2 (en) 2019-03-05

Similar Documents

Publication Publication Date Title
US20230052818A1 (en) Controller driven reconfiguration of a multi-layered application or service model
US11405431B2 (en) Method, apparatus, and system for implementing a content switch
US11184327B2 (en) Context aware middlebox services at datacenter edges
US20230336413A1 (en) Method and apparatus for providing a service with a plurality of service nodes
US11522764B2 (en) Forwarding element with physical and virtual data planes
EP3549015B1 (en) Performing context-rich attribute-based services on a host
US10341233B2 (en) Dynamically adjusting a data compute node group
US10999220B2 (en) Context aware middlebox services at datacenter edge
WO2020009784A1 (en) Context aware middlebox services at datacenter edges

Legal Events

Date Code Title Description
AS Assignment
Owner name: NICIRA, INC., CALIFORNIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:JAIN, JAYANT;SENGUPTA, ANIRBAN;LUND, RICK;AND OTHERS;SIGNING DATES FROM 20160129 TO 20160201;REEL/FRAME:037866/0469
STPP Information on status: patent application and granting procedure in general
Free format text: FINAL REJECTION MAILED
STCV Information on status: appeal procedure
Free format text: NOTICE OF APPEAL FILED
STPP Information on status: patent application and granting procedure in general
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER
STPP Information on status: patent application and granting procedure in general
Free format text: NON FINAL ACTION MAILED
STPP Information on status: patent application and granting procedure in general
Free format text: FINAL REJECTION MAILED
STCV Information on status: appeal procedure
Free format text: NOTICE OF APPEAL FILED
STPP Information on status: patent application and granting procedure in general
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER
STPP Information on status: patent application and granting procedure in general
Free format text: NON FINAL ACTION MAILED
STPP Information on status: patent application and granting procedure in general
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER
STPP Information on status: patent application and granting procedure in general
Free format text: FINAL REJECTION MAILED
STCV Information on status: appeal procedure
Free format text: NOTICE OF APPEAL FILED
STCV Information on status: appeal procedure
Free format text: APPEAL BRIEF (OR SUPPLEMENTAL BRIEF) ENTERED AND FORWARDED TO EXAMINER
STPP Information on status: patent application and granting procedure in general
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS
STPP Information on status: patent application and granting procedure in general
Free format text: AWAITING TC RESP, ISSUE FEE PAYMENT VERIFIED
STPP Information on status: patent application and granting procedure in general
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED
STCF Information on status: patent grant
Free format text: PATENTED CASE