WO2022132308A1 - Providing stateful services in a scalable manner for machines executing on host computers - Google Patents

Providing stateful services in a scalable manner for machines executing on host computers

Info

Publication number
WO2022132308A1
WO2022132308A1
Authority
WO
WIPO (PCT)
Prior art keywords
service
containers
data message
particular machine
host computer
Prior art date
Application number
PCT/US2021/056574
Other languages
French (fr)
Inventor
Jayant Jain
Anirban Sengupta
Rick Lund
Original Assignee
Vmware, Inc.
Priority date
Filing date
Publication date
Priority claimed from US17/122,153 external-priority patent/US11611625B2/en
Priority claimed from US17/122,192 external-priority patent/US11734043B2/en
Application filed by Vmware, Inc. filed Critical Vmware, Inc.
Priority to EP21811583.0A priority Critical patent/EP4185959A1/en
Priority to CN202180084495.9A priority patent/CN116710897A/en
Publication of WO2022132308A1 publication Critical patent/WO2022132308A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5077Logical partitioning of resources; Management or configuration of virtualized resources

Definitions

  • Sidecar containers have become popular for micro-services applications, which have one application implemented by many different application components each of which is typically implemented by an individual container. Sidecar containers are often deployed in series with forwarding across the individual service containers.
  • One example is a service mesh that has a proxy container deployed in front of a web server container or application server container to handle services such as authentication, service discovery, encryption, or load balancing.
  • the web server or application server container is configured to send its traffic to the sidecar proxy.
  • the sidecar proxy receives the packet and sends the packet to the web server or application server container.
  • Some embodiments provide a method for performing services on a host computer that executes several machines (e.g., virtual machines (VMs), Pods, containers, etc.) in a datacenter.
  • the method configures a first set of one or more service containers for a first machine executing on the host computer, and a second set of one or more service containers for a second machine executing on the host computer.
  • Each configured service container performs a service operation (e.g., a middlebox service operation, such as firewall, load balancing, encryption, etc.) on data messages associated with a particular machine (e.g., on ingress and/or egress data messages to and/or from the particular machine).
  • the method also configures a module along the particular machine’s datapath (e.g., ingress and/or egress datapath) to identify a subset of service operations to perform on a set of data messages associated with the particular machine, and to direct the set of data messages to a set of service containers configured for the particular machine to perform the identified set of service operations on the set of data messages.
  • in some embodiments, the first and second machines are part of one logical network or one virtual private cloud (VPC) that is deployed over a common physical network in the datacenter.
  • the first and second sets of containers in some embodiments can be identical sets of containers (i.e., perform the same middlebox service operations), or can be different sets of containers (i.e., one set of containers performs a middlebox service operation not performed by the other set of containers).
  • the first and second sets of containers respectively operate on first and second Pods.
  • each container operates on its own dedicated Pod.
  • at least two containers in one set of containers execute on two different Pods, but at least one Pod executes two or more containers in the same container set.
  • Each Pod in some embodiments executes (i.e., operates) on a service virtual machine (SVM).
  • the first set of containers execute on a first Pod that executes on a first SVM on the host computer
  • the second set of containers execute on a second Pod that executes on a second SVM on the host computer.
  • the first and second machines are first and second guest virtual machines (GVMs) or first and second guest containers.
  • the SVMs on which the Pods execute are lighter weight VMs (e.g., consume less storage resources and have faster bootup times) than the GVMs.
  • these SVMs in some embodiments support a smaller set of standard specified network interface drivers, while the GVMs support a larger set of network interface drivers.
  • the first and second sets of containers are respectively configured when the first and second machines are configured on the host computer.
  • Each container set in some embodiments is deployed on the host computer when the set’s associated machine is deployed.
  • the containers and/or machines are pre-deployed on the host computer, but the containers are configured for their respective machine when the machines are configured for a particular logical network or VPC.
  • the first and second sets of containers are terminated when the first and second machines are respectively terminated on the host computer.
  • the first and second sets of containers are defined to be part of a resource group of their respective first and second machines. This allows each service container set (e.g., each Pod) to migrate with its machine to another host computer.
  • the migration tools that migrate the machine and its associated service container set in some embodiments not only migrate each service container in the service container set but also the service rules and connection-tracking records of the service containers.
  • the configured module along each machine’s datapath in some embodiments is a classifier that for each data message that passes along the datapath, identifies a subset of service operations that have to be performed on the data message, and passes the data message to a subset of service containers to perform the identified subset of service operations on the data message.
  • the module successively passes the data message to successive service containers in the subset of containers after receiving the data message from each service container in the identified subset of containers (e.g., passes the data message to a second container in the identified container subset after receiving the data message from a first container).
  • the module passes the data message by generating a service identifier that specifies the identified subset of service operations that have to be performed on the data message by a subset of service containers, and providing the service identifier along with the data message so that the data message can be forwarded to successive service containers in the identified subset of service containers.
  • the service operations in the subset of service operations identified by the classifier have a particular order, and the service identifier specifies the particular order.
  • a forwarding element executing on the host computer processes each generated service identifier in order to identify the subset of services that has to be performed on the data message for which the service identifier is generated, and to successively provide the data message to service containers in the subset of service containers to perform the identified subset of service operations.
  • Each particular machine’s classifier in some embodiments can identify different subsets of service operations for different data message flows originating from the particular machine and/or terminating at the particular machine.
  • each particular machine’s classifier is called by a port of a software forwarding element that receives the data messages associated with the particular machine.
  • Figure 1 illustrates an example of a host computer with two different sets of containers that perform service operations for two different guest virtual machines executing on the host computer.
  • Figure 2 illustrates an example wherein each Pod executes on a service virtual machine.
  • Figure 3 illustrates a service processing engine that sequentially calls the service containers in a service chain that it identifies for a data message.
  • Figure 4 illustrates a service processing engine that calls the first service container of the service chain that the service processing engine identifies for a data message.
  • Figure 5 illustrates how some embodiments forward a data message through the service containers when the service containers are distributed across multiple Pods.
  • Figure 6 illustrates a process that a service processing engine performs to identify a subset of service operations to perform on a data message associated with its GVM, and to direct the data message to a subset of service containers configured for its GVM to perform the identified subset of service operations on the data message.
  • Figure 7 illustrates a process that a service SFE performs to identify a subset of service operations to perform on a data message received from a service processing engine, and to direct the data message to a group of service containers of its Pod to perform the identified subset of service operations on the data message.
  • Figure 8 illustrates a service processing engine obtaining a set of one or more contextual attributes associated with a data message from a context engine executing on the host.
  • Figure 9 illustrates in three stages (corresponding to three instances in time) the migration of a GVM from one host computer to another host computer, along with the migration of the set of service containers configured for that GVM.
  • Figure 10 illustrates an example of how the service processing engines and service Pods are managed and configured in some embodiments.
  • Figure 11 conceptually illustrates a computer system with which some embodiments of the invention are implemented.
  • Some embodiments provide a method for performing services on a host computer that executes several machines (e.g., virtual machines (VMs), Pods, containers, etc.).
  • the method uses at least two different sets of containers to perform service operations for at least two different machines executing on the same host computer.
  • Figure 1 illustrates an example of a host computer 100 with two different sets of containers 105 and 110 that perform service operations for two different guest virtual machines 115 and 120 executing on the host computer.
  • the first set of service containers 105 are configured to perform a first set of service operations for the first virtual machine 115 executing on the host computer 100, while the second set of service containers 110 are configured to perform a second set of service operations for the second virtual machine 120 executing on the host computer.
  • the first set of service containers 105 includes firewall, network address translation (NAT), and load balancing service containers 122, 124 and 126 that perform firewall, NAT and load balancing service operations on ingressing and/or egressing data messages to and/or from the VM 115.
  • the second set of service containers 110 includes firewall, load balancing service, and intrusion detection system (IDS) containers 132, 134 and 136 that perform firewall, load balancing, and IDS service operations on ingressing and/or egressing data messages to and/or from the VM 120.
  • the sets of containers 105 and 110 in some embodiments are identical sets of containers (i.e., include the same containers to perform the same middlebox service operations), while in other embodiments are different sets of containers (i.e., one set of containers has at least one container that is not part of the other container set and that performs one middlebox service operation not performed by the other container set).
  • for each guest VM 115 or 120, the host computer 100 includes a service processing engine 150 or 155 to identify different subsets of service operations to perform on different sets of data message flows associated with the particular VM, and to direct the different sets of data message flows to different sets of service containers configured for the particular VM to perform the identified sets of service operations on these data messages.
  • the host computer executes a software forwarding element (SFE) 160 (e.g., a software switch) that connects the guest VMs of the host computer 100 to each other and to other VMs, machines, devices and appliances outside of the host computer 100.
  • the SFE has two ports 165 and 170 that connect with (i.e., communicate with) the virtual network interface card (VNIC) 1075 of the GVMs.
  • each port 165 or 170 is configured to re-direct all ingress and egress data messages to and from the port’s associated VM (i.e., VM 115 for port 165, and VM 120 for port 170) to the service processing engine 150 or 155 of the VM.
  • the SFE also has a port 180 that interfaces with a physical network interface controller (not shown) of the host computer to forward and receive all data messages exiting and entering the host computer 100.
  • the SFE 160, along with multiple other SFEs executing on other host computers, implements different logical forwarding elements (LFEs) (e.g., multiple logical switches) for different logical networks.
  • each LFE spans multiple host computers that execute the SFEs that implement the LFE.
  • the VMs 115 and 120 are part of one logical network, while in other embodiments these VMs are part of two different logical networks.
  • Other embodiments do not employ logical networks but partition the physical network of the datacenter (e.g., the IP address space of the datacenter) into segregated networks that can be treated as virtual private clouds (VPCs).
  • the VMs 115 and 120 are part of one VPC, while in other embodiments these VMs are part of two different VPCs.
  • each container set 105 or 110 has all of its containers operate on one Pod (i.e., the containers of set 105 execute on one Pod, while the containers of the set 110 execute on another Pod).
  • each container operates on its own dedicated Pod.
  • at least two containers in one set of containers execute on two different Pods, but at least one Pod executes two or more containers in the same container set.
  • a Pod is a group of one or more containers, with shared storage and network resources.
  • a Pod typically has a specification for how to run the containers, and its contents are typically co-located, co-scheduled, and run in a shared context.
  • a Pod models an application-specific “logical host,” and contains one or more application containers.
  • Each Pod in some embodiments executes (i.e., operates) on a service virtual machine (SVM).
  • Figure 2 illustrates a host computer 200 with a first set of containers that execute on a first Pod 205 that executes on a first SVM 215 on the host computer 200, while the second set of containers execute on a second Pod 210 that executes on a second SVM 220 on the host computer 200.
  • the first set of containers includes a firewall 122, a network address translator 124, and a load balancer 126
  • the second set of containers includes a firewall 132, a load balancer 134, and an IDS detector 136.
  • the SVMs 215 and 220 on which the Pods execute are lighter weight VMs (e.g., consume less storage resources and have faster bootup times) than the GVMs 115 and 120. Also, these SVMs in some embodiments support a smaller set of standard specified network interface drivers, while the GVMs support a larger set of network interface drivers. In some embodiments, each SVM has a vmxnet3 standard VNIC (not shown) through which the service processing engine 150 communicates with the SVM and its Pod.
  • each Pod 205 or 210 in some embodiments includes a forwarding element 225 or 230 that (1) based on the service identifier supplied by the service processing engine 150 or 155, identifies the service containers that need to perform a service operation on a data message provided by the service processing engine 150 or 155, and (2) successively provides the data message to each identified service container.
  • the set of containers 105 or 110 (e.g., Pod 205 or 210 with its containers) for each GVM 115 or 120 is respectively configured when the GVM 115 or 120 is configured on the host computer.
  • Each container set in some embodiments is deployed on the host computer when the set’s associated machine is deployed.
  • the containers (e.g., the Pods 205 or 210) and/or GVMs are pre-deployed on the host computer, but the containers are configured for their respective GVMs 115 or 120 when the GVMs 115 or 120 are configured for a particular logical network or VPC.
  • the set of containers 105 or 110 (e.g., Pod 205 or 210 with its containers) for each GVM 115 or 120 is terminated when the GVM is respectively terminated on the host computer.
  • the set of containers 105 or 110 (e.g., Pod 205 or 210 with its containers) for each GVM 115 or 120 is defined to be part of a resource group of its GVM. This allows each service container set (e.g., each Pod) to migrate with its GVM to another host computer.
  • the migration tools that migrate the GVM and its associated service container set in some embodiments migrate the service rules and connection-tracking records of the service containers in the service container set.
  • the service processing engine 150 or 155 of each GVM 115 or 120 identifies for a data message a subset of one or more service operations that have to be performed on that data message's flow, and directs a subset of the service containers configured for the GVM to perform the identified subset of service operations on the data message.
  • a subset of two or more service operations or containers is referred to below as a service chain or chain of service operations/containers.
  • Figure 3 illustrates that in some embodiments a service processing engine 350 sequentially calls the service containers in a service chain that it identifies for a data message.
  • each service container returns the service-processed data message back to the service processing engine (assuming that the service container does not determine that the data message should be dropped).
  • the service chain includes first a firewall operation performed by a firewall container 305, next a NAT operation performed by a NAT container 310, and last a load balancing operation performed by a load balancing container 315.
  • Figure 4 illustrates that in other embodiments a service processing engine 450 calls the first service container (a firewall container 405) of the service chain that the service processing engine 450 identifies for a data message.
  • the data message is then passed from one service container to the next service container (e.g., from the firewall container to a NAT container 410, or from NAT container 410 to the load balancing container 415) in the chain, until the last service container (in this example the load balancer 415) returns the service-processed data message to the service processing engine 450.
  • each service container forwards the data message to the next service container in the service chain when there is a subsequent service container in the service chain, or back to the service processing engine when there is no subsequent service container in the service chain.
  • a service forwarding element forwards the data message to the successive service containers.
  • a service SFE 225 or 230 forwards a data message received by its Pod 205 or 210 to successive service containers that are identified by the service identifier supplied by the service processing engine 150 or 155, in an order identified by this service identifier.
  • FIG. 5 illustrates how some embodiments forward a data message through the service containers when the service containers are distributed across multiple Pods.
  • each Pod’s service SFE is responsible for forwarding a data message to its service containers that are on the service chain specified by the service identifier provided by the service processing engine.
  • the service SFE 525 of Pod 505 first provides the data message to a firewall container 502 and then to a NAT container 506.
  • the Pod 505 then returns the data message back to the service processing engine 550, which then provides the data message to Pod 510.
  • the service SFE 530 of this Pod provides the data message first to the load balancing container 512 and then to the encryption container 516, before returning the data message back to the service processing engine 550.
  • the service processing engine provides the data message along with the service identifier to each Pod.
  • the service processing engine provides different service identifiers to the Pods 505 and 510 as the different Pods have to perform different service operations.
  • the service processing engine provides the same service identifier to each Pod, and each Pod’s service SFE can map the provided service identifier to a group of one or more of its service containers that need to process the data message.
  • the service SFE or the service processing engine adjusts (e.g., increments or decrements) a next service value that specifies the next service to perform in a list of service operations identified by the service identifier. The service SFE of each Pod can then use this service value to identify the next service that has to be performed and the service container to perform this next service.
  • FIG. 6 illustrates a process 600 that a service processing engine 150 or 155 performs in some embodiments to identify a subset of service operations to perform on a data message associated with its GVM, and to direct the data message to a subset of service containers configured for its GVM to perform the identified subset of service operations on the data message.
  • the process 600 starts when the service processing engine is called (at 605) by its associated SFE port to process a data message received at this port.
  • the data message in some embodiments can be an egress data message originating from the service processing engine’s associated GVM, or an ingress data message destined to this GVM.
  • the process 600 determines whether it has a record for the received data message’s flow in a connection tracking storage that the process maintains. The process 600 would have this record if it previously analyzed another data message in the same flow. For its determination at 610, the process 600 in some embodiments compares the flow identifier (e.g., the five-tuple identifier, i.e., source and destination IP addresses, source and destination ports and protocol) of the received data message with identifiers of records stored in the connection tracking storage to determine whether the connection tracking storage has a record with a record identifier that matches the flow identifier.
  • the process 600 determines that it has not previously processed the received data message’s flow, and transitions to 625 to identify a service chain for the data message and to store in the connection tracker an identifier (i.e., a service chain ID) that specifies the identified service chain.
  • the service processing engine's connection tracker in some embodiments stores CT records that specify service chain identifiers for different data message flows processed by the service processing engine.
  • the process 600 compares the flow identifier (e.g., the five-tuple identifier) of the received data message with identifiers of service-chain specifying records stored in a service rule storage that the process 600 analyzes. Based on this comparison, the process 600 identifies a service-chain specifying record that matches the received data message (i.e., that has a record identifier that matches the data message's flow identifier). For different ingress/egress data message flows, the process 600 can identify the same service chain or different service chains based on the service-chain specifying records stored in the service rule storage.
  • Each service chain in some embodiments has an associated service chain identifier.
  • each service-chain specifying record stores the service chain identifier along with the identities of the service containers and/or Pods that have to perform the services in the identified service chain.
  • each service-chain specifying record specifies the identities of the service containers and/or Pods that have to perform the services, and the service chain identifier is derived from the specified identities of the service containers and/or Pods.
  • each service-chain specifying record just stores the service chain identifier.
  • the process 600 derives the identities of the service containers and/or Pods that have to perform the services from the service chain identifier stored by the record matching the data message’s flow.
  • the process 600 passes the data message and the service identifier (that specifies a subset of service operations that have to be performed on the data message by a subset of service containers) to a service Pod that contains the first service container in the identified service chain that has to process the data message.
  • the service processing engine 150 or 155 passes data messages and their attributes to its associated service Pod(s) by using shared memory allocated by a hypervisor on which both the service processing engine and the service Pod operate.
  • the service operations in the service chain have to be performed in a particular order, and the service identifier specifies the particular order (e.g., the service identifier in some embodiments is associated with a lookup table record maintained by the service Pod that identifies the order of the service operations, while in other embodiments the service identifier can be deconstructed to obtain the identifiers of the successive service operations or container).
  • a forwarding element of the service Pod processes the service identifier in order to identify the subset of services that has to be performed on the data message for which the service identifier is generated, and to successively provide the data message to service containers in a subset of service containers to perform the identified subset of service operations.
  • the process 600 receives the data message from the service Pod. It then determines (at 640) whether there are any additional services in the identified service chain that still need to be performed. As mentioned above (e.g., by reference to Figure 5), sometimes not all of the service containers for a service chain are implemented on the same service Pods. In such cases, the process 600 has to check (at 640) whether it needs to pass the data message to another service Pod to have its service container(s) process the data message.
  • when the process 600 determines (at 640) that additional services need to be performed, it passes the data message and the service identifier to the next service Pod that contains the next service container(s) in the identified service chain for processing the data message.
  • the service processing engine adjusts (e.g., increments or decrements) a next service value that specifies the next service to perform in a list of service operations identified by the service identifier.
  • the service SFE of each Pod uses this service value to identify the next service that has to be performed and the service container to perform this next service.
  • the process 600 does not even need to provide a service identifier with the data message to the next service Pod, as the process 600 just handles the successive calls to the successive service containers that perform the service operations in the service chain.
  • when the process determines (at 640) that all of the service operations specified by the identified service chain have been performed on the data message, the process returns (at 650) the data message back to the SFE port that called it, and then ends.
  • the process also transitions to 650 from 620 to which the process 600 transitions when it determines (at 610) that its connection tracker has a record that matches the received data message (e.g., matches the data message’s flow ID).
  • the process retrieves the service chain identifier from the matching connection tracker record, and based on this service chain identifier, performs a set of operations that are similar to the operations 625-640. Once all of these operations are completed, the process transitions to 650 to return the data message back to the SFE port that called it, and then ends.
  • FIG. 7 illustrates a process 700 that a service SFE 225 or 230 performs in some embodiments to identify a subset of service operations to perform on a data message received from a service processing engine, and to direct the data message to a group of service containers of its Pod to perform the identified subset of service operations on the data message.
  • the process 700 starts when the service SFE is called (at 705) to process a data message by its associated service processing engine 150 or 155.
  • the service SFE receives a service chain identifier in some embodiments.
  • the process 700 matches the service chain identifier with a record in a service rule storage that has several records that specify different sequences of service operations for different service chain identifiers.
  • the matching record in some embodiments is the record that has a service chain identifier that matches the service chain identifier received with the data message.
  • the service operations in the service chain have to be performed in a particular order.
  • the matching record identifies the order of the service operations.
  • the service SFE then performs operations 715-730 to successively provide the data message to service containers in a group of one or more service containers on its Pod to perform the identified group of service operations. Specifically, at 715, the process 700 passes the data message to the first service container in this group to perform its service operation on the data message. Next, at 720, the process 700 receives the data message from the service container. It then determines (at 725) whether there are any additional services in the identified group of service operations that still need to be performed.
  • when the process 700 determines (at 725) that additional services need to be performed, it passes (at 730) the data message to the next service container in the identified group for processing.
  • when the process determines (at 725) that all of the service operations specified by the identified group of service containers have been performed on the data message, the process returns (at 735) the data message back to the service processing engine that called it, and then ends.
  • the service containers perform their service operations not only based on the flow identifiers of the data messages that they process, but also based on contextual attributes (e.g., attributes other than layers 2, 3 and 4 header values) associated with these data messages. For instance, for a data message, a service container in some embodiments selects a service rule that specifies the service operation to perform, by using the data message’s flow attributes and one or more contextual attributes associated with the data message.
  • the service container in some embodiments compares the data message’s flow attributes (e.g., one or more of the data message’s L2-L4 header values) and one or more of the data message’s contextual attributes with match attributes of the service rules, in order to identify the highest priority service rule with match attributes that match the message’s flow and contextual attributes.
  • contextual attributes include source application name, application version, traffic type, resource consumption parameter, threat level, layer 7 parameters, process identifiers, user identifiers, group identifiers, process name, process hash, loaded module identifiers, etc.
  • Figure 8 illustrates that in some embodiments the service processing engine 850 obtains a set of one or more contextual attributes associated with a data message from a context engine 805 executing on the host. It also shows the service processing engine passing the obtained contextual attribute set to a service Pod 810 along with the data message and a service identifier specifying the service operations to perform on the data message.
  • the context engine 805 in some embodiments obtains some or all of the contextual attributes from a guest introspection agent 820 executing on the service processing engine’s GVM 825.
  • U.S. Patent 10,802,857 further describes the context engine 805 and the manner in which this engine obtains contextual attributes for data message flows from GI agents that execute on the GVMs and from other service engines (such as a deep packet inspector) executing on the host computer.
  • U.S. Patent 10,802,857 is incorporated herein by reference.
  • Figure 9 illustrates in three stages (corresponding to three instances in time) the migration of a GVM 925 from one host computer 905 to another host computer 910, along with the migration of the set of service containers configured for the GVM 925.
  • the set of configured service containers all reside on one Pod 920 that executes on one SVM 922.
  • the SVM 922 (along with its Pod and the Pod’s associated service containers) migrates from host computer 905 to host computer 910 along with the GVM 925.
  • the SVM 922 (along with its Pod and the Pod’s associated service containers) is defined to be part of the resource group of the GVM 925, so that VM migration tools on host computers 905 and 910 (e.g., the VM live migration of VMware vSphere) can migrate the SVM 922 to the new host computer 910 when they migrate the GVM 925 to the host computer 910.
  • the migration tools in some embodiments migrate a VM (e.g., a GVM or SVM) to a new host computer by migrating from the old VM to the new VM (1) the configuration file that includes the definition of the VM, (2) the runtime memory (e.g., RAM data) used by the VM, and (3) the device memory (e.g., storage files and data structures) used by the VM. These tools also activate (e.g., instantiate) the VM on the new host computer.
  • each migrating service container moves to the new host computer 910 along with its service rules 932 and its connection tracking records 934.
  • the service-chain identifying rules 936 and the connection tracking records 938 of the migrating GVM’s service processing engine 950 are also migrated to the new host computer 910 from the old host computer 905, so that a service processing engine 955 on the new host computer 910 can use these rules and records for data messages associated with the migrating GVM on this host computer.
  • the service processing engine 950 on the host computer 905 is terminated once the GVM 925 migrates to the host computer 910.
  • each GVM’s associated service Pod serves as an easily constructed and configured sidecar for its GVM. Deploying such a sidecar service Pod for each GVM also eliminates service bottleneck issues, which become problematic as the number of GVMs increases on host computers.
  • This sidecar architecture is also transparent to the guest machines as it is deployed inline in their datapaths without any changes to the configuration of the guest machines.
  • the same service Pod architecture is employed with the same benefits in the embodiments in which the guest machines are guest containers instead of guest virtual machines.
  • FIG. 10 illustrates an example of how the service processing engines and service Pods are managed and configured in some embodiments.
  • This figure illustrates multiple hosts 1000 in a datacenter. As shown, each host includes several service Pods 1030, a context engine 1050, several service processing engines 1022, several GVMs 1005, and an SFE 1010.
  • It also illustrates a set of managers/controllers 1060 for managing the service processing engines 1022 and the service Pods 1030, GVMs 1005, and SFEs 1010.
  • the hosts and managers/controllers communicatively connect to each other through a network 1070, which can be a local area network, a wide area network, a network of networks (such as the Internet), etc.
  • the managers/controllers provide a user interface for the administrators to define service rules for the service processing engines 1022 and the service containers of the service Pods 1030 in terms of flow identifiers and/or contextual attributes, and communicate with the hosts through the network 1070 to provide these service rules.
  • the context engines 1050 collect contextual attributes that are passed to the managers/controllers 1060 through the network 1070 so that these contextual attributes can be used to define service rules.
  • the managers/controllers in some embodiments interact with the discovery engines executing on the host computers 1000 in the datacenter to obtain and refresh an inventory of all processes and services that are running on the GVMs on the hosts.
  • the management plane in some embodiments then provides a rule creation interface for allowing administrators to create service rules for the service processing engines 1022, and the service containers of the service Pods 1030. Once the service rules are defined in the management plane, the management plane supplies some or all of these rules to the hosts 1000, through a set of configuring controllers.
  • the term “software” is meant to include firmware residing in read-only memory or applications stored in magnetic storage, which can be read into memory for processing by a processor. Also, in some embodiments, multiple software inventions can be implemented as sub-parts of a larger program while remaining distinct software inventions. In some embodiments, multiple software inventions can also be implemented as separate programs. Finally, any combination of separate programs that together implement a software invention described here is within the scope of the invention. In some embodiments, the software programs, when installed to operate on one or more electronic systems, define one or more specific machine implementations that execute and perform the operations of the software programs.
  • FIG. 11 conceptually illustrates a computer system 1100 with which some embodiments of the invention are implemented.
  • the computer system 1100 can be used to implement any of the above-described hosts, controllers, and managers. As such, it can be used to execute any of the above described processes.
  • This computer system includes various types of non-transitory machine readable media and interfaces for various other types of machine readable media.
  • Computer system 1100 includes a bus 1105, processing unit(s) 1110, a system memory 1125, a read-only memory 1130, a permanent storage device 1135, input devices 1140, and output devices 1145.
  • the bus 1105 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of the computer system 1100.
  • the bus 1105 communicatively connects the processing unit(s) 1110 with the read-only memory 1130, the system memory 1125, and the permanent storage device 1135.
  • the processing unit(s) 1110 retrieve instructions to execute and data to process in order to execute the processes of the invention.
  • the processing unit(s) may be a single processor or a multi-core processor in different embodiments.
  • the read-only memory (ROM) 1130 stores static data and instructions that are needed by the processing unit(s) 1110 and other modules of the computer system.
  • the permanent storage device 1135 is a read-and-write memory device. This device is a non-volatile memory unit that stores instructions and data even when the computer system 1100 is off. Some embodiments of the invention use a mass-storage device (such as a magnetic or optical disk and its corresponding disk drive) as the permanent storage device 1135.
  • the system memory 1125 is a read-and-write memory device. However, unlike storage device 1135, the system memory is a volatile read-and-write memory, such as a random access memory.
  • the system memory stores some of the instructions and data that the processor needs at runtime.
  • the invention’s processes are stored in the system memory 1125, the permanent storage device 1135, and/or the read-only memory 1130. From these various memory units, the processing unit(s) 1110 retrieve instructions to execute and data to process in order to execute the processes of some embodiments.
  • the bus 1105 also connects to the input and output devices 1140 and 1145.
  • the input devices enable the user to communicate information and select commands to the computer system.
  • the input devices 1140 include alphanumeric keyboards and pointing devices (also called “cursor control devices”).
  • the output devices 1145 display images generated by the computer system.
  • the output devices include printers and display devices, such as cathode ray tubes (CRT) or liquid crystal displays (LCD). Some embodiments include devices such as a touchscreen that function as both input and output devices.
  • bus 1105 also couples computer system 1100 to a network 1165 through a network adapter (not shown).
  • the computer can be a part of a network of computers (such as a local area network (“LAN”), a wide area network (“WAN”), or an Intranet), or a network of networks, such as the Internet. Any or all components of computer system 1100 may be used in conjunction with the invention.
  • Some embodiments include electronic components, such as microprocessors, storage and memory that store computer program instructions in a machine-readable or computer-readable medium (alternatively referred to as computer-readable storage media, machine-readable media, or machine-readable storage media).
  • Such computer-readable media include RAM, ROM, read-only compact discs (CD-ROM), recordable compact discs (CD-R), rewritable compact discs (CD-RW), read-only digital versatile discs (e.g., DVD-ROM, dual-layer DVD-ROM), a variety of recordable/rewritable DVDs (e.g., DVD-RAM, DVD-RW, DVD+RW, etc.), flash memory (e.g., SD cards, mini-SD cards, micro-SD cards, etc.), magnetic and/or solid state hard drives, read-only and recordable Blu-Ray® discs, ultra-density optical discs, and any other optical or magnetic media.
  • the computer-readable media may store a computer program that is executable by at least one processing unit and includes sets of instructions for performing various operations.
  • Examples of computer programs or computer code include machine code, such as is produced by a compiler, and files including higher-level code that are executed by a computer, an electronic component, or a microprocessor using an interpreter.
  • some embodiments are performed by one or more integrated circuits, such as application specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs).
  • such integrated circuits execute instructions that are stored on the circuit itself.
  • As used in this specification, the terms “computer”, “server”, “processor”, and “memory” all refer to electronic or other technological devices. These terms exclude people or groups of people.
  • As used in this specification, the terms “display” or “displaying” mean displaying on an electronic device.
  • As used in this specification, the terms “computer readable medium,” “computer readable media,” and “machine readable medium” are entirely restricted to tangible, physical objects that store information in a form that is readable by a computer. These terms exclude any wireless signals, wired download signals, and any other ephemeral or transitory signals.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer And Data Communications (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

Some embodiments provide a method for performing services on a host computer that executes several machines in a datacenter. The method configures a first set of one or more service containers for a first machine executing on the host computer, and a second set of one or more service containers for a second machine executing on the host computer. Each configured service container performs a service operation (e.g., a middlebox service operation, such as firewall, load balancing, encryption, etc.) on data messages associated with a particular machine (e.g., on ingress and/or egress data messages to and/or from the particular machine). For each particular machine, the method also configures a module along the particular machine's datapath to identify a subset of service operations to perform on a set of data messages associated with the particular machine, and to direct the set of data messages to a set of service containers configured for the particular machine to perform the identified set of service operations on the set of data messages. In some embodiments, the first and second machines are part of one logical network or one virtual private cloud that is deployed over a common physical network in the datacenter.

Description

PROVIDING STATEFUL SERVICES IN A SCALABLE MANNER FOR MACHINES EXECUTING ON HOST COMPUTERS
BACKGROUND
[0001] Sidecar containers have become popular for micro-services applications, which have one application implemented by many different application components each of which is typically implemented by an individual container. Sidecar containers are often deployed in series with forwarding across the individual service containers. One example is a service mesh that has a proxy container deployed in front of a web server container or application server container to handle services such as authentication, service discovery, encryption, or load balancing. The web server or application server container is configured to send its traffic to the sidecar proxy. In the return path, the sidecar proxy receives the packet and sends the packet to the web server or application server container.
[0002] These services and their orders are fixed and have to be deployed when the web server or application server container is deployed and essentially operates in a non-transparent mode, i.e., the web server or application server container is configured to forward packets to the sidecar proxy. Mobility of such a container is also restricted because of its dependency on the attached sidecar proxy. Moreover, for virtual machines (VMs) running legacy applications, deployment of inline services (e.g., load balancing, intrusion detection system, layer 7 firewall, etc.) in these architectures is still being done through middleboxes as it is not possible or recommended to touch any part of the VM image.
BRIEF SUMMARY
[0003] Some embodiments provide a method for performing services on a host computer that executes several machines (e.g., virtual machines (VMs), Pods, containers, etc.) in a datacenter. The method configures a first set of one or more service containers for a first machine executing on the host computer, and a second set of one or more service containers for a second machine executing on the host computer. Each configured service container performs a service operation (e.g., a middlebox service operation, such as firewall, load balancing, encryption, etc.) on data messages associated with a particular machine (e.g., on ingress and/or egress data messages to and/or from the particular machine).
[0004] For each particular machine, the method also configures a module along the particular machine’s datapath (e.g., ingress and/or egress datapath) to identify a subset of service operations to perform on a set of data messages associated with the particular machine, and to direct the set of data messages to a set of service containers configured for the particular machine to perform the identified set of service operations on the set of data messages. In some embodiments, the first and second machines are part of one logical network or one virtual private cloud (VPC) that is deployed over a common physical network in the datacenter.
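To make the arrangement in paragraphs [0003]-[0004] concrete, the following is a minimal illustrative sketch in Python; it is not taken from the patent, and the names (ServiceContainer, classify, the five-tuple layout, the "vm-a"/"vm-b" identifiers, and the port-80 rule) are assumptions for exposition only. It shows each machine being assigned its own set of service containers and a datapath-module stub that selects a subset of those containers for a given flow.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

# Hypothetical five-tuple flow key: (src_ip, dst_ip, src_port, dst_port, protocol).
FlowKey = Tuple[str, str, int, int, str]

@dataclass
class ServiceContainer:
    """One service container (e.g., firewall, NAT, load balancer) dedicated to a machine."""
    name: str
    apply: Callable[[bytes], bytes]  # performs the middlebox operation on a data message

# Each machine gets its own set of service containers (a first set for one VM,
# a second set for the other VM on the same host).
service_sets: Dict[str, List[ServiceContainer]] = {
    "vm-a": [ServiceContainer("firewall", lambda m: m),
             ServiceContainer("nat", lambda m: m),
             ServiceContainer("load-balancer", lambda m: m)],
    "vm-b": [ServiceContainer("firewall", lambda m: m),
             ServiceContainer("ids", lambda m: m)],
}

def classify(vm: str, flow: FlowKey) -> List[ServiceContainer]:
    """Datapath module stub: pick the subset of the VM's containers for this flow.
    A real classifier would match the flow against per-VM service rules."""
    containers = service_sets[vm]
    if flow[4] == "tcp" and flow[3] == 80:        # e.g., web traffic gets the full chain
        return containers
    return [c for c in containers if c.name == "firewall"]   # otherwise: firewall only
```

The hard-coded port check merely stands in for the per-machine service rules that the classifier would actually consult.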
[0005] The first and second sets of containers in some embodiments can be identical sets of containers (i.e., perform the same middlebox service operations), or can be different sets of containers (i.e., one set of containers performs a middlebox service operation not performed by the other set of containers). In some embodiments, the first and second sets of containers respectively operate on first and second Pods. In other embodiments, each container operates on its own dedicated Pod. In still other embodiments, at least two containers in one set of containers execute on two different Pods, but at least one Pod executes two or more containers in the same container set.
[0006] Each Pod in some embodiments executes (i.e., operates) on a service virtual machine (SVM). For instance, in some embodiments, the first set of containers execute on a first Pod that executes on a first SVM on the host computer, while the second set of containers execute on a second Pod that executes on a second SVM on the host computer. In some embodiments, the first and second machines are first and second guest virtual machines (GVMs) or first and second guest containers. In some embodiments where the first and second machines are first and second GVMs, the SVMs on which the Pods execute are lighter weight VMs (e.g., consume less storage resources and have faster bootup times) than the GVMs. Also, these SVMs in some embodiments support a smaller set of standard specified network interface drivers, while the GVMs support a larger set of network interface drivers.
[0007] In some embodiments, the first and second sets of containers (e.g., the first and second Pods) are respectively configured when the first and second machines are configured on the host computer. Each container set in some embodiments is deployed on the host computer when the set’s associated machine is deployed. Alternatively, in other embodiments, the containers and/or machines are pre-deployed on the host computer, but the containers are configured for their respective machine when the machines are configured for a particular logical network or VPC.
[0008] In some embodiments, the first and second sets of containers (e.g., the first and second Pods) are terminated when the first and second machines are respectively terminated on the host computer. Also, in some embodiments, the first and second sets of containers (e.g., the first and second Pods) are defined to be part of a resource group of their respective first and second machines. This allows each service container set (e.g., each Pod) to migrate with its machine to another host computer. The migration tools that migrate the machine and its associated service container set in some embodiments not only migrate each service container in the service container set but also the service rules and connection-tracking records of the service containers.
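The sketch below illustrates, under stated assumptions, what "migrating the service containers together with their rules and connection-tracking records" could look like as a data structure; the class and field names (ContainerState, ResourceGroup, build_migration_bundle) are invented for exposition and are not the patent's, vSphere's, or any migration tool's actual format.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ContainerState:
    """Per-service-container state that must travel with the machine."""
    image: str                                   # container image reference
    service_rules: List[dict] = field(default_factory=list)
    conn_track_records: List[dict] = field(default_factory=list)

@dataclass
class ResourceGroup:
    """The machine plus its sidecar service Pod, migrated as one unit."""
    machine_id: str
    service_pod: Dict[str, ContainerState] = field(default_factory=dict)

def build_migration_bundle(group: ResourceGroup) -> dict:
    """Collect everything the destination host needs to resume the services
    mid-flow: container definitions, their rules, and their connection state."""
    return {
        "machine": group.machine_id,
        "containers": {
            name: {
                "image": state.image,
                "service_rules": state.service_rules,
                "conn_track": state.conn_track_records,
            }
            for name, state in group.service_pod.items()
        },
    }

# Example usage with hypothetical identifiers.
group = ResourceGroup("gvm-925", {"firewall": ContainerState("fw:1.0")})
print(build_migration_bundle(group))
```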
[0009] The configured module along each machine’s datapath (e.g., ingress and/or egress datapath) in some embodiments is a classifier that for each data message that passes along the datapath, identifies a subset of service operations that have to be performed on the data message, and passes the data message to a subset of service containers to perform the identified subset of service operations on the data message. In some embodiments, the module successively passes the data message to successive service containers in the subset of containers after receiving the data message from each service container in the identified subset of containers (e.g., passes the data message to a second container in the identified container subset after receiving the data message from a first container).
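A minimal sketch of the sequential hand-off described in paragraph [0009]: the classifier passes the message to one container at a time and waits for it to come back before calling the next. The ServiceOp type and the firewall/NAT lambdas are illustrative stand-ins, and returning None models a container deciding to drop the message.

```python
from typing import Callable, List, Optional

# A service operation takes a data message and returns the (possibly rewritten)
# message, or None if the message should be dropped. Purely illustrative.
ServiceOp = Callable[[bytes], Optional[bytes]]

def apply_service_chain(msg: bytes, chain: List[ServiceOp]) -> Optional[bytes]:
    """Pass the data message to each identified service container in turn,
    waiting for each one to hand it back before calling the next."""
    for op in chain:
        result = op(msg)
        if result is None:        # a container (e.g., the firewall) dropped the message
            return None
        msg = result
    return msg

# Example: firewall passes the message through, NAT rewrites an address.
firewall: ServiceOp = lambda m: m if not m.startswith(b"BLOCK") else None
nat: ServiceOp = lambda m: m.replace(b"10.0.0.5", b"192.0.2.7")
print(apply_service_chain(b"payload from 10.0.0.5", [firewall, nat]))
```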
[0010] In other embodiments, the module passes the data message by generating a service identifier that specifies the identified subset of service operations that have to be performed on the data message by a subset of service containers, and providing the service identifier along with the data message so that the data message can be forwarded to successive service containers in the identified subset of service containers. The service operations in the subset of service operations identified by the classifier have a particular order, and the service identifier specifies the particular order. In some embodiments, a forwarding element executing on the host computer (e.g., a forwarding element executing on the Pod that executes the service containers) processes each generated service identifier in order to identify the subset of services that has to be performed on the data message for which the service identifier is generated, and to successively provide the data message to service containers in the subset of service containers to perform the identified subset of service operations.
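The sketch below shows one plausible way a Pod-local forwarding element could turn a service identifier into an ordered group of local service operations and walk the message through them, with a "next service" index for the case where part of the chain already ran elsewhere. The table layout, the identifier format ("sc-7"), and the class name PodServiceForwarder are assumptions for illustration, not the encoding the patent uses.

```python
from typing import Callable, Dict, List, Optional

ServiceOp = Callable[[bytes], Optional[bytes]]

class PodServiceForwarder:
    """Pod-local forwarding element: maps a service identifier to the ordered
    group of local containers and dispatches the message through them."""

    def __init__(self, chains: Dict[str, List[ServiceOp]]):
        # service identifier -> ordered list of this Pod's service operations
        self.chains = chains

    def handle(self, msg: bytes, service_id: str, next_index: int = 0) -> Optional[bytes]:
        chain = self.chains[service_id]
        # Start at the "next service" value, which the caller advances when
        # earlier services in the chain were performed on another Pod.
        for op in chain[next_index:]:
            result = op(msg)
            if result is None:            # container dropped the message
                return None
            msg = result
        return msg                        # hand the message back to the caller

# Example: identifier "sc-7" means firewall then load balancing on this Pod.
forwarder = PodServiceForwarder({"sc-7": [lambda m: m, lambda m: m + b" (lb)"]})
print(forwarder.handle(b"data message", "sc-7"))
```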
[0011] Each particular machine’s classifier in some embodiments can identify different subsets of service operations for different data message flows originating from the particular machine and/or terminating at the particular machine. In some embodiments, each particular machine’s classifier is called by a port of a software forwarding element that receives the data messages associated with the particular machine.
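Because the classifier can pick different service chains for different flows but only needs to evaluate its rules once per flow, a connection-tracking cache keyed on the flow's five-tuple is a natural implementation. The following is a hedged sketch of that idea; the rule representation (positional match criteria), the chain identifier "sc-web", and the class name FlowClassifier are invented for illustration.

```python
from typing import Dict, List, Optional, Tuple

FlowKey = Tuple[str, str, int, int, str]   # src_ip, dst_ip, src_port, dst_port, protocol

class FlowClassifier:
    """Per-machine classifier: the first packet of a flow is matched against
    the service rules, later packets hit the connection-tracking cache."""

    def __init__(self, rules: List[Tuple[dict, str]]):
        self.rules = rules                      # ordered (match-criteria, chain-id) pairs
        self.conn_track: Dict[FlowKey, str] = {}

    def service_chain_for(self, flow: FlowKey) -> Optional[str]:
        if flow in self.conn_track:             # previously classified flow
            return self.conn_track[flow]
        for match, chain_id in self.rules:      # highest-priority rule first
            if all(flow[i] == v for i, v in match.items()):
                self.conn_track[flow] = chain_id
                return chain_id
        return None                             # no matching rule: no services

# Example rule: TCP traffic to port 80 gets service chain "sc-web".
clf = FlowClassifier(rules=[({3: 80, 4: "tcp"}, "sc-web")])
print(clf.service_chain_for(("10.0.0.5", "203.0.113.9", 49152, 80, "tcp")))
```

Subsequent packets of the same flow skip the rule walk entirely, which mirrors the connection-tracker behavior described for the service processing engine.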
[0012] The preceding Summary is intended to serve as a brief introduction to some embodiments of the invention. It is not meant to be an introduction or overview of all inventive subject matter disclosed in this document. The Detailed Description that follows and the Drawings that are referred to in the Detailed Description will further describe the embodiments described in the Summary as well as other embodiments. Accordingly, to understand all the embodiments described by this document, a full review of the Summary, Detailed Description, the Drawings and the Claims is needed. Moreover, the claimed subject matters are not to be limited by the illustrative details in the Summary, Detailed Description and the Drawing.
BRIEF DESCRIPTION OF THE DRAWINGS
[0013] The novel features of the invention are set forth in the appended claims. However, for purpose of explanation, several embodiments of the invention are set forth in the following figures.
[0014] Figure 1 illustrates an example of a host computer with two different sets of containers that perform service operations for two different guest virtual machines executing on the host computer.
[0015] Figure 2 illustrates an example wherein each Pod executes on a service virtual machine.
[0016] Figure 3 illustrates a service processing engine that sequentially calls the service containers in a service chain that it identifies for a data message.
[0017] Figure 4 illustrates a service processing engine that calls the first service container of the service chain that the service processing engine identifies for a data message.
[0018] Figure 5 illustrates how some embodiments forward a data message through the service containers when the service containers are distributed across multiple Pods.
[0019] Figure 6 illustrates a process that a service processing engine performs to identify a subset of service operations to perform on a data message associated with its GVM, and to direct the data message to a subset of service containers configured for its GVM to perform the identified subset of service operations on the data message.
[0020] Figure 7 illustrates a process that a service SFE performs to identify a subset of service operations to perform on a data message received from a service processing engine, and to direct the data message to a group of service containers of its Pod to perform the identified subset of service operations on the data message.
[0021] Figure 8 illustrates a service processing engine obtaining a set of one or more contextual attributes associated with a data message from a context engine executing on the host.
[0022] Figure 9 illustrates in three stages (corresponding to three instances in time) the migration of a GVM from one host computer to another host computer, along with the migration of the set of service containers configured for that GVM.
[0023] Figure 10 illustrates an example of how the service processing engines and service Pods are managed and configured in some embodiments.
[0024] Figure 11 conceptually illustrates a computer system with which some embodiments of the invention are implemented.
DETAILED DESCRIPTION
[0025] In the following detailed description of the invention, numerous details, examples, and embodiments of the invention are set forth and described. However, it will be clear and apparent to one skilled in the art that the invention is not limited to the embodiments set forth and that the invention may be practiced without some of the specific details and examples discussed.
[0026] Some embodiments provide a method for performing services on a host computer that executes several machines (e.g., virtual machines (VMs), Pods, containers, etc.). In some embodiments, the method uses at least two different sets of containers to perform service operations for at least two different machines executing on the same host computer. Figure 1 illustrates an example of a host computer 100 with two different sets of containers 105 and 110 that perform service operations for two different guest virtual machines 115 and 120 executing on the host computer.
[0027] The first set of service containers 105 are configured to perform a first set of service operations for the first virtual machine 115 executing on the host computer 100, while the second set of service containers 110 are configured to perform a second set of service operations for the second virtual machine 120 executing on the host computer. In this example, the first set of service containers 105 includes firewall, network address translation (NAT), and load balancing service containers 122, 124 and 126 that perform firewall, NAT and load balancing service operations on ingressing and/or egressing data messages to and/or from the VM 115.
[0028] The second set of service containers 110 includes firewall, load balancing, and intrusion detection system (IDS) containers 132, 134 and 136 that perform firewall, load balancing, and IDS service operations on ingressing and/or egressing data messages to and/or from the VM 120. In some embodiments, the set of service containers for each machine (e.g., for VM 115 or 120) includes other types of service containers performing other middlebox service operations (e.g., encryption, intrusion prevention, etc.) for one or more data message flows associated with their respective machine. The sets of containers 105 and 110 in some embodiments are identical sets of containers (i.e., include the same containers to perform the same middlebox service operations), while in other embodiments they are different sets of containers (i.e., one set of containers has at least one container that is not part of the other container set and that performs one middlebox service operation not performed by the other container set).
[0029] For each particular VM 115 and 120, the host computer 100 includes a service processing engine 150 or 155 to identify different subsets of service operations to perform on different sets of data message flows associated with the particular VM, and to direct the different sets of data message flows to different sets of service containers configured for the particular machine to perform the identified sets of service operations on those data messages. As shown, the host computer executes a software forwarding element (SFE) 160 (e.g., a software switch) that connects the guest VMs of the host computer 100 to each other and to other VMs, machines, devices and appliances outside of the host computer 100.
[0030] The SFE has two ports 165 and 170 that connect with (i.e., communicate with) the virtual network interface cards (VNICs) 175 of the GVMs. In some embodiments, each port 165 or 170 is configured to re-direct all ingress and egress data messages to and from the port's associated VM (i.e., VM 115 for port 165, and VM 120 for port 170) to the service processing engine 150 or 155 of that VM. The SFE also has a port 180 that interfaces with a physical network interface controller (not shown) of the host computer to forward and receive all data messages exiting and entering the host computer 100.
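As a purely illustrative sketch of the per-machine association just described, the short Python fragment below maps each guest VM to its own service processing engine and its own container set; the table name HOST_SERVICE_CONFIG and the helper engine_for are invented for illustration and are not elements of the figures.

    # Hypothetical per-host table: one service processing engine and one
    # container set per guest VM (mirroring engines 150/155 and container
    # sets 105/110 of Figure 1).
    HOST_SERVICE_CONFIG = {
        "VM-115": {"engine": "spe-150", "containers": ["firewall", "nat", "load-balancer"]},
        "VM-120": {"engine": "spe-155", "containers": ["firewall", "load-balancer", "ids"]},
    }

    def engine_for(vm_name):
        # The SFE port associated with vm_name redirects the VM's ingress and
        # egress data messages to this engine.
        return HOST_SERVICE_CONFIG[vm_name]["engine"]

    print(engine_for("VM-115"))   # -> 'spe-150'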
[0031] In some embodiments, the SFE 160 implements different logical forwarding elements (LFEs), such as multiple logical switches, for different logical networks along with multiple other SFEs executing on other host computers. In some of these embodiments, each LFE spans the multiple host computers that execute the SFEs that implement the LFE. In some embodiments, the VMs 115 and 120 are part of one logical network, while in other embodiments these VMs are part of two different logical networks. Other embodiments do not employ logical networks but instead partition the physical network of the datacenter (e.g., the IP address space of the datacenter) into segregated networks that can be treated as virtual private clouds (VPCs). In some such embodiments, the VMs 115 and 120 are part of one VPC, while in other embodiments these VMs are part of two different VPCs.
[0032] In some embodiments, each container set 105 or 110 has all of its containers operate on one Pod (i.e., the containers of set 105 execute on one Pod, while the containers of the set 110 execute on another Pod). In other embodiments, each container operates on its own dedicated Pod. In still other embodiments, at least two containers in one set of containers execute on two different Pods, but at least one Pod executes two or more containers in the same container set.
[0033] In some embodiments, a Pod is a group of one or more containers, with shared storage and network resources. A Pod typically has a specification for how to run the containers, and its contents are typically co-located, co-scheduled, and run in a shared context. In some embodiments, a Pod models an application-specific “logical host,” and contains one or more application containers.
[0034] Each Pod in some embodiments executes (i.e., operates) on a service virtual machine (SVM). For instance, Figure 2 illustrates a host computer 200 on which a first set of containers executes on a first Pod 205 that runs on a first SVM 215, while a second set of containers executes on a second Pod 210 that runs on a second SVM 220. In this example, the first set of containers includes a firewall 122, a network address translator 124, and a load balancer 126, while the second set of containers includes a firewall 132, a load balancer 134, and an IDS container 136.
[0035] In some embodiments, the SVMs 215 and 220 on which the Pods execute are lighter-weight VMs (e.g., they consume less storage resources and have faster bootup times) than the GVMs 115 and 120. Also, these SVMs in some embodiments support a smaller set of standard specified network interface drivers, while the GVMs support a larger set of network interface drivers. In some embodiments, each SVM has a standard vmxnet3 VNIC (not shown) through which the service processing engine 150 communicates with the SVM and its Pod.
[0036] As further described below, each Pod 205 or 210 in some embodiments includes a forwarding element 225 or 230 that (1) based on the service identifier supplied by the service processing engine 150 or 155, identifies the service containers that need to perform a service operation on a data message provided by the service processing engine 150 or 155, and (2) successively provides the data message to each identified service container.
[0037] In some embodiments, the set of containers 105 or 110 (e.g., Pod 205 or 210 with its containers) for each GVM 115 or 120 is respectively configured when the GVM 115 or 120 is configured on the host computer. Each container set in some embodiments is deployed on the host computer when the set’s associated machine is deployed. Alternatively, in other embodiments, the containers (e.g., the Pods 205 or 210) and/or GVMs are pre-deployed on the host computer, but the containers are configured for their respective GVMs 115 or 120 when the GVMs 115 or 120 are configured for a particular logical network or VPC.
[0038] In some embodiments, the set of containers 105 or 110 (e.g., Pod 205 or 210 with its containers) for each GVM 115 or 120 is terminated when that GVM is terminated on the host computer. Also, in some embodiments, the set of containers 105 or 110 (e.g., Pod 205 or 210 with its containers) for each GVM 115 or 120 is defined to be part of a resource group of its GVM. This allows each service container set (e.g., each Pod) to migrate with its GVM to another host computer. The migration tools that migrate the GVM and its associated service container set in some embodiments also migrate the service rules and connection-tracking records of the service containers in the service container set.
[0039] As mentioned above, the service processing engine 150 or 155 of each GVM 115 or 120 identifies, for a data message, a subset of one or more service operations that have to be performed on that data message's flow, and directs a subset of the service containers configured for the GVM to perform the identified subset of service operations on the data message. A subset of two or more service operations or containers is referred to below as a service chain or chain of service operations/containers.
[0040] Figure 3 illustrates that in some embodiments a service processing engine 350 sequentially calls the service containers in a service chain that it identifies for a data message. Under this approach, each service container returns the service-processed data message back to the service processing engine (assuming that the service container does not determine that the data message should be dropped). In this example, the service chain includes first a firewall operation performed by a firewall container 305, next a NAT operation performed by a NAT container 310, and last a load balancing operation performed by a load balancing container 315.
[0041] For the same service chain as in Figure 3, Figure 4 illustrates that in other embodiments a service processing engine 450 calls the first service container (a firewall container 405) of the service chain that the service processing engine 450 identifies for a data message. The data message is then passed from one service container to the next service container (e.g., from the firewall container to a NAT container 410, or from NAT container 410 to the load balancing container 415) in the chain, until the last service container (in this example the load balancer 415) returns the service-processed data message to the service processing engine 450.
[0042] Different embodiments implement the data message forwarding of Figure 4 differently. For instance, in some embodiments, each service container forwards the data message to the next service container in the service chain when there is a subsequent service container in the service chain, or back to the service processing engine when there is no subsequent service container in the service chain. In other embodiments, a service forwarding element forwards the data message to the successive service containers. For instance, in the example illustrated in Figure 2, a service SFE 225 or 230 forwards a data message received by its Pod 205 or 210 to successive service containers that are identified by the service identifier supplied by the service processing engine 150 or 155, in an order identified by this service identifier.
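The two forwarding styles of Figures 3 and 4 can be contrasted with a short illustrative sketch; the names dispatch_via_engine and ChainedContainer are invented, and the container logic is reduced to simple callables for clarity.

    # Figure 3 style: the engine calls each container in turn and receives the
    # service-processed message back before calling the next container.
    def dispatch_via_engine(message, chain):
        for container in chain:
            message = container(message)
            if message is None:          # a container decided to drop the message
                return None
        return message

    # Figure 4 style: each container forwards the message to the next container
    # in the chain; the last container returns it to the engine.
    class ChainedContainer:
        def __init__(self, service_fn, next_container=None):
            self.service_fn = service_fn
            self.next_container = next_container

        def process(self, message):
            message = self.service_fn(message)
            if message is None or self.next_container is None:
                return message
            return self.next_container.process(message)

    # Example with trivial pass-through services.
    lb  = ChainedContainer(lambda m: {**m, "dst": "backend-1"})
    nat = ChainedContainer(lambda m: {**m, "src": "10.0.0.1"}, lb)
    fw  = ChainedContainer(lambda m: m if m.get("dport") != 23 else None, nat)
    print(fw.process({"src": "192.168.1.5", "dport": 80}))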
[0043] Figure 5 illustrates how some embodiments forward a data message through the service containers when the service containers are distributed across multiple Pods. As shown in this figure, each Pod’s service SFE is responsible for forwarding a data message to its service containers that are on the service chain specified by the service identifier provided by the service processing engine. In this example, the service SFE 525 of Pod 505 first provides the data message to a firewall container 502 and then to a NAT container 506. The Pod 505 then returns the data message back to the service processing engine 550, which then provides the data message to Pod 510. The service SFE 530 of this Pod provides the data message to the load balancing container 512 and then to the encryption container 516, before returning the data message back to the service processing engine 550.
[0044] In this example, the service processing engine provides the data message along with the service identifier to each Pod. In some embodiments, the service processing engine provides different service identifiers to the Pods 505 and 510 as the different Pods have to perform different service operations. In other embodiments, the service processing engine provides the same service identifier to each Pod, and each Pod’s service SFE can map the provided service identifier to a group of one or more of its service containers that need to process the data message. In some of these embodiments, the service SFE or the service processing engine adjusts (e.g., increments or decrements) a next service value that specifies the next service to perform in a list of service operations identified by the service identifier. The service SFE of each Pod can then use this service value to identify the next service that has to be performed and the service container to perform this next service.
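One possible encoding of the service identifier and of the “next service” value discussed above is sketched below; the field names, the ServiceIdentifier class, and the pod_process helper are assumptions made for illustration only.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class ServiceIdentifier:
        chain_id: str
        operations: List[str]        # ordered, e.g. ["firewall", "nat", "lb", "encrypt"]
        next_index: int = 0          # the "next service" value that gets adjusted

        def next_operation(self):
            if self.next_index < len(self.operations):
                return self.operations[self.next_index]
            return None

    def pod_process(message, service_id, local_containers):
        # Run every remaining operation hosted on this Pod, advancing the next
        # service value each time, then hand the message back to the engine.
        while True:
            op = service_id.next_operation()
            if op is None or op not in local_containers:
                return message
            message = local_containers[op](message)
            service_id.next_index += 1
            if message is None:
                return None

    # Two Pods, each hosting two of the four operations (as in Figure 5).
    pods = [{"firewall": lambda m: m, "nat": lambda m: m},
            {"lb": lambda m: m, "encrypt": lambda m: m}]
    sid = ServiceIdentifier("chain-1", ["firewall", "nat", "lb", "encrypt"])
    msg = {"dport": 443}
    for pod in pods:                 # the engine calls each Pod in turn
        msg = pod_process(msg, sid, pod)
    print(sid.next_index)            # -> 4, all operations performed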
[0045] Figure 6 illustrates a process 600 that a service processing engine 150 or 155 performs in some embodiments to identify a subset of service operations to perform on a data message associated with its GVM, and to direct the data message to a subset of service containers configured for its GVM to perform the identified subset of service operations on the data message. As shown, the process 600 starts when the service processing engine is called (at 605) by its associated SFE port to process a data message received at this port. The data message in some embodiments can be an egress data message originating from the service processing engine’s associated GVM, or an ingress data message destined to this GVM.
[0046] At 610, the process 600 determines whether it has a record for the received data message’s flow in a connection tracking storage that the process maintains. The process 600 would have this record if it previously analyzed another data message in the same flow. For its determination at 610, the process 600 in some embodiments compares the flow identifier (e.g., the five-tuple identifier, i.e., source and destination IP addresses, source and destination ports and protocol) of the received data message with identifiers of records stored in the connection tracking storage to determine whether the connection tracking storage has a record with a record identifier that matches the flow identifier.
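A connection tracker of the kind consulted at 610 can be sketched, for illustration only, as a dictionary keyed on the five-tuple; the function and field names below are placeholders rather than elements of the figures.

    conn_tracker = {}    # five-tuple -> service chain identifier

    def flow_key(msg):
        return (msg["src_ip"], msg["dst_ip"],
                msg["src_port"], msg["dst_port"], msg["proto"])

    def lookup(msg):
        # Returns the chain ID recorded for this flow, or None if the flow has
        # not been classified yet (the "no" branch taken at 610).
        return conn_tracker.get(flow_key(msg))

    def record(msg, chain_id):
        # Stores the CT record created at 625.
        conn_tracker[flow_key(msg)] = chain_id

    pkt = {"src_ip": "10.0.0.5", "dst_ip": "10.0.1.9",
           "src_port": 4242, "dst_port": 80, "proto": "tcp"}
    record(pkt, "chain-web")
    print(lookup(pkt))   # -> 'chain-web'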
[0047] If not, the process 600 determines that it has not previously processed the received data message’s flow, and transitions to 625 to identify a service chain for the data message and to store in the connection tracker an identifier (i.e., a service chain ID) that specifies the identified service chain. The service processing engine’s connection tracker in some embodiments stores CT records that specify service chain identifiers for different data message flows processed by the service processing engine.
[0048] To identify the service chain, the process 600 in some embodiments compares the flow identifier (e.g., the five-tuple identifier) of the received data message with identifiers of service-chain specifying records stored in a service rule storage that the process 600 analyzes. Based on this comparison, the process 600 identifies a service-chain specifying record that matches the received data message (i.e., that has a record identifier that matches the data message’s flow identifier). For different ingress/egress data message flows, the process 600 can identify the same service chain or different service chains based on the service-chain specifying records stored in the service rule storage.
[0049] Each service chain in some embodiments has an associated service chain identifier. In some of these embodiments, each service-chain specifying record stores the service chain identifier along with the identities of the service containers and/or Pods that have to perform the services in the identified service chain. In other embodiments, each service-chain specifying record specifies the identities of the service containers and/or Pods that have to perform the services, and the service chain identifier is derived from the specified identities of the service containers and/or Pods. In still other embodiments, each service-chain specifying record just stores the service chain identifier. In these embodiments, the process 600 derives the identities of the service containers and/or Pods that have to perform the services from the service chain identifier stored by the record matching the data message’s flow.
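An illustrative service rule storage of the kind consulted at 625 might pair a flow match with a chain identifier and the ordered containers that implement the chain; every name and value below is invented, and the variant shown is the one in which the record stores the container identities alongside the chain identifier.

    service_chain_rules = [
        # (match predicate on the flow, service chain ID, ordered containers)
        (lambda f: f["dst_port"] == 80,  "chain-web",     ["firewall", "nat", "lb"]),
        (lambda f: f["dst_port"] == 443, "chain-secure",  ["firewall", "lb", "encrypt"]),
        (lambda f: True,                 "chain-default", ["firewall"]),   # lowest priority
    ]

    def classify(flow):
        # First matching record wins, mimicking a priority-ordered rule table.
        for match, chain_id, containers in service_chain_rules:
            if match(flow):
                return chain_id, containers
        return None, []

    print(classify({"dst_port": 443}))   # -> ('chain-secure', ['firewall', 'lb', 'encrypt'])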
[0050] Next, at 630, the process 600 passes the data message and the service identifier (that specifies a subset of service operations that have to be performed on the data message by a subset of service containers) to a service Pod that contains the first service container in the identified service chain that has to process the data message. In some embodiments, the service processing engine 150 or 155 passes data messages and their attributes to its associated service Pod(s) by using shared memory allocated by a hypervisor on which both the service processing engine and the service Pod operate.
[0051] The service operations in the service chain have to be performed in a particular order, and the service identifier specifies the particular order (e.g., the service identifier in some embodiments is associated with a lookup table record maintained by the service Pod that identifies the order of the service operations, while in other embodiments the service identifier can be deconstructed to obtain the identifiers of the successive service operations or containers). As mentioned above and further described below by reference to Figure 7, a forwarding element of the service Pod processes the service identifier in order to identify the subset of services that has to be performed on the data message for which the service identifier is generated, and to successively provide the data message to service containers in a subset of service containers to perform the identified subset of service operations.
[0052] At 635, the process 600 receives the data message from the service Pod. It then determines (at 640) whether there are any additional services in the identified service chain that still need to be performed. As mentioned above (e.g., by reference to Figure 5), sometimes not all of the service containers for a service chain are implemented on the same service Pods. In such cases, the process 600 has to check (at 640) whether it needs to pass the data message to another service Pod to have its service container(s) process the data message.
[0053] If the process 600 determines (at 640) that additional services need to be performed, it passes the data message and the service identifier to the next service Pod that contains the next service container(s) in the identified service chain for processing the data message. In some embodiments, the service processing engine adjusts (e.g., increments or decrements) a next service value that specifies the next service to perform in a list of service operations identified by the service identifier. The service SFE of each Pod then uses this service value to identify the next service that has to be performed and the service container to perform this next service. Alternatively, in the embodiments that have each service Pod contain only one service container, the process 600 does not even need to provide a service identifier with the data message to the next service Pod, as the process 600 just handles the successive calls to the successive service containers that perform the service operations in the service chain.
[0054] When the process determines (at 640) that all of the service operations specified by the identified service chain have been performed on the data message, the process returns (at 650) the data message back to the SFE port that called it, and then ends. The process also transitions to 650 from 620, which it reaches when it determines (at 610) that its connection tracker has a record that matches the received data message (e.g., matches the data message’s flow ID). At 620, the process retrieves the service chain identifier from the matching connection tracker record and, based on this service chain identifier, performs a set of operations that are similar to the operations 625-640. Once all of these operations are completed, the process transitions to 650 to return the data message back to the SFE port that called it, and then ends.
[0055] Figure 7 illustrates a process 700 that a service SFE 225 or 230 performs in some embodiments to identify a subset of service operations to perform on a data message received from a service processing engine, and to direct the data message to a group of service containers of its Pod to perform the identified subset of service operations on the data message. As shown, the process 700 starts when the service SFE is called (at 705) to process a data message by its associated service processing engine 150 or 155. Along with this data message, the service SFE receives a service chain identifier in some embodiments.
[0056] At 710, the process 700 matches the service chain identifier with a record in a service rule storage that has several records that specify different sequences of service operations for different service chain identifiers. The matching record in some embodiments is the record that has a service chain identifier that matches the service chain identifier received with the data message. The service operations in the service chain have to be performed in a particular order. In some embodiments, the matching record identifies the order of the service operations.
[0057] The service SFE then performs operations 715-730 to successively provide the data message to service containers in a group of one or more service containers on its Pod to perform the identified group of service operations. Specifically, at 715, the process 700 passes the data message to the first service container in this group to perform its service operation on the data message. Next, at 720, the process 700 receives the data message from the service container. It then determines (at 725) whether there are any additional services in the identified group of service operations that still need to be performed.
[0058] If the process 700 determines (at 725) that additional services need to be performed, it passes (at 730) the data message to the next service container in the identified group for processing. When the process determines (at 725) that all of the service operations specified by the identified group of service containers have been performed on the data message, the process returns (at 735) the data message back to the service processing engine that called it, and then ends.
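The loop of operations 710-735 can be summarized with the following sketch; the chain_table layout and the callable-container interface are assumptions chosen for brevity, not the implementation of the figures.

    def service_sfe_process(message, chain_id, chain_table, containers):
        ordered_ops = chain_table[chain_id]        # the match performed at 710
        for op in ordered_ops:                     # operations 715-730
            message = containers[op](message)
            if message is None:                    # a container dropped the message
                break
        return message                             # returned to the engine at 735

    chain_table = {"chain-web": ["firewall", "lb"]}
    containers  = {"firewall": lambda m: m,
                   "lb":       lambda m: {**m, "dst": "backend-2"}}
    print(service_sfe_process({"dst_port": 80}, "chain-web", chain_table, containers))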
[0059] In some embodiments, the service containers perform their service operations not only based on the flow identifiers of the data messages that they process, but also based on contextual attributes (e.g., attributes other than layers 2, 3 and 4 header values) associated with these data messages. For instance, for a data message, a service container in some embodiments selects a service rule that specifies the service operation to perform, by using the data message’s flow attributes and one or more contextual attributes associated with the data message.
[0060] Specifically, to select the service rule, the service container in some embodiments compares the data message’s flow attributes (e.g., one or more of the data message’s L2-L4 header values) and one or more of the data message’s contextual attributes with match attributes of the service rules, in order to identify the highest priority service rule with match attributes that match the message’s flow and contextual attributes. Examples of contextual attributes in some embodiments include source application name, application version, traffic type, resource consumption parameter, threat level, layer 7 parameters, process identifiers, user identifiers, group identifiers, process name, process hash, loaded module identifiers, etc.
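The highest-priority-match selection described above could look like the following sketch; the attribute names, the (priority, match, action) rule layout, and the lower-number-wins priority scheme are illustrative assumptions.

    def select_service_rule(flow_attrs, ctx_attrs, rules):
        # rules: list of (priority, match_attrs, action); lower number = higher
        # priority. A rule matches when each of its match attributes equals the
        # corresponding flow or contextual attribute of the data message.
        attrs = {**flow_attrs, **ctx_attrs}
        matching = [r for r in rules
                    if all(attrs.get(k) == v for k, v in r[1].items())]
        return min(matching, key=lambda r: r[0]) if matching else None

    rules = [
        (10, {"dst_port": 80, "app_name": "nginx"}, "allow"),
        (20, {"dst_port": 80},                      "inspect"),
        (90, {},                                    "drop"),    # default rule
    ]
    print(select_service_rule({"dst_port": 80},
                              {"app_name": "nginx", "user_id": "u-42"}, rules))
    # -> (10, {'dst_port': 80, 'app_name': 'nginx'}, 'allow')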
[0061] Figure 8 illustrates that in some embodiments the service processing engine 850 obtains a set of one or more contextual attributes associated with a data message from a context engine 805 executing on the host. It also shows the service processing engine passing the obtained contextual attribute set to a service Pod 810 along with the data message and a service identifier specifying the service operations to perform on the data message. As shown in Figure 8, the context engine 805 in some embodiments obtains some or all of the contextual attributes from a guest introspection (GI) agent 820 executing on the service processing engine’s GVM 825. U.S. Patent 10,802,857 further describes the context engine 805 and the manner in which this engine obtains contextual attributes for data message flows from GI agents that execute on the GVMs and from other service engines (such as a deep packet inspector) executing on the host computer. U.S. Patent 10,802,857 is incorporated herein by reference.
[0062] Figure 9 illustrates in three stages (corresponding to three instances in time) the migration of a GVM 925 from one host computer 905 to another host computer 910, along with the migration of the set of service containers configured for the GVM 925. In this example, the set of configured service containers all reside on one Pod 920 that executes on one SVM 922. As shown, the SVM 922 (along with its Pod and the Pod’s associated service containers) migrate from host computer 905 to host computer 910 along with the GVM 925.
[0063] In some embodiments, the SVM 922 (along with its Pod and the Pod’s associated service containers) is defined to be part of the resource group of the GVM 925, so that VM migration tools on host computers 905 and 910 (e.g., the VM live migration of VMware vSphere) can migrate the SVM 922 to the new host computer 910 when they migrate the GVM 925 to the host computer 910. The migration tools in some embodiments migrate a VM (e.g., a GVM or SVM) to a new host computer by migrating from the old VM to the new VM (1) the configuration file that includes the definition of the VM, (2) the runtime memory (e.g., RAM data) used by the VM, and (3) the device memory (e.g., storage files and data structures) used by the VM. These tools also activate (e.g., instantiate) the VM on the new host computer.
[0064] As shown in Figure 9, each migrating service container moves to the new host computer 910 along with its service rules 932 and its connection tracking records 934. Also, the service-chain identifying rules 936 and the connection tracking records 938 of the migrating GVM’s service processing engine 950 are also migrated to the new host computer 910 from the old host computer 905, so that a service processing engine 955 on the new host computer 910 can use these rules and records for data messages associated with the migrated GVM on this host computer. As shown, the service processing engine 950 on the old host computer 905 is terminated once the GVM 925 migrates to host 910.
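A rough sketch of the state that travels with the GVM in this example follows: the containers' service rules and connection-tracking records plus the engine's chain rules and connection tracker. The payload layout, attribute names, and helper types are invented for illustration and are not part of the migration tools mentioned above.

    from collections import namedtuple

    Container = namedtuple("Container", "name service_rules conn_track")
    Engine = namedtuple("Engine", "chain_rules conn_track")

    def build_migration_state(containers, engine):
        # Bundle the per-container state (rules 932, records 934) and the
        # engine state (rules 936, records 938) that move with the GVM.
        return {
            "per_container": {c.name: {"service_rules": c.service_rules,
                                       "conn_track": c.conn_track}
                              for c in containers},
            "engine": {"chain_rules": engine.chain_rules,
                       "conn_track": engine.conn_track},
        }

    state = build_migration_state(
        [Container("firewall", ["allow tcp/80"], {("10.0.0.1", 80): "established"})],
        Engine({"chain-web": ["firewall"]}, {}))
    print(state["per_container"]["firewall"]["service_rules"])   # -> ['allow tcp/80']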
[0065] By deploying on fast, lightweight SVMs and by migrating easily with their GVMs, each GVM’s associated service Pod serves as an easily constructed and configured sidecar for its GVM. Deploying such a sidecar service Pod for each GVM also eliminates service bottleneck issues, which become problematic as the number of GVMs on a host computer increases. This sidecar architecture is also transparent to the guest machines, as it is deployed inline in their datapaths without any changes to the configuration of the guest machines. The same service Pod architecture is employed, with the same benefits, in the embodiments in which the guest machines are guest containers instead of guest virtual machines.
[0066] Figure 10 illustrates an example of how the service processing engines and service Pods are managed and configured in some embodiments. This figure illustrates multiple hosts 1000 in a datacenter. As shown, each host includes several service Pods 1030, a context engine 1050, several service processing engines 1022, several GVMs 1005, and an SFE 1010.
[0067] It also illustrates a set of managers/controllers 1060 for managing the service processing engines 1022, the service Pods 1030, the GVMs 1005, and the SFEs 1010. The hosts and the managers/controllers communicatively connect to each other through a network 1070, which can be a local area network, a wide area network, a network of networks (such as the Internet), etc. The managers/controllers provide a user interface for administrators to define service rules for the service processing engines 1022 and the service containers of the service Pods 1030 in terms of flow identifiers and/or contextual attributes, and communicate with the hosts through the network 1070 to provide these service rules.
[0068] In some embodiments, the context engines 1050 collect contextual attributes that are passed to the managers/controllers 1060 through the network 1070 so that these contextual attributes can be used to define service rules. The managers/controllers in some embodiments interact with discovery engines executing on the host computers 1000 in the datacenter to obtain and refresh an inventory of all processes and services that are running on the GVMs on the hosts. The management plane in some embodiments then provides a rule creation interface for allowing administrators to create service rules for the service processing engines 1022 and the service containers of the service Pods 1030. Once the service rules are defined in the management plane, the management plane supplies some or all of these rules to the hosts 1000 through a set of configuring controllers.
[0069] Many of the above-described features and applications are implemented as software processes that are specified as a set of instructions recorded on a computer readable storage medium (also referred to as computer readable medium). When these instructions are executed by one or more processing unit(s) (e.g., one or more processors, cores of processors, or other processing units), they cause the processing unit(s) to perform the actions indicated in the instructions. Examples of computer readable media include, but are not limited to, CD-ROMs, flash drives, RAM chips, hard drives, EPROMs, etc. The computer readable media does not include carrier waves and electronic signals passing wirelessly or over wired connections.
[0070] In this specification, the term “software” is meant to include firmware residing in read-only memory or applications stored in magnetic storage, which can be read into memory for processing by a processor. Also, in some embodiments, multiple software inventions can be implemented as sub-parts of a larger program while remaining distinct software inventions. In some embodiments, multiple software inventions can also be implemented as separate programs. Finally, any combination of separate programs that together implement a software invention described here is within the scope of the invention. In some embodiments, the software programs, when installed to operate on one or more electronic systems, define one or more specific machine implementations that execute and perform the operations of the software programs.
[0071] Figure 11 conceptually illustrates a computer system 1100 with which some embodiments of the invention are implemented. The computer system 1100 can be used to implement any of the above-described hosts, controllers, and managers. As such, it can be used to execute any of the above-described processes. This computer system includes various types of non-transitory machine readable media and interfaces for various other types of machine readable media. Computer system 1100 includes a bus 1105, processing unit(s) 1110, a system memory 1125, a read-only memory 1130, a permanent storage device 1135, input devices 1140, and output devices 1145.
[0072] The bus 1105 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of the computer system 1100. For instance, the bus 1105 communicatively connects the processing unit(s) 1110 with the read-only memory 1130, the system memory 1125, and the permanent storage device 1135.
[0073] From these various memory units, the processing unit(s) 1110 retrieve instructions to execute and data to process in order to execute the processes of the invention. The processing unit(s) may be a single processor or a multi-core processor in different embodiments. The read-only memory (ROM) 1130 stores static data and instructions that are needed by the processing unit(s) 1110 and other modules of the computer system. The permanent storage device 1135, on the other hand, is a read-and-write memory device. This device is a non-volatile memory unit that stores instructions and data even when the computer system 1100 is off. Some embodiments of the invention use a mass-storage device (such as a magnetic or optical disk and its corresponding disk drive) as the permanent storage device 1135.
[0074] Other embodiments use a removable storage device (such as a flash drive, etc.) as the permanent storage device. Like the permanent storage device 1135, the system memory 1125 is a read-and-write memory device. However, unlike storage device 1135, the system memory is a volatile read-and-write memory, such as random-access memory. The system memory stores some of the instructions and data that the processor needs at runtime. In some embodiments, the invention’s processes are stored in the system memory 1125, the permanent storage device 1135, and/or the read-only memory 1130. From these various memory units, the processing unit(s) 1110 retrieve instructions to execute and data to process in order to execute the processes of some embodiments.
[0075] The bus 1105 also connects to the input and output devices 1140 and 1145. The input devices enable the user to communicate information and select commands to the computer system. The input devices 1140 include alphanumeric keyboards and pointing devices (also called “cursor control devices”). The output devices 1145 display images generated by the computer system. The output devices include printers and display devices, such as cathode ray tubes (CRT) or liquid crystal displays (LCD). Some embodiments include devices such as a touchscreen that function as both input and output devices.
[0076] Finally, as shown in Figure 11, bus 1105 also couples computer system 1100 to a network 1165 through a network adapter (not shown). In this manner, the computer can be a part of a network of computers (such as a local area network (“LAN”), a wide area network (“WAN”), or an Intranet), or a network of networks, such as the Internet. Any or all components of computer system 1100 may be used in conjunction with the invention.
[0077] Some embodiments include electronic components, such as microprocessors, storage and memory, that store computer program instructions in a machine-readable or computer-readable medium (alternatively referred to as computer-readable storage media, machine-readable media, or machine-readable storage media). Some examples of such computer-readable media include RAM, ROM, read-only compact discs (CD-ROM), recordable compact discs (CD-R), rewritable compact discs (CD-RW), read-only digital versatile discs (e.g., DVD-ROM, dual-layer DVD-ROM), a variety of recordable/rewritable DVDs (e.g., DVD-RAM, DVD-RW, DVD+RW, etc.), flash memory (e.g., SD cards, mini-SD cards, micro-SD cards, etc.), magnetic and/or solid state hard drives, read-only and recordable Blu-Ray® discs, ultra-density optical discs, and any other optical or magnetic media. The computer-readable media may store a computer program that is executable by at least one processing unit and includes sets of instructions for performing various operations. Examples of computer programs or computer code include machine code, such as is produced by a compiler, and files including higher-level code that are executed by a computer, an electronic component, or a microprocessor using an interpreter.
[0078] While the above discussion primarily refers to microprocessors or multi-core processors that execute software, some embodiments are performed by one or more integrated circuits, such as application specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs). In some embodiments, such integrated circuits execute instructions that are stored on the circuit itself.
[0079] As used in this specification, the terms “computer”, “server”, “processor”, and “memory” all refer to electronic or other technological devices. These terms exclude people or groups of people. For the purposes of the specification, the terms “display” or “displaying” mean displaying on an electronic device. As used in this specification, the terms “computer readable medium,” “computer readable media,” and “machine readable medium” are entirely restricted to tangible, physical objects that store information in a form that is readable by a computer. These terms exclude any wireless signals, wired download signals, and any other ephemeral or transitory signals.
[0080] While the invention has been described with reference to numerous specific details, one of ordinary skill in the art will recognize that the invention can be embodied in other specific forms without departing from the spirit of the invention. For instance, several figures conceptually illustrate processes. The specific operations of these processes may not be performed in the exact order shown and described. The specific operations may not be performed in one continuous series of operations, and different specific operations may be performed in different embodiments. Furthermore, the process could be implemented using several sub-processes, or as part of a larger macro process. Thus, one of ordinary skill in the art would understand that the invention is not to be limited by the foregoing illustrative details, but rather is to be defined by the appended claims.

Claims

CLAIMS
We claim:
1. A method for providing services on a host computer that executes a plurality of machines, the method comprising: configuring a first plurality of service containers for a first machine executing on the host computer; configuring a second plurality of containers for a second machine executing on the host computer, each service container configured for each particular machine for performing a service operation on data messages associated with the particular machine; configuring, for each particular machine, a module along the particular machine’s datapath to identify a set of service operations to perform on a set of data messages associated with the particular machine, and to direct the set of data messages to a set of service containers configured for the particular machine to perform the identified set of service operations on the set of data messages.
2. The method of claim 1, wherein the first and second pluralities of containers are respectively configured when the first and second machines are configured on the host computer.
3. The method of claim 1 further comprising: configuring a first Pod on which the first plurality of service containers for the first machine are configured; configuring a second Pod on which the second plurality of service containers for the second machine are configured.
4. The method of claim 1, wherein at least two service containers in the first plurality of service containers are configured on two separate Pods.
5. The method of claim 1, wherein the first and second machines belong to one logical network implemented over a physical network on which a plurality of logical networks are defined.
6. The method of claim 1, wherein each particular machine’s configured module is a classifier that, for a data message that it processes, identifies a set of service operations that have to be performed on the data message, and passes the data message to a set of service containers to perform the identified set of service operations on the data message.
7. The method of claim 6, wherein the module successively passes the data message to successive service containers in the identified set of containers after receiving the data message from each service container in the identified set of containers.
8. The method of claim 6, wherein the module passes the data message by generating a service identifier that specifies the identified set of service operations that have to be performed on the data message by a set of service containers, and providing the service identifier along with the data message so that the data message can be forwarded to successive service containers in the identified set of service containers.
9. The method of claim 8, wherein service operations in the set of service operations identified by the classifier have a particular order, and the service identifier specifies the particular order.
10. The method of claim 8, wherein a forwarding element executing on the host computer processes each generated service identifier in order to identify the set of services that has to be performed on the data message for which the service identifier is generated, and to successively provide the data message to service containers in the set of service containers to perform the identified set of service operations.
11. A method for providing services on a host computer that executes a plurality of machines, the method comprising: configuring first and second Pods respectively for first and second machines executing on the host computer; on each particular machine’s respective Pod, configuring a set of two or more service containers for performing a set of two or more services on data messages associated with the particular machine; configuring, for each particular machine, a module along the particular machine’s datapath to direct data messages associated with the particular machine to the particular machine’s Pod for at least a subset of the services to be performed by the set of service containers of the Pod.
12. The method of claim 11, wherein the first and second machines belong to one logical network implemented over a physical network on which a plurality of logical networks are defined.
13. The method of claim 12, wherein the first and second Pods execute the same set of service containers.
14. The method of claim 12, wherein the first and second Pods execute different sets of service containers.
15. The method of claim 11, wherein the first and second machines belong to first and second logical networks implemented over a physical network on which a plurality of logical networks are defined.
16. The method of claim 11, wherein the first and second Pods execute on first and second service virtual machines (SVMs) that execute on the host computer.
17. The method of claim 16, wherein the first and second machines are first and second guest virtual machines (GVMs), and the SVMs consume less storage resources and have faster bootup times than the GVMs.
18. The method of claim 17, wherein the SVMs support a smaller set of standard specified network interface drivers, while the GVMs support a larger set of network interface drivers.
19. The method of claim 11, wherein each particular machine’s configured module comprises a classifier that for each data message that it processes, identifies the subset of service operations that have to be performed on the data message, and provides the data message with a service identifier to the particular machine’s configured Pod in order to specify the identified subset of service operations that have to be performed on the data message by a subset of service containers of the Pod.
20. The method of claim 19, wherein service operations in the subset of services identified by the classifier have a particular order, and the service identifier specifies the particular order.
21. The method of claim 19, wherein a forwarding element executes on each particular machine’s Pod to process each provided service identifier in order to identify the subset of services that has to be performed on the provided data message, and to successively provide the data message to service containers in the subset of service containers to perform the subset of service operations.
22. The method of claim 19, wherein each particular machine’s classifier identifies at least two different subsets of service operations for at least two different data message flows originating from the particular machine.
23. The method of claim 19, wherein each particular machine’s classifier is called by a port of a software forwarding element that receives the data messages associated with the particular machine.
24. The method of claim 11, wherein the first and second Pods are configured when the first and second machines are configured to operate on the host computer, and the first and second Pods are terminated when the first and second machines are respectively terminated on the host computer.
25. The method of claim 11 further comprising: identifying the first and second Pods respectively as being part of first and second resource groups respectively of the first and second machines, in order to allow the first Pod to be migrated with the first machine to another host computer and the second Pod to be migrated with the second machine to another host computer.
26. A machine readable medium storing a program which when implemented by at least one processing unit implements the method according to any one of claims 1-25.
27. An electronic device comprising: a set of processing units; and a machine readable medium storing a program which when implemented by at least one of the processing units implements the method according to any one of claims 1-25.
28. A system comprising means for implementing the method according to any one of claims 1-25.
29. A computer program product comprising instructions which when executed by a computer cause the computer to perform the method according to any one of claims 1-25.
PCT/US2021/056574 2020-12-15 2021-10-26 Providing stateful services a scalable manner for machines executing on host computers WO2022132308A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21811583.0A EP4185959A1 (en) 2020-12-15 2021-10-26 Providing stateful services a scalable manner for machines executing on host computers
CN202180084495.9A CN116710897A (en) 2020-12-15 2021-10-26 Providing stateful services in a scalable manner for machines executing on a host computer

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US17/122,153 2020-12-15
US17/122,153 US11611625B2 (en) 2020-12-15 2020-12-15 Providing stateful services in a scalable manner for machines executing on host computers
US17/122,192 US11734043B2 (en) 2020-12-15 2020-12-15 Providing stateful services in a scalable manner for machines executing on host computers
US17/122,192 2020-12-15

Publications (1)

Publication Number Publication Date
WO2022132308A1 true WO2022132308A1 (en) 2022-06-23

Family

ID=78725631

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2021/056574 WO2022132308A1 (en) 2020-12-15 2021-10-26 Providing stateful services a scalable manner for machines executing on host computers

Country Status (2)

Country Link
EP (1) EP4185959A1 (en)
WO (1) WO2022132308A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150370586A1 (en) * 2014-06-23 2015-12-24 Intel Corporation Local service chaining with virtual machines and virtualized containers in software defined networking
US10802857B2 (en) 2016-12-22 2020-10-13 Nicira, Inc. Collecting and processing contextual attributes on a host
US20190132220A1 (en) * 2017-10-29 2019-05-02 Nicira, Inc. Service operation chaining
US20200274809A1 (en) * 2019-02-22 2020-08-27 Vmware, Inc. Providing services by using service insertion and service transport layers

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11805056B2 (en) 2013-05-09 2023-10-31 Nicira, Inc. Method and system for service switching using service tags
US11750476B2 (en) 2017-10-29 2023-09-05 Nicira, Inc. Service operation chaining
US11805036B2 (en) 2018-03-27 2023-10-31 Nicira, Inc. Detecting failure of layer 2 service using broadcast messages
US11609781B2 (en) 2019-02-22 2023-03-21 Vmware, Inc. Providing services with guest VM mobility
US11722559B2 (en) 2019-10-30 2023-08-08 Vmware, Inc. Distributed service chain across multiple clouds
US11659061B2 (en) 2020-01-20 2023-05-23 Vmware, Inc. Method of adjusting service function chains to improve network performance
US11528219B2 (en) 2020-04-06 2022-12-13 Vmware, Inc. Using applied-to field to identify connection-tracking records for different interfaces
US11743172B2 (en) 2020-04-06 2023-08-29 Vmware, Inc. Using multiple transport mechanisms to provide services at the edge of a network
US11734043B2 (en) 2020-12-15 2023-08-22 Vmware, Inc. Providing stateful services in a scalable manner for machines executing on host computers

Also Published As

Publication number Publication date
EP4185959A1 (en) 2023-05-31

Similar Documents

Publication Publication Date Title
US11611625B2 (en) Providing stateful services in a scalable manner for machines executing on host computers
US11734043B2 (en) Providing stateful services in a scalable manner for machines executing on host computers
WO2022132308A1 (en) Providing stateful services a scalable manner for machines executing on host computers
US11811680B2 (en) Provisioning network services in a software defined data center
US11347537B2 (en) Logical processing for containers
US20210409453A1 (en) Method and apparatus for distributing firewall rules
US11824863B2 (en) Performing services on a host
US11336486B2 (en) Selection of managed forwarding element for bridge spanning multiple datacenters
US10148696B2 (en) Service rule console for creating, viewing and updating template based service rules
US11178051B2 (en) Packet key parser for flow-based forwarding elements
US10305858B2 (en) Datapath processing of service rules with qualifiers defined in terms of dynamic groups
US20170180319A1 (en) Datapath processing of service rules with qualifiers defined in terms of template identifiers and/or template matching criteria
US11736436B2 (en) Identifying routes with indirect addressing in a datacenter
US10374827B2 (en) Identifier that maps to different networks at different datacenters
US10721338B2 (en) Application based egress interface selection
US10791092B2 (en) Firewall rules with expression matching
US20240007386A1 (en) Route aggregation for virtual datacenter gateway

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21811583

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021811583

Country of ref document: EP

Effective date: 20230221

WWE Wipo information: entry into national phase

Ref document number: 202180084495.9

Country of ref document: CN

NENP Non-entry into the national phase

Ref country code: DE