EP3815312A1 - Service insertion at a logical network gateway - Google Patents
Service insertion at a logical network gateway
- Publication number
- EP3815312A1 (application EP19762642.7A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- logical
- service
- data
- network
- interface
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H—ELECTRICITY; H04—ELECTRIC COMMUNICATION TECHNIQUE; H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/38—Flow based routing
- H04L45/42—Centralised routing
- H04L45/64—Routing or path finding of packets in data switching networks using an overlay routing layer
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/08—Configuration management of networks or network elements
- H04L41/0803—Configuration setting
- H04L41/0806—Configuration setting for initial configuration or provisioning, e.g. plug-and-play
Definitions
- Some embodiments provide a network management and control system that enables integration of third-party service machines for processing data traffic entering and/or exiting a logical network.
- third-party services may include various types of non-packet-forwarding services, such as firewalls, virtual private network (VPN) service, network address translation (NAT), load balancing, etc.
- the network management and control system manages the integration of these service machines, but does not manage the life cycle of the machines themselves.
- the logical network includes at least one logical switch to which logical network endpoints (e.g., data compute nodes such as virtual machines, containers, etc.) connect as well as a logical router for handling data traffic entering and/or exiting the logical network.
- the logical network may include multiple logical switches that logically connect to each other through either the aforementioned logical router or another logical router.
- the logical network includes multiple tiers of logical routers. Logical routers in a first tier connect groups of logical switches (e.g., the logical switches of a particular tenant).
- first-tier logical routers connect to logical routers in a second tier for data traffic sent to and from the logical network (e.g., data traffic from external clients connecting to web servers hosted in the logical network, etc.).
- the second-tier logical routers are implemented at least partly in a centralized manner for handling the connections to the external networks, and in some embodiments the third-party service machines attach to the centralized components of these logical routers.
- the logical networks of other embodiments include only a single tier of logical routers, to which the third-party services attach.
- the network management and control system receives both (i) configuration data defining the logical network (i.e., the logical switches, attachment of data compute nodes to the logical switches, logical routers, etc.) as well as (ii) configuration data attaching a third-party service to a logical router (i.e., the logical router that handles connections to external networks). Based on this configuration data, the network control system configures various managed forwarding elements to implement the logical forwarding elements (the logical switches, distributed aspects of the logical routers, etc.) as well as other packet processing operations for the logical network (e.g., distributed firewall rules).
- some embodiments configure a particular managed forwarding element operating on a gateway machine to implement a centralized logical routing component that handles the connection of the logical network to one or more external networks.
- This managed forwarding element on the gateway machine is also configured to redirect (e.g., using policy-based routing) at least a subset of this ingress and/or egress data traffic between the logical network and the external networks to the attached third-party service via a separate interface of the gateway.
- receiving the configuration data to attach the third-party service includes several separate configuration inputs (e.g., from an administrator).
- After the logical router is configured, some embodiments receive configuration data (i) defining a service attachment interface for the logical router, (ii) defining a logical switch to which the service attachment interface connects, (iii) defining the service interface (e.g., the interface of the service machine to which data traffic is redirected), and (iv) connecting the service attachment interface of the logical router and the service interface to the logical switch.
- the administrator defines a rule or set of rules specifying which ingress and/or egress traffic is redirected to the service interface.
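- To make the shape of these configuration inputs concrete, the sketch below models them as plain data objects. This is a hedged illustration: the object names, fields, and addresses are hypothetical and do not correspond to the actual API of any particular management plane.

```python
# Hypothetical management-plane payloads for attaching a third-party
# service to a logical router (all names and addresses are illustrative).

service_attachment_interface = {
    "id": "lr-svc-attach-0",
    "router": "tier0-router",
    "type": "SERVICE_ATTACHMENT",   # distinct from uplinks and downlinks
    "ip_address": "169.254.10.0/31",
}

service_attachment_switch = {
    "id": "svc-ls-0",
    "type": "SERVICE_ATTACHMENT",   # its subnet is not advertised
}

service_endpoint_interface = {
    "id": "svc-ep-0",
    "service_machine": "third-party-fw-1",
    "ip_address": "169.254.10.1/31",
}

# Connect both interfaces to the service attachment logical switch.
attachments = [
    {"switch": "svc-ls-0", "interface": "lr-svc-attach-0"},
    {"switch": "svc-ls-0", "interface": "svc-ep-0"},
]

# A redirection rule sending egress traffic from one subnet to the service.
redirect_rule = {
    "match": {"source_ip": "60.60.60.0/24"},
    "action": {"redirect_to": "svc-ep-0"},
}
```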
- Some embodiments enable multiple services to be connected to the logical router, using various different topologies. For instance, multiple services may be connected to the same logical switch, in which case these services all have interfaces in the same subnet and can send data traffic directly between each other if configured to do so.
- the logical router can have a single interface that connects to the logical switch (for traffic to all of the services) or a separate interface connected to the logical switch for each attached service.
- separate logical switches can be defined for each service (with separate logical router interfaces connected to each of the logical switches).
- multiple interfaces can be defined for each service machine, for handling different sets of traffic (e.g., traffic to/from different external networks or different logical network subnets).
- the service machines may be connected to the logical router via different types of connections in some embodiments.
- some embodiments allow for service machines to be connected in either (i) an L2 bump-in-the-wire mode or (ii) an L3 one-arm mode.
- In the L2 mode, two interfaces of the logical router are connected to two separate interfaces of the service machine via two separate logical switches, and data traffic is sent to the service machine via one of the interfaces and received back from the service machine via the other interface.
- Data traffic may be sent to the service machine via one interface for traffic entering the logical network and via the other interface for traffic exiting the logical network.
- In the L3 mode, a single interface is used on the logical router for each connection with the service machine.
- the gateway redirects some or all of the data traffic between the logical network and external networks to the service machine.
- some embodiments use a set of policy-based routing (PBR) rules to determine whether or not to redirect each data message.
- the gateway applies these PBR rules to outgoing data messages after performing logical routing for the data messages, and applies the PBR rules to incoming data messages prior to performing logical routing and/or switching for incoming data messages.
- the gateway performs logical switching (if required), then logical routing for the routing component that connects to the external network to determine that the data message is in fact directed outside of the logical network, then applies the PBR rules to determine whether to redirect the data message to a service. If the data message is redirected, then upon its return from the service (if the data message is not dropped/blocked by the service) the gateway forwards the data message to the external network.
- the gateway applies the PBR rules to determine whether to redirect the data message to a service before processing the data message through any of the logical forwarding elements. If the data message is redirected, then upon its return from the service (if the data message is not dropped/blocked by the service) the gateway then performs logical routing and switching, etc. to the data message to determine how to forward the data message to the logical network.
- the PBR rules use a two-stage lookup to determine whether to redirect a data message (and to which interface to redirect the data message). Specifically, rather than the PBR rules (i.e., routing rules based on header fields other than destination network address) providing the redirection details, each rule specifies a unique identifier. Each identifier corresponds to a service machine, and the gateway stores a dynamically-updated data structure for each identifier.
- These data structures indicate the type of connection to the service (e.g., L2 bump-in-the-wire or L3 one-arm), a network address for the interface of the service to which the data message is redirected (for L2 mode, some embodiments use a dummy network address that corresponds to the data link layer address of the return service attachment interface of the gateway), dynamically-updated status data, and a failover policy.
- the status data is dynamically updated based on the health/reachability of the service, which may be tested using a heartbeat protocol such as bidirectional forwarding detection (BFD).
- the failover policy specifies what to do with the data message if the service is not reachable. These failover policy options may include, e.g., dropping the data message, forwarding the data message to its destination without redirection to the service, redirecting to a backup service machine, etc.
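- A minimal sketch of such a per-identifier data structure appears below; the field names and values are assumptions derived from the description above, not a prescribed format. Because the PBR rules carry only the identifier, the structure can be updated dynamically without touching the rules themselves.

```python
# Sketch of the per-identifier redirection structure (field names and
# values are assumptions based on the description, not a fixed format).
redirect_policy = {
    "policy_id": "policy-id-1",               # matched by a PBR rule action
    "connection_type": "L3_ONE_ARM",          # or "L2_BUMP_IN_THE_WIRE"
    "redirect_ip": "169.254.10.1",            # dummy address in L2 mode
    "status": "UP",                           # dynamically updated (e.g., BFD)
    "failover_policy": "REDIRECT_TO_BACKUP",  # or "DROP", "FORWARD"
}
```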
- Figure 1 conceptually illustrates an example logical network of some embodiments to which third-party services can be connected.
- Figure 2 conceptually illustrates an example of connecting a third-party service machine to a centralized router.
- Figure 3 conceptually illustrates a process of some embodiments for configuring a gateway machine of a logical network to redirect ingress and/or egress data traffic to a third-party service machine.
- Figure 4 conceptually illustrates a centralized routing component with two service attachment interfaces that connect to two separate service endpoint interfaces of a third-party service machine via two separate logical switches.
- Figure 5 conceptually illustrates a centralized routing component with one service attachment interface that connects to two separate interfaces of a third-party service machine via one logical switch.
- Figure 6 conceptually illustrates a centralized routing component with one service attachment interface that connects to interfaces of two different third-party service machines via one logical switch.
- Figure 7 conceptually illustrates a centralized routing component with two service attachment interfaces that each connect to a different service machine of two service machines via separate logical switches.
- Figure 8 illustrates the path of an ingress data message through multiple stages of logical processing implemented by a gateway managed forwarding element and a third-party service machine connected in L3 one-arm mode.
- Figure 9 illustrates the path of an egress data message through the multiple stages of logical processing implemented by the gateway MFE and the third-party service machine of Figure 8.
- Figure 10 illustrates the path of an ingress data message through multiple stages of logical processing implemented by a gateway MFE and a third-party service machine connected in L2 bump-in-the-wire mode.
- Figure 11 illustrates the path of an egress data message through the multiple stages of logical processing implemented by the gateway MFE and the third-party service machine of Figure 10.
- Figure 12 conceptually illustrates a process of some embodiments for applying policy-based routing redirection rules to a data message.
- Figure 13 illustrates a table of policy-based routing rules.
- Figure 14 conceptually illustrates the data structure being dynamically updated based on a change in the connection status of the service machine to which the data structure redirects data messages.
- Figure 15 conceptually illustrates an electronic system with which some embodiments of the invention are implemented.
- Some embodiments provide a network management and control system that enables integration of third-party service machines for processing data traffic entering and/or exiting a logical network.
- third-party services may include various types of non-packet-forwarding services, such as firewalls, virtual private network (VPN) service, network address translation (NAT), load balancing, etc.
- the network management and control system manages the integration of these service machines, but does not manage the life cycle of the machines themselves (hence referring to these service machines as third-party services).
- the logical network includes at least one logical switch to which logical network endpoints (e.g., data compute nodes such as virtual machines, containers, etc.) connect as well as a logical router for handling data traffic entering and/or exiting the logical network.
- the logical network may include multiple logical switches that logically connect to each other through either the aforementioned logical router or another logical router.
- FIG. 1 conceptually illustrates an example logical network 100 of some embodiments, to which third-party services can be connected.
- this logical network 100 includes a tier-0 logical router 105 (also referred to as a provider logical router), a tier-1 logical router 110 (also referred to as a tenant logical router), and two logical switches 115 and 120.
- Data compute nodes (DCNs) 125-140 (e.g., virtual machines, containers, etc.) connect to the logical switches 115 and 120.
- These data compute nodes exchange data messages with each other and with one or more external networks 145 through a physical network that implements this logical network (e.g., within a datacenter).
- the logical network 100 represents an abstraction of a network as configured by a user of the network management and control system of some embodiments. That is, in some embodiments, a network administrator configures the logical network 100 as a conceptual set of logical switches, routers, etc., with policies applied to these logical forwarding elements.
- the network management and control system generates configuration data for physical managed forwarding elements (e.g., software virtual switches operating in the virtualization software of host machines, virtual machines and/or bare metal machines operating as logical network gateways, etc.) to implement these logical forwarding elements.
- a managed forwarding element executing in the virtualization software of the host machine processes the data message to implement the logical network.
- the managed forwarding element would apply the logical switch configuration for the logical switch to which the DCN attaches, then the tier-1 logical router configuration, etc. to determine the destination of the data message.
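- Conceptually, this first-hop processing is a pipeline in which each logical forwarding element's configuration is applied in turn; the sketch below illustrates the idea (stage names and behaviors are illustrative, not an actual datapath API).

```python
# Minimal sketch of first-hop logical processing at the source MFE.
# Stage names and behaviors are illustrative, not an actual datapath API.

def logical_switch(packet):
    # L2 stage: resolve destinations on the local logical segment.
    if packet["dst_ip"].startswith("192.168.1."):
        return "local-segment", packet
    return None, packet               # not resolved; continue the pipeline

def tier1_router(packet):
    # L3 stage on the tenant router; unresolved destinations go north.
    return "tier0-uplink", packet

def first_hop_processing(packet, stages):
    """Apply each logical forwarding element in order until one
    determines where the packet goes."""
    for stage in stages:
        next_hop, packet = stage(packet)
        if next_hop is not None:
            return next_hop
    return "drop"

print(first_hop_processing({"dst_ip": "10.0.0.5"},
                           [logical_switch, tier1_router]))  # tier0-uplink
```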
- the logical network includes multiple tiers of logical routers.
- Logical routers in a first tier (e.g., the tier-1 logical router 110) connect groups of logical switches (e.g., the logical switches of a particular tenant).
- These first-tier logical routers connect to logical routers in a second tier (e.g., the tier-0 logical router 105) for data traffic sent to and from the logical network (e.g., data traffic from external clients connecting to web servers hosted in the logical network, etc.).
- the network management and control system of some embodiments defines multiple routing components for at least some of the logical routers.
- the tier-0 logical router 105 in this example has a distributed routing component 150 (“distributed router”) and a centralized routing component 155, which are connected by an internal logical switch 160 referred to as a transit logical switch.
- multiple centralized routers are defined for a tier-0 logical router, each of which connects to the transit logical switch 160.
- some embodiments define two centralized routers, one active and one standby.
- the distributed router 150 and the transit logical switch 160 are implemented in a distributed manner (as with the logical switches 115 and 120, and the tier-1 logical router 110), meaning that the first-hop managed forwarding element for a data message applies the policies of those logical forwarding elements to the data message.
- the centralized router 155 is implemented in a centralized manner (i.e., a single host machine implements each such centralized router). These centralized routers handle the connections of the logical network to external networks (e.g., to other logical networks implemented at the same or other datacenters, to external web clients, etc.).
- the centralized router may perform various stateful services (e.g., network address translation, load balancing, etc.) as well as exchange routes with one or more external routers (using, e.g., BGP or OSPF).
- Different embodiments may implement the centralized router using a bare metal machine, a virtual machine, a virtual switch executing in virtualization software of a host machine, or other contexts.
- some embodiments allow the administrator to use the network control system to attach third-party services to the logical routers.
- these third-party services are attached to centralized routers that handle data traffic between logical network endpoints and external networks (e.g., the centralized router 155 of a tier-0 router). While the subsequent discussion primarily relates to connection of the third-party services to tier-0 logical routers, in some embodiments the third-party services may also be connected to tier-1 logical routers.
- FIG. 2 conceptually illustrates an example of connecting a third-party service machine 200 to a centralized router 205.
- a network administrator defines a service attachment interface 210 on the logical router, a service endpoint 215 for the third-party service machine, a specific logical switch 220 for the service attachment, and attaches both the service attachment interface 210 and the service endpoint 215 to the logical switch 220.
- an administrator provides this information through application programming interfaces (APIs) of a management plane of the network control system (e.g., using a network management application user interface that translates user interactions into API calls to the management plane).
- the management plane receives both (i) configuration data defining the logical network (i.e., the logical switches, attachment of data compute nodes to the logical switches, logical routers, etc.) as well as the configuration data attaching one or more third-party services to the logical router that handles connections of the logical network to external networks.
- the network control system configures various managed forwarding elements to implement the logical forwarding elements (the logical switches, distributed aspects of the logical routers, etc.) as well as other packet processing operations for the logical network (e.g., distributed firewall rules).
- the management plane generates configuration data based on the inputs and provides this configuration data to a central control plane (e.g., a set of centralized controllers).
- the central control plane identifies the managed forwarding elements that require each atomic piece of configuration data, and distributes the configuration data to local controllers for each identified managed forwarding element.
- These local controllers are then responsible for configuring the managed forwarding elements (including the gateway machine that implements the centralized router) to implement the logical forwarding elements of the logical network, including redirecting appropriate data messages to the third-party services (e.g., according to policy-based routing rules provided by the administrator).
- receiving the configuration data to attach the third-party service includes several separate configuration inputs (e.g., from an administrator).
- Figure 3 conceptually illustrates a process 300 of some embodiments for configuring a gateway machine of a logical network to redirect ingress and/or egress data traffic to a third-party service machine.
- the process 300 is performed by the management plane of a network control system, which receives input through API calls.
- The process assumes that a logical network has already been configured, and that this logical network includes a logical router with at least one centralized component configured to handle data traffic entering and exiting the logical network.
- Some embodiments configure particular managed forwarding elements operating on gateway machines to implement these centralized logical routing components that handle the connection of the logical network to one or more external networks.
- the process 300 begins by receiving (at 305) input to define a service attachment interface for a logical router.
- a service attachment interface is a specialized type of interface for the logical router.
- the administrator either defines this service attachment interface on a particular centralized component or on the logical router generally.
- the management plane either applies the interface to a specific one of the components (e.g., if the administrator defines that the service attachment interface will only handle traffic sent to or from a particular uplink interface of the logical router that is assigned to a particular centralized component) or creates separate interfaces for each of the centralized components of the logical router. For instance, in some embodiments, active and standby centralized routing components are defined, and interfaces are created on each of these components.
- the process 300 receives (at 310) input to define a logical switch for connecting the logical router to third-party services.
- the process receives (at 315) input to attach the service attachment interface to this logical switch.
- this logical switch is created similarly to the logical switches of the logical network, to which data compute nodes (e.g., VMs, etc.) attach.
- the logical switch is defined by the administrator as a specific service attachment logical switch.
- This logical switch has a privately allocated subnet that (i) includes the network address of the service attachment interface that is attached to the logical switch and (ii) only needs to include enough network addresses for any interfaces of third-party services and any service attachment interfaces that connect to the logical switch. For instance, as shown below, using Classless Inter-Domain Routing (CIDR) notation, a logical switch that connects a single logical router interface to a single third-party service interface could be a "/31" subnet.
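- For instance, Python's standard ipaddress module confirms that a "/31" provides exactly the two addresses such a point-to-point service attachment needs (the addresses below are illustrative):

```python
import ipaddress

# A "/31" service attachment subnet holds exactly two addresses: one for
# the router's service attachment interface, one for the service endpoint.
subnet = ipaddress.ip_network("169.254.10.0/31")
router_if, service_if = subnet        # a /31 yields exactly two addresses
print(router_if, service_if)          # 169.254.10.0 169.254.10.1
```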
- the logical router performs route advertisement to external physical routers (e.g., using BGP or OSPF) for logical network subnets, the subnets for the service attachment logical switches are not advertised (or entered into the routing tables for the various logical router tiers) in some embodiments.
- the logical router includes multiple centralized components (e.g., active and standby components) and a service attachment interface corresponds to interfaces on each of these components, then attaching the service attachment interface actually attaches each of these interfaces to the logical switch.
- each of the centralized component interfaces has a separate network address in the subnet of the logical switch.
- the process 300 receives (at 320) input to define a service endpoint interface, and receives (at 325) input to attach this service endpoint interface to the logical switch (to which the service attachment interface of the logical router is attached).
- this service endpoint interface represents an interface on a third-party service machine.
- these interfaces can either be service endpoint interfaces (also referred to as logical endpoint interfaces, which correspond to service machines and connect to service attachment interfaces through a logical switch) or external interfaces (also referred to as virtual endpoint interfaces, which correspond to network addresses reachable from the centralized component). External router interfaces are examples of these latter interfaces.
- some embodiments require the administrator to define the third-party service machine (either through the network control system or through a separate datacenter compute manager). For example, in some embodiments the network administrator defines both a service type as well as a service instance (e.g., an instance of that service type). As noted above, the service endpoint interface should also have a network address within the subnet of the logical switch to which that interface is attached.
- It should be understood that operations 305-325 need not occur in the specific order shown in Figure 3. For instance, a network administrator could initially create both of the interfaces (the service attachment interface on the logical router as well as the service endpoint interface representing the third-party service), then subsequently create the logical switch and attach the interfaces to this logical switch.
- the process 300 receives (at 330) one or more rules for redirecting data messages to the service endpoint interface.
- these are policy-based routing rules that (i) specify which ingress and/or egress traffic will be redirected to the service interface and (ii) are applied by the gateway machine separately from its usual routing operations.
- the administrator defines the redirection rules in terms of one or more data message header fields, such as the source and/or destination network addresses, source and/or destination transport layer ports, transport protocol, interface on which a data message is received, etc.
- an administrator may create one redirection rule or multiple rules.
- the redirected data messages could include all incoming and/or outgoing data messages for a particular uplink, only data messages sent from or to a specific logical switch subnet, etc.
- the process 300 configures (at 335) the gateway machine to implement the centralized logical router and the redirection to the service endpoint interface.
- the process 300 then ends. If multiple centralized routing components have interfaces attached to the logical switch for the service endpoint, then the gateway machine for each of these components is configured.
- the management plane generates configuration data for the service attachment interface and the redirection rules and provides this information to the central control plane.
- the central control plane identifies each gateway machine that requires the information and provides the appropriate configuration data to the local controller for that gateway machine.
- the local controller converts this configuration data to a format readable by the gateway machine (if it is not already in such a format) and directly configures the gateway machine to implement the policy-based routing rules.
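- The sketch below illustrates this distribution flow under stated assumptions: the span computation and local-controller hand-off are modeled with hypothetical names and plain data objects, not the actual control plane interfaces.

```python
# Hedged sketch of the configuration distribution flow: the management
# plane's generated config is scoped by the central control plane to just
# the gateways implementing the relevant centralized component, then
# handed to each gateway's local controller. All names are illustrative.

def distribute_config(config, gateways):
    desired = {
        "service_attachment": config["interface"],
        "pbr_rules": config["redirect_rules"],
    }
    # Central control plane: identify the span of this piece of config.
    span = [gw for gw in gateways
            if gw["component"] == config["router_component"]]
    # Local controllers convert and program the gateway datapath.
    for gw in span:
        gw["pending"].append(desired)
    return span

gateways = [{"name": "gw-1", "component": "sr-0", "pending": []},
            {"name": "gw-2", "component": "sr-1", "pending": []}]
distribute_config({"interface": "lr-svc-attach-0",
                   "redirect_rules": ["rule-1"],
                   "router_component": "sr-0"}, gateways)
```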
- Some embodiments enable multiple services to be connected to the logical router, using various different topologies. For instance, multiple services may be connected to the same logical switch, in which case these services all have interfaces in the same subnet and can send data traffic directly between each other if configured to do so.
- the logical router can have a single interface that connects to the logical switch (for traffic to all of the services) or a separate interface connected to the logical switch for each attached service.
- separate logical switches can be defined for each service (with separate logical router interfaces connected to each of the logical switches).
- multiple interfaces can be defined for each service machine, for handling different sets of traffic (e.g., traffic to/from different external networks or different logical network subnets).
- Figures 4-7 conceptually illustrate several different such topologies for connecting a centralized routing component of a logical router to one or more service machines. Each of these figures illustrates one centralized router connected to one or more logical switches to which one or more service machines are also connected. It should be understood that these figures represent a logical view of the connections, and that the gateway machine implementing the centralized router would also implement the logical switch(es) in some embodiments.
- FIG 4 conceptually illustrates a centralized routing component 400 with two service attachment interfaces that connect to two separate service endpoint interfaces of a third-party service machine 405 via two separate logical switches 410 and 415.
- This topology essentially uses a separate service attachment interface and separate logical switch for each connection to the third-party service.
- each of the logical switches 410 and 415 is assigned a "/31" subnet, which includes two network addresses. Because each of the logical switches is specifically created for connecting one service attachment interface of the centralized routing component 400 to the service machine 405, only two addresses are needed for each switch.
- the redirection rules for the router redirect data messages sent to and from each of the uplinks to a different interface of the third-party service machine (and thus use a different one of the service attachment interfaces).
- Figure 5 conceptually illustrates a centralized routing component 500 with one service attachment interface that connects to two separate interfaces of a third-party service machine 505 via one logical switch 510.
- the administrator creates one logical switch for each third-party service machine with one service attachment interface on the centralized router component, but defines multiple service endpoint interfaces for that third-party service machine.
- the logical switch subnet accommodates a larger number of network addresses (in the present example, a "/24" subnet is used).
- the redirection rules are set up to redirect data messages sent to and from each of the uplinks to a different interface of the third-party service machine via the same service attachment interface and logical switch.
- using a setup with multiple service endpoint interfaces on the service machine that attach to the same logical switch requires that the third-party service machine use separate routing tables (e.g., virtual routing and forwarding instances) for each interface.
- FIG. 6 conceptually illustrates a centralized routing component 600 with one service attachment interface that connects to interfaces of two different third-party service machines 605 and 610 via one logical switch 615.
- the service machines 605 and 610 in this scenario could provide two separate services (e.g., a firewall and a cloud extension service) or be master and standby machines for a single high-availability service.
- the interfaces of the service machines 605 and 610 are on the same logical switch, data messages can also be sent from one service to the other.
- In this example, the centralized routing component 600 has a single uplink; some embodiments using this configuration would include two service attachments and two logical switches that each connect to (different) interfaces of both service machines to handle data messages received on or destined for two different uplinks.
- FIG. 7 conceptually illustrates a centralized routing component 700 with two service attachment interfaces that each connect to a different service machine of two service machines 705 and 710 via separate logical switches 715 and 720.
- these two service machines could provide two separate services or be master and standby machines for a single high-availability service.
- the centralized routing component has a single uplink; some embodiments using this configuration would include two additional service attachments corresponding to each additional uplink that connect via separate logical switches to separate interfaces on each of the service machines.
- using separate interfaces on the service machines corresponding to each different uplink allows the service machines to apply specific processing configurations to data messages sent to or received from each different uplink.
- the third-party service machines may be connected to the centralized routing component via different types of connections in some embodiments.
- some embodiments allow for service machines to be connected in either (i) an L2 bump-in-the-wire mode or (ii) an L3 one-arm mode.
- In the L2 mode (shown in Figures 10 and 11), two interfaces of the logical router are connected to two separate interfaces of the service machine via two separate logical switches, and data traffic is sent to the service machine via one of the interfaces and received back from the service machine via the other interface.
- Data traffic may be sent to the service machine via one interface for traffic entering the logical network and via the other interface for traffic exiting the logical network.
- the gateway redirects some or all of the data traffic between the logical network and external networks to the service machine.
- some embodiments use a set of policy-based routing (PBR) rules to determine whether or not to redirect each data message.
- the gateway applies these PBR rules to outgoing data messages after performing logical routing for the data messages, and applies the PBR rules to incoming data messages prior to performing logical routing and/or switching for incoming data messages.
- Figure 8 illustrates the path of an ingress data message (represented by the dashed line) through multiple stages of logical processing implemented by a gateway managed forwarding element 800 and a third-party service machine 805.
- the third-party service machine is connected in an L3 one-arm mode. In this mode, data messages are transmitted to the network address of the third-party service machine, which transmits the data messages back to the network address of the logical router service attachment interface.
- the gateway MFE 800 implements several stages of logical network processing, including policy-based routing (PBR) redirection rules 810, centralized routing component processing 815, the service attachment logical switch processing 820, and additional logical processing 825 (e.g., transit logical switch processing, distributed routing component processing, processing for other tiers of logical routers and/or logical switches to which network endpoints connect, etc.).
- the gateway MFE 800 is a datapath in a bare metal computer or a virtual machine (e.g., a data plane development kit (DPDK)-based datapath).
- the gateway MFE of other embodiments executes a datapath in virtualization software of a host machine.
- Yet other embodiments implement a portion of the logical processing in such a datapath while implementing the centralized routing component in a virtual machine, namespace, or similar construct.
- the gateway MFE 800 applies the PBR rules 810 to determine whether to redirect the data message before processing the data message through any of the logical forwarding elements.
- the gateway MFE also performs additional operations before applying the PBR rules, such as IPSec and/or other locally- applied services.
- the PBR rules, described in further detail below, identify whether a given data message will be redirected (e.g., based on various data message header fields, such as the source and/or destination IP addresses), how to redirect the data messages that match specific sets of header field values, etc.
- the PBR rules 810 specify to redirect the data message to the interface of the third-party service machine 805.
- the centralized routing component processing 815 identifies that the redirection interface corresponds to the service attachment logical switch, so the gateway MFE 800 then executes this logical switch processing 820. Based on this logical switch processing, the gateway MFE transmits the data message (e.g., with encapsulation) to the third-party service machine 805.
- This service machine 805 performs its service processing (e.g., firewall, NAT, cloud extension, etc.) and returns the data message to the gateway MFE (unless the service drops/blocks the data message).
- the gateway MFE Upon return of the data message from the service, the gateway MFE then performs the centralized routing component processing 815 (e.g., routing based on the destination network address) and, in turn, the additional logical processing operations 825. In some embodiments, data messages returning from the third-party service machine are marked with a flag to indicate that the PBR rules do not need to be applied again. Based on these operations, the gateway MFE 800 transmits the data message to its destination in the logical network (e.g., by encapsulating the data message and transmitting the data message to a host machine in the data center).
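- The flag-based handling described above might look like the following sketch; the flag name, match function, and service stub are assumptions, and the logical processing stages are collapsed into a single step.

```python
# Sketch of the Figure 8 ingress path, including the flag that keeps the
# PBR rules from being applied a second time when the message returns
# from the service (names and the flag mechanism are assumptions).

def process_ingress(msg, pbr_match, service):
    if not msg.get("from_service") and pbr_match(msg):
        msg["from_service"] = True        # set before the round trip
        msg = service(msg)                # L3 one-arm redirect and return
        if msg is None:
            return None                   # service dropped the message
    # Centralized routing component plus remaining logical processing.
    return {"forwarded": "logical-network", **msg}

result = process_ingress({"src_ip": "70.70.70.5"},
                         lambda m: m["src_ip"].startswith("70.70.70."),
                         lambda m: m)     # pass-through service stub
print(result)
```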
- FIG. 9 illustrates the path of an egress data message (represented by the dashed line) through the multiple stages of logical processing implemented by the gateway MFE 800 and the third-party service machine 805.
- the gateway MFE 800 Upon receipt of the data message, the gateway MFE 800 first applies any logical network processing 825 required before the centralized routing component, such as the transit logical switch (between the distributed routing component and the centralized routing component).
- a tier-1 logical router will also have a centralized routing component implemented on the gateway MFE, in which case the additional logical processing may include this centralized routing component, the distributed routing component of the tier-0 logical router, the transit logical switches between them, etc.
- the centralized routing component processing 815 identifies the uplink interface as its output interface, which leads to application of the PBR rules 810. These rules, in this case, also redirect outgoing data messages to the service machine 805, so the gateway MFE 800 applies the centralized routing component processing 815 again and subsequently the service attachment logical switch processing 820, and transmits the data message to the third-party service machine 805. Assuming the data message is not dropped by the service machine 805, the gateway MFE 800 receives the data message via its interface corresponding to the service attachment logical switch. At this point, the centralized routing component processing 815 again identifies the uplink as the output interface for that component, and the gateway MFE transmits the data message to the external physical network router associated with the uplink. As mentioned, the data message is marked with a flag upon being received from the service machine 805 so that the gateway MFE does not apply the PBR rules 810 again in some embodiments.
- the PBR rules are applied (for egress data messages) after the tier-1 logical router processing, and before the tier-0 logical router processing.
- Upon return from the service machine, the gateway MFE then applies the tier-0 distributed routing component, transit logical switch, and tier-0 centralized routing component. Ingress traffic is handled similarly, with the application of the PBR rules after the tier-0 distributed routing component and prior to application of the tier-1 centralized routing component.
- Figures 10 and 11 illustrate the connection of a service machine to a centralized routing component using L2 bump-in-the-wire mode.
- Figure 10 illustrates the path of an ingress data message (represented by the dashed line) through multiple stages of logical processing implemented by a gateway MFE 1000 and a third-party service machine 1005.
- In the L2 bump-in-the-wire mode, two interfaces of the logical router are associated with each connection to the service machine 1005. Data messages are transmitted to the service machine via one of the interfaces and returned via the other interface.
- the gateway MFE 1000 implements PBR redirection rules 1010, centralized routing component processing 1015, and additional logical processing 1030. Because there are two separate interfaces for the connection to the service machine 1005, the gateway MFE 1000 also implements two separate service attachment logical switches 1020 and 1025. In some embodiments, the interface associated with the first logical switch 1020 is an "untrusted" interface, while the interface associated with the second logical switch 1025 is a "trusted" interface. In this figure, each of the centralized routing component service attachment interfaces is associated with a separate interface of the gateway MFE 1000. In other embodiments, however, these service attachment interfaces share one gateway MFE interface.
- As in the L3 mode, the gateway MFE 1000 applies the PBR rules 1010 to determine whether to redirect the data message before processing the data message through any of the logical forwarding elements.
- the gateway MFE also performs additional operations before applying the PBR rules, such as IPSec and/or other locally-applied services.
- the PBR rules identify whether a given data message will be redirected (e.g., based on various data message header fields, such as the source and/or destination IP addresses), how to redirect the data messages that match specific sets of header field values, etc.
- the PBR rules 1010 specify to redirect the data message to the interface of the third-party service machine 1005 that is associated with the first logical switch 1020.
- the centralized routing component processing 1015 identifies that the redirection interface corresponds to the first service attachment logical switch 1020. Because the service machine 1005 is connected in L2 bump-in-the-wire mode, the centralized routing component uses the MAC address of this interface as the source address for the redirected data message and the MAC address of the other service attachment interface (connected to the second logical switch 1025) as the destination address. This causes the data message to be returned by the service machine 1005 to this second (trusted) interface.
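- A sketch of this MAC handling is shown below; the MAC values are hypothetical, and the frame is modeled as a plain dictionary.

```python
# Sketch of the L2 bump-in-the-wire MAC handling: the redirected frame's
# source and destination MACs are set to the two service attachment
# interfaces so the service hands the frame back on the other switch.
# MAC values are hypothetical.

UNTRUSTED_MAC = "aa:bb:cc:00:00:01"   # interface on logical switch 1020
TRUSTED_MAC = "aa:bb:cc:00:00:02"     # interface on logical switch 1025

def redirect_l2(frame, ingress):
    """Ingress traffic leaves via the untrusted interface and returns to
    the trusted one; egress traffic traverses the opposite path. The IP
    headers are left untouched."""
    if ingress:
        frame["src_mac"], frame["dst_mac"] = UNTRUSTED_MAC, TRUSTED_MAC
    else:
        frame["src_mac"], frame["dst_mac"] = TRUSTED_MAC, UNTRUSTED_MAC
    return frame
```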
- the gateway MFE 1000 then executes the logical switch processing 1020 and, based on this logical switch processing, transmits the data message to the third-party service machine 1005.
- This service machine 1005 performs its service processing (e.g., firewall, NAT, cloud extension, etc.) and returns the data message to the gateway MFE (unless the service drops/blocks the data message).
- the gateway MFE identifies the second logical switch 1025 for processing based on the destination address of the data message and/or the gateway MFE interface on which the message is received, then performs the processing for the centralized routing component 1015 (e.g., routing based on the destination network address) and, in turn, the additional logical processing operations 1030.
- data messages returning from the third-party service machine are marked with a flag to indicate that the PBR rules do not need to be applied again.
- the gateway MFE 1000 transmits the data message to its destination in the logical network (e.g., by encapsulating the data message and transmitting the data message to a host machine in the data center).
- FIG 11 illustrates the path of an egress data message (represented by the dashed line) through the multiple stages of logical processing implemented by the gateway MFE 1000 and the third-party service machine 1005, connected in L2 bump-in-the-wire mode.
- the gateway MFE 1000 first applies any logical network processing 1030 required before the centralized routing component, such as the transit logical switch (between the distributed routing component and the centralized routing component).
- a tier-1 logical router will also have a centralized routing component implemented on the gateway MFE, in which case the additional logical processing 1030 may include this centralized routing component, the distributed routing component of the tier-0 logical router, the transit logical switches between them, etc.
- the centralized routing component processing 1015 then identifies the uplink interface as its output interface, which leads to application of the PBR rules 1010. These rules, in this case, redirect outgoing data messages to the service machine 1005 via the trusted interface attached to the second logical switch 1025.
- the gateway MFE 1000 applies the centralized routing component processing 1015 again and subsequently the processing for the second service attachment logical switch 1025, and transmits the data message to the third-party service machine 1005.
- the data message has the trusted interface MAC address as its source address and the untrusted interface MAC address as its destination address, traversing the opposite path from the centralized routing component 1015 to the service machine 1005 and back as for an ingress data message.
- the gateway MFE 1000 receives the data message via its interface corresponding to the first service attachment logical switch 1020.
- the centralized routing component processing 1015 again identifies the uplink as the output interface, and the gateway MFE transmits the data message to the external physical network router associated with the uplink.
- the data message is marked with a flag upon being received from the service machine 1005 so that the gateway MFE does not apply the PBR rules 1010 again in some embodiments.
- the PBR rules use a two-stage lookup to determine whether to redirect a data message (and to which interface to redirect the data message).
- each rule specifies a unique identifier.
- Each identifier corresponds to a service machine, and the gateway stores a dynamically-updated data structure for each identifier that provides details about how to redirect data messages.
- FIG 12 conceptually illustrates a process 1200 of some embodiments for applying policy-based routing redirection rules to a data message.
- the process 1200 is performed by a gateway MFE such as those shown in Figures 8-11, when applying the PBR rules to either an incoming (from an external network) or outgoing (from the logical network) data message.
- This process 1200 will be described in part by reference to Figure 13, which illustrates a set of PBR rules and data structures for some of these rules.
- the process 1200 begins by receiving (at 1205) a data message for PBR processing. This may be a data message received via a logical router uplink from an external network or a data message sent by a logical network endpoint for which the gateway MFE has already identified the uplink as the egress port for the centralized routing component. In some embodiments, the process 1200 is not applied to data messages for which a flag is set indicating that the data message is received from a third-party service machine. These data messages are instead processed by the logical forwarding elements without further redirection.
- the process 1200 then performs (at 1210) a lookup into a set of PBR rules.
- these rules are organized as a set of flow entries, with match conditions and actions for data messages that match each set of match conditions.
- the PBR rules of some embodiments use a hash table (or set of hash tables) using one or more hashes of sets of data message header fields. Other embodiments use other techniques to identify a matching PBR rule.
- Figure 13 illustrates a table of PBR rules 1300.
- the rules all match on the source and destination IP addresses, but PBR rules of some embodiments can also match on other header fields (and combinations of other header fields with source and/or destination IP addresses).
- the first two match conditions are inverses of each other, one for handling ingress data messages (from 70.70.70.0/24 in an external network to the 60.60.60.0/24 subnet in the logical network), and the other for handling the corresponding egress data messages.
- the third match condition matches on any data message sent from the source subnet 20.20.20.0/24 (i.e., irrespective of the destination address).
- the actions specify unique policy identifiers rather than specific redirection actions.
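- The table might be modeled along the following lines; the policy identifiers are placeholders standing in for the unique identifiers described above.

```python
import ipaddress

# Representation of a PBR rule table like Figure 13: match conditions map
# to unique policy identifiers rather than directly to next hops.
pbr_rules = [
    ("70.70.70.0/24", "60.60.60.0/24", "policy-id-1"),  # ingress rule
    ("60.60.60.0/24", "70.70.70.0/24", "policy-id-2"),  # inverse egress rule
    ("20.20.20.0/24", None, "policy-id-3"),             # any destination
]

def match_pbr(src_ip, dst_ip):
    src, dst = ipaddress.ip_address(src_ip), ipaddress.ip_address(dst_ip)
    for src_net, dst_net, policy_id in pbr_rules:
        if src in ipaddress.ip_network(src_net) and (
                dst_net is None or dst in ipaddress.ip_network(dst_net)):
            return policy_id
    return None     # no match: no redirection

print(match_pbr("20.20.20.5", "8.8.8.8"))   # policy-id-3
```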
- the process 1200 determines (at 1215) whether the data message matches any of the PBR rules based on the PBR lookup.
- the PBR rules table includes a default (lowest priority) rule (or set of rules) for data messages that do not match any of the other rules. If the data message does not match any PBR rules (or only matches a default rule), the process forwards (at 1220) the data message to its destination without any redirection. Thus, outgoing data messages are transmitted to the appropriate physical router (after performing any additional IPSec or other local service processing), while incoming data messages begin logical processing at the centralized logical router.
- the process looks up (at 1225) a data structure for a unique identifier specified by the matched PBR rule.
- the actions for each of the PBR rules do not directly specify to redirect matching data messages to a particular next hop address. Instead, these actions specify unique policy identifiers, which in turn map to corresponding dynamically-updated data structures. That is, the gateway MFE is configured to store a data structure for each unique identifier specified in a PBR action. These data structures may be database table entries or any other type of modifiable data structure. In some embodiments, the gateway MFE updates some or all fields of the data structures based on, e.g., current network conditions.
- These data structures indicate the type of connection to the service (e.g., L2 bump-in-the-wire or L3 one-arm), a network address for the interface of the service to which the data message is redirected, dynamically-updated status data, and a failover policy.
- the status data is dynamically updated based on the health/reachability of the service, which may be tested using a heartbeat protocol such as bidirectional forwarding detection (BFD).
- BFD bidirectional forwarding detection
- the failover policy specifies what to do with the data message if the service is not reachable.
- Figure 13 illustrates the contents of two of these data structures.
- the data structure 1305 indicates that the service machine to which this policy redirects is connected in L2 bump-in-the-wire mode (such that opposite direction data messages that match the second PBR rule would be redirected to the same service machine in the opposite direction).
- the data structure 1305 also indicates a dummy IP address to use for redirection. This dummy IP is not actually the address of the service machine, but instead resolves to the MAC address of the service attachment interface of the centralized routing component via which the data message will return (e.g., for ingress data messages, the trusted interface of the centralized routing component). This address resolution may be performed with statically configured ARP entries in some embodiments.
- the data structure 1305 specifies the current BFD status of the connection to the service machine (the connection is currently up) as well as a failover policy indicating how to handle the data message if the BFD status is down.
- the failover policy indicates that data messages should be dropped if the service machine is not available.
- Other failover policy options may include, e.g., forwarding the data message to its destination without redirection to the service, redirection to a backup service machine, etc.
- the data structure 1310 indicates that the service machine to which this policy redirects is connected in L3 one-arm mode, and thus the redirection IP address provides the address of the service machine interface (rather than a dummy IP).
- the BFD status of this connection is also up, but in this case the failover policy provides for redirection to a backup service machine at a different IP address on a different subnet (i.e., connected to a different logical switch).
- the process 1200 processes (at 1230) the data message according to the instructions in the data structure for the unique identifier. This may include redirecting the data message to the next hop IP address specified by the data structure, dropping the data message if the connection is down and the failure policy specifies to drop the data message, or forwarding the data message according to the logical network processing if the connection is down and the failure policy specifies to ignore the redirection.
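- A sketch of this second lookup stage follows; the field names, status values, and failover actions are assumptions based on the description above.

```python
# Sketch of the second lookup stage of process 1200: the matched policy
# identifier resolves to a dynamically-updated structure whose status
# and failover policy decide how the message is handled.

policies = {
    "policy-id-3": {
        "type": "L3_ONE_ARM",
        "redirect_ip": "169.254.10.1",
        "bfd_status": "UP",                       # updated dynamically
        "failover": ("REDIRECT_TO_BACKUP", "169.254.11.1"),
    },
}

def apply_policy(msg, policy_id):
    policy = policies[policy_id]
    if policy["bfd_status"] == "UP":
        return "redirect", policy["redirect_ip"]
    action, backup_ip = policy["failover"]
    if action == "DROP":
        return "drop", None
    if action == "REDIRECT_TO_BACKUP":
        return "redirect", backup_ip
    return "forward", None    # ignore redirection; normal logical routing

print(apply_policy({"src_ip": "20.20.20.5"}, "policy-id-3"))
```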
- a BFD thread executes on the gateway machine to (i) send BFD messages to the service machine and (ii) receive BFD messages from the service machine.
- For service machines connected in L3 one-arm mode, the service machines also execute a BFD thread that sends BFD messages to the gateway.
- For service machines connected in L2 bump-in-the-wire mode, the BFD thread sends BFD messages out one of the interfaces connecting the centralized routing component to the service machine and receives these messages back on the other interface.
- in some embodiments, one BFD thread executes on each gateway MFE and exchanges messages with all of the connected service machines, while in other embodiments separate BFD threads execute on a gateway MFE to exchange messages with each connected service machine.
- when the BFD thread detects a change in the connection status for a service machine, the gateway MFE modifies the data structure for that service machine accordingly.
- Figure 14 conceptually illustrates the data structure 1310 being dynamically updated based on a change in the connection status of the service machine to which the data structure redirects data messages. This figure illustrates both the data structure 1310 as well as connections between the gateway machine 1400 and two service machines 1415 and 1420 over two stages 1405 and 1410.
- in the first stage 1405, the data structure 1310 is in the same state as in Figure 13, indicating that the connection to the service machine endpoint interface 169.254.10.1 is currently up as per the BFD status.
- the gateway machine 1400, in addition to operating the gateway MFE with its logical network processing, PBR rules, etc., also executes a BFD thread 1425.
- This BFD thread 1425 sends BFD messages to both the first service machine 1415 at its interface with IP address 169.254.10.1 and the second service machine 1420 at its interface with IP address 169.254.11.1 at regular intervals.
- each of these service machines 1415 and 1420 executes its own BFD thread (1430 and 1435, respectively), which sends BFD messages to the gateway machine at regular intervals.
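As a simplified illustration of these threads, the loop below sends a heartbeat each interval and declares the connection down after three silent intervals. Real BFD session handling is considerably more involved, and sendBFDMessage is a stub:

```go
package gateway

import (
	"sync/atomic"
	"time"
)

// sendBFDMessage stands in for transmitting a BFD control packet
// toward the service machine's interface.
func sendBFDMessage(ip string) {}

// monitorService periodically (i) transmits toward the service and
// (ii) checks how recently a message was received from it. lastRx is
// the UnixNano receive timestamp updated by the receive path.
func monitorService(p *RedirectPolicy, lastRx *atomic.Int64, interval time.Duration) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for range ticker.C {
		sendBFDMessage(p.RedirectIP)
		elapsed := time.Since(time.Unix(0, lastRx.Load()))
		// Direct write for brevity; the copy-on-write sketch further
		// below shows a publication pattern that is safe for readers.
		p.BFDUp = elapsed < 3*interval
	}
}
```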
- the connection between the gateway machine 1400 and the first service machine 1415 then goes down. This could occur due to a physical connection issue, the service machine 1415 crashing, etc. As a result, the BFD thread 1425 would no longer receive BFD messages from the service machine 1415.
- in the second stage 1410, the connection between the gateway machine 1400 and the service machine 1415 is no longer present.
- the data structure 1310 has been dynamically updated by the gateway MFE to indicate that the BFD status is down.
- data messages with a source IP in the subnet 20.20.20.0/24 would be redirected to the 169.254.11.1 interface of the second service machine 1420 until the connection to the first service machine 1415 comes back up.
- multiple threads can write to the data structures 1305 and 1310.
- some embodiments allow the BFD thread as well as a configuration receiver thread to both write to these data structures (e.g., to modify the BFD status as well as to make any configuration changes received from the network control system).
- one or more packet processing threads are able to read these data structures for performing packet lookups. Some embodiments enable these packet processing threads to read from the data structures even if one of the writer threads is currently accessing the structures, so that packet processing is not interrupted by the writer threads.
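One plausible realization of this arrangement, under the same assumptions as the earlier sketches, is a copy-on-write pattern: the two writer threads are serialized by a mutex and publish each update with an atomic pointer swap (Go 1.19+), so packet processing threads always read a consistent snapshot without taking any lock:

```go
package gateway

import (
	"sync"
	"sync/atomic"
)

// policyEntry pairs a mutex serializing the writer threads (BFD and
// configuration receiver) with an atomically swapped pointer that the
// packet processing threads read without blocking.
type policyEntry struct {
	mu  sync.Mutex
	cur atomic.Pointer[RedirectPolicy]
}

// Load is called from packet processing threads and never blocks,
// even while a writer is preparing an update.
func (e *policyEntry) Load() *RedirectPolicy {
	return e.cur.Load()
}

// Update copies the current structure, applies the mutation, and
// publishes the copy atomically. The entry must be initialized with
// an initial Store before use.
func (e *policyEntry) Update(mutate func(*RedirectPolicy)) {
	e.mu.Lock()
	defer e.mu.Unlock()
	next := *e.cur.Load() // copy-on-write: readers keep the old snapshot
	mutate(&next)
	e.cur.Store(&next) // subsequent Loads see the updated structure
}
```

For example, the BFD thread could call Update(func(p *RedirectPolicy) { p.BFDUp = false }) when a session times out, while lookups continue uninterrupted.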
- FIG. 15 conceptually illustrates an electronic system 1500 with which some embodiments of the invention are implemented.
- the electronic system 1500 may be a computer (e.g., a desktop computer, personal computer, tablet computer, server computer, mainframe, blade computer, etc.), phone, PDA, or any other sort of electronic device.
- Such an electronic system includes various types of computer readable media and interfaces for various other types of computer readable media.
- Electronic system 1500 includes a bus 1505, processing unit(s) 1510, a system memory 1525, a read-only memory 1530, a permanent storage device 1535, input devices 1540, and output devices 1545.
- the bus 1505 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of the electronic system 1500.
- the bus 1505 communicatively connects the processing unit(s) 1510 with the read-only memory 1530, the system memory 1525, and the permanent storage device 1535.
- the processing unit(s) 1510 retrieve instructions to execute and data to process in order to execute the processes of the invention.
- the processing unit(s) may be a single processor or a multi-core processor in different embodiments.
- the read-only memory (ROM) 1530 stores static data and instructions that are needed by the processing unit(s) 1510 and other modules of the electronic system.
- the permanent storage device 1535 is a read-and-write memory device. This device is a non-volatile memory unit that stores instructions and data even when the electronic system 1500 is off. Some embodiments of the invention use a mass-storage device (such as a magnetic or optical disk and its corresponding disk drive) as the permanent storage device 1535. Other embodiments use a removable storage device (such as a floppy disk, flash drive, etc.) as the permanent storage device. Like the permanent storage device 1535, the system memory 1525 is a read-and-write memory device.
- the system memory is a volatile read-and-write memory, such as a random-access memory.
- the system memory stores some of the instructions and data that the processor needs at runtime.
- the invention’s processes are stored in the system memory 1525, the permanent storage device 1535, and/or the read-only memory 1530. From these various memory units, the processing unit(s) 1510 retrieve instructions to execute and data to process in order to execute the processes of some embodiments.
- the bus 1505 also connects to the input and output devices 1540 and 1545.
- the input devices enable the user to communicate information and select commands to the electronic system.
- the input devices 1540 include alphanumeric keyboards and pointing devices (also called “cursor control devices”).
- the output devices 1545 display images generated by the electronic system.
- the output devices include printers and display devices, such as cathode ray tubes (CRT) or liquid crystal displays (LCD). Some embodiments include devices such as a touchscreen that function as both input and output devices.
- bus 1505 also couples electronic system 1500 to a network 1565 through a network adapter (not shown).
- the computer can be a part of a network of computers (such as a local area network (“LAN”), a wide area network (“WAN”), or an Intranet), or a network of networks, such as the Internet. Any or all components of electronic system 1500 may be used in conjunction with the invention.
- Some embodiments include electronic components, such as microprocessors, storage and memory that store computer program instructions in a machine-readable or computer- readable medium (alternatively referred to as computer-readable storage media, machine-readable media, or machine-readable storage media).
- Such computer-readable media include RAM, ROM, read-only compact discs (CD-ROM), recordable compact discs (CD-R), rewritable compact discs (CD-RW), read-only digital versatile discs (e.g., DVD-ROM, dual-layer DVD-ROM), a variety of recordable/rewritable DVDs (e.g., DVD-RAM, DVD-RW, DVD+RW, etc.), flash memory (e.g., SD cards, mini-SD cards, micro-SD cards, etc.), magnetic and/or solid state hard drives, read-only and recordable Blu-Ray® discs, ultra-density optical discs, any other optical or magnetic media, and floppy disks.
- the computer-readable media may store a computer program that is executable by at least one processing unit and includes sets of instructions for performing various operations.
- Examples of computer programs or computer code include machine code, such as is produced by a compiler, and files including higher-level code that are executed by a computer, an electronic component, or a microprocessor using an interpreter.
- some embodiments are performed by one or more integrated circuits, such as application-specific integrated circuits (ASICs) or field-programmable gate arrays (FPGAs). In some embodiments, such integrated circuits execute instructions that are stored on the circuit itself.
- the terms “computer,” “server,” “processor,” and “memory” all refer to electronic or other technological devices. These terms exclude people or groups of people.
- the terms “display” or “displaying” mean displaying on an electronic device.
- the terms “computer readable medium,” “computer readable media,” and “machine readable medium” are entirely restricted to tangible, physical objects that store information in a form that is readable by a computer. These terms exclude any wireless signals, wired download signals, and any other ephemeral signals.
- data compute nodes (DCNs), also referred to as addressable nodes, may include non-virtualized physical hosts, virtual machines, containers that run on top of a host operating system without the need for a hypervisor or separate operating system, and hypervisor kernel network interface modules.
- VMs, in some embodiments, operate with their own guest operating systems on a host using resources of the host virtualized by virtualization software (e.g., a hypervisor, virtual machine monitor, etc.).
- the tenant (i.e., the owner of the VM) can choose which applications to operate on top of the guest operating system.
- Some containers are constructs that run on top of a host operating system without the need for a hypervisor or separate guest operating system.
- the host operating system uses name spaces to isolate the containers from each other and therefore provides operating-system level segregation of the different groups of applications that operate within different containers.
- This segregation is akin to the VM segregation that is offered in hypervisor-virtualized environments that virtualize system hardware, and thus can be viewed as a form of virtualization that isolates different groups of applications that operate in different containers.
- Such containers are more lightweight than VMs.
- a hypervisor kernel network interface module, in some embodiments, is a non-VM DCN that includes a network stack with a hypervisor kernel network interface and receive/transmit threads.
- one example of a hypervisor kernel network interface module is the vmknic module that is part of the ESXi™ hypervisor of VMware, Inc.
- while this specification refers to VMs (virtual machines), the examples given could be any type of DCNs, including physical hosts, VMs, non-VM containers, and hypervisor kernel network interface modules.
- the example networks could include combinations of different types of DCNs in some embodiments.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
- Computer And Data Communications (AREA)
Abstract
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/120,281 US10944673B2 (en) | 2018-09-02 | 2018-09-02 | Redirection of data messages at logical network gateway |
US16/120,283 US11595250B2 (en) | 2018-09-02 | 2018-09-02 | Service insertion at logical network gateway |
PCT/US2019/047586 WO2020046686A1 (fr) | 2018-09-02 | 2019-08-21 | Insertion de service au niveau d'une passerelle de réseau logique |
Publications (1)
Publication Number | Publication Date |
---|---|
EP3815312A1 true EP3815312A1 (fr) | 2021-05-05 |
Family
ID=67841276
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP19762642.7A Pending EP3815312A1 (fr) | 2018-09-02 | 2019-08-21 | Insertion de service au niveau d'une passerelle de réseau logique |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP3815312A1 (fr) |
CN (2) | CN112673596B (fr) |
WO (1) | WO2020046686A1 (fr) |
Families Citing this family (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9225638B2 (en) | 2013-05-09 | 2015-12-29 | Vmware, Inc. | Method and system for service switching using service tags |
US11296930B2 (en) | 2014-09-30 | 2022-04-05 | Nicira, Inc. | Tunnel-enabled elastic service model |
US9935827B2 (en) | 2014-09-30 | 2018-04-03 | Nicira, Inc. | Method and apparatus for distributing load among a plurality of service nodes |
US10135737B2 (en) | 2014-09-30 | 2018-11-20 | Nicira, Inc. | Distributed load balancing systems |
US10609091B2 (en) | 2015-04-03 | 2020-03-31 | Nicira, Inc. | Method, apparatus, and system for implementing a content switch |
US10805181B2 (en) | 2017-10-29 | 2020-10-13 | Nicira, Inc. | Service operation chaining |
US11012420B2 (en) | 2017-11-15 | 2021-05-18 | Nicira, Inc. | Third-party service chaining using packet encapsulation in a flow-based forwarding element |
US10797910B2 (en) | 2018-01-26 | 2020-10-06 | Nicira, Inc. | Specifying and utilizing paths through a network |
US10805192B2 (en) | 2018-03-27 | 2020-10-13 | Nicira, Inc. | Detecting failure of layer 2 service using broadcast messages |
US10944673B2 (en) | 2018-09-02 | 2021-03-09 | Vmware, Inc. | Redirection of data messages at logical network gateway |
US11595250B2 (en) | 2018-09-02 | 2023-02-28 | Vmware, Inc. | Service insertion at logical network gateway |
US11042397B2 (en) | 2019-02-22 | 2021-06-22 | Vmware, Inc. | Providing services with guest VM mobility |
US11140218B2 (en) | 2019-10-30 | 2021-10-05 | Vmware, Inc. | Distributed service chain across multiple clouds |
US11283717B2 (en) | 2019-10-30 | 2022-03-22 | Vmware, Inc. | Distributed fault tolerant service chain |
US11223494B2 (en) | 2020-01-13 | 2022-01-11 | Vmware, Inc. | Service insertion for multicast traffic at boundary |
US11153406B2 (en) | 2020-01-20 | 2021-10-19 | Vmware, Inc. | Method of network performance visualization of service function chains |
US11659061B2 (en) | 2020-01-20 | 2023-05-23 | Vmware, Inc. | Method of adjusting service function chains to improve network performance |
US11212356B2 (en) | 2020-04-06 | 2021-12-28 | Vmware, Inc. | Providing services at the edge of a network using selected virtual tunnel interfaces |
US11734043B2 (en) | 2020-12-15 | 2023-08-22 | Vmware, Inc. | Providing stateful services in a scalable manner for machines executing on host computers |
US11611625B2 (en) | 2020-12-15 | 2023-03-21 | Vmware, Inc. | Providing stateful services in a scalable manner for machines executing on host computers |
Family Cites Families (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7239639B2 (en) * | 2001-12-27 | 2007-07-03 | 3Com Corporation | System and method for dynamically constructing packet classification rules |
US8634418B2 (en) * | 2011-07-01 | 2014-01-21 | Juniper Networks, Inc. | Providing extended administrative groups in computer networks |
CN103891209B (zh) * | 2011-10-25 | 2017-05-03 | Nicira股份有限公司 | 网络控制系统中的控制器 |
US10135732B2 (en) * | 2012-12-31 | 2018-11-20 | Juniper Networks, Inc. | Remotely updating routing tables |
EP3117561B1 (fr) * | 2014-03-14 | 2018-10-17 | Nicira Inc. | Annonce de route par des passerelles gérées |
EP3192213A1 (fr) * | 2014-09-12 | 2017-07-19 | Voellmy, Andreas R. | Gestion de configurations de transmission dans des réseaux à l'aide de politiques algorithmiques |
US11296930B2 (en) * | 2014-09-30 | 2022-04-05 | Nicira, Inc. | Tunnel-enabled elastic service model |
EP3026851B1 (fr) * | 2014-11-27 | 2017-08-23 | Alcatel Lucent | Appareil, passerelle de réseau, procédé et programme informatique pour fournir des informations relatives à un itinéraire spécifique à un service dans un réseau |
US10341188B2 (en) * | 2015-01-27 | 2019-07-02 | Huawei Technologies Co., Ltd. | Network virtualization for network infrastructure |
US10129180B2 (en) * | 2015-01-30 | 2018-11-13 | Nicira, Inc. | Transit logical switch within logical router |
US10038628B2 (en) * | 2015-04-04 | 2018-07-31 | Nicira, Inc. | Route server mode for dynamic routing between logical and physical networks |
US10095535B2 (en) * | 2015-10-31 | 2018-10-09 | Nicira, Inc. | Static route types for logical routers |
US10305858B2 (en) * | 2015-12-18 | 2019-05-28 | Nicira, Inc. | Datapath processing of service rules with qualifiers defined in terms of dynamic groups |
US10333849B2 (en) * | 2016-04-28 | 2019-06-25 | Nicira, Inc. | Automatic configuration of logical routers on edge nodes |
US10841273B2 (en) * | 2016-04-29 | 2020-11-17 | Nicira, Inc. | Implementing logical DHCP servers in logical networks |
US11277338B2 (en) * | 2016-09-26 | 2022-03-15 | Juniper Networks, Inc. | Distributing service function chain data and service function instance data in a network |
2019
- 2019-08-21 CN CN201980057472.1A patent/CN112673596B/zh active Active
- 2019-08-21 EP EP19762642.7A patent/EP3815312A1/fr active Pending
- 2019-08-21 WO PCT/US2019/047586 patent/WO2020046686A1/fr unknown
- 2019-08-21 CN CN202310339981.1A patent/CN116319541A/zh active Pending
Also Published As
Publication number | Publication date |
---|---|
WO2020046686A1 (fr) | 2020-03-05 |
CN112673596B (zh) | 2023-05-02 |
WO2020046686A9 (fr) | 2020-05-22 |
CN116319541A (zh) | 2023-06-23 |
CN112673596A (zh) | 2021-04-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20230179474A1 (en) | Service insertion at logical network gateway | |
US10944673B2 (en) | Redirection of data messages at logical network gateway | |
CN112673596B (zh) | 逻辑网关处的服务插入方法、设备和系统 | |
US10601705B2 (en) | Failover of centralized routers in public cloud logical networks | |
US11805036B2 (en) | Detecting failure of layer 2 service using broadcast messages | |
US11368431B2 (en) | Implementing logical network security on a hardware switch | |
US11115465B2 (en) | Accessing endpoints in logical networks and public cloud service providers native networks using a single network interface and a single routing table | |
US10862753B2 (en) | High availability for stateful services in public cloud logical networks | |
US10491516B2 (en) | Packet communication between logical networks and public cloud service providers native networks using a single network interface and a single routing table | |
US12074731B2 (en) | Transitive routing in public cloud | |
US11902050B2 (en) | Method for providing distributed gateway service at host computer | |
EP3669504B1 (fr) | Disponibilité élevée pour des services dynamiques dans des réseaux logiques en nuage public | |
US20190306086A1 (en) | Incorporating layer 2 service between two interfaces of gateway device | |
US11451413B2 (en) | Method for advertising availability of distributed gateway service and machines at host computer | |
US20230179475A1 (en) | Common connection tracker across multiple logical switches | |
WO2019040720A1 (fr) | Accès à des points d'extrémité dans des réseaux logiques et des réseaux natifs de prestataires de services en nuage publics à l'aide d'une seule interface réseau et d'une seule table de routage | |
US20220038379A1 (en) | Route advertisement to support distributed gateway services architecture | |
US10491483B2 (en) | Using application programming interface calls for communication between network functions |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: UNKNOWN |
| | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
| | PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase | Free format text: ORIGINAL CODE: 0009012 |
| | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
| | 17P | Request for examination filed | Effective date: 20210127 |
| | AK | Designated contracting states | Kind code of ref document: A1; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
| | DAV | Request for validation of the european patent (deleted) | |
| | DAX | Request for extension of the european patent (deleted) | |
| | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: EXAMINATION IS IN PROGRESS |
| | 17Q | First examination report despatched | Effective date: 20230217 |
| | RAP3 | Party data changed (applicant data changed or rights of an application transferred) | Owner name: VMWARE LLC |