CN112673596B - Service insertion method, device and system at logic gateway - Google Patents


Info

Publication number
CN112673596B
Authority
CN
China
Prior art keywords
logical
service
network
data
interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201980057472.1A
Other languages
Chinese (zh)
Other versions
CN112673596A (en)
Inventor
A. Naveen
K. Mundaragi
R. Mishra
F. Kavathia
R. Koganty
P. Rolando
Yong Feng
J. Jain
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
VMware LLC
Original Assignee
VMware LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US16/120,281 (US10944673B2)
Priority claimed from US16/120,283 (US11595250B2)
Application filed by VMware LLC filed Critical VMware LLC
Priority to CN202310339981.1A (published as CN116319541A)
Publication of CN112673596A
Application granted
Publication of CN112673596B
Status: Active
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/08: Configuration management of networks or network elements
    • H04L 41/0803: Configuration setting
    • H04L 41/0806: Configuration setting for initial configuration or provisioning, e.g. plug-and-play
    • H04L 45/00: Routing or path finding of packets in data switching networks
    • H04L 45/38: Flow based routing
    • H04L 45/42: Centralised routing
    • H04L 45/64: Routing or path finding of packets in data switching networks using an overlay routing layer

Abstract

Many companies and other entities use software-defined data centers (e.g., local data centers and/or public cloud data centers) to host their networks. The provider of a software-defined data center typically provides various network security options, but some entities wish to incorporate existing third party security services (or other services) into their hosted network. Thus, techniques for more easily incorporating these services into virtual networks would be useful.

Description

Service insertion method, device and system at logic gateway
Background
Many companies and other entities use software-defined data centers (e.g., local data centers and/or public cloud data centers) to host their networks. The provider of a software-defined data center typically provides various network security options, but some entities want to incorporate existing third party security services (or other services) into their hosted network. Thus, techniques for more easily incorporating these services into virtual networks would be useful.
Disclosure of Invention
Some embodiments provide a network management and control system that enables integration of third party service machines for handling data traffic entering and/or exiting a logical network. These third party services may include various types of non-packet-forwarding services, such as firewalls, Virtual Private Network (VPN) services, Network Address Translation (NAT), load balancing, and the like. In some embodiments, the network management and control system manages the integration of these service machines, but does not manage the lifecycle of the machines themselves.
In some embodiments, the logical network includes at least one logical switch to which logical network endpoints (e.g., data compute nodes such as virtual machines, containers, etc.) are connected, and a logical router for processing data traffic entering and/or exiting the logical network. Further, the logical network may include a plurality of logical switches logically connected to each other through the aforementioned logical router or another logical router. In some embodiments, the logical network includes multiple tiers of logical routers. The logical routers in the first tier connect groups of logical switches (e.g., the logical switches of a particular tenant). These first-tier logical routers connect to logical routers in the second tier for data traffic sent to and from the logical network (e.g., data traffic from external clients connecting to web servers hosted in the logical network, etc.). The second-tier logical routers are implemented at least in part in a centralized manner for handling connections to external networks, and in some embodiments the third party service machines are attached to the centralized components of these logical routers. The logical network of other embodiments includes only a single tier of logical routers, to which the third party services are attached.
In some embodiments, a network management and control system (hereafter referred to as a network control system) receives both (i) configuration data defining a logical network (i.e., the logical switches, the attachment of data compute nodes to the logical switches, the logical routers, etc.) and (ii) configuration data attaching a third party service to a logical router (i.e., the logical router handling connections to an external network). Based on the configuration data, the network control system configures various managed forwarding elements to implement the logical forwarding elements (logical switches, distributed aspects of logical routers, etc.) as well as other packet processing operations of the logical network (e.g., distributed firewall rules). Further, some embodiments configure specific managed forwarding elements operating on gateway machines to implement the centralized logical routing components that handle the connection of the logical network to one or more external networks. The managed forwarding element on such a gateway machine is further configured to redirect (e.g., using policy-based routing) at least a subset of the ingress and/or egress data traffic between the logical network and the external network to the attached third party service via a separate interface of the gateway.
In some embodiments, receiving the configuration data for attaching a third party service includes a number of separate configuration inputs (e.g., from an administrator). After the logical router has been configured, some embodiments receive configuration data (i) defining a service attachment interface of the logical router, (ii) defining a logical switch to which the service attachment interface connects, (iii) defining a service interface (e.g., the interface of the service machine to which data traffic is redirected), and (iv) attaching both the logical router's service attachment interface and the service interface to that logical switch. Furthermore, in some embodiments, an administrator defines a rule or set of rules specifying which ingress and/or egress traffic is redirected to the service interface.
Some embodiments use a variety of different topologies to enable multiple services to connect to a logical router. For example, multiple services may be connected to the same logical switch, in which case the services all have interfaces in the same subnet, and may send data traffic directly between each other (if configured to do so). In this arrangement, the logical router may have a single interface to the logical switch (for traffic to all services), or a separate interface to the logical switch for each attached service. In other cases, separate logical switches may be defined for each service (with separate logical router interfaces connected to each logical switch). Further, multiple interfaces may be defined for each service machine for handling different sets of traffic (e.g., traffic to/from different external networks or different logical network subnets).
Further, in some embodiments, the service machines may be connected to the logical router via different types of connections. In particular, some embodiments allow a service machine to be connected in either of the following ways: (i) an L2 bump-in-the-wire mode or (ii) an L3 single arm mode. In the L2 mode, two interfaces of the logical router are connected to two separate interfaces of the service machine via two separate logical switches, and data traffic is sent to the service machine via one of the interfaces and received back from the service machine via the other interface. Data traffic entering the logical network may be sent to the service machine via one interface, while traffic exiting the logical network is sent via the other interface. In the L3 mode, a single interface is used on the logical router for each connection with the service machine.
Once configured, the gateway redirects some or all of the data traffic between the logical network and the external network to the service machine. As described above, some embodiments use a set of policy-based routing (PBR) rules to determine whether to redirect each data message. In some embodiments, the gateway applies these PBR rules to outgoing data messages after performing logical routing of the data messages and applies the PBR rules to incoming data messages before performing logical routing and/or switching of the incoming data messages.
That is, for outgoing data messages, the gateway performs logical switching (if needed), then performs logical routing for the routing component connected to the external network to determine that the data message is in fact directed outside the logical network, and then applies the PBR rules to determine whether to redirect the data message to a service. If the data message is redirected, then upon its return from the service (assuming the data message is not dropped/blocked by the service), the gateway forwards the data message to the external network.
For incoming data messages, the gateway applies PBR rules to determine whether to redirect the data message to a service before processing the data message by any logical forwarding element. If the data message is redirected, upon its return from the service (if the data message is not dropped/blocked by the service), the gateway then performs logical routing and switching, etc., on the data message to determine how to forward the data message to the logical network.
In some embodiments, the PBR rules use a two-stage lookup to determine whether to redirect a data message (and to which interface to redirect it). In particular, rather than the PBR rule (i.e., a routing rule based on header fields other than the destination network address) directly providing the redirection details, each rule specifies a unique identifier. Each identifier corresponds to a service machine, and the gateway stores a dynamically updated data structure for each identifier. In some embodiments, these data structures indicate the type of connection to the service (e.g., L2 bump-in-the-wire or L3 single arm), the network address of the service interface to which the data message is redirected (for the L2 mode, some embodiments use a pseudo network address that resolves to the data link layer address of the gateway's return service attachment interface), dynamically updated status data, and a failover policy. The status data is dynamically updated based on the health/reachability of the service, which may be tested using a heartbeat protocol such as Bidirectional Forwarding Detection (BFD). In some embodiments, the failover policy specifies how to process the data message in the event that the service is not reachable. These failover policy options may include, for example, dropping the data message, forwarding the data message to its destination without redirection to the service, redirecting to a backup service machine, and so on.
The foregoing summary is intended to serve as a brief introduction to some embodiments of the invention. It is not meant to be an introduction or overview of all subject matter disclosed in this document. The embodiments described in this summary, as well as other embodiments, are further described in the detailed description that follows and in the drawings referred to in the detailed description. Accordingly, a full review of the summary, the detailed description, and the drawings is needed in order to understand all of the embodiments described herein. Furthermore, the claimed subject matter is not to be limited by the illustrative details in the summary, the detailed description, and the drawings, but rather is defined by the appended claims, because the claimed subject matter may be embodied in other specific forms without departing from the spirit of the subject matter.
Drawings
The novel features of the invention are set forth in the appended claims. However, for purposes of explanation, several embodiments of the invention are set forth in the following figures.
Figure 1 conceptually illustrates an example logical network to which a third party service of some embodiments may be connected.
Figure 2 conceptually illustrates an example of connecting a third party service machine to a centralized router.
Figure 3 conceptually illustrates a process of some embodiments for configuring a gateway machine of a logical network to redirect incoming and/or outgoing data traffic to a third party service machine.
Figure 4 conceptually illustrates a centralized routing component having two service attachment interfaces that connect to two separate service endpoint interfaces of a third party service machine via two separate logical switches.
Figure 5 conceptually illustrates a centralized routing component having one service attachment interface that connects to two separate interfaces of a third party service machine via one logical switch.
Figure 6 conceptually illustrates a centralized routing component having one service attachment interface that connects to interfaces of two different third party service machines via one logical switch.
Figure 7 conceptually illustrates a centralized routing component having two service attachment interfaces, each connected to a different one of two service machines via a separate logical switch.
Fig. 8 illustrates the path of an incoming data message through multiple logical processing stages implemented by a third party service machine and gateway managed forwarding element connected in an L3 single arm mode.
Fig. 9 shows the path of outgoing data messages through multiple logical processing stages implemented by the gateway MFE and third party service machine of fig. 8.
Fig. 10 illustrates the path of an incoming data message through a plurality of logical processing stages implemented by a third party service machine and gateway MFE connected in L2 bump-in-the-wire mode.
Fig. 11 shows the path of outgoing data messages through multiple logical processing stages implemented by the gateway MFE and third party service machine of fig. 10.
Figure 12 conceptually illustrates a process of some embodiments for applying policy-based routing redirection rules to data messages.
Fig. 13 shows a table of policy-based routing rules.
Figure 14 conceptually illustrates a data structure that is dynamically updated based on changes in the connection state of the service machine to which the data structure redirects data messages.
Figure 15 conceptually illustrates an electronic system for implementing some embodiments of the invention.
Detailed Description
In the following detailed description of the present invention, numerous details, examples, and embodiments of the present invention are set forth and described. It will be apparent, however, to one skilled in the art that the invention is not limited to the illustrated embodiments, and that the invention may be practiced without some of the specific details and examples that are discussed.
Some embodiments provide a network management and control system that enables integration of third party service machines for handling data traffic entering and/or exiting a logical network. These third party services may include various types of non-packet-forwarding services, such as firewalls, Virtual Private Network (VPN) services, Network Address Translation (NAT), load balancing, and the like. In some embodiments, the network management and control system manages the integration of these service machines, but does not manage the lifecycle of the machines themselves (hence the term third party services for these service machines).
In some embodiments, the logical network includes at least one logical switch to which logical network endpoints (e.g., data computing nodes such as virtual machines, containers, etc.) are connected, and a logical router for processing data traffic entering and/or exiting the logical network. Further, the logical network may include a plurality of logical switches logically connected to each other through the aforementioned logical router or another logical router.
Figure 1 conceptually illustrates an example logical network 100 to which a third party service of some embodiments may be connected. As shown, the logical network 100 includes a level 0 logical router 105 (also referred to as a provider logical router), a level 1 logical router 110 (also referred to as a tenant logical router), and two logical switches 115 and 120. Data compute nodes (DCNs) 125-140 (e.g., virtual machines, containers, etc.) are attached to each of the logical switches 115 and 120. These DCNs 125-140 exchange data messages with each other and with one or more external networks 145 through a physical network that implements the logical network (e.g., within a data center).
Logical network 100 represents an abstraction of a network as configured by a user of the network management and control system of some embodiments. That is, in some embodiments, a network administrator configures logical network 100 as a conceptual collection of logical switches, routers, etc., to which policies are applied. The network management and control system generates configuration data for physical managed forwarding elements (e.g., software virtual switches operating in the virtualization software of host machines, virtual machines and/or bare metal machines operating as logical network gateways, etc.) to implement these logical forwarding elements. For example, when a DCN 125-140 hosted on a physical host sends a data message, in some embodiments a managed forwarding element executing in the host's virtualization software processes the data message to implement the logical network. The managed forwarding element applies the logical switch configuration for the logical switch to which the DCN is attached, then the level 1 logical router configuration, etc., in order to determine the destination of the data message.
In some embodiments, as in this example, the logical network includes multiple tiers of logical routers. The logical routers in the first tier (e.g., the level 1 logical router 110) connect groups of logical switches (e.g., the logical switches of a particular tenant). These first-tier logical routers connect to logical routers in the second tier (e.g., the level 0 logical router 105) for data traffic sent to and from the logical network (e.g., data traffic from external clients connecting to web servers hosted in the logical network, etc.).
The network management and control system of some embodiments (hereinafter referred to as a network control system) defines a plurality of routing components for at least some of the logical routers. In particular, the level 0 logical router 105 in this example has a distributed routing component 150 ("distributed router") and a centralized routing component 155 that are connected by an internal logical switch 160, referred to as a transit logical switch. In some cases, a plurality of centralized routers are defined for the level 0 logical router, each connected to the transit logical switch 160. For example, some embodiments define two centralized routers, one active and one standby.
In some embodiments, distributed router 150 and transit logical switch 160 are implemented in a distributed manner (e.g., using logical switches 115 and 120 and first tier logical router 110), meaning that the first hop managed forwarding elements of the data message apply the policies of those logical forwarding elements to the data message. However, the centralized router 155 is implemented in a centralized manner (i.e., a single host implements each such centralized router). These centralized routers handle the connection of the logical network to external networks (e.g., to other logical networks implemented in the same or other data centers, to external web clients, etc.). The centralized router may perform various stateful services (e.g., network address translation, load balancing, etc.) as well as exchange routes (using, for example, BGP or OSPF) with one or more external routers. Different embodiments may implement a centralized router using bare metal machines, virtual switches executing in the host's virtualization software, or other contexts.
As mentioned, some embodiments allow an administrator to attach third party services to logical routers using a network control system. In some such embodiments, these third party services are attached to a centralized router (e.g., the centralized router 155 of the level 0 router) that handles data traffic between the logical network endpoints and the external network. Although the discussion that follows primarily refers to the connection of a third party service to a level 0 logical router, in some embodiments, a third party service may also be connected to a level 1 logical router.
Figure 2 conceptually illustrates an example of connecting a third party service machine 200 to a centralized router 205. Specifically, in some embodiments, a network administrator defines a service attachment interface 210 on a logical router, a service endpoint 215 of a third party service machine, a particular logical switch 220 for service attachment, and attaches both the service attachment interface 210 and the service endpoint 215 to the logical switch 220. In some embodiments, the administrator provides this information through an Application Programming Interface (API) of the management plane of the network control system (e.g., using a network management application user interface that translates user interactions into API calls to the management plane).
In some embodiments, the management plane receives (i) configuration data defining the logical network (i.e., the logical switches, the attachment of data compute nodes to logical switches, the logical routers, etc.), and (ii) configuration data attaching one or more third party services to the logical router handling the connection of the logical network to external networks. Based on the configuration data, the network control system configures the various managed forwarding elements to implement the logical forwarding elements (logical switches, distributed aspects of logical routers, etc.) as well as other packet processing operations for the logical network (e.g., distributed firewall rules). In some embodiments, the management plane generates configuration data based on this input and provides the configuration data to a central control plane (e.g., a set of centralized controllers). The central control plane identifies, for each atomic piece of configuration data, the managed forwarding elements that require that data, and distributes the configuration data to the local controllers for each identified managed forwarding element. These local controllers are then responsible for configuring the managed forwarding elements (including the gateway machines implementing the centralized routers) to implement the logical forwarding elements of the logical network, including redirecting the appropriate data messages to the third party services (e.g., according to policy-based routing rules provided by an administrator).
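This division of labor (management plane, central control plane, per-gateway local controllers) can be pictured with the following minimal sketch; the data structures and function names are illustrative assumptions, not the actual interfaces of the network control system.

```python
# Illustrative sketch (assumed names): the management plane produces configuration
# "atoms", the central control plane computes which gateways need each atom, and a
# per-gateway local controller translates and programs it into the datapath.

def central_control_plane_distribute(config_atoms, gateways):
    """Map each configuration atom to the gateways (local controllers) that need it."""
    for atom in config_atoms:
        span = [gw for gw in gateways if atom["logical_router"] in gw["implemented_routers"]]
        for gw in span:
            local_controller_apply(gw, atom)

def local_controller_apply(gateway, atom):
    """Convert the atom into a gateway-readable format and record that it is programmed."""
    gateway.setdefault("programmed", []).append(atom["id"])

gateways = [
    {"name": "gw-1", "implemented_routers": {"tier0-sr"}, "programmed": []},
    {"name": "gw-2", "implemented_routers": {"tier1-sr"}, "programmed": []},
]
config_atoms = [
    {"id": "service-attachment-interface", "logical_router": "tier0-sr"},
    {"id": "redirect-rule-1", "logical_router": "tier0-sr"},
]

central_control_plane_distribute(config_atoms, gateways)
print(gateways[0]["programmed"])   # only gw-1 implements the tier-0 centralized router
```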
In some embodiments, receiving configuration data for attaching a third party service includes several separate configuration inputs (e.g., from an administrator). Figure 3 conceptually illustrates a process 300 of some embodiments for configuring a gateway machine of a logical network to redirect incoming and/or outgoing data traffic to a third party service machine. In some embodiments, process 300 is performed by a management plane of a network control system that receives input through an API call.
In the description of this process, it is assumed that a logical network has been configured and includes a logical router having at least one centralized component configured to process data traffic entering and exiting the logical network. Some embodiments configure specific managed forwarding elements operating on gateway machines to implement these centralized logical routing components that handle the connection of a logical network to one or more external networks.
As shown, process 300 begins by receiving (at 305) input defining a service attachment interface for the logical router. In some embodiments, the service attachment interface is a dedicated type of interface of the logical router. In various embodiments, the administrator defines the service attachment interface either on a particular centralized component or on the logical router generally. In the latter case, the management plane either applies the interface to a particular one of the components (e.g., if the administrator specifies that the service attachment interface will only handle traffic sent to or from a particular uplink interface of the logical router that is assigned to a particular centralized component) or creates a separate interface for each centralized component of the logical router. For example, in some embodiments, active and standby centralized routing components are defined, and an interface is created on each of these components.
Next, process 300 receives (at 310) input defining a logical switch for connecting the logical router to the third party service. In addition, the process receives (at 315) input attaching the service attachment interface to the logical switch. In some embodiments, the creation of this logical switch is similar to the creation of the logical switches of the logical network to which data compute nodes (e.g., VMs, etc.) are attached. In other embodiments, the logical switch is defined by the administrator as a specific service attachment logical switch. The logical switch has a privately assigned subnet that (i) includes the network addresses of the service attachment interfaces attached to the logical switch, and (ii) need only include enough network addresses for any interfaces of the third party services and any service attachment interfaces connected to the logical switch. For example, as shown below, a logical switch connecting a single logical router interface to a single third party service interface may use a "/31" subnet in classless inter-domain routing (CIDR) notation. Even if the logical router advertises routes for the logical network subnets (e.g., using BGP or OSPF) to external physical routers, the subnet for the service attachment logical switch is not advertised (or entered into the routing tables of the various logical router tiers) in some embodiments.
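The "/31" sizing point can be illustrated with a short sketch; the subnet value below is a hypothetical example, not one taken from the patent.

```python
import ipaddress

# Hypothetical /31 service-attachment subnet: exactly two addresses, one for the
# logical router's service attachment interface and one for the service endpoint.
switch_subnet = ipaddress.ip_network("169.254.10.0/31")
attachment_ip, endpoint_ip = list(switch_subnet)   # a /31 contains exactly two addresses

print(attachment_ip)   # 169.254.10.0 -> e.g., the service attachment interface
print(endpoint_ip)     # 169.254.10.1 -> e.g., the third party service interface
```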
In some embodiments, if the logical router includes multiple centralized components (e.g., active and standby components) and the service attachment interface corresponds to an interface on each of these components, then attaching the service attachment interface actually attaches each of these component interfaces to the logical switch. In this case, each centralized component interface has a separate network address in the subnet of the logical switch.
Next, process 300 receives (at 320) an input to define a service endpoint interface and receives (at 325) an input to attach the service endpoint interface to a logical switch (to which the service attachment interface of the logical router is attached). In some embodiments, the service endpoint interface represents an interface on a third party service machine. In some embodiments, when an administrator defines endpoint interfaces to which a centralized routing component is to connect, these interfaces may be service endpoint interfaces (also referred to as logical endpoint interfaces, which correspond to service machines and connect to service attachment interfaces through logical switches) or external interfaces (also referred to as virtual endpoint interfaces, which correspond to network addresses reachable from the centralized component). An external router interface is an example of this latter interface.
In addition, some embodiments require an administrator to define third party service machines (either through a network control system or through a separate data center computing manager). For example, in some embodiments, a network administrator defines both a service type and a service instance (e.g., an instance of the service type). As described above, the service endpoint interface should also have a network address within the subnet of the logical switch to which the interface is attached.
It should be appreciated that operations 305-325 need not occur in the particular order shown in FIG. 3. For example, a network administrator may initially create two interfaces (a service attachment interface on a logical router and a service endpoint interface representing a third party service), then subsequently create a logical switch and attach the interfaces to the logical switch.
In addition, process 300 receives (at 330) one or more rules for redirecting data messages to the service endpoint interface. In some embodiments, these are policy-based routing rules that (i) specify which ingress and/or egress traffic is to be redirected to the service interface, and (ii) are applied by the gateway machine separately from its usual routing operations. In some embodiments, the administrator defines the redirection rules based on one or more data message header fields, such as the source and/or destination network addresses, source and/or destination transport layer ports, transport protocol, interface on which the data message is received, and the like. For each service interface, the administrator may create one or more redirection rules. For example, the redirected data messages may include all incoming and/or outgoing data messages for a particular uplink, only data messages sent from or to a particular logical switch subnet, and so on.
Finally, having received the configuration data, process 300 configures (at 335) the gateway machine to implement the centralized logical router and the redirection to the service endpoint interface. Process 300 then ends. If multiple centralized routing components have interfaces attached to the logical switch of the service endpoint, the gateway machine for each of these components is configured. In some embodiments, the management plane generates configuration data for the service attachment interfaces and redirection rules and provides this information to the central control plane. The central control plane identifies each gateway machine that needs this information and provides the appropriate configuration data to the local controller for that gateway machine. The local controller of some embodiments converts this configuration data into a format readable by the gateway machine (if it is not already in such a format) and directly configures the gateway machine to implement the policy-based routing rules.
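The sequence of inputs in operations 305-330 might be pictured as a set of configuration payloads such as the following; the call shapes, identifiers, fields, and addresses are illustrative assumptions rather than the management plane's actual API.

```python
# Hypothetical configuration inputs for process 300 (operations 305-330).
service_attachment_interface = {
    "id": "t0-svc-attach-if",         # operation 305: interface on the tier-0 logical router
    "type": "SERVICE_ATTACHMENT",     # dedicated interface type
    "router": "tier0-router",
}

service_attachment_switch = {
    "id": "svc-ls-1",                 # operation 310: logical switch for the attachment
    "subnet": "169.254.10.0/31",
}

service_endpoint_interface = {
    "id": "fw-endpoint-1",            # operation 320: interface of the third party service machine
    "ip": "169.254.10.1",
}

attachments = [                       # operations 315 and 325: attach both interfaces to the switch
    ("t0-svc-attach-if", "svc-ls-1", "169.254.10.0"),
    ("fw-endpoint-1", "svc-ls-1", "169.254.10.1"),
]

redirect_rule = {                     # operation 330: policy-based routing redirection rule
    "match": {"source_ip": "any", "destination_ip": "60.60.60.0/24"},
    "redirect_to": "fw-endpoint-1",
}
```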
Some embodiments use a variety of different topologies to enable multiple services to connect to a logical router. For example, multiple services may be connected to the same logical switch, in which case the services all have interfaces in the same subnet, and may send data traffic directly between each other (if configured to do so). In this arrangement, the logical router may have a single interface to the logical switch (for traffic to all services), or a separate interface to the logical switch for each attached service. In other cases, separate logical switches may be defined for each service (with separate logical router interfaces connected to each logical switch). Further, multiple interfaces may be defined for each service machine for handling different sets of traffic (e.g., traffic to/from different external networks or different logical network subnets).
Figures 4-7 conceptually illustrate several different such topologies for connecting the centralized routing component of a logical router to one or more service machines. Each of these figures shows a centralized router connected to one or more logical switches to which one or more service machines are also connected. It should be understood that these figures represent logical views of the connections, and that in some embodiments a gateway machine implementing the centralized router will also implement the logical switch(es).
Figure 4 conceptually illustrates a centralized routing component 400 having two service attachment interfaces that connect to two separate service endpoint interfaces of a third party service machine 405 via two separate logical switches 410 and 415. This topology essentially uses a separate service attachment interface and a separate logical switch for each connection to the third party service. In this example, each of the logical switches 410 and 415 is assigned a "/31" subnet that includes two network addresses. Since each logical switch is created specifically for connecting one service attachment interface of the centralized routing component 400 to the service machine 405, only two addresses are required for each switch. In some embodiments, the redirection rules for the router redirect data messages sent to and from each uplink to different interfaces of the third party service machine (thereby using different service attachment interfaces).
Figure 5 conceptually illustrates a centralized routing component 500 having one service attachment interface that connects to two separate interfaces of a third party service machine 505 via one logical switch 510. In some embodiments, for each third party service machine, the administrator creates one logical switch with one service attachment interface on the centralized routing component, but defines multiple service endpoint interfaces for that third party service machine. In this case, the logical switch subnet accommodates a greater number of network addresses (in this example, a "/24" subnet is used). In some embodiments, the redirection rules are arranged to redirect data messages sent to and from each uplink to a different interface of the third party service machine via the same service attachment interface and logical switch. In some embodiments, using a setup in which the service machine has multiple service endpoint interfaces attached to the same logical switch requires the third party service machine to use a separate routing table (e.g., a virtual routing and forwarding instance) for each interface.
Figure 6 conceptually illustrates a centralized routing component 600 having one service attachment interface that connects to the interfaces of two different third party service machines 605 and 610 via one logical switch 615. Service machines 605 and 610 in this scenario may provide two separate services (e.g., a firewall and a cloud extension service), or may be the primary machine and backup machine for a single high-availability service. In some embodiments, because the interfaces of service machines 605 and 610 are on the same logical switch, data messages may also be sent from one service to the other. In this example, the centralized routing component 600 has a single uplink; some embodiments using this configuration with two uplinks would include two service attachment interfaces and two logical switches, each connected to a (different) interface of the two service machines, in order to process data messages received for or destined for the two different uplinks.
Figure 7 conceptually illustrates a centralized routing component 700 having two service attachment interfaces, each connected to a different one of two service machines 705 and 710 via separate logical switches 715 and 720. As with the previous examples, the two service machines may provide two separate services, or may act as the primary and standby machines for a single high-availability service. In this example, the centralized routing component has a single uplink; some embodiments using this configuration would include, for each additional uplink, two additional service attachment interfaces connected to separate interfaces on each service machine via separate logical switches. In these examples, using a separate interface on the service machine for each different uplink allows the service machine to apply a specific processing configuration to data messages sent to or received from each uplink.
In addition to these various topologies, in some embodiments third party service machines may also be connected to the centralized routing component via different types of connections. In particular, some embodiments allow a service machine to be connected in either (i) an L2 bump-in-the-wire mode or (ii) an L3 single arm mode. In the L2 mode shown in fig. 10 and 11, two interfaces of the logical router are connected to two separate interfaces of the service machine via two separate logical switches, and data traffic is sent to the service machine via one of the interfaces and received back from the service machine via the other interface. Traffic entering the logical network may be sent to the service machine via one interface, while traffic exiting the logical network is sent via the other interface.
In the L3 mode shown in fig. 8 and 9, a single interface is used on the logical router for each connection with the service machine. Once configured, the gateway redirects some or all of the data traffic between the logical network and the external network to the service machine. As described above, some embodiments use a set of policy-based routing (PBR) rules to determine whether to redirect each data message. In some embodiments, the gateway applies these PBR rules to outgoing data messages after performing logical routing of the data messages and applies the PBR rules to incoming data messages before performing logical routing and/or switching of the incoming data messages.
Fig. 8 illustrates the path (represented by dashed lines) of an incoming data message through multiple logical processing stages implemented by gateway managed forwarding element (MFE) 800 and third party service machine 805. As described above, in this example the third party service machine is connected in L3 single arm mode. In this mode, the data message is sent to the network address of the third party service machine, which sends the data message back to the network address of the logical router's service attachment interface.
Gateway MFE 800 implements several stages of logical network processing, including policy-based routing (PBR) redirection rules 810, centralized routing component processing 815, service attachment logical switch processing 820, and additional logical processing 825 (e.g., transit logical switch processing, distributed routing component processing, processing for the logical routers and/or logical switches of other tiers to which the network endpoints are connected, etc.). In some embodiments, gateway MFE 800 is a data path in a bare metal computer or virtual machine (e.g., a data path based on the Data Plane Development Kit (DPDK)), while in other embodiments the gateway MFE implements a portion of the logical processing in such a data path while implementing the centralized routing component in a virtual machine, namespace, or similar construct.
For the incoming data message in fig. 8, gateway MFE 800 applies the PBR rules 810 to determine whether to redirect the data message before processing the data message through any logical forwarding elements. In some embodiments, the gateway MFE also performs additional operations, such as IPSec and/or other locally applied services, before applying the PBR rules. The PBR rules, described in more detail below, identify whether a given data message is to be redirected (e.g., based on various data message header fields, such as source and/or destination IP addresses), how to redirect data messages that match a particular set of header field values, and so on. In this case, the PBR rules 810 specify redirecting the data message to an interface of the third party service machine 805.
Based on this determination, the centralized routing component processing 815 identifies that the redirect interface corresponds to the service attachment logical switch, and the gateway MFE 800 therefore performs this logical switch processing 820. Based on this logical switch processing, the gateway MFE sends the data message (e.g., with encapsulation) to the third party service machine 805. The service machine 805 performs its service processing (e.g., firewall, NAT, cloud extension, etc.) and returns the data message to the gateway MFE (unless the service drops/blocks the data message). Upon the return of the data message from the service, the gateway MFE then performs the centralized routing component processing 815 (e.g., routing based on the destination network address), followed by the additional logical processing operations 825. In some embodiments, the data message returned from the third party service machine is marked with a flag to indicate that the PBR rules do not need to be applied again. Based on these operations, gateway MFE 800 sends the data message to its destination in the logical network (e.g., by encapsulating the data message and sending it to a host in the data center).
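A simplified sketch of this stage ordering for incoming traffic in the L3 single arm case is shown below; the function names and message fields are assumptions, and the real gateway data path is considerably more involved.

```python
def handle_incoming(msg, pbr_rules, service_machine):
    # 1. PBR redirection is evaluated before any logical forwarding element.
    if not msg.get("from_service") and matches_pbr(pbr_rules, msg):
        msg = service_machine(msg)          # sent via the service attachment logical switch
        if msg is None:                     # the service dropped/blocked the message
            return None
        msg["from_service"] = True          # flag: do not apply the PBR rules again
    # 2. The centralized routing component routes on the destination address.
    msg = centralized_routing_component(msg)
    # 3. Remaining logical processing (transit switch, distributed router, tenant switch).
    return additional_logical_processing(msg)

def matches_pbr(rules, msg):
    return any(rule["dst_subnet"] == msg["dst_subnet"] for rule in rules)

def centralized_routing_component(msg):
    return msg                              # placeholder for destination-based routing

def additional_logical_processing(msg):
    return msg                              # placeholder for the remaining logical stages

# Example: a message matching a redirect rule passes through a pass-through "service" first.
result = handle_incoming({"dst_subnet": "60.60.60.0/24"},
                         [{"dst_subnet": "60.60.60.0/24"}],
                         lambda m: m)
```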
Fig. 9 shows the path (indicated by dashed lines) of an outgoing data message through multiple logical processing stages implemented by gateway MFE 800 and third party service machine 805. Upon receipt of the data message, gateway MFE 800 first applies any logical network processing 825 required prior to the centralized routing component, such as the transit logical switch (between the distributed routing component and the centralized routing component). In some cases, the level 1 logical router will also have a centralized routing component implemented on the gateway MFE, in which case the additional logical processing may include that centralized routing component, the distributed routing component of the level 0 logical router, the transit logical switches between them, and so on.
The centralized routing component processing 815 identifies the uplink interface as its output interface, which results in the application of the PBR rules 810. In this case, these rules also redirect the outgoing data message to the service machine 805, so the gateway MFE 800 again applies the centralized routing component processing 815, then applies the service attachment logical switch processing 820, and sends the data message to the third party service machine 805. Assuming the data message is not dropped by the service machine 805, the gateway MFE 800 receives the data message back via its interface corresponding to the service attachment logical switch. At this point, the centralized routing component processing 815 again identifies the uplink as the output interface for that component, and the gateway MFE sends the data message to the external physical network router associated with the uplink. As described above, upon receipt of the data message from the service machine 805, the data message is marked with a flag so that, in some embodiments, the gateway MFE does not apply the PBR rules 810 again.
If the service machine is logically connected to a level 1 logical router, in some embodiments the PBR rules (for outgoing data messages) are applied after the level 1 logical router processing and before the level 0 logical router processing. The gateway MFE then applies the level 0 distributed routing component, transit logical switch, and level 0 centralized routing component when the data message returns from the service machine. Incoming traffic is handled similarly, by applying the PBR rules after the level 0 distributed routing component and before applying the level 1 centralized routing component.
As described above, fig. 10 and 11 illustrate connecting a service machine to the centralized routing component using the L2 bump-in-the-wire mode. Fig. 10 shows the path (represented by dashed lines) of an incoming data message through multiple logical processing stages implemented by gateway MFE 1000 and third party service machine 1005. In the L2 bump-in-the-wire mode, two interfaces of the logical router are associated with each connection to the service machine 1005. The data message is sent to the service machine via one of the interfaces and returned via the other interface.
As in the examples of fig. 8 and 9, gateway MFE 1000 implements PBR redirection rules 1010, centralized routing component processing 1015, and additional logical processing 1030. Because there are two separate interfaces for connecting to the service machine 1005, the gateway MFE 1000 also implements two separate service attachment logical switches 1020 and 1025. In some embodiments, the interface associated with the first logical switch 1020 is an "untrusted" interface, while the interface associated with the second logical switch 1025 is a "trusted" interface. In this figure, each service attachment interface of the centralized routing component is associated with a separate interface of gateway MFE 1000. In other embodiments, however, the service attachment interfaces share a gateway MFE interface.
For the incoming data message in fig. 10, gateway MFE 1000 applies the PBR rules 1010 to determine whether to redirect the data message before processing the data message through any logical forwarding elements. In some embodiments, the gateway MFE also performs additional operations, such as IPSec and/or other locally applied services, before applying the PBR rules. The PBR rules, described in more detail below, identify whether a given data message is to be redirected (e.g., based on various data message header fields, such as source and/or destination IP addresses), how to redirect data messages that match a particular set of header field values, and so on. In this case, the PBR rules 1010 specify redirecting the data message to the interface of the third party service machine 1005 associated with the first logical switch 1020.
Based on this determination, the centralized routing component processing 1015 identifies that the redirect interface corresponds to the first service attachment logical switch 1020. Because the service machine 1005 is connected in L2 bump-in-the-wire mode, the centralized routing component uses the MAC address of that interface as the source address of the redirected data message and uses the MAC address of the other service attachment interface (connected to the second logical switch 1025) as the destination address. This causes the data message to be returned by the service machine 1005 to the second (trusted) interface.
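The MAC addressing just described can be sketched as follows; the interface MAC addresses are illustrative placeholders, and the helper names are assumptions.

```python
# Sketch of the MAC addressing for L2 bump-in-the-wire redirection (placeholder MACs).
UNTRUSTED_IF_MAC = "02:00:00:00:00:aa"   # router interface on the first (untrusted) logical switch
TRUSTED_IF_MAC = "02:00:00:00:00:bb"     # router interface on the second (trusted) logical switch

def redirect_incoming_biw(msg):
    # Incoming traffic: source MAC is the untrusted interface, destination MAC is the
    # trusted interface, so the L2-transparent service machine forwards the message
    # back to the router's trusted interface.
    msg["src_mac"] = UNTRUSTED_IF_MAC
    msg["dst_mac"] = TRUSTED_IF_MAC
    return msg

def redirect_outgoing_biw(msg):
    # Outgoing traffic traverses the same pair of interfaces in the reverse direction.
    msg["src_mac"] = TRUSTED_IF_MAC
    msg["dst_mac"] = UNTRUSTED_IF_MAC
    return msg
```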
Gateway MFE 1000 then performs the logical switch processing 1020 and, based on this processing, sends the data message to the third party service machine 1005. The service machine 1005 performs its service processing (e.g., firewall, NAT, cloud extension, etc.) and returns the data message to the gateway MFE (unless the service drops/blocks the data message). Upon the return of the data message from the service, the gateway MFE identifies the second logical switch 1025 for processing based on the destination address of the data message and/or the gateway MFE interface on which the message was received, then performs the centralized routing component processing 1015 (e.g., routing based on the destination network address), followed by the additional logical processing operations 1030. In some embodiments, the data message returned from the third party service machine is marked with a flag to indicate that the PBR rules do not need to be applied again. Based on these operations, gateway MFE 1000 sends the data message to its destination in the logical network (e.g., by encapsulating the data message and sending it to a host in the data center).
Fig. 11 shows the path (indicated by dashed lines) of an outgoing data message through multiple logical processing stages implemented by gateway MFE 1000 and third party service machine 1005 connected in L2 bump-in-the-wire mode. Upon receipt of the data message, gateway MFE 1000 first applies any logical network processing 1030 required prior to the centralized routing component, such as the transit logical switch (between the distributed routing component and the centralized routing component). In some cases, the level 1 logical router will also have a centralized routing component implemented on the gateway MFE, in which case the additional logical processing 1030 may include that centralized routing component, the distributed routing component of the level 0 logical router, the transit logical switches between them, and so on.
The centralized routing component processing 1015 then identifies the uplink interface as its output interface, which results in the application of the PBR rules 1010. In this case, these rules redirect the outgoing data message to the service machine 1005 via the trusted interface attached to the second logical switch 1025. Thus, gateway MFE 1000 again applies the centralized routing component processing 1015, then applies the processing of the second service attachment logical switch 1025, and sends the data message to the third party service machine 1005. In this direction, the data message has the trusted interface MAC address as its source address and the untrusted interface MAC address as its destination address, traversing the path from the centralized routing component 1015 to the service machine 1005 and back in the reverse of the direction used for incoming data messages.
Assuming the data message is not dropped by the service machine 1005, the gateway MFE 1000 receives the data message back via the interface corresponding to the first service attachment logical switch 1020. At this point, the centralized routing component processing 1015 again identifies the uplink as the output interface, and the gateway MFE sends the data message to the external physical network router associated with the uplink. As described above, in some embodiments the data message is marked with a flag upon receipt from the service machine 1005, so that the gateway MFE does not apply the PBR rules 1010 again.
In some embodiments, the PBR rules use a two-stage lookup to determine whether to redirect the data message (and to which interface to redirect the data message). Specifically, rather than the PBR rules directly providing redirection detail information, each rule specifies a unique identifier. Each identifier corresponds to a service machine and the gateway stores a dynamically updated data structure for each identifier that provides detailed information about how to redirect the data message.
Figure 12 conceptually illustrates a process 1200 of some embodiments for applying policy-based routing redirection rules to data messages. In some embodiments, process 1200 is performed by a gateway MFE such as those shown in fig. 8-11 when applying the PBR rules to incoming (from an external network) or outgoing (from the logical network) data messages. Process 1200 will be described in part by reference to fig. 13, which illustrates a set of PBR rules and the data structures for some of those rules.
As shown, process 1200 begins by receiving (at 1205) a data message for PBR processing. This may be a data message received from the external network via a logical router uplink, or a data message sent by a logical network endpoint for which the gateway MFE has identified the uplink as the egress port of the centralized routing component. In some embodiments, process 1200 is not applied to data messages that have the flag set indicating that the data message was received from a third party service machine.
Process 1200 then performs (at 1210) a lookup of a set of PBR rules. In some embodiments, the rules are organized as a set of flow entries, with matching conditions and actions for data messages that match each set of matching conditions. Depending on the context of the gateway data path, the PBR rules of some embodiments use a hash table (or set of hash tables) that uses one or more hashes of the set of data message header fields. Other embodiments use other techniques to identify matching PBR rules.
Fig. 13 shows a table of PBR rules 1300. In this case, the rules match on both the source and destination IP addresses, but the PBR rules of some embodiments may also match on other header fields (and on combinations of other header fields with the source and/or destination IP addresses). For example, the first two matching conditions are the opposite of each other: one processes incoming data messages (from 70.70.70.0/24 in the external network to the 60.60.60.0/24 subnet in the logical network) and the other processes the corresponding outgoing data messages. The third matching condition matches any data message sent from the source subnet 20.20.20.0/24 (i.e., irrespective of the destination address). As described further below, the actions specify a unique policy identifier rather than a specific redirect action.
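A sketch of the first lookup stage over rules like those of fig. 13 might look like the following; the mapping of individual rules to policy identifiers (beyond the identifiers ABCDE and ZYXWV named in the text) is an assumption for illustration.

```python
import ipaddress

PBR_RULES = [
    # (source prefix, destination prefix) -> unique policy identifier
    {"src": "70.70.70.0/24", "dst": "60.60.60.0/24", "policy_id": "ABCDE"},   # incoming
    {"src": "60.60.60.0/24", "dst": "70.70.70.0/24", "policy_id": "FGHIJ"},   # outgoing (assumed ID)
    {"src": "20.20.20.0/24", "dst": "0.0.0.0/0",     "policy_id": "ZYXWV"},   # any destination (assumed ID)
]

def pbr_lookup(src_ip, dst_ip):
    """First stage: match header fields against the rule table and return a policy identifier."""
    src, dst = ipaddress.ip_address(src_ip), ipaddress.ip_address(dst_ip)
    for rule in PBR_RULES:
        if src in ipaddress.ip_network(rule["src"]) and dst in ipaddress.ip_network(rule["dst"]):
            return rule["policy_id"]
    return None   # default: no redirection, forward normally

print(pbr_lookup("70.70.70.5", "60.60.60.9"))   # -> ABCDE
```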
Returning to fig. 12, process 1200 determines (at 1215) whether the data message matches any PBR rules based on the PBR lookup. In some embodiments, the PBR rule table includes a default (lowest priority) rule (or set of rules) for data messages that do not match any other rule. If the data message does not match any PBR rules (or only matches default rules), the process forwards (at 1220) the data message to its destination without any redirection. Thus, outgoing data messages are sent to the appropriate physical router (after performing any additional IPSec or other local service processing), while incoming data messages begin logical processing at the centralized logical router.
On the other hand, if the data message matches one of the PBR rules, the process looks up (at 1225) the data structure for the unique identifier specified by the matched PBR rule. As shown in fig. 13, the action of each PBR rule does not directly specify that matching data messages be redirected to a particular next-hop address. Rather, these actions specify a unique policy identifier, which in turn maps to a corresponding dynamically updated data structure. That is, the gateway MFE is configured to store a data structure for each unique identifier specified in the PBR actions. These data structures may be database table entries or any other type of modifiable data structure. In some embodiments, the gateway MFE is configured to update some or all of the fields of a data structure based on, for example, current network conditions.
In some embodiments, these data structures indicate the type of connection to the service (e.g., L2 bump-in-the-wire or L3 single arm), the network address of the service interface to which the data message is redirected, dynamically updated status data, and a failover policy. The status data is dynamically updated based on the health/reachability of the service, which may be tested using a heartbeat protocol such as Bidirectional Forwarding Detection (BFD). In some embodiments, the failover policy specifies how to process the data message in the event that the service is not reachable.
Fig. 13 shows the contents of two of these data structures. The data structure 1305 for the unique identifier ABCDE indicates that the service machine to which this policy redirects is connected in L2 bump-in-the-wire mode (such that data messages traveling in the opposite direction, which match the second PBR rule, will be redirected to the same service machine in the opposite direction). The data structure 1305 also indicates a pseudo IP address for the redirection. This pseudo IP is not actually the address of the service machine; instead, it resolves to the MAC address of the service attachment interface of the centralized routing component via which the data message is to be returned (e.g., the trusted interface of the centralized routing component for incoming data messages). In some embodiments, this address resolution is performed using statically configured ARP entries.
In addition, data structure 1305 specifies the current BFD state of the connection to the service machine (the connection is currently up) and a failover policy indicating how to handle the data message if the BFD state is down. It should be noted that while these examples use BFD, other mechanisms for monitoring the reachability of the service machine (e.g., other heartbeat protocols, other measurements of connection status, etc.) may also be used. In this case, the failover policy indicates that the data message should be discarded if the service machine is not available. Other failover policy options may include, for example, forwarding the data message to its destination without redirection to the service, redirection to a backup service machine, etc.
The data structure 1310 for the unique identifier ZYXWV indicates that the service machine to which the policy redirects is connected in L3 one-arm mode, so the redirection IP address is the address of the service machine interface (rather than a pseudo IP). The BFD state of this connection is also up, but in this case the failover policy provides for redirection to a backup service machine located at a different IP address on a different subnet (i.e., connected to a different logical switch).
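Continuing the RedirectPolicy sketch above, the two structures could be modeled as shown below. The service interface address 169.254.10.1 and the backup address 169.254.11.1 come from the description of Fig. 14; the pseudo IP value and the priority of one structure over the other are invented for illustration only.

```python
# Continuation of the RedirectPolicy sketch; mirrors the description of the
# data structures 1305 and 1310. The pseudo IP shown here is an assumption --
# the text does not give a concrete pseudo IP value.
POLICIES = {
    "ABCDE": RedirectPolicy(
        policy_id="ABCDE",
        connection_type="L2_BUMP_IN_THE_WIRE",
        redirect_ip="169.254.10.254",       # assumed pseudo IP; resolves (via static ARP) to the return interface MAC
        bfd_state="up",
        failover_policy="drop",
    ),
    "ZYXWV": RedirectPolicy(
        policy_id="ZYXWV",
        connection_type="L3_ONE_ARM",
        redirect_ip="169.254.10.1",         # address of the service machine interface
        bfd_state="up",
        failover_policy="redirect_to_backup",
        backup_redirect_ip="169.254.11.1",  # backup service on a different logical switch
    ),
}
```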
Returning to fig. 12, process 1200 processes (at 1230) the data message according to the instructions in the data structure for the unique identifier. This may include redirecting the data message to the next-hop IP address specified by the data structure; discarding the data message, if the connection is down and the failover policy specifies discarding; or forwarding the data message according to standard logical network processing, if the connection is down and the failover policy specifies skipping the redirection.
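One way to express the decision at 1230, continuing the same illustrative sketch (the function name and the returned action strings are assumptions made for the sketch), is:

```python
# Continuation of the RedirectPolicy sketch: decide what to do with a data
# message once its PBR lookup has yielded a policy data structure.
def process_redirect(policy: RedirectPolicy) -> str:
    if policy.bfd_state == "up":
        return f"redirect to {policy.redirect_ip}"
    if policy.failover_policy == "drop":
        return "drop"
    if policy.failover_policy == "redirect_to_backup" and policy.backup_redirect_ip:
        return f"redirect to backup {policy.backup_redirect_ip}"
    # e.g. "forward_without_redirect": fall back to normal logical processing
    return "forward without redirection"
```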
As described above, the data structure for each redirection policy is dynamically updated by the gateway MFE. In some embodiments, a BFD thread executes on the gateway machine to (i) send BFD messages to the service machine and (ii) receive BFD messages from the service machine. For a service machine connected in L3 one-arm mode, the service machine also executes a BFD thread that sends BFD messages to the gateway. In L2 bump-in-the-wire mode, on the other hand, the BFD thread sends BFD messages to the service machine from one of the interfaces connecting the centralized routing component to the service machine, and receives these messages back on the other interface. Some such embodiments send BFD messages out through both interfaces (with BFD messages sent from the trusted interface received at the untrusted interface, and vice versa). This process is described in more detail in U.S. patent application 15/937,615, which is incorporated herein by reference. In some embodiments, one BFD thread executes on each gateway MFE and exchanges messages with all connected service machines, while in other embodiments a separate BFD thread executes on the gateway MFE for each connected service machine. When the BFD thread detects that BFD messages are no longer being received from a particular service machine, the gateway MFE modifies the data structure for that service machine.
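As a rough sketch of this dynamic update, continuing the RedirectPolicy sketch above (the queue-based heartbeat delivery, the three-second detection interval, and the function name are assumptions; an actual BFD implementation negotiates its own session timers), a monitoring thread for one service machine might look like this:

```python
# Toy monitor standing in for the BFD thread described above: marks the
# policy "down" when heartbeats stop arriving, and "up" again when they resume.
import queue
import threading
from typing import Optional

def bfd_monitor(policy: RedirectPolicy, heartbeats: queue.Queue,
                detection_time: float = 3.0,
                stop_event: Optional[threading.Event] = None) -> None:
    stop_event = stop_event or threading.Event()
    while not stop_event.is_set():
        try:
            heartbeats.get(timeout=detection_time)   # a heartbeat arrived in time
            policy.bfd_state = "up"
        except queue.Empty:
            # No heartbeat within the detection interval: the packet path will
            # now apply the failover policy for this service machine.
            policy.bfd_state = "down"
```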
Figure 14 conceptually illustrates a data structure 1310 that is dynamically updated based on changes in the connection state of the service machine to which the data structure redirects data messages. The figure shows the connection between the gateway machine 1400 and the two service machines 1415 and 1420 along with the data structure 1310 at two stages 1405 and 1410.
In the first stage 1405, the data structure 1310 is in the same state as in Fig. 13, indicating that the connection to the service machine endpoint interface 169.254.10.1 is currently up according to the BFD state. Gateway machine 1400 executes a BFD thread 1425 in addition to operating the gateway MFE with its logical network processing, PBR rules, and the like. The BFD thread 1425 periodically sends BFD messages to both the first service machine 1415 (at its interface with IP address 169.254.10.1) and the second service machine 1420 (at its interface with IP address 169.254.11.1). In addition, the service machines 1415 and 1420 execute their own BFD threads 1430 and 1435, respectively, which periodically send BFD messages to the gateway machine. As shown by the large X, at this stage 1405 the connection between gateway machine 1400 and the first service machine 1415 goes down. This may occur due to a physical connection problem, the service machine 1415 crashing, etc. As a result, BFD thread 1425 will no longer receive BFD messages from service machine 1415.
In the second stage 1410, the connection between gateway machine 1400 and service machine 1415 no longer exists. In addition, the data structure 1310 has been dynamically updated by the gateway MFE to indicate that the BFD state is down. As a result of the failover policy specified by this data structure 1310, data messages with a source IP in the subnet 20.20.20.0/24 will be redirected to the 169.254.11.1 interface of the second service machine 1420 until the connection with the first service machine 1415 is restored.
In some embodiments, multiple threads may write to data structures 1305 and 1310. For example, some embodiments allow BFD threads and configuration receiver threads to both write to these data structures (e.g., to modify BFD states, as well as make any configuration changes received from the network control system). In addition, one or more packet processing threads can read these data structures to perform packet lookups. Some embodiments enable these packet processing threads to read from the data structures even if one of the writer threads is currently accessing the structures so that packet processing is not interrupted by the writer threads.
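The description above does not specify how the uninterrupted reads are achieved; purely as an illustration of one possible approach (a copy-on-write snapshot swap, which is an assumption made for this sketch and not necessarily the patented design), writers could replace an immutable snapshot atomically so that readers never observe a partially updated table:

```python
# Illustrative copy-on-write policy table: writer threads (e.g. the BFD thread
# and a configuration-receiver thread) serialize among themselves, while
# packet-processing reader threads never take a lock.
import threading

class PolicyTable:
    def __init__(self):
        self._snapshot = {}                   # current immutable snapshot of policies
        self._write_lock = threading.Lock()   # serializes writers only

    def lookup(self, policy_id):
        # Readers just dereference the current snapshot; they see either the
        # old version or the new one, never a half-written entry.
        return self._snapshot.get(policy_id)

    def update(self, policy_id, policy):
        with self._write_lock:
            new_snapshot = dict(self._snapshot)
            new_snapshot[policy_id] = policy
            self._snapshot = new_snapshot     # atomic reference swap
```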
Fig. 15 conceptually illustrates an electronic system 1500 with which some embodiments of the invention are implemented. Electronic system 1500 may be a computer (e.g., a desktop computer, personal computer, tablet computer, server computer, mainframe, blade computer, etc.), phone, PDA, or any other type of electronic device. Such an electronic system includes various types of computer-readable media and interfaces for various other types of computer-readable media. Electronic system 1500 includes a bus 1505, processing unit(s) 1510, a system memory 1525, a read-only memory 1530, a persistent storage device 1535, input devices 1540, and output devices 1545.
Bus 1505 generally represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of electronic system 1500. For example, bus 1505 communicatively connects processing unit(s) 1510 with read-only memory 1530, system memory 1525, and persistent storage 1535.
Processing unit(s) 1510 retrieve instructions to be executed and data to be processed from these different memory units in order to perform the processes of the present invention. In different embodiments, the processing unit(s) may be a single processor or a multi-core processor.
Read Only Memory (ROM) 1530 stores static data and instructions for the processing unit(s) 1510 and other modules of the electronic system. On the other hand, persistent storage 1535 is a read-write memory device. The device is a non-volatile memory unit that stores instructions and data even when the electronic system 1500 is turned off. Some embodiments of the invention use a mass storage device (such as a magnetic or optical disk and its corresponding disk drive) as the persistent storage device 1535.
Other embodiments use a removable storage device (e.g., a floppy disk, flash memory drive, etc.) as the persistent storage device. Like persistent storage 1535, system memory 1525 is a read-write memory device. However, unlike storage device 1535, the system memory is a volatile read-write memory, such as random access memory. The system memory stores some of the instructions and data that the processor needs at runtime. In some embodiments, the processes of the invention are stored in system memory 1525, persistent storage 1535, and/or read-only memory 1530. Processing unit(s) 1510 retrieve instructions to execute and data to process from these various memory units in order to perform the processes of some embodiments.
Bus 1505 also connects to the input and output devices 1540 and 1545. The input devices enable a user to communicate information and select commands to the electronic system. The input devices 1540 include alphanumeric keyboards and pointing devices (also called "cursor control devices"). The output devices 1545 display images generated by the electronic system. The output devices include printers and display devices, such as cathode ray tubes (CRT) or liquid crystal displays (LCD). Some embodiments include devices, such as a touchscreen, that function as both input and output devices.
Finally, as shown in fig. 15, bus 1505 also couples electronic system 1500 to a network 1565 through a network adapter (not shown). In this manner, the computer can be part of a network of computers, such as a local area network ("LAN"), a wide area network ("WAN"), or a network of networks, such as the Internet. Any or all components of electronic system 1500 may be used in conjunction with the invention.
Some embodiments include electronic components, such as microprocessors, storage devices, and memory, that store computer program instructions in a machine-readable or computer-readable medium (alternatively referred to as a computer-readable storage medium, machine-readable medium, or machine-readable storage medium). Some examples of such computer-readable media include RAM, ROM, read-only compact discs (CD-ROM), recordable compact discs (CD-R), rewritable compact discs (CD-RW), read-only digital versatile discs (e.g., DVD-ROM, dual-layer DVD-ROM), a variety of recordable/rewritable DVDs (e.g., DVD-RAM, DVD-RW, DVD+RW, etc.), flash memory (e.g., SD cards, mini-SD cards, micro-SD cards, etc.), magnetic and/or solid state drives, read-only and recordable Blu-ray® discs, ultra-density optical discs, any other optical or magnetic media, and floppy disks. The computer-readable media may store a computer program that is executable by at least one processing unit and includes sets of instructions for performing various operations. Examples of a computer program or computer code include machine code, such as is produced by a compiler, and files including higher-level code that are executed by a computer, an electronic component, or a microprocessor using an interpreter.
Although the discussion above refers primarily to microprocessors or multi-core processors executing software, some embodiments are performed by one or more integrated circuits, such as Application Specific Integrated Circuits (ASICs) or Field Programmable Gate Arrays (FPGAs). In some embodiments, such integrated circuits execute instructions stored on the circuits themselves.
As used in this specification, the terms "computer," "server," "processor," and "memory" all refer to electronic or other technological devices. These terms exclude people or groups of people. For the purposes of this specification, the terms "display" or "displaying" mean displaying on an electronic device. The terms "computer-readable medium," "computer-readable media," and "machine-readable medium," as used in this specification, are entirely restricted to tangible, physical objects that store information in a form that is readable by a computer. These terms exclude any wireless signals, wired download signals, and any other transitory signals.
Throughout this specification, reference is made to computing and network environments that include virtual machines (VMs). However, virtual machines are merely one example of data compute nodes (DCNs) or data compute end nodes, also referred to as addressable nodes. DCNs may include non-virtualized physical hosts, virtual machines, containers that run on top of a host operating system without the need for a hypervisor or separate operating system, and hypervisor kernel network interface modules.
In some embodiments, VMs operate with their own guest operating systems on a host, using resources of the host virtualized by virtualization software (e.g., a hypervisor, virtual machine monitor, etc.). The tenant (i.e., the owner of the VM) can choose which applications to operate on top of the guest operating system. Some containers, on the other hand, are constructs that run on top of a host operating system without the need for a hypervisor or separate guest operating system. In some embodiments, the host operating system uses namespaces to isolate the containers from each other and therefore provides operating-system-level isolation of the different groups of applications that operate within different containers. This isolation is akin to the VM isolation offered by hypervisor virtualization that virtualizes the system hardware, and therefore can be viewed as a form of virtualization that isolates different groups of applications running in different containers. Such containers are more lightweight than VMs.
In some embodiments, the hypervisor kernel network interface module is a non-VM DCN that includes a network stack with a hypervisor kernel network interface and receive/transmit threads. One example of a hypervisor kernel network interface module is the vmknic module that is part of the ESXi™ hypervisor of VMware, Inc.
It should be appreciated that while this description refers to a VM, the examples given may be any type of DCN, including physical hosts, VMs, non-VM containers, and hypervisor kernel network interface modules. Indeed, in some embodiments, an example network may include a combination of different types of DCNs.
Although the invention has been described with reference to numerous specific details, one of ordinary skill in the art will recognize that the invention can be embodied in other specific forms without departing from the spirit of the invention. In addition, a number of the figures (including figures 10 and 12) conceptually illustrate processes. The specific operations of these processes may not be performed in the exact order shown and described. The specific operations may not be performed in one continuous series of operations, and different specific operations may be performed in different embodiments. Furthermore, a process could be implemented using several sub-processes, or as part of a larger macro process. Therefore, one of ordinary skill in the art would understand that the invention is not to be limited by the foregoing illustrative details, but rather is to be defined by the appended claims.

Claims (17)

1. A method for network management and control, the method comprising:
receiving a definition of a logical network for implementation in a data center, the logical network comprising at least one logical switch to which logical network endpoints are attached and a logical router for processing data traffic between the logical network endpoints in the data center and an external network;
receiving configuration data attaching a third party service to at least one interface of the logical router via an additional logical switch designated for service attachment, the third party service for performing non-forwarding processing on data traffic between the logical network endpoints and the external network; and
configuring a gateway machine in the data center to implement the logical router and to redirect at least a subset of the data traffic between the logical network endpoints and the external network to the attached third party service.
2. The method of claim 1, wherein receiving configuration data comprises:
receiving a definition of a logical router interface as a service attachment interface;
receiving a definition of the additional logical switch and a connection of the service attachment interface to the additional logical switch; and
receiving an attachment of the third party service to the additional logical switch.
3. The method of claim 2, wherein the definition of the additional logical switch designates the additional logical switch as an attached logical switch for the third party service.
4. The method of claim 2, wherein receiving configuration data further comprises receiving a definition of a third party service.
5. The method of claim 1, further comprising defining a distributed routing component and a centralized routing component for the logical router, wherein the distributed routing component is implemented by a plurality of machines including the gateway machine and the centralized routing component is implemented only by the gateway machine.
6. The method of claim 1, wherein the gateway machine is configured to redirect data traffic received from an external network prior to applying a logical router configuration to the data traffic.
7. The method of claim 6, wherein the gateway machine applies a logical router configuration to the data traffic received from the external network after redirecting the data traffic to and receiving the data traffic back from a third party service.
8. The method of claim 1, wherein the gateway machine is configured to redirect data traffic directed to an external network after applying a logical router configuration to the data traffic.
9. The method of claim 1, wherein the third party service is a first third party service and the subset of data traffic between the logical network endpoints and the external network is a first subset of the data traffic, the method further comprising:
receiving, via the additional logical switch, configuration data attaching a second third party service to an interface of the logical router, the second third party service also for performing non-forwarding processing on data traffic between the logical network endpoints and the external network; and
configuring the gateway machine to redirect a second subset of the data traffic to the second third party service.
10. The method of claim 9, wherein the interfaces for the first third party service and the second third party service have network addresses in the same subnet.
11. The method of claim 1, wherein the interface is a first interface of the logical router, the additional logical switch is a first logical switch designated for service attachment, and the subset of data traffic between the logical network endpoints and the external network is a first subset of the data traffic, the method further comprising:
receiving configuration data attaching the third party service to a second interface of the logical router via a second logical switch designated for service attachment; and
configuring the gateway machine to redirect a second subset of the data traffic to the third party service via the second logical switch.
12. The method of claim 11, wherein the third party service has separate interfaces, with separate network addresses, attached to the first logical switch and the second logical switch.
13. The method of claim 1, wherein the configuration data attaches the third party service to two interfaces of the logical router, wherein the gateway machine is configured to direct incoming data traffic from an external network to the third party service via a first one of the interfaces and to receive the incoming data traffic back from the third party service via a second one of the interfaces.
14. The method of claim 13, wherein the gateway machine is configured to direct outgoing data traffic from the logical network endpoint to the third party service via the second interface and to receive back the outgoing data traffic from the third party service via the first interface.
15. A machine readable medium storing a program which, when executed by at least one processing unit, implements the method of any one of claims 1-14.
16. A computing device, comprising:
a set of processing units; and
a machine readable medium storing a program which, when executed by at least one of the processing units, implements the method of any one of claims 1-14.
17. A network management and control system comprising means for implementing the method according to any of claims 1-14.
CN201980057472.1A 2018-09-02 2019-08-21 Service insertion method, device and system at logic gateway Active CN112673596B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310339981.1A CN116319541A (en) 2018-09-02 2019-08-21 Service insertion method, device and system at logic gateway

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US16/120,283 2018-09-02
US16/120,281 US10944673B2 (en) 2018-09-02 2018-09-02 Redirection of data messages at logical network gateway
US16/120,283 US11595250B2 (en) 2018-09-02 2018-09-02 Service insertion at logical network gateway
US16/120,281 2018-09-02
PCT/US2019/047586 WO2020046686A1 (en) 2018-09-02 2019-08-21 Service insertion at logical network gateway

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202310339981.1A Division CN116319541A (en) 2018-09-02 2019-08-21 Service insertion method, device and system at logic gateway

Publications (2)

Publication Number Publication Date
CN112673596A CN112673596A (en) 2021-04-16
CN112673596B true CN112673596B (en) 2023-05-02

Family

ID=67841276

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202310339981.1A Pending CN116319541A (en) 2018-09-02 2019-08-21 Service insertion method, device and system at logic gateway
CN201980057472.1A Active CN112673596B (en) 2018-09-02 2019-08-21 Service insertion method, device and system at logic gateway

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202310339981.1A Pending CN116319541A (en) 2018-09-02 2019-08-21 Service insertion method, device and system at logic gateway

Country Status (3)

Country Link
EP (1) EP3815312A1 (en)
CN (2) CN116319541A (en)
WO (1) WO2020046686A1 (en)

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9225638B2 (en) 2013-05-09 2015-12-29 Vmware, Inc. Method and system for service switching using service tags
US11496606B2 (en) 2014-09-30 2022-11-08 Nicira, Inc. Sticky service sessions in a datacenter
US10257095B2 (en) 2014-09-30 2019-04-09 Nicira, Inc. Dynamically adjusting load balancing
US9935827B2 (en) 2014-09-30 2018-04-03 Nicira, Inc. Method and apparatus for distributing load among a plurality of service nodes
US10594743B2 (en) 2015-04-03 2020-03-17 Nicira, Inc. Method, apparatus, and system for implementing a content switch
US10797966B2 (en) 2017-10-29 2020-10-06 Nicira, Inc. Service operation chaining
US11012420B2 (en) 2017-11-15 2021-05-18 Nicira, Inc. Third-party service chaining using packet encapsulation in a flow-based forwarding element
US10797910B2 (en) 2018-01-26 2020-10-06 Nicira, Inc. Specifying and utilizing paths through a network
US10805192B2 (en) 2018-03-27 2020-10-13 Nicira, Inc. Detecting failure of layer 2 service using broadcast messages
US10944673B2 (en) 2018-09-02 2021-03-09 Vmware, Inc. Redirection of data messages at logical network gateway
US11595250B2 (en) 2018-09-02 2023-02-28 Vmware, Inc. Service insertion at logical network gateway
US11042397B2 (en) 2019-02-22 2021-06-22 Vmware, Inc. Providing services with guest VM mobility
US11140218B2 (en) 2019-10-30 2021-10-05 Vmware, Inc. Distributed service chain across multiple clouds
US11283717B2 (en) 2019-10-30 2022-03-22 Vmware, Inc. Distributed fault tolerant service chain
US11223494B2 (en) 2020-01-13 2022-01-11 Vmware, Inc. Service insertion for multicast traffic at boundary
US11153406B2 (en) 2020-01-20 2021-10-19 Vmware, Inc. Method of network performance visualization of service function chains
US11659061B2 (en) 2020-01-20 2023-05-23 Vmware, Inc. Method of adjusting service function chains to improve network performance
US11368387B2 (en) 2020-04-06 2022-06-21 Vmware, Inc. Using router as service node through logical service plane
US11611625B2 (en) 2020-12-15 2023-03-21 Vmware, Inc. Providing stateful services in a scalable manner for machines executing on host computers
US11734043B2 (en) 2020-12-15 2023-08-22 Vmware, Inc. Providing stateful services in a scalable manner for machines executing on host computers

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106134137A (en) * 2014-03-14 2016-11-16 Nicira股份有限公司 The advertising of route of managed gateway
CN107005584A (en) * 2014-09-30 2017-08-01 Nicira股份有限公司 Inline service switch
CN107113208A (en) * 2015-01-27 2017-08-29 华为技术有限公司 The network virtualization of network infrastructure
CN107210959A (en) * 2015-01-30 2017-09-26 Nicira股份有限公司 Router logic with multiple route parts

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10038628B2 (en) * 2015-04-04 2018-07-31 Nicira, Inc. Route server mode for dynamic routing between logical and physical networks
US10095535B2 (en) * 2015-10-31 2018-10-09 Nicira, Inc. Static route types for logical routers
US10333849B2 (en) * 2016-04-28 2019-06-25 Nicira, Inc. Automatic configuration of logical routers on edge nodes
US10841273B2 (en) * 2016-04-29 2020-11-17 Nicira, Inc. Implementing logical DHCP servers in logical networks


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Data Center Security Network Virtualization; Li Qinggu; Information Security and Technology; full text *

Also Published As

Publication number Publication date
WO2020046686A9 (en) 2020-05-22
CN112673596A (en) 2021-04-16
CN116319541A (en) 2023-06-23
WO2020046686A1 (en) 2020-03-05
EP3815312A1 (en) 2021-05-05

Similar Documents

Publication Publication Date Title
CN112673596B (en) Service insertion method, device and system at logic gateway
US20230179474A1 (en) Service insertion at logical network gateway
US10944673B2 (en) Redirection of data messages at logical network gateway
US10601705B2 (en) Failover of centralized routers in public cloud logical networks
US11115465B2 (en) Accessing endpoints in logical networks and public cloud service providers native networks using a single network interface and a single routing table
US11374850B2 (en) Tunnel endpoint group records
US10862753B2 (en) High availability for stateful services in public cloud logical networks
US10491516B2 (en) Packet communication between logical networks and public cloud service providers native networks using a single network interface and a single routing table
CN112640369B (en) Method, apparatus, and machine-readable medium for intelligently using peers in a public cloud
CN110278151B (en) Dynamic routing for logical routers
CN105684363B (en) Logic router
EP3669504B1 (en) High availability for stateful services in public cloud logical networks
CN111478852B (en) Route advertisement for managed gateways
US9183028B1 (en) Managing virtual computing nodes
US10715419B1 (en) Software defined networking between virtualized entities of a data center and external entities
US20190109780A1 (en) Routing information validation in sdn environments
EP3673365A1 (en) Accessing endpoints in logical networks and public cloud service providers native networks using a single network interface and a single routing table
US11895030B2 (en) Scalable overlay multicast routing
US10491483B2 (en) Using application programming interface calls for communication between network functions

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant