US20240015086A1 - Detecting failure of layer 2 service using broadcast messages - Google Patents
Detecting failure of layer 2 service using broadcast messages
- Publication number
- US20240015086A1 (application US 18/370,006)
- Authority
- US
- United States
- Prior art keywords
- service
- interface
- data messages
- switch
- machine
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L43/00—Arrangements for monitoring or testing data switching networks
- H04L43/08—Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
- H04L43/0805—Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters by checking availability
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/06—Management of faults, events, alarms or notifications
- H04L41/0654—Management of faults, events, alarms or notifications using network fault recovery
- H04L41/0668—Management of faults, events, alarms or notifications using network fault recovery by dynamic selection of recovery network elements, e.g. replacement by the most appropriate element after failure
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/40—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using virtualisation of network functions or resources, e.g. SDN or NFV entities
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L43/00—Arrangements for monitoring or testing data switching networks
- H04L43/08—Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
- H04L43/0805—Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters by checking availability
- H04L43/0811—Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters by checking availability by checking connectivity
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L43/00—Arrangements for monitoring or testing data switching networks
- H04L43/10—Active monitoring, e.g. heartbeat, ping or trace-route
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L43/00—Arrangements for monitoring or testing data switching networks
- H04L43/20—Arrangements for monitoring or testing data switching networks the monitoring system or the monitored elements being virtualised, abstracted or software-defined entities, e.g. SDN or NFV
Description
- In a software defined network, a set of gateway devices (e.g., Edge Nodes) connecting the internal virtualized network and an external network may have a layer 2 bump in the wire service (i.e., a service that does not change the layer 2 addresses of a processed data message) inserted in the processing pipeline. Failure of the layer 2 service is difficult to detect in some instances. When a backup layer 2 service node is provided and a primary layer 2 service node fails, the gateway device must begin sending the data messages to the backup layer 2 service node. A method for learning of the failure and quickly redirecting data messages to the backup layer 2 service node is necessary.
- Some embodiments provide a method for providing a layer 2 (L2) bump-in-the-wire service at a gateway device (e.g., a layer 3 (L3) gateway device) at the edge of a logical network.
- The method, in some embodiments, establishes a connection from a first interface of the gateway device to a service node that provides the L2 service.
- The method also establishes a connection from a second interface of the gateway device to the L2 service node.
- The method then sends data messages received by the gateway device that require the L2 service to the service node using the first interface.
- In some embodiments, north-to-south traffic (i.e., from the external network to the logical network) is sent to the service node using the first interface, while south-to-north traffic is sent to the service node using the second interface.
- Some embodiments provide a method for applying different policies at the service node for different tenants of a datacenter.
- Data messages received for a particular tenant that require the L2 service are encapsulated or marked as belonging to the tenant before being sent to the service node.
- Based on the encapsulation or marking, the service node provides the service according to policies defined for the tenant.
- The first and second interfaces of the gateway device have different internet protocol (IP) addresses and media access control (MAC) addresses in some embodiments.
- The IP addresses, in some embodiments, are not used to communicate with devices of external networks and can be internal IP addresses used within the logical network.
- The next-hop MAC address for a data message requiring the L2 service sent from the first interface will be the MAC address of the second interface, and the data message will arrive at the second interface with the destination MAC address unchanged by the service node.
- In some embodiments, interfaces for connecting to the L2 service are disabled on standby gateway devices of the logical network and are enabled on only the active gateway device.
- Connections to the service node are made through layer 2 switches.
- In some embodiments, each interface connects to a different switch connected to the service node.
- The service node, in some embodiments, is a cluster of service nodes in an active-standby configuration that each connect to the same pair of switches.
- An active service node provides the L2 service while the standby service nodes drop all data messages that they receive. Failover between the active and standby service nodes is handled by the L2 service nodes with no involvement of the L3 gateway device in some embodiments.
- The gateway device sends heartbeat signals between the two interfaces connected to the L2 service nodes in order to detect failure of the L2 service (e.g., a failure of all the service nodes).
- In some embodiments, the heartbeat signals are unidirectional heartbeat signals (e.g., a unidirectional bidirectional-forwarding-detection (BFD) session) sent from each interface to the other.
- The heartbeat signals use the IP address of the destination interface as the destination IP address but use a broadcast MAC address in order to reach the current active L2 service node in the case of a failover (i.e., an active service node failing and a standby service node becoming the new active service node).
- Additional embodiments utilize the unidirectional broadcast heartbeat signals to decrease the time between a failover and data messages being forwarded to the new active service node as well as detect a failure of the service node cluster.
- In some embodiments, an architecture using different L2 switches between each interface and the service node cluster is used in conjunction with the unidirectional broadcast heartbeat signals to reduce the time to redirect data messages to the new active service node.
- The switches connecting the interfaces to the service node cluster associate MAC addresses with particular ports of the switch based on incoming data messages. For example, a data message received at the switch on a first port with a source MAC address "MAC1" (e.g., a 48-bit MAC address of the first interface) will cause the switch to associate the first port with MAC1, and future data messages with destination address MAC1 will be sent out of the switch from the first port.
- Using the broadcast heartbeat data messages, the ports of the switches attached to the active service node can be associated with the correct MAC addresses for the two interfaces more quickly.
- After a failover, the broadcast heartbeat data messages will be received and processed by the newly-active service node, and the switches will associate the ports connected to the newly-active service node with the appropriate MAC addresses of the two interfaces.
- FIG. 1 conceptually illustrates a system in which some of the embodiments of the invention are performed.
- FIG. 2 conceptually illustrates a process to establish two connections from a device to a layer 2 bump-in-the-wire service node for the service node to provide a service to data messages.
- FIG. 3 conceptually illustrates an embodiment in which an L2 service is provided between two devices by a cluster of service nodes.
- FIG. 4 conceptually illustrates a process for detecting failure using the heartbeat signals.
- FIG. 5 conceptually illustrates a process performed by a service node in some embodiments.
- FIG. 6 conceptually illustrates a process performed by the switches, in some embodiments, to facilitate failover without the device, or devices, that send data messages to the service node cluster being aware of a service node cluster failover operation.
- FIGS. 7 A-B conceptually illustrate the flow of data messages in a single device embodiment for learning MAC addresses.
- FIG. 8 conceptually illustrates the processing of a data message requiring a service provided by the service node cluster after the switches have learned MAC address/interface associations from the data messages depicted in FIGS. 7 A-B or in other ways, such as by using an address resolution protocol (ARP) operation.
- FIGS. 9 A-B conceptually illustrate the path of a data message after a failover, before and after a subsequent heartbeat message is sent from an interface of a device.
- FIGS. 10 A-B conceptually illustrate an embodiment in which the heartbeat data messages are used to detect failure of a service node cluster as discussed in relation to FIG. 4 .
- FIG. 11 illustrates an embodiment including gateway devices in an active-standby configuration at a border between two networks.
- FIG. 12 conceptually illustrates an electronic system with which some embodiments of the invention are implemented.
- Some embodiments provide a method for providing a layer 2 (L2) bump-in-the-wire service at a gateway device (e.g., a layer 3 (L3) gateway device) at the edge of a logical network.
- The method, in some embodiments, establishes a connection from a first interface of the gateway device to a service node that provides the L2 service.
- The method also establishes a connection from a second interface of the gateway device to the L2 service node.
- The method then sends data messages received by the gateway device that require the L2 service to the service node using the first interface.
- In some embodiments, north-to-south traffic (i.e., from the external network to the logical network) is sent to the service node using the first interface, while south-to-north traffic is sent to the service node using the second interface.
- The terms data packet, packet, data message, or message refer to a collection of bits in a particular format sent across a network. It should be understood that these terms may be used herein to refer to various formatted collections of bits that may be sent across a network, such as Ethernet frames, IP packets, TCP segments, UDP datagrams, etc. While the examples below refer to data packets, packets, data messages, or messages, it should be understood that the invention should not be limited to any specific format or type of data message.
- References to L2, L3, L4, and L7 layers are references to the second data link layer, the third network layer, the fourth transport layer, and the seventh application layer of the OSI (Open System Interconnection) layer model, respectively.
- FIG. 1 conceptually illustrates a system in which some of the embodiments of the invention are performed.
- FIG. 1 depicts a gateway device 101 that serves as the gateway between a network 110 (e.g., an untrusted network) and a set of tenant networks 120 (e.g., a set of trusted networks that are logical networks in some embodiments).
- In some embodiments, the gateway device implements a tier 0 (T0) logical router that is shared by multiple tenant networks, each of which connects to the T0 logical router through a unique interface (e.g., a logical interface) using a tenant (or tier 1 (T1)) logical router.
- The gateway device 101 also includes a set of interfaces 130 used to connect to a service node 102 that provides a layer 2 (L2) bump-in-the-wire service (e.g., a firewall, load balancing, network address translation (NAT), or virtual private network (VPN) service) through switches 103.
- Gateway device 101 allows for per-tenant policies to be applied by the service node 102 by appending a context (e.g., an encapsulation or other marking) to a data message sent to service node 102 with a tenant identifier (e.g., a virtual local area network (VLAN) tag that is associated with a particular tenant's policies).
- Service node 102 is shown with a set of three logical interfaces, labeled 1-3 (corresponding to tenants 1-3), each connected to one interface of the two switches 103 (e.g., using VLAN trunking).
- The logical interfaces, in some embodiments, correspond to a single physical interface of the service node 102.
- Service node 102 represents a cluster of service nodes that provide the L2 service.
- The service nodes are configured in an active-standby configuration, with one service node performing the L2 service and the additional service nodes in the cluster acting as standby service nodes in case the active service node fails.
- FIG. 1 also depicts a datapath for data messages requiring the L2 service (depicted as the dotted line between two interfaces of gateway device 101 ).
- The depiction ignores the datapath outside of the gateway device, as the data message may be received from, and destined for, any of the networks 110 or 120A-C.
- Gateway device 101 is depicted as a gateway device, but one of ordinary skill in the art would understand that the device, in some embodiments, is at a different point in the network that requires an L2 bump-in-the-wire service.
- Gateway device 101 is a host computing machine that executes an edge node program.
- The edge node program includes at least one managed forwarding element (e.g., a managed routing element, a managed switching element, or both) that implements a set of logical forwarding elements of a set of logical networks for a set of tenants.
- Further details of the elements of FIG. 1 are described below in the discussion of FIG. 2.
- FIG. 2 conceptually illustrates a process 200 to establish two connections from a device (e.g., gateway device 101 ) to a layer 2 (L2) bump-in-the-wire service node for the service node to provide a service to data messages.
- Process 200, in some embodiments, is performed by the device (e.g., gateway device 101).
- Process 200 begins by establishing (at 210) a connection to the L2 service node from a first interface 130 of the device.
- The first interface has a first internet protocol (IP) address which, in some embodiments, is a private IP address that is not used by external networks.
- In some embodiments, the connection from the first interface is made through a first layer 2 switch (e.g., switch 103A).
- A layer 2 switch learns associations between the ports (e.g., interface 130) of the switch and the media access control (MAC) addresses of the devices connected to each port from the source MAC address field in the headers of the data messages received at the port.
- In some embodiments, the first switch is a logical switch that is implemented by a physical switch (e.g., a virtual switch or a hardware switch).
- The process continues by establishing (at 220) a second connection to the L2 service node from a second interface of the device.
- The second interface has a second IP address, different from that of the first interface, which, in some embodiments, is a private IP address that is not used by external networks.
- In some embodiments, the connection from the second interface is made through a second layer 2 switch.
- The second layer 2 switch also learns MAC address/port pairings from received data messages in some embodiments.
- The second switch, in some embodiments, is a logical switch that is implemented by either a virtual switch or a hardware switch.
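As an illustration of the two dedicated service interfaces described above, the following sketch models them in Python. It is not from the patent; the names and address values are hypothetical, chosen only to show that the two interfaces have distinct private IP addresses and distinct MAC addresses.

```python
# Hypothetical sketch: modeling the two gateway interfaces used to reach the
# L2 service. Names and address values are illustrative, not from the patent.
from dataclasses import dataclass

@dataclass(frozen=True)
class ServiceInterface:
    name: str   # hypothetical interface name
    ip: str     # private IP address, used only within the logical network
    mac: str    # 48-bit MAC address that the L2 switches will learn

# Two interfaces with different IP and MAC addresses, each attached to a
# different layer 2 switch in front of the service node (cluster).
IF1 = ServiceInterface("svc-if-1", "169.254.10.1", "02:00:00:00:00:01")
IF2 = ServiceInterface("svc-if-2", "169.254.10.2", "02:00:00:00:00:02")
```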
- The process then receives (at 230) a data message from another device (e.g., a physical router, or a T1 logical router for a specific tenant).
- The data message, in some embodiments, is a data message exchanged between an external network and a tenant logical network for which the device serves as a gateway device.
- In some embodiments, the data message is a data message exchanged between an external network and a device in a datacenter for which the device acts as a gateway device.
- The data message, in some embodiments, is directed from a device in a tenant logical network to another device in the same datacenter or network for which the device acts as a gateway device (e.g., in the same tenant's logical network or a different tenant's logical network).
- The datacenter, in some embodiments, implements a set of logical networks for a set of tenants.
- The data message is received on a third interface of the device.
- The third interface, in some embodiments, has an IP address that is advertised to external networks by the device.
- The process determines (at 240) whether the data message requires the L2 bump-in-the-wire service.
- In some embodiments, the determination is based on a value in a set of header fields of the received data message.
- The value that the determination is based on may be any combination of a source or destination IP or MAC address, a protocol, and a port number.
- In some embodiments, a set of header fields is associated specifically with the L2 service (e.g., a network address translation (NAT) service or load balancing (LB) service may be addressable by a particular set of IP addresses, or may be associated with an IP subnet for which it provides the service).
- In some embodiments, the determination is made using a routing entry (e.g., a policy-based routing entry) that indicates that a certain IP address or range of IP addresses should be forwarded to the MAC of the second interface from the first interface.
- The range of IP addresses, in some embodiments, is associated with a network for which the L2 service is required.
- In some embodiments, the policy-based routing entry identifies values in a combination of fields used to determine that a received data message should be forwarded to the MAC of the second interface from the first interface.
- The fields that may be used to specify data messages that should be forwarded to the MAC of the second interface from the first interface include a source IP address, a destination IP address, a source MAC address, a destination MAC address, a source port, a destination port, and a protocol. A minimal sketch of such a match appears below.
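To make the policy-based routing check concrete, here is a minimal Python sketch of matching a data message's header fields against such an entry. The field names and the example entry are assumptions for illustration, not the patent's actual format.

```python
# Hypothetical sketch of a policy-based routing match: a data message is
# redirected to the L2 service if every field named by the entry matches.
import ipaddress

def pbr_requires_l2_service(headers: dict, entry: dict) -> bool:
    """Return True if every field specified by the PBR entry matches."""
    for field, wanted in entry.items():
        value = headers.get(field)
        if field in ("src_ip", "dst_ip"):
            # An entry may specify a range (subnet) of IP addresses.
            if ipaddress.ip_address(value) not in ipaddress.ip_network(wanted):
                return False
        elif value != wanted:
            return False
    return True

# Example (illustrative values): redirect traffic destined to 10.1.0.0/16
# on TCP port 80 to the service.
entry = {"dst_ip": "10.1.0.0/16", "protocol": "tcp", "dst_port": 80}
headers = {"src_ip": "192.0.2.7", "dst_ip": "10.1.2.3",
           "protocol": "tcp", "dst_port": 80}
assert pbr_requires_l2_service(headers, entry)
```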
- In some embodiments, the determination (at 240) of whether the data message requires the L2 bump-in-the-wire service also takes into account the logical network from which the data message was received.
- In some embodiments, each tenant logical network implements a tier 1 logical router that connects to a tier 0 logical router executing on a gateway device through a different logical interface.
- For data messages received on a particular logical interface, some embodiments apply logical-interface-specific (e.g., tenant-specific) policies to determine (at 240) whether the data message requires the service.
- In some embodiments, the tenant defines at least two "zones" that include different devices or interfaces and requires sets of services (e.g., services provided by a service node) for data messages between each pair of zones.
- If the process determines (at 240) that the data message does not require the L2 service, the process processes (at 250) the data message, forwards it towards its destination, and ends.
- In some embodiments, the data message processing is logical processing performed by a software forwarding element implementing a logical forwarding element or elements (e.g., a logical router, a logical switch, or both).
- If the data message requires the L2 service, the process forwards (at 260) the data message out of one of the interfaces connected to the L2 service node, to be received at the other interface connected to the L2 service node.
- In some embodiments, north-south traffic coming from an external network into a logical network for which the device is a gateway device is sent to the service node from the first interface to be received at the second interface, while south-north traffic from the logical network to the external network is sent to the service node from the second interface to be received by the first interface.
- In some embodiments, forwarding (at 260) the data message includes an encapsulation or other marking operation to identify a particular tenant. For example, referring to FIG. 1, a data message received from logical interface '1' of gateway device 101 that requires the service provided by service node 102 is encapsulated so that it will be received at logical interface '1' of service node 102. Based on the encapsulation, service node 102 applies policies specific to tenant 1. Data messages sent between interfaces use the MAC address associated with the destination interface of the device, which remains unchanged by the processing performed by the L2 service node. A sketch of this tenant-marking step follows.
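The tenant-marking step can be illustrated with a small sketch that inserts an 802.1Q VLAN tag carrying a tenant identifier, one of the marking options the description mentions. The tenant-to-VLAN mapping and function name are hypothetical.

```python
# Hypothetical sketch of tenant marking: insert an 802.1Q VLAN tag whose VLAN
# ID identifies the tenant, so the service node can select tenant policies.
# The tenant-to-VLAN mapping is an assumption for illustration.
import struct

TENANT_VLANS = {"tenant-1": 101, "tenant-2": 102, "tenant-3": 103}

def mark_for_tenant(eth_frame: bytes, tenant: str) -> bytes:
    """Insert an 802.1Q tag (TPID 0x8100) after the dst/src MAC addresses."""
    vlan_id = TENANT_VLANS[tenant]
    tag = struct.pack("!HH", 0x8100, vlan_id & 0x0FFF)
    # Bytes 0-11 are the destination and source MACs; the tag goes before the
    # original EtherType, leaving the L2 addresses unchanged (bump in the wire).
    return eth_frame[:12] + tag + eth_frame[12:]
```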
- After forwarding (at 260) the data message out of one interface connected to the L2 service node, the process receives (at 270) the data message at the other interface.
- In some embodiments, the received data message includes an encapsulation or marking associated with a specific tenant.
- The process then processes (at 250) the received data message and forwards it towards its destination.
- In some embodiments, multiple L2 bump-in-the-wire services are independently provided in a similar fashion.
- FIG. 3 conceptually illustrates an embodiment in which an L2 service is provided between two devices 301 by a service node in a cluster of service nodes 305 .
- Device 301A is depicted as including router 310 and switch 303A, which, in some embodiments, are software executing on device 301A.
- Router 310 and switch 303A implement logical forwarding elements.
- In some embodiments, device 301A is a gateway device connecting an internal network to an external network.
- The internal network is a physical network implementing a logical network in some embodiments, with device 301A implementing the logical forwarding elements using router 310 and switch 303A.
- Connections to the service nodes 302 are made through layer 2 switches 303 .
- The different devices 301 connect to the cluster of service nodes 302 through different switches 303.
- The service nodes 302 are depicted as a cluster of service nodes 305 in an active-standby configuration, each connecting to the same pair of switches.
- An active service node provides the L2 service while the standby service nodes drop all data messages that they receive. Failover between the active and standby service nodes is handled by the L2 service nodes with no involvement of devices 301 in some embodiments.
- Devices 301 send heartbeat signals between the two interfaces connected to the L2 service nodes in order to detect failure of the L2 service (e.g., a failure of all the service nodes).
- In some embodiments, the heartbeat signals are unidirectional heartbeat signals (e.g., a unidirectional bidirectional-forwarding-detection (BFD) session) sent from each interface to the other.
- The heartbeat signals use the IP address of the destination interface as the destination IP address but use a broadcast MAC address in order to reach the current active L2 service node in the case of a failover (i.e., an active service node failing and a standby service node becoming the new active service node). A sketch of constructing such a frame follows.
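A minimal sketch of building such a heartbeat frame: the destination MAC is the broadcast address while the destination IP is the peer interface's address, so whichever service node is currently active will pass the frame through. The payload, IP protocol number, and omitted checksum are simplifications; a real deployment might instead run a unidirectional BFD session as noted above.

```python
# Hypothetical sketch: an Ethernet/IPv4 heartbeat frame with a broadcast
# destination MAC and the peer interface's IP as the destination IP.
# Checksum computation is omitted for brevity; values are illustrative.
import socket
import struct

BROADCAST_MAC = b"\xff\xff\xff\xff\xff\xff"

def build_heartbeat(src_mac: bytes, src_ip: str, dst_ip: str,
                    payload: bytes = b"heartbeat") -> bytes:
    eth_header = BROADCAST_MAC + src_mac + struct.pack("!H", 0x0800)  # IPv4
    ip_header = struct.pack(
        "!BBHHHBBH4s4s",
        0x45, 0, 20 + len(payload),  # version/IHL, DSCP, total length
        0, 0,                        # identification, flags/fragment offset
        64, 253, 0,                  # TTL, protocol (experimental), checksum (omitted)
        socket.inet_aton(src_ip), socket.inet_aton(dst_ip))
    return eth_header + ip_header + payload
```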
- FIG. 4 conceptually illustrates a process 400 for detecting failure using the heartbeat signals.
- Process 400, in some embodiments, is executed by at least one device 301 and, in some embodiments, is executed by each device 301.
- Process 400 begins (at 410) by establishing a unidirectional session between the interface (e.g., 330A) that connects to the cluster of service nodes and the interface (e.g., 330B) of the device attached to the other switch connected to the cluster of service nodes.
- The process subsequently sends (at 420) a heartbeat data message to the second device.
- In some embodiments, device 301A directs the data message to the IP address of the interface of the second device (e.g., 330B) using a broadcast MAC address.
- The heartbeat data message has a source MAC address of the interface of the first device that is learned by the switches connected to the service nodes and associated by the switches with the interfaces on which the heartbeat data message is received.
- The process also receives (at 430) a heartbeat data message from the second device.
- In some embodiments, the heartbeat messages are sent and received at intervals that are shorter than the timeout of a learned MAC address/interface pairing in the switches (e.g., 303).
- The received message is sent from the second device, directed to the IP address of the first interface using a broadcast MAC address.
- When heartbeat data messages are no longer received, the process determines (at 440) that the service nodes (e.g., 302) have failed. In some embodiments, the determination is made based on the time elapsed since a last heartbeat message was received. The time elapsed to determine failure of the service nodes (e.g., 302), in some embodiments, is based on the time between heartbeat signals (e.g., 5 heartbeat intervals) or on a failover time for the service nodes in a service node cluster.
- Upon determining (at 440) that the service node cluster has failed, the process performs (at 450) a default operation for subsequent packets until the service is restored.
- In some embodiments, the default operation is forwarding all data messages to their destinations without sending them to be provided the L2 service.
- In other embodiments, the default operation is dropping all data messages that require the L2 service until the L2 service is restored.
- In some embodiments, the device continues to send heartbeat data messages and determines that the service has been restored when a heartbeat is received from the other device or interface. A sketch of this failure-detection logic follows.
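A sketch of this detection logic, under the assumption that failure is declared after five missed heartbeat intervals (mirroring the example above) and cleared as soon as a heartbeat arrives again:

```python
# Hypothetical sketch of the failure-detection logic of process 400: declare
# the service node cluster failed if no heartbeat has arrived within some
# multiple of the send interval; declare it restored when one arrives again.
import time

class HeartbeatMonitor:
    def __init__(self, interval_s: float = 1.0, missed_limit: int = 5):
        self.interval_s = interval_s
        self.missed_limit = missed_limit
        self.last_rx = time.monotonic()
        self.failed = False

    def on_heartbeat_received(self) -> None:
        self.last_rx = time.monotonic()
        self.failed = False       # service considered restored on next heartbeat

    def poll(self) -> bool:
        """Return True while the cluster is considered failed."""
        if time.monotonic() - self.last_rx > self.interval_s * self.missed_limit:
            self.failed = True    # fall back to the default operation
        return self.failed
```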
- Additional embodiments utilize the unidirectional broadcast heartbeat signals to decrease the time between a failover and data messages being forwarded to the new active service node as well as detect a failure of the service node cluster.
- In some embodiments, an architecture using different L2 switches between each interface and the service node cluster is used in conjunction with the unidirectional broadcast heartbeat signals to reduce the time to redirect data messages to the new active service node.
- FIGS. 5 and 6 conceptually illustrate processes performed by a service node and a switch, respectively, in some such embodiments.
- FIG. 5 conceptually illustrates a process 500 performed by a service node in some embodiments.
- Process 500 begins by receiving (at 510) data messages sent from one of two interfaces that are in communication with each other through the service node cluster that includes the service node performing process 500.
- In some embodiments, the data messages are heartbeat data messages that are addressed to an IP address associated with either one of the two interfaces of the device or devices in communication with the service node, and to a broadcast MAC address.
- In some embodiments, the heartbeat data messages are received from one of the two interfaces connected to the service node cluster through a pair of switches, as in FIG. 3.
- The data messages also include data messages requiring the service provided by the service node cluster.
- In some embodiments, a data message is received with a context (e.g., an encapsulation or other marking) that is understood by the service node to identify a particular set of policies to apply to the data message.
- In some embodiments, the context identifies a set of policies that are for a specific tenant.
- At a standby service node, processing a data message comprises dropping the data message. Dropping data messages at the standby service node avoids redundant processing and, in embodiments providing a stateful service, misprocessing based on a lack of current state information.
- At an active service node, processing a heartbeat data message includes forwarding the data message to the destination interface without alteration.
- Processing the data message at an active node includes applying tenant-specific policies to the data message.
- In some embodiments, the tenant-specific policies are identified based on a context appended to the data message by the device (e.g., a gateway device) that directs the data message to the service node.
- Processing a data message requiring the service at an active service node includes providing the service and forwarding the data message to the destination IP address without altering the source and destination MAC addresses of the received data message.
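The active/standby behavior of process 500 can be summarized in a few lines. This is a hypothetical sketch: the message representation, the `is_heartbeat` flag, and the tenant-context key are illustrative stand-ins for the encapsulation described above.

```python
# Hypothetical sketch of process 500 at a service node: a standby node drops
# everything; the active node forwards heartbeats unchanged and applies
# tenant-specific policies (selected by the appended context) to service
# traffic, leaving the L2 addresses untouched.
def process_at_service_node(msg: dict, is_active: bool, policies: dict):
    if not is_active:
        return None                           # standby: drop to avoid misprocessing
    if msg.get("is_heartbeat"):
        return msg                            # active: forward heartbeat unaltered
    policy = policies[msg["tenant_context"]]  # e.g., keyed by the VLAN tag
    msg = policy(msg)                         # apply the tenant's L2 service
    return msg                                # src/dst MACs remain unchanged
```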
- A service node performing process 500 acts as a standby service node at some times and, if an active service node fails, acts (or is designated) as the active service node at other times.
- The failover process between service nodes, in some embodiments, is independent of the devices sending the heartbeat data messages.
- In some embodiments, the service node cluster has a control or management computer or cluster that determines and designates the active service node.
- The control/management computer, in some embodiments, maintains its own failure detection protocol (e.g., BFD) to detect the health of the service nodes in a service node cluster and initiate a failover process.
- FIG. 6 conceptually illustrates a process 600 performed by the switches, in some embodiments, to facilitate failover without the device, or devices, that send data messages to the service node cluster being aware of a service node cluster failover operation.
- The process begins by receiving (at 610) a data message from one of the interfaces of a device sending data messages to the service node cluster through the switch.
- The data message, in some embodiments, is a heartbeat data message sent from one interface to another through the switches and the service node cluster.
- The heartbeat data message uses a broadcast MAC address (i.e., FF:FF:FF:FF:FF:FF) as its destination MAC address.
- The heartbeat data message also includes the MAC address of the interface from which the data message was sent as its source MAC address.
- The process then learns (at 620) a pairing between the port (e.g., interface) at which the data message was received and the MAC address used as the source MAC address of the received data message.
- The learning, in some embodiments, is accomplished through a table or other data structure that stores associations between MAC addresses and ports of the switch. The learned association is used to process subsequent data messages addressed to the MAC address by forwarding them to the destination from the associated port.
- The process then forwards (at 630) the received heartbeat data message out of all the ports other than the port on which it was received.
- The broadcast heartbeat data message is then received at the service nodes of the service node cluster, as described in relation to operation 510 of FIG. 5 for a particular service node. As described above in relation to FIG. 5, only the active service node forwards the received heartbeat data message to the second interface through the second switch.
- The second switch receives the forwarded data message, associates the port connected to the active service node with the source MAC address of the heartbeat data message (i.e., the MAC address of the first interface), and forwards the heartbeat data message out of all ports except the port at which it was received, as described in relation to operations 640 and 650 for the first switch performing process 600.
- The process then receives (at 640) a heartbeat data message from the second interface through the active service node.
- The heartbeat data message is received from the active service node, but not from the standby service nodes, as only the active service node allows data messages to be forwarded towards the destination.
- The heartbeat data message, in some embodiments, is received by the first switch after the second switch receives the data message from the second interface.
- The second interface sends the heartbeat data message using the second interface's MAC address as the source MAC address and a broadcast MAC address as the destination address. Based on the broadcast MAC address, the second switch floods the data message to all the service nodes, as described for the first switch in operation 630.
- The process learns (at 650) a pairing between the port at which the data message was received and the MAC address used as the source MAC address of the received data message (i.e., the MAC address of the second interface).
- The port that is associated with the second interface's MAC address is the port connected to the active service node, because only the active service node forwards the data message to the first switch.
- The learned address/port pairing is stored, in some embodiments, in the same table or other data structure that stores the association between the MAC address of the first interface and the port at which the first heartbeat data message was received.
- The learned association is used to process subsequent data messages addressed to the MAC address of the second interface by forwarding them to the destination from the associated port.
- The switch has now learned the ports associated with the MAC addresses of the first and second interfaces and can use those learned associations to process subsequent data messages.
- The process next receives (at 660) a data message that requires the service provided by the service node cluster.
- The data message is received at the port of the switch that connects to the first interface, in some embodiments.
- The data message, in some embodiments, has a destination address that is the MAC address of the second interface.
- The process then forwards (at 670) the data message that requires the service to the active service node.
- The process does not need to perform an address resolution protocol (ARP) operation to identify the port because the MAC address/port pairing was previously learned as part of learning operation 650.
- After a service node failover, the heartbeat data messages sent subsequent to the failover will be forwarded by the new active service node, and the MAC address/port pairings for the first and second interface MAC addresses will be remapped to the ports connected to the new active service node.
- Operations relating to heartbeat data messages are independent of operations related to data message processing for data messages received from a network connected to the device, and may be omitted in some embodiments. A sketch of this switch behavior follows.
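A sketch of the switch behavior of process 600: learn the source MAC-to-port pairing of every frame (operations 620 and 650), flood broadcasts such as the heartbeats (operation 630), and forward known unicast destinations out of the learned port (operation 670). The frame representation and aging value are assumptions; note how a heartbeat arriving on a new port after failover simply overwrites the old pairing.

```python
# Hypothetical sketch of a learning L2 switch as described in process 600.
import time

BROADCAST = "ff:ff:ff:ff:ff:ff"

class LearningSwitch:
    def __init__(self, num_ports: int, aging_s: float = 300.0):
        self.num_ports = num_ports
        self.aging_s = aging_s
        self.table = {}  # MAC -> (port, last_seen)

    def receive(self, frame: dict, in_port: int) -> list[int]:
        """Return the list of ports the frame is sent out of."""
        # Learn/refresh the source MAC-to-port pairing (operations 620/650).
        self.table[frame["src_mac"]] = (in_port, time.monotonic())
        dst = frame["dst_mac"]
        entry = self.table.get(dst)
        if dst != BROADCAST and entry is not None:
            port, last_seen = entry
            if time.monotonic() - last_seen <= self.aging_s:
                return [port]        # known unicast destination (operation 670)
            del self.table[dst]      # stale entry timed out
        # Broadcast or unknown destination: flood out all other ports (630).
        return [p for p in range(self.num_ports) if p != in_port]
```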
- FIGS. 7A-B conceptually illustrate the flow of data messages in a single-device embodiment 700 for learning MAC addresses.
- Device 701 serves as a gateway device between networks 710 and 720 .
- Data message ‘1’ represents a heartbeat data message sent from an interface 730 A to an interface 730 C (e.g., a port) of a switch 703 A.
- Data message '1' is a heartbeat data message that has (1) a source IP address (Src IP) that is the IP address of interface 730A, (2) a source MAC address (Src MAC) that is the MAC address of interface 730A (e.g., MAC1), (3) a destination IP address (Dst IP) that is the IP address of interface 730B, and (4) a destination MAC address that is a broadcast MAC address (e.g., FF:FF:FF:FF:FF:FF).
- Switch 703A receives data message '1' at interface 730C, learns an association between MAC1 and interface 730C, and forwards the data message as data messages '2' to all other interfaces 730D-F of the switch.
- Data message ‘2’ is received by service nodes 702 A-C and is forwarded to interface 730 G of switch 703 B only by the active service node 702 A as data message ‘3’ because standby service nodes 702 B-C drop data messages received based on their designation as standby service nodes.
- Data messages ‘2’ and ‘3’ maintain the same source and destination addresses as data message ‘1’ in some embodiments.
- Switch 703 B learns an association between MAC 1 and interface 730 G as discussed above in relation to FIG. 6 .
- Data message ‘3’ is then forwarded to all other interfaces of switch 703 B (i.e., interfaces 730 H-J) as data message ‘4.’
- Device 701 receives the heartbeat data message and determines that the service cluster has not failed.
- Standby service nodes 702 B-C drop the data message.
- Thus, an association between the MAC address of interface 730A and interfaces 730C and 730G is learned by switches 703A and 703B, respectively.
- As shown in FIG. 7B, a similar heartbeat data message sent from interface 730B causes an association between the MAC address of interface 730B (e.g., MAC2) and interfaces 730J and 730D to be learned by switches 703B and 703A, respectively.
- Data message ‘5’ represents a heartbeat data message sent from an interface 730 B to an interface 730 J (e.g., a port) of a switch 703 B.
- Data message '5' is a heartbeat data message that has (1) a Src IP that is the IP address of interface 730B, (2) a Src MAC that is the MAC address of interface 730B (e.g., MAC2), (3) a Dst IP that is the IP address of interface 730A, and (4) a destination MAC address that is a broadcast MAC address (e.g., FF:FF:FF:FF:FF:FF).
- Switch 703B receives data message '5' at interface 730J, learns an association between MAC2 and interface 730J, and forwards the data message as data messages '6' to all other interfaces 730G-I of the switch.
- Data message ‘6’ is received by service nodes 702 A-C and is forwarded to interface 730 D of switch 703 A only by the active service node 702 A as data message ‘7’ because standby service nodes 702 B-C drop data messages received based on their designation as standby service nodes.
- Data messages ‘6’ and ‘7’ maintain the same source and destination addresses as data message ‘5’ in some embodiments.
- Switch 703 A learns an association between MAC 2 and interface 730 D as discussed above in relation to FIG. 6 .
- Data message ‘7’ is then forwarded to all other interfaces of switch 703 A (i.e., interfaces 730 C, E, and F) as data message ‘8.’
- Device 701 receives the heartbeat data message and determines that the service cluster has not failed.
- Standby service nodes 702 B-C drop the data message.
- Thus, an association between the MAC address of interface 730B and interfaces 730D and 730J is learned by switches 703A and 703B, respectively.
- FIG. 8 conceptually illustrates the processing of a data message requiring a service provided by the service node cluster 705 after the switches have learned MAC address/interface associations from the data messages depicted in FIGS. 7A-B or in other ways, such as by using an address resolution protocol (ARP) operation.
- Data message ‘9’ represents a data message requiring the service provided by service node cluster 705 .
- Data message ‘9’ has (1) a Src IP that is the IP address of interface 730 A, (2) a Src MAC that is the MAC address of interface 730 A (e.g., MAC 1), (3) a Dst IP that is the IP address of interface 730 B, and (4) a destination MAC address that is a MAC address of interface 730 B (e.g., MAC 2).
- Data message ‘9’ is sent from interface 730 A to interface 730 C of switch 703 A.
- Upon receiving the data message, switch 703A consults the table or other data structure storing the MAC/interface associations to determine that MAC2 (i.e., the destination MAC address) is associated with interface 730D and sends the data message, as data message '10,' to service node 702A using interface 730D.
- Service node 702 A processes the data message, including providing the service provided by the service node cluster 705 and sends the processed data message as data message ‘11’ to interface 730 G of switch 703 B.
- Upon receiving data message '11,' switch 703B consults the table or other data structure storing the MAC/interface associations to determine that MAC2 (i.e., the destination MAC address) is associated with interface 730J and sends the data message, as data message '12,' to interface 730B using interface 730J. Return data messages are handled similarly.
- FIGS. 9 A-B conceptually illustrate the path of a data message after a failover, before and after a subsequent heartbeat message is sent from an interface 730 of device 701 .
- FIG. 9 A illustrates the failure of service node 702 A and service node 702 B being designated as the new active service node.
- Data message '13' is sent from interface 730A with the same Src IP, Src MAC, Dst IP, and Dst MAC as data message '9.'
- Switch 703A sends data message '14' to service node 702A based on the previously learned association between MAC2 and interface 730D; however, service node 702A has failed and the data message is lost.
- Without the broadcast heartbeat data messages, the data messages in both directions would continue to be dropped (i.e., black-holed) until a timeout of the learned MAC address/interface associations, at which point a new learning operation (e.g., an ARP operation) would be performed, indicating that the MAC address should be associated with the interface connected to the new active service node.
- As shown in FIG. 9B, when a subsequent heartbeat data message is sent, switch 703B once again floods the data message as data messages '16,' as described in relation to data message '6,' and the new active service node 702B receives and forwards the data message to switch 703A (not depicted). This causes switch 703A to update its MAC address/interface table or other data structure to indicate an association between MAC2 and interface 730E connected to service node 702B.
- Heartbeat data messages are sent at time intervals that are shorter than a timeout interval for learned MAC address/interface associations so that in the case of service node failover, the service is restored based on the shorter heartbeat data message interval rather than the longer timeout interval for learned MAC address/interface associations.
- FIGS. 10A-B conceptually illustrate an embodiment in which the heartbeat data messages are used to detect failure of a service node cluster, as discussed in relation to FIG. 4.
- FIG. 10A illustrates the same elements as FIG. 3; however, in FIG. 10A two of the three service nodes 302 have failed (i.e., 302A and 302C).
- A first heartbeat data message, data message '1,' is sent from interface 330A to interface 330B.
- Data message '1' traverses switch 303A, service node 302B, and switch 303B before arriving at interface 330B.
- A heartbeat data message, data message '2,' is sent from interface 330B to interface 330A, traversing switch 303B, service node 302B, and switch 303A before being received by device 301A at interface 330A.
- Data messages '3' and '4' represent the rest of the datapath for the heartbeat data messages. These heartbeat data messages are used to determine that the service node cluster 305 is still functioning (e.g., still providing the service).
- FIG. 10 B illustrates a heartbeat data message being unable to reach a destination interface after the failure of all the service nodes 302 in service node cluster 305 .
- Data messages ‘5’ and ‘6’ represent heartbeat data messages that are sent by interface 330 A and 330 B respectively.
- Data messages ‘5’ and ‘6’ arrive at switches 303 A and 303 B respectively, are forwarded to all the service nodes 302 A-C, as data messages ‘7’ and ‘8’ respectively, based on the broadcast destination MAC address, but are not forwarded towards the other interface because the service nodes have failed.
- In some embodiments, the failure of the service nodes is based on a connection failure between the switches and the service nodes, or between an interface of the devices 301 and a switch 303.
- Failure detection would function in the same way between two interfaces of a single device.
- Failure detection may also be based on the fact that data messages the device sends out of one interface are not received at the other interface, which may enable faster failure detection than in a system that is not aware of when heartbeat data messages are sent by the other device.
- Devices 301 determine that the service node cluster 305 has failed and perform a default operation for data messages requiring the service provided by the service node cluster 305.
- In some embodiments, the default operation is to forward the data messages without providing the service (e.g., a fail-open condition), while in other embodiments the default operation is to drop the data messages requiring the service (e.g., a fail-closed condition) until the service is restored.
- A fail-open condition may be more appropriate for services such as load balancing, where security is not an issue, while fail-closed may be more appropriate for a firewall operation relating to security or a network address translation (NAT) service, which generally requires state information that is maintained by the service node providing the service. A sketch of this choice follows.
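A minimal sketch of that per-service choice, with an illustrative (not prescribed) mapping of services to fail-open or fail-closed behavior:

```python
# Hypothetical sketch of the default operation applied while the service node
# cluster is failed: fail open (forward without the service) for, e.g., load
# balancing, and fail closed (drop) for, e.g., firewall or NAT. The mapping
# below is illustrative, not mandated by the patent.
DEFAULT_OPERATION = {
    "load_balancer": "fail_open",
    "firewall": "fail_closed",
    "nat": "fail_closed",
}

def handle_while_failed(msg, service: str):
    if DEFAULT_OPERATION.get(service, "fail_closed") == "fail_open":
        return msg   # forward toward its destination without the service
    return None      # drop until the service is restored
```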
- FIG. 11 illustrates an embodiment including gateway device 701 A and gateway device 701 B that each act at a border between network 710 (e.g., an external network) and network 720 (e.g., an internal/logical network).
- The elements of FIG. 11 act as the similarly numbered elements of FIGS. 7A-B, with the additional designation of one of the devices 701 as the active gateway device (e.g., gateway device 701A).
- In some embodiments, the active gateway device 701A receives all data messages exchanged between the networks 710 and 720.
- In some embodiments, the gateway devices also execute centralized aspects of a logical router for a logical network implemented in network 720. In some embodiments using a centralized logical router in the gateway devices, only one gateway device provides the centralized logical router services.
- FIG. 12 conceptually illustrates an electronic system 1200 with which some embodiments of the invention are implemented.
- The electronic system 1200 can be used to execute any of the control, virtualization, or operating system applications described above.
- The electronic system 1200 may be a computer (e.g., a desktop computer, personal computer, tablet computer, server computer, mainframe, or blade computer), phone, PDA, or any other sort of electronic device.
- Such an electronic system includes various types of computer readable media and interfaces for various other types of computer readable media.
- Electronic system 1200 includes a bus 1205 , processing unit(s) 1210 , a system memory 1225 , a read-only memory (ROM) 1230 , a permanent storage device 1235 , input devices 1240 , and output devices 1245 .
- The bus 1205 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of the electronic system 1200.
- The bus 1205 communicatively connects the processing unit(s) 1210 with the read-only memory 1230, the system memory 1225, and the permanent storage device 1235.
- The processing unit(s) 1210 retrieve instructions to execute and data to process in order to execute the processes of the invention.
- The processing unit(s) may be a single processor or a multi-core processor in different embodiments.
- The read-only memory (ROM) 1230 stores static data and instructions that are needed by the processing unit(s) 1210 and other modules of the electronic system.
- The permanent storage device 1235 is a read-and-write memory device. This device is a non-volatile memory unit that stores instructions and data even when the electronic system 1200 is off. Some embodiments of the invention use a mass-storage device (such as a magnetic or optical disk and its corresponding disk drive) as the permanent storage device 1235.
- The system memory 1225 is a read-and-write memory device. However, unlike storage device 1235, the system memory is a volatile read-and-write memory, such as random access memory.
- The system memory stores some of the instructions and data that the processor needs at runtime.
- In some embodiments, the invention's processes are stored in the system memory 1225, the permanent storage device 1235, and/or the read-only memory 1230. From these various memory units, the processing unit(s) 1210 retrieve instructions to execute and data to process in order to execute the processes of some embodiments.
- The bus 1205 also connects to the input and output devices 1240 and 1245.
- The input devices enable the user to communicate information and select commands to the electronic system.
- The input devices 1240 include alphanumeric keyboards and pointing devices (also called "cursor control devices").
- The output devices 1245 display images generated by the electronic system.
- The output devices include printers and display devices, such as cathode ray tubes (CRT) or liquid crystal displays (LCD). Some embodiments include devices such as a touchscreen that function as both input and output devices.
- Bus 1205 also couples electronic system 1200 to a network 1265 through a network adapter (not shown).
- The computer can be a part of a network of computers (such as a local area network ("LAN"), a wide area network ("WAN"), or an Intranet), or a network of networks (such as the Internet). Any or all components of electronic system 1200 may be used in conjunction with the invention.
- Some embodiments include electronic components, such as microprocessors, storage, and memory, that store computer program instructions in a machine-readable or computer-readable medium (alternatively referred to as computer-readable storage media, machine-readable media, or machine-readable storage media).
- Some examples of such computer-readable media include RAM, ROM, read-only compact discs (CD-ROM), recordable compact discs (CD-R), rewritable compact discs (CD-RW), read-only digital versatile discs (e.g., DVD-ROM, dual-layer DVD-ROM), a variety of recordable/rewritable DVDs (e.g., DVD-RAM, DVD-RW, DVD+RW, etc.), flash memory (e.g., SD cards, mini-SD cards, micro-SD cards, etc.), magnetic and/or solid state hard drives, read-only and recordable Blu-Ray® discs, ultra density optical discs, any other optical or magnetic media, and floppy disks.
- The computer-readable media may store a computer program that is executable by at least one processing unit and includes sets of instructions for performing various operations.
- Examples of computer programs or computer code include machine code, such as is produced by a compiler, and files including higher-level code that are executed by a computer, an electronic component, or a microprocessor using an interpreter.
- While the above discussion primarily refers to microprocessors or multi-core processors that execute software, some embodiments are performed by one or more integrated circuits, such as application specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs). In some embodiments, such integrated circuits execute instructions that are stored on the circuit itself.
- As used in this specification, the terms "computer", "server", "processor", and "memory" all refer to electronic or other technological devices. These terms exclude people or groups of people.
- For the purposes of the specification, the terms "display" or "displaying" mean displaying on an electronic device.
- As used in this specification, the terms "computer readable medium," "computer readable media," and "machine readable medium" are entirely restricted to tangible, physical objects that store information in a form that is readable by a computer. These terms exclude any wireless signals, wired download signals, and any other ephemeral signals.
- This specification refers throughout to computational and network environments that include virtual machines (VMs). However, VMs are merely one example of data compute nodes (DCNs) or data compute end nodes, also referred to as addressable nodes.
- DCNs may include non-virtualized physical hosts, virtual machines, containers that run on top of a host operating system without the need for a hypervisor or separate operating system, and hypervisor kernel network interface modules.
- VMs, in some embodiments, operate with their own guest operating systems on a host machine using resources of the host machine virtualized by virtualization software (e.g., a hypervisor, virtual machine monitor, etc.).
- The tenant (i.e., the owner of the VM) can choose which applications to operate on top of the guest operating system.
- Some containers are constructs that run on top of a host operating system without the need for a hypervisor or separate guest operating system.
- The host operating system uses name spaces to isolate the containers from each other and therefore provides operating-system-level segregation of the different groups of applications that operate within different containers.
- This segregation is akin to the VM segregation that is offered in hypervisor-virtualized environments that virtualize system hardware, and thus can be viewed as a form of virtualization that isolates different groups of applications that operate in different containers.
- Such containers are more lightweight than VMs.
- A hypervisor kernel network interface module, in some embodiments, is a non-VM DCN that includes a network stack with a hypervisor kernel network interface and receive/transmit threads.
- One example of a hypervisor kernel network interface module is the vmknic module that is part of the ESXi™ hypervisor of VMware, Inc.
- It should be understood that while the specification refers to VMs, the examples given could be any type of DCN, including physical hosts, VMs, non-VM containers, and hypervisor kernel network interface modules.
- The example networks could include combinations of different types of DCNs in some embodiments.
- A number of the figures (e.g., FIGS. 2 and 4-6) conceptually illustrate processes.
- The specific operations of these processes may not be performed in the exact order shown and described.
- The specific operations may not be performed in one continuous series of operations, and different specific operations may be performed in different embodiments.
- Furthermore, each process could be implemented using several sub-processes, or as part of a larger macro process.
- Thus, one of ordinary skill in the art would understand that the invention is not to be limited by the foregoing illustrative details, but rather is to be defined by the appended claims.
Abstract
Some embodiments provide a method for detecting a failure of a layer 2 (L2) bump-in-the-wire service at a device. In some embodiments, the device sends heartbeat signals to a second device connected to L2 service nodes in order to detect failure of the L2 service (e.g., a failure of all the service nodes). In some embodiments, the heartbeat signals are unidirectional heartbeat signals (e.g., a unidirectional bidirectional-forwarding-detection (BFD) session) sent from each device to the other. The heartbeat signals, in some embodiments, use a broadcast MAC address in order to reach the current active L2 service node in the case of a failover (i.e., an active service node failing and a standby service node becoming the new active service node). The unidirectional heartbeat signals are also used, in some embodiments, to decrease the time between a failover and data messages being forwarded to the new active service node.
Description
- In a software defined network, a set of gateway devices (e.g., Edge Nodes) connecting the internal virtualized network and an external network may have a
layer 2 bump in the wire service (i.e., a service that does not change the layer 2 addresses of a processed data message) inserted in the processing pipeline. Failure of the layer 2 service is difficult to detect in some instances. When a backup layer 2 service node is provided and a primary layer 2 service node fails, the gateway device must begin sending the data messages to the backup layer 2 service node. A method for learning of the failure and quickly redirecting data messages to the backup layer 2 service node is necessary. - Some embodiments provide a method for providing a layer 2 (L2) bump-in-the-wire service at a gateway device (e.g., a layer 3 (L3) gateway device) at the edge of a logical network. The method, in some embodiments, establishes a connection from a first interface of the gateway device to a service node that provides the L2 service. The method also establishes a connection from a second interface of the gateway device to the L2 service node. The method then sends data messages received by the gateway device that require the L2 service to the service node using the first interface. In some embodiments, north-to-south traffic (i.e., from the external network to the logical network) is sent to the service node using the first interface while the south-to-north traffic is sent to the service node using the second interface.
- Some embodiments provide a method for applying different policies at the service node for different tenants of a datacenter. Data messages received for a particular tenant that require the L2 service are encapsulated or marked as belonging to the tenant before being sent to the service node. Based on the encapsulation or marking, the service node provides the service according to policies defined for the tenant.
- The first and second interfaces of the gateway device have different internet protocol (IP) addresses and media access control (MAC) addresses in some embodiments. The IP addresses, in some embodiments, are not used to communicate with devices of external networks and can be internal IP addresses used within the logical network. The next hop MAC address for a data message requiring the L2 service sent from the first interface will be the MAC address of the second interface and will arrive at the second interface with the destination MAC address unchanged by the service node. In some embodiments, interfaces for connecting to the L2 service are disabled on standby gateway devices of the logical network and are enabled on only an active gateway device.
- Connections to the service node, in some embodiments, are made through
layer 2 switches. In some embodiments, each interface connects to a different switch connected to the service node. The service node, in some embodiments, is a cluster of service nodes in an active-standby configuration that each connect to the same pair of switches. In some embodiments of an active-standby configuration, an active service node provides the L2 service while the standby service nodes drop all data messages that they receive. Failover between the active and standby service nodes is handled by the L2 service nodes with no involvement of the L3 gateway device in some embodiments. - The gateway device, in some embodiments, sends heartbeat signals between the two interfaces connected to the L2 service nodes in order to detect failure of the L2 service (e.g., a failure of all the service nodes). In some embodiments, the heartbeat signals are unidirectional heartbeat signals (e.g., a unidirectional bidirectional-forwarding-detection (BFD) session) sent from each interface to the other. The heartbeat signals, in some embodiments, use the IP address of the destination interface as the destination IP address, but use a broadcast MAC address in order to reach the current active L2 service node in the case of a failover (i.e., an active service node failing and a standby service node becoming the new active service node).
- Additional embodiments utilize the unidirectional broadcast heartbeat signals to decrease the time between a failover and data messages being forwarded to the new active service node as well as detect a failure of the service node cluster. In embodiments with an L2 bump-in-the-wire service between any two interfaces (e.g., between interfaces of two devices, or between two interfaces of a same device) an architecture using different L2 switches between each interface and the service node cluster is used in conjunction with the unidirectional broadcast heartbeat signals to reduce the time to redirect data messages to the new active service node.
- In some embodiments, the switches connecting the interfaces to the service node cluster associate MAC addresses with particular ports of the switch based on incoming data messages. For example, a data message received at the switch on a first port with a source MAC address "MAC1" (e.g., a 48-bit MAC address of the first interface) will cause the switch to associate the first port with the MAC address MAC1, and future data messages with destination address MAC1 will be sent out of the switch from the first port. By sending the heartbeat data messages to the other interface at time intervals shorter than the timeout of a MAC address association (i.e., the time interval before an association between a MAC address and a port is removed), the ports of the switches attached to the active service node can be associated with the correct MAC addresses for the two interfaces more quickly. As a standby node becomes an active node, the broadcast heartbeat data messages will be received and processed by the newly-active service node, and the switches will associate the ports connected to the newly-active service node with the appropriate MAC addresses of the two interfaces.
- The preceding Summary is intended to serve as a brief introduction to some embodiments of the invention. It is not meant to be an introduction or overview of all inventive subject matter disclosed in this document. The Detailed Description that follows and the Drawings that are referred to in the Detailed Description will further describe the embodiments described in the Summary as well as other embodiments. Accordingly, to understand all the embodiments described by this document, a full review of the Summary, Detailed Description, and the Drawings is needed. Moreover, the claimed subject matters are not to be limited by the illustrative details in the Summary, Detailed Description, and the Drawings, but rather are to be defined by the appended claims, because the claimed subject matters can be embodied in other specific forms without departing from the spirit of the subject matters.
- The novel features of the invention are set forth in the appended claims. However, for purposes of explanation, several embodiments of the invention are set forth in the following figures.
-
FIG. 1 conceptually illustrates a system in which some of the embodiments of the invention are performed. -
FIG. 2 conceptually illustrates a process to establish two connections from a device to a layer 2 bump-in-the-wire service node for the service node to provide a service to data messages. -
FIG. 3 conceptually illustrates an embodiment in which an L2 service is provided between two devices by a cluster of service nodes. -
FIG. 4 conceptually illustrates a process for detecting failure using the heartbeat signals. -
FIG. 5 conceptually illustrates a process performed by a service node in some embodiments. -
FIG. 6 conceptually illustrates a process performed by the switches, in some embodiments, to facilitate failover without the device, or devices, that send data messages to the service node cluster being aware of a service node cluster failover operation. -
FIGS. 7A-B conceptually illustrate the flow of data messages in a single device embodiment for learning MAC addresses. -
FIG. 8 conceptually illustrates the processing of a data message requiring a service provided by the service node cluster after the switches have learned MAC address/interface associations from the data messages depicted in FIGS. 7A-B or in other ways, such as by using an address resolution protocol (ARP) operation. -
FIGS. 9A-B conceptually illustrate the path of a data message after a failover, before and after a subsequent heartbeat message is sent from an interface of a device. -
FIGS. 10A-B conceptually illustrate an embodiment in which the heartbeat data messages are used to detect failure of a service node cluster as discussed in relation to FIG. 4. -
FIG. 11 illustrates an embodiment including gateway devices in an active-standby configuration at a border between two networks. -
FIG. 12 conceptually illustrates an electronic system with which some embodiments of the invention are implemented. - In the following description, numerous details are set forth for the purpose of explanation. However, one of ordinary skill in the art will realize that the invention may be practiced without the use of these specific details. In other instances, well-known structures and devices are shown in block diagram form in order not to obscure the description of the invention with unnecessary detail.
- Some embodiments provide a method for providing a layer 2 (L2) bump-in-the-wire service at a gateway device (e.g., a layer 3 (L3) gateway device) at the edge of a logical network. The method, in some embodiments, establishes a connection from a first interface of the gateway device to a service node that provides the L2 service. The method also establishes a connection from a second interface of the gateway device to the L2 service node. The method then sends data messages received by the gateway device that require the L2 service to the service node using the first interface. In some embodiments, north-to-south traffic (i.e., from the external network to the logical network) is sent to the service node using the first interface while the south-to-north traffic is sent to the service node using the second interface.
- As used in this document, the term data packet, packet, data message, or message refers to a collection of bits in a particular format sent across a network. It should be understood that the term data packet, packet, data message, or message may be used herein to refer to various formatted collections of bits that may be sent across a network, such as Ethernet frames, IP packets, TCP segments, UDP datagrams, etc. While the examples below refer to data packets, packets, data messages, or messages, it should be understood that the invention should not be limited to any specific format or type of data message. Also, as used in this document, references to L2, L3, L4, and L7 layers (or
layer 2, layer 3, layer 4, layer 7) are references to the second data link layer, the third network layer, the fourth transport layer, and the seventh application layer of the OSI (Open System Interconnection) layer model, respectively. -
FIG. 1 conceptually illustrates a system in which some of the embodiments of the invention are performed. FIG. 1 depicts a gateway device 101 that serves as the gateway between a network 110 (e.g., an untrusted network) and a set of tenant networks 120 (e.g., a set of trusted networks that are logical networks in some embodiments). In some embodiments, the gateway device implements a tier 0 (T0) logical router that is shared by multiple tenant networks, each of which connects to the T0 logical router through a unique interface (e.g., a logical interface) using a tenant (or tier 1 (T1)) logical router. The gateway device 101 also includes a set of interfaces 130 used to connect to a service node 102 that provides a layer 2 (L2) bump-in-the-wire service (e.g., a firewall, load balancing, network address translation (NAT), or virtual private network (VPN) service) through switches 103. - In some embodiments,
gateway device 101 allows for per-tenant policies to be applied by the service node 102 by appending a context (e.g., encapsulation or other marking) to a data message sent to service node 102 with a tenant identifier (e.g., a virtual local area network (VLAN) tag that is associated with a particular tenant's policies). In FIG. 1, service node 102 is shown with a set of three logical interfaces, labeled 1-3 (corresponding to tenants 1-3), each connected to one interface of the two switches 103 (e.g., using VLAN trunking). The logical interfaces, in some embodiments, correspond to a single physical interface of the service node 102. Service node 102, in some embodiments, represents a cluster of service nodes that provide the L2 service. In some embodiments utilizing a cluster of service nodes, the service nodes are configured in an active-standby configuration, with one service node performing the L2 service and the additional service nodes in the cluster acting as standby service nodes in case the active service node fails. -
FIG. 1 also depicts a datapath for data messages requiring the L2 service (depicted as the dotted line between two interfaces of gateway device 101). The depiction ignores the portion of the datapath outside of the gateway device, as the data message may be received from, and destined for, any of the networks 110 or 120A-C. Gateway device 101 is depicted as a gateway device, but one of ordinary skill in the art would understand that the device, in some embodiments, is at a different point in the network that requires an L2 bump-in-the-wire service. -
Gateway device 101, in some embodiments, is a host computing machine that executes an edge node program. In some embodiments, the edge node program includes at least one managed forwarding element (e.g., a managed routing element, managed switching element, or both) that implements a set of logical forwarding elements of a set of logical networks for a set of tenants. Further details relating to implementing logical networks using gateway devices (e.g., edge nodes) are found in U.S. Pat. No. 9,787,605, which is hereby incorporated by reference. Further details of the elements of FIG. 1 are described below in the discussion of FIG. 2. -
FIG. 2 conceptually illustrates a process 200 to establish two connections from a device (e.g., gateway device 101) to a layer 2 (L2) bump-in-the-wire service node for the service node to provide a service to data messages. In some embodiments, process 200 is performed by the device (e.g., gateway device 101). Process 200 begins by establishing (at 210) a connection to the L2 service node from a first interface 130 of the device. The first interface has a first internet protocol (IP) address which, in some embodiments, is a private IP address that is not used by external networks. In some embodiments, the connection from the first interface is made through a first layer 2 switch (e.g., switch 103A). A layer 2 switch, in some embodiments, learns associations between ports (e.g., interface 130) of the switch and media access control (MAC) addresses of the devices connected to each port from a source MAC address field in the header of the data messages received at the port. In some embodiments, the first switch is a logical switch that is implemented by a physical switch (e.g., a virtual switch or a hardware switch). - The process continues by establishing (at 220) a second connection to the L2 service node from a second interface of the device. The second interface has a second internet protocol (IP) address, different from that of the first interface, which, in some embodiments, is a private IP address that is not used by external networks. In some embodiments, the connection from the second interface is made through a
second layer 2 switch. The second layer 2 switch also learns MAC address/port pairings from received data messages in some embodiments. The second switch, in some embodiments, is a logical switch that is implemented by either a virtual switch or a hardware switch. - Once connections are established from the device, the process receives (at 230) a data message from another device (e.g., a physical router, or a T1 logical router for a specific tenant). The data message, in some embodiments, is a data message exchanged between an external network and a tenant logical network for which the device serves as a gateway device. In some embodiments, the data message is a data message exchanged between an external network and a device in a datacenter for which the device acts as a gateway device. The data message, in some embodiments, is directed from a device in a tenant logical network to another device in a same datacenter or network for which the device acts as a gateway device (e.g., in a same tenant's logical network or a different tenant's logical network). The datacenter, in some embodiments, implements a set of logical networks for a set of tenants. In some embodiments, the data message is received on a third interface of the device. The third interface, in some embodiments, has an IP address that is advertised to external networks by the device.
- After receiving the data message, the process determines (at 240) whether the data message requires the L2 bump-in-the-wire service. In some embodiments, the determination is based on a value in a set of header fields of the received data message. The value that the determination is based on may be any combination of a source or destination IP or MAC address, a protocol, and a port number. In some embodiments, a set of header fields are associated specifically with the L2 service (e.g., a network address translation (NAT) service or load balancing (LB) service may be addressable by a particular set of IP addresses, or may be associated with an IP subnet for which they provide the service). The determination, in some embodiments, is made using a routing entry (e.g., a policy-based routing entry) that indicates a certain IP address or range of IP addresses should be forwarded to the MAC of the second interface from the first interface. The range of IP addresses, in some embodiments, is associated with a network for which the L2 service is required. In some embodiments, the policy-based routing entry identifies values in a combination of fields used to determine that a received data message should be forwarded to the MAC of the second interface from the first interface. The fields that may be used to specify data messages that should be forwarded to the MAC of the second interface from the first interface, in some embodiments, include a source IP address, destination IP address, source MAC address, destination MAC address, source port, destination port, and protocol.
- The determination (at 240) whether the data message requires the L2 bump-in-the-wire service, in some embodiments, also takes into account the logical network from which the data message was received. In some embodiments, each tenant logical network implements a
tier 1 logical router that connects to a tier 0 logical router executing on a gateway device through a different logical interface. For data messages received on a particular logical interface, some embodiments apply logical-interface-specific (e.g., tenant-specific) policies to determine (at 240) whether the data message requires the service. The tenant, in some embodiments, defines at least two "zones" that include different devices or interfaces and requires sets of services (e.g., services provided by a service node) for data messages between each pair of zones.
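As an illustration of the determination at 240, the following Python sketch shows how a policy-based routing lookup over a five-tuple plus the ingress logical interface might be structured. The entry fields, helper names, and example addresses are hypothetical assumptions; the patent does not prescribe any particular implementation.

```python
from dataclasses import dataclass
from ipaddress import IPv4Network, ip_address, ip_network
from typing import List, Optional

@dataclass
class PbrEntry:
    """One policy-based routing entry; a None field acts as a wildcard."""
    src_net: Optional[IPv4Network] = None
    dst_net: Optional[IPv4Network] = None
    protocol: Optional[int] = None      # IP protocol number, e.g., 6 = TCP
    dst_port: Optional[int] = None
    tenant_if: Optional[str] = None     # logical interface the flow arrived on

    def matches(self, msg: dict) -> bool:
        if self.src_net and ip_address(msg["src_ip"]) not in self.src_net:
            return False
        if self.dst_net and ip_address(msg["dst_ip"]) not in self.dst_net:
            return False
        if self.protocol is not None and msg["protocol"] != self.protocol:
            return False
        if self.dst_port is not None and msg.get("dst_port") != self.dst_port:
            return False
        if self.tenant_if is not None and msg["in_if"] != self.tenant_if:
            return False
        return True

def requires_l2_service(msg: dict, entries: List[PbrEntry]) -> bool:
    # A match means: redirect the data message out the first interface with
    # the second interface's MAC as the next-hop (destination) MAC address.
    return any(entry.matches(msg) for entry in entries)

# Example: tenant 1 requires the service for all TCP traffic to 10.1.0.0/16
entries = [PbrEntry(dst_net=ip_network("10.1.0.0/16"), protocol=6, tenant_if="tenant1")]
msg = {"src_ip": "192.0.2.7", "dst_ip": "10.1.3.4", "protocol": 6,
       "dst_port": 443, "in_if": "tenant1"}
assert requires_l2_service(msg, entries)
```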
- If the process determines (at 240) that the data message does require the L2 service, the process forwards (at 260) the data message out one of the interfaces connected to the L2 service node to be received at the other interface connected to the L2 service node. In some embodiments, north-south traffic coming from an external network into a logical network for which the device is a gateway device is sent to the service node from the first interface to be received at the second interface while south-north traffic from a logical network to the external network is sent to the service node from the second interface to be received by the first interface.
- In some embodiments, forwarding (at 260) the data message includes an encapsulation or other marking operation to identify a particular tenant. For example, referring to
FIG. 1, a data message received from logical interface '1' of gateway device 101 that requires the service provided by service node 102 is encapsulated so that it will be received at logical interface '1' of service node 102. Based on the encapsulation, service node 102 applies policies specific to tenant 1. Data messages sent between interfaces use the MAC address associated with the destination interface of the device, which remains unchanged by the processing performed by the L2 service node. - After forwarding (at 260) the data message out of one interface connected to the L2 service node, the process receives (at 270) the data message at the other interface. In some embodiments, the received data message includes an encapsulation or marking associated with a specific tenant. The process then processes (at 250) the received data message and forwards the data message towards its destination. In some embodiments, multiple L2 bump-in-the-wire services are independently provided in a similar fashion.
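The following sketch, using the Scapy packet library, illustrates one way the tenant marking could be realized as a VLAN tag inserted between the rewritten Ethernet header and the unchanged L3 payload. The tenant-to-VLAN mapping and all addresses are illustrative assumptions, not values taken from the patent.

```python
from scapy.all import Dot1Q, Ether, IP

TENANT_VLAN = {"tenant1": 101, "tenant2": 102, "tenant3": 103}  # hypothetical IDs

def mark_for_tenant(frame, tenant, service_if_mac):
    """Re-address the frame toward the device's other service interface and
    insert a VLAN tag identifying the tenant; the L3 payload is untouched."""
    return (Ether(src=frame[Ether].src, dst=service_if_mac)
            / Dot1Q(vlan=TENANT_VLAN[tenant])
            / frame[Ether].payload)

# Illustrative use: a tenant-1 packet headed to the bump-in-the-wire service
original = Ether(src="02:00:00:00:00:01", dst="02:00:00:00:00:02") / \
           IP(src="10.1.3.4", dst="192.0.2.7")
tagged = mark_for_tenant(original, "tenant1", service_if_mac="02:00:00:00:00:02")
```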
-
FIG. 3 conceptually illustrates an embodiment in which an L2 service is provided between two devices 301 by a service node in a cluster of service nodes 305. Device 301A is depicted as including router 310 and switch 303A which, in some embodiments, are software executing on device 301A. Router 310 and switch 303A, in some embodiments, implement logical forwarding elements. In some embodiments, device 301A is a gateway device connecting an internal network to an external network. The internal network is a physical network implementing a logical network in some embodiments, with device 301A implementing the logical forwarding elements using router 310 and switch 303A. - Connections to the service nodes 302, in the depicted embodiment, are made through
layer 2 switches 303. The different devices 301 connect to the cluster of service nodes 302 through different switches 303. The service nodes 302 are depicted as a cluster of service nodes 305 in an active-standby configuration that each connect to the same pair of switches. In some embodiments of an active-standby configuration, an active service node provides the L2 service while the standby service nodes drop all data messages that they receive. Failover between the active and standby service nodes is handled by the L2 service nodes with no involvement of devices 301 in some embodiments. - Devices 301, in some embodiments, send heartbeat signals between the two interfaces connected to the L2 service nodes in order to detect failure of the L2 service (e.g., a failure of all the service nodes). In some embodiments, the heartbeat signals are unidirectional heartbeat signals (e.g., a unidirectional bidirectional-forwarding-detection (BFD) session) sent from each interface to the other. The heartbeat signals, in some embodiments, use the IP address of the destination interface as the destination IP address, but use a broadcast MAC address in order to reach the current active L2 service node in the case of a failover (i.e., an active service node failing and a standby service node becoming the new active service node).
-
FIG. 4 conceptually illustrates a process 400 for detecting failure using the heartbeat signals. Process 400, in some embodiments, is executed by at least one device 301 and, in some embodiments, is executed by each device 301. Process 400 begins (at 410) by establishing a unidirectional session between the interface (e.g., 330A) that connects to the cluster of service nodes and the interface (e.g., 330B) of the device attached to the other switch connected to the cluster of service nodes. - The process subsequently sends (at 420) a heartbeat data message to the second device. In some embodiments,
device 301A directs the data message to the IP address of the interface of the second device (e.g., 330B) using a broadcast MAC address. The heartbeat data message uses the MAC address of the sending interface of the first device as its source MAC address; the switches connected to the service nodes learn this address and associate it with the switch interfaces on which the heartbeat data message is received. - The process receives (at 430) a heartbeat data message from the second device. In some embodiments, the heartbeat messages are sent and received at intervals that are shorter than a timeout of a learned MAC address/interface pairing in the switches (e.g., 303). In some embodiments, the received message is sent from the second device directed to the IP address of the first interface using a broadcast MAC address.
- At 440, the process determines that the service nodes (e.g., 302) have failed. In some embodiments, the determination is made based on the time elapsed since a last heartbeat message was received. The elapsed time used to determine failure of the service nodes (e.g., 302), in some embodiments, is based on a multiple of the time between heartbeat signals (e.g., 5 heartbeat intervals) or on a failover time for the service nodes in a service node cluster.
- Upon determining (at 440) that a service node cluster has failed, the process performs (at 450) a default operation for subsequent packets until the service is restored. In some embodiments, the default operation is forwarding all data messages to their destination without sending them to be provided the L2 service. In other embodiments, the default operation is dropping all data messages that require the L2 service until the L2 service is restored. In some embodiments, the device continues to send heartbeat data messages and determines that the service has been restored when a heartbeat is received from the other device or interface.
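A minimal sketch of the failure timer and default operation of process 400 follows, assuming an illustrative one-second heartbeat interval and a five-interval detection multiplier (the patent leaves both values open):

```python
import time

HEARTBEAT_INTERVAL = 1.0   # seconds between heartbeats (assumed value)
DETECT_MULTIPLIER = 5      # missed heartbeats before declaring failure (assumed)

class ServiceLivenessMonitor:
    """Tracks heartbeats received on the opposite interface (operation 430)
    and applies the default operation chosen for a failed service (at 450)."""

    def __init__(self, fail_open: bool):
        self.fail_open = fail_open
        self.last_rx = time.monotonic()
        self.service_up = True

    def on_heartbeat_received(self):
        self.last_rx = time.monotonic()
        self.service_up = True          # a heartbeat also signals restoration

    def poll(self):
        elapsed = time.monotonic() - self.last_rx
        if elapsed > HEARTBEAT_INTERVAL * DETECT_MULTIPLIER:
            self.service_up = False     # determination of failure (at 440)

    def handle(self, msg, forward, redirect_to_service):
        self.poll()
        if self.service_up:
            redirect_to_service(msg)    # normal case: steer through the service
        elif self.fail_open:
            forward(msg)                # fail-open: deliver without the service
        # fail-closed: fall through and drop the message
```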
- Additional embodiments utilize the unidirectional broadcast heartbeat signals to decrease the time between a failover and data messages being forwarded to the new active service node as well as detect a failure of the service node cluster. In embodiments with an L2 bump-in-the-wire service between any two interfaces (e.g., between interfaces of two devices, or between two interfaces of a same device), an architecture using different L2 switches between each interface and the service node cluster is used in conjunction with the unidirectional broadcast heartbeat signals to reduce the time to redirect data messages to the new active service node.
FIGS. 5 and 6 conceptually illustrate processes performed by a service node and a switch, respectively, in some such embodiments. -
FIG. 5 conceptually illustrates a process 500 performed by a service node in some embodiments. Process 500 begins by receiving (at 510) data messages sent from one of two interfaces in communication with each other through the service node cluster including the service node performing process 500. When the service node is a standby service node, the data messages are heartbeat data messages that are addressed to an IP address associated with either one of the two interfaces of the device or devices in communication with the service node and a broadcast MAC address. In some embodiments, the heartbeat data messages are received from one of two interfaces connected to the service node cluster through a pair of switches as in FIG. 3. When the service node is an active service node, the data messages include data messages requiring the service provided by the service node cluster. In some embodiments, a data message is received with a context (e.g., an encapsulation or other marking) that is understood by the service node to identify a particular set of policies to apply to the data message. The context, in some embodiments, identifies a set of policies that are for a specific tenant. - The process then processes (at 520) the data messages at the service node. When the service node is designated as a standby service node, processing a data message, in some embodiments, comprises dropping the data message. Dropping data messages at the standby service node avoids redundant processing and, in embodiments providing a stateful service, misprocessing based on a lack of current state information. When the service node is designated, or acting, as an active service node, processing a heartbeat data message includes forwarding the data message to the destination interface without alteration.
- Processing the data message at an active node, in some embodiments, includes applying tenant-specific policies to the data message. The tenant-specific policies are identified based on a context appended to the data message by the device (e.g., a gateway device) that directs the data message to the service node. Processing a data message requiring the service at an active service node includes providing the service and forwarding the data message to the destination IP address without altering the source and destination MAC addresses of the received data message.
- A service
node performing process 500, in some embodiments, acts as a standby service node at some times and, if an active service node fails, acts (or is designated) as the active service node at other times. The failover process between service nodes, in some embodiments, is independent of the devices sending the heartbeat data messages. In some embodiments, the service node cluster has a control or management computer or cluster that determines and designates the active service node. The control/management computer, in some embodiments, maintains its own failure detection protocol (e.g., BFD) to detect the health of the service nodes in a service node cluster and initiate a failover process. -
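Process 500 could be sketched as follows, with the active/standby designation supplied externally by the cluster's control or management computer described above; the frame representation, policy lookup, and tenant keying are illustrative assumptions:

```python
BROADCAST_MAC = "ff:ff:ff:ff:ff:ff"

class ServiceNode:
    """Sketch of process 500; 'active' is set by the cluster's own failover
    mechanism, independently of the devices sending heartbeats."""

    def __init__(self, policies_by_tenant):
        self.active = False
        self.policies = policies_by_tenant   # e.g., {vlan_id: policy object}

    def process(self, frame):
        """Return the frame to emit on the opposite port, or None to drop (520)."""
        if not self.active:
            return None                       # standby nodes drop everything
        if frame["dst_mac"] == BROADCAST_MAC:
            return frame                      # heartbeat: forwarded unaltered
        policy = self.policies.get(frame.get("vlan"))
        self.apply_service(frame, policy)     # bump-in-the-wire: MACs untouched
        return frame

    def apply_service(self, frame, policy):
        # Placeholder for the actual middlebox function (firewall, NAT, LB, ...)
        pass
```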
FIG. 6 conceptually illustrates a process 600 performed by the switches, in some embodiments, to facilitate failover without the device, or devices, that send data messages to the service node cluster being aware of a service node cluster failover operation. The process begins by receiving (at 610) a data message from one of the interfaces of a device sending data messages to the service node cluster through the switch. The data message, in some embodiments, is a heartbeat data message sent from one interface to another through the switches and service node cluster. In some embodiments, the heartbeat data message uses a broadcast MAC address (i.e., FF:FF:FF:FF:FF:FF) as a destination MAC address. The heartbeat data message also includes a MAC address of the interface from which the data message was sent as a source MAC address.
- The process then forwards (at 630) the received heartbeat data message out all the ports other than the port on which it was received. The broadcast heartbeat data message is then received at the service nodes of the service node cluster as described in relation to
operation 510 of FIG. 5 for a particular service node. As described above in relation to FIG. 5, only the active service node forwards the received heartbeat data message to the second interface through the second switch. The second switch receives the forwarded data message, associates the port connected to the active service node with the source MAC address of the heartbeat data message (i.e., the MAC address of the first interface), and forwards the heartbeat data message out all ports except for the port at which it was received, as will be described in relation to operations 640 and 650 for a switch performing process 600. - The process then receives (at 640) a heartbeat data message from the second interface through an active service node. The heartbeat data message is received from the active service node, but not the standby service nodes, as only the active service node allows data messages to be forwarded towards the destination. The heartbeat data message, in some embodiments, is received by the first switch after a second switch receives the data message from the second interface. In some embodiments, the second interface sends the heartbeat data message using the second interface's MAC address as a source MAC address and a broadcast MAC address as the destination address. Based on the broadcast MAC address, the second switch floods the data message to all the service nodes as described for the first switch in
operation 630. - The process then learns (at 650) a pairing between a port at which the data message was received and a MAC address used as a source MAC address of the received data message (i.e., the MAC address of the second interface). The port that is associated with the second interface's MAC address is the port connected to the active service node, because only the active service node forwards the data message to the first switch. The learned address/port pairing is stored, in some embodiments, in the same table or other data structure that stores the association between the MAC address of the first interface and the port at which the first heartbeat data message was received. The learned association is used to process subsequent data messages addressed to the MAC address of the second interface by forwarding the subsequent data message to the destination from the associated port. The switch has now learned the ports associated with the MAC addresses of the first and second interfaces and can use those learned associations to process subsequent data messages.
- The process receives (at 660) a data message that requires the service provided by the service node cluster. The data message is received at the port of the switch that connects to the first interface, in some embodiments. The data message, in some embodiments, has a destination address that is the MAC address of the second interface.
- The process then forwards (at 670) the data message that requires the service to the active service node. The process does not need to perform an address resolution protocol (ARP) operation to identify the port because the MAC address/port pairing was previously learned as part of learning
operation 650. Additionally, if an active service node fails, the heartbeat data messages sent subsequent to the service node failover process will be forwarded by the new active service node and the MAC address/port pairings for the first and second interface MAC addresses will be remapped to the ports connected to the new active service node. One of ordinary skill in the art will understand that operations relating to heartbeat data messages are independent of operations related to data message processing for data messages received from a network connected to the device and may be omitted in some embodiments. -
FIGS. 7A-B conceptually illustrate the flow of data messages in a single device embodiment 700 for learning MAC addresses. As for device 101 in FIG. 1, device 701 serves as a gateway device between networks 710 and 720. Data message '1,' sent from interface 730A to switch 703A, is a heartbeat data message that has (1) a source IP address (Src IP) that is the IP address of interface 730A, (2) a source MAC address (Src MAC) that is the MAC address of interface 730A (e.g., MAC 1), (3) a destination IP address (Dst IP) that is the IP address of interface 730B, and (4) a destination MAC address that is a broadcast MAC address (e.g., FF:FF:FF:FF:FF:FF). As described above, switch 703A receives data message '1' at interface 730C, learns an association between MAC 1 and interface 730C, and forwards the data message as data messages '2' to all other interfaces 730D-F of the switch. Data message '2' is received by service nodes 702A-C and is forwarded to interface 730G of switch 703B only by the active service node 702A as data message '3' because standby service nodes 702B-C drop data messages received based on their designation as standby service nodes. Data messages '2' and '3' maintain the same source and destination addresses as data message '1' in some embodiments. -
Switch 703B learns an association between MAC 1 and interface 730G as discussed above in relation to FIG. 6. Data message '3' is then forwarded to all other interfaces of switch 703B (i.e., interfaces 730H-J) as data message '4.' Device 701 receives the heartbeat data message and determines that the service cluster has not failed. Standby service nodes 702B-C drop the data message. At this stage, an association between the MAC address of interface 730A and interfaces 730C and 730G is learned by switches 703A and 703B, respectively.
switches switch 703B. Data message ‘5’ is a heartbeat data message that has (1) a Src IP that is the IP address of interface 730B, (2) a Src MAC that is the MAC address of interface 730B (e.g., MAC 2), (3) a Dst IP that is the IP address of interface 730A, and (4) a destination MAC address that is a broadcast MAC address (e.g., FF:FF:FF:FF:FF:FF). As described above,switch 703B receives data message ‘5’ at interface 730J and learns an association betweenMAC 2 and interface 730J and forwards the data message as data messages ‘6’ to all other interfaces 730G-I of the switch. Data message ‘6’ is received byservice nodes 702A-C and is forwarded to interface 730D ofswitch 703A only by theactive service node 702A as data message ‘7’ becausestandby service nodes 702B-C drop data messages received based on their designation as standby service nodes. Data messages ‘6’ and ‘7’ maintain the same source and destination addresses as data message ‘5’ in some embodiments. -
Switch 703A learns an association between MAC 2 and interface 730D as discussed above in relation to FIG. 6. Data message '7' is then forwarded to all other interfaces of switch 703A (i.e., interfaces 730C, E, and F) as data message '8.' Device 701 receives the heartbeat data message and determines that the service cluster has not failed. Standby service nodes 702B-C drop the data message. At this stage, an association between the MAC address of interface 730B and interfaces 730D and 730J is learned by switches 703A and 703B, respectively. -
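Using Scapy again, data messages '1' and '5' could be constructed as below. The UDP encapsulation with destination port 3784 (single-hop BFD) follows the BFD-style heartbeat mentioned earlier; the MAC and IP values are hypothetical placeholders standing in for MAC 1/MAC 2 and the interfaces' private IP addresses:

```python
from scapy.all import Ether, IP, UDP, Raw

BROADCAST_MAC = "ff:ff:ff:ff:ff:ff"

def build_heartbeat(src_mac, src_ip, peer_ip):
    """Heartbeat addressed to the peer interface's IP but to the broadcast MAC,
    so whichever service node is currently active carries it across."""
    return (Ether(src=src_mac, dst=BROADCAST_MAC)
            / IP(src=src_ip, dst=peer_ip)
            / UDP(sport=49152, dport=3784)   # 3784 = single-hop BFD control
            / Raw(b"heartbeat"))

# Data message '1' (interface 730A -> 730B) and data message '5' (730B -> 730A)
msg1 = build_heartbeat("02:00:00:00:07:0a", "169.254.0.1", "169.254.0.2")
msg5 = build_heartbeat("02:00:00:00:07:0b", "169.254.0.2", "169.254.0.1")
```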
FIG. 8 conceptually illustrates the processing of a data message requiring a service provided by the service node cluster 705 after the switches have learned MAC address/interface associations from the data messages depicted in FIGS. 7A-B or in other ways, such as by using an address resolution protocol (ARP) operation. Data message '9' represents a data message requiring the service provided by service node cluster 705. Data message '9' has (1) a Src IP that is the IP address of interface 730A, (2) a Src MAC that is the MAC address of interface 730A (e.g., MAC 1), (3) a Dst IP that is the IP address of interface 730B, and (4) a destination MAC address that is a MAC address of interface 730B (e.g., MAC 2). Data message '9' is sent from interface 730A to interface 730C of switch 703A.
switch 703A consults the table or other data structure storing the MAC/interface associations to determine that MAC 2 (i.e., the destination MAC address) is associated with interface 730D and sends, as data message ‘10,’ the data message to servicenode 702A using interface 730D.Service node 702A processes the data message, including providing the service provided by theservice node cluster 705 and sends the processed data message as data message ‘11’ to interface 730G ofswitch 703B. Upon receiving data message ‘11,’switch 703B consults the table or other data structure storing the MAC/interface associations to determine that MAC 2 (i.e., the destination MAC address) is associated with interface 730J and sends, as data message ‘12,’ the data message to interface 730B using interface 730J. Return data messages are handled similarly. -
FIGS. 9A-B conceptually illustrate the path of a data message after a failover, before and after a subsequent heartbeat message is sent from an interface 730 of device 701. FIG. 9A illustrates the failure of service node 702A and service node 702B being designated as the new active service node. After the failure of service node 702A, data message '13' is sent from interface 730A with the same Src IP, Src MAC, Dst IP, and Dst MAC as data message '9.' Switch 703A sends data message '14' to service node 702A based on the association previously learned between MAC 2 and interface 730D; however, service node 702A has failed and the data message is lost. In a setup without the heartbeat data messages described in FIGS. 7A-B, the data messages in both directions would continue to be dropped (i.e., black-holed) until a timeout of the learned MAC address/interface associations, at which point a new learning operation (e.g., an ARP operation) would be performed indicating that the MAC address should be associated with the interface connected to the new active service node. - If, however, heartbeat data message '15' is sent from interface 730B (using the same combination of Src IP, Src MAC, Dst IP, and Dst MAC as data message '5'),
switch 703B once again floods the data message as data messages '16' as described in relation to data message '6,' and the new active service node 702B receives and forwards the data message to switch 703A (not depicted). This causes switch 703A to update its MAC address/interface table or other data structure to indicate an association between MAC 2 and interface 730E connected to service node 702B. Using this updated association allows subsequently received data messages requiring the service provided by service node cluster 705 to follow the path illustrated by data messages '17'-'20' without any change in the set of Src IP, Src MAC, Dst IP, and Dst MAC at the device 701 for data messages going in the same direction. Heartbeat data messages are sent at time intervals that are shorter than the timeout interval for learned MAC address/interface associations so that, in the case of service node failover, the service is restored based on the shorter heartbeat data message interval rather than the longer timeout interval for learned MAC address/interface associations. -
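The timing relationship in the preceding paragraph can be made concrete with assumed numbers (a 1-second heartbeat interval against a 300-second MAC aging timeout; both values are illustrative, not specified by the patent):

```python
HEARTBEAT_INTERVAL = 1.0    # seconds between broadcast heartbeats (assumed)
MAC_AGING_TIMEOUT = 300.0   # seconds before a learned MAC/port entry expires (assumed)

# Without the broadcast heartbeats, traffic stays black-holed until the stale
# entry for MAC 2 ages out; with them, the first heartbeat through the newly
# active service node repoints the entry.
worst_case_without_heartbeats = MAC_AGING_TIMEOUT   # up to 300 s of loss
worst_case_with_heartbeats = HEARTBEAT_INTERVAL     # roughly one interval
assert worst_case_with_heartbeats < worst_case_without_heartbeats
```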
FIGS. 10A-B conceptually illustrate an embodiment in which the heartbeat data messages are used to detect failure of a service node cluster as discussed in relation to FIG. 4. FIG. 10A illustrates the same elements as in FIG. 3; however, in FIG. 10A two of the three service nodes 302 have failed (i.e., 302A and 302C). A first heartbeat data message, data message '1,' is sent from interface 330A to interface 330B. Data message '1' traverses switch 303A, service node 302B, and switch 303B before arriving at interface 330B. A heartbeat data message, data message '2,' is sent from interface 330B to interface 330A, traversing switch 303B, service node 302B, and switch 303A before being received by device 301A at interface 330A. As described in relation to FIGS. 7A-B, data messages '3' and '4' represent the rest of the datapath for heartbeat data messages. These heartbeat data messages are used to determine that the service node cluster 305 is still functioning (e.g., still providing the service). -
FIG. 10B illustrates a heartbeat data message being unable to reach a destination interface after the failure of all the service nodes 302 in service node cluster 305. Data messages '5' and '6' represent heartbeat data messages that are sent by interfaces 330A and 330B to switches 303A and 303B respectively, are forwarded to all the service nodes 302A-C as data messages '7' and '8' respectively, based on the broadcast destination MAC address, but are not forwarded towards the other interface because the service nodes have failed. In some embodiments, the failure of the service nodes is based on a connection failure between the switches and the service node or between the interface of the devices 301 and a switch 303. One of ordinary skill in the art would understand that the same service node cluster failure detection would function in the same way between two interfaces of a single device. In embodiments in which the two interfaces belong to a same device, failure detection may also be based on the fact that data messages the device sends out one interface are not received at the other interface, which may enable faster failure detection than a system that is not aware of when heartbeat data messages are sent by the other device. - As discussed above in relation to
FIG. 4, after a certain time interval (e.g., representing a certain number of missed heartbeat data messages) during which a heartbeat data message has not been received, devices 301 determine that the service node cluster 305 has failed and perform a default operation for data messages requiring the service provided by the service node cluster 305. In some embodiments, the default operation is to forward the data messages without providing the service (e.g., a fail-open condition), while in other embodiments the default operation is to drop the data messages requiring the service (e.g., a fail-closed condition) until the service is restored. A fail-open condition may be more appropriate for services such as load balancing, where security is not an issue, while fail-closed may be more appropriate for a firewall operation relating to security and for a network address translation (NAT) service, which generally requires state information that is maintained by the service node providing the service. -
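A sketch of how a device might encode the per-service default operation described above (the service names and the mapping itself are illustrative assumptions, not taken from the patent):

```python
# Default operation per service type, following the fail-open/fail-closed
# rationale above.
FAILURE_MODE = {
    "load_balancer": "fail_open",    # availability matters more than enforcement
    "firewall":      "fail_closed",  # never bypass a security service
    "nat":           "fail_closed",  # depends on state held by the service node
}

def default_action(service_type: str) -> str:
    return FAILURE_MODE.get(service_type, "fail_closed")  # conservative default
```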
FIG. 11 illustrates an embodiment including gateway device 701A and gateway device 701B that each act at a border between network 710 (e.g., an external network) and network 720 (e.g., an internal/logical network). The elements of FIG. 11 act as the similarly numbered elements of FIGS. 7A-B, with the additional designation of one of the devices 701 as the active gateway device (e.g., gateway device 701A). The active gateway device 701A, in some embodiments, receives all data messages exchanged between the network 710 and network 720. In some embodiments using a centralized logical router in the gateway devices, only one gateway device provides the centralized logical router services. -
FIG. 12 conceptually illustrates an electronic system 1200 with which some embodiments of the invention are implemented. The electronic system 1200 can be used to execute any of the control, virtualization, or operating system applications described above. The electronic system 1200 may be a computer (e.g., a desktop computer, personal computer, tablet computer, server computer, mainframe, a blade computer, etc.), phone, PDA, or any other sort of electronic device. Such an electronic system includes various types of computer readable media and interfaces for various other types of computer readable media. Electronic system 1200 includes a bus 1205, processing unit(s) 1210, a system memory 1225, a read-only memory (ROM) 1230, a permanent storage device 1235, input devices 1240, and output devices 1245. - The
bus 1205 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of the electronic system 1200. For instance, the bus 1205 communicatively connects the processing unit(s) 1210 with the read-only memory 1230, the system memory 1225, and the permanent storage device 1235.
- The read-only-
memory 1230 stores static data and instructions that are needed by the processing unit(s) 1210 and other modules of the electronic system. The permanent storage device 1235, on the other hand, is a read-and-write memory device. This device is a non-volatile memory unit that stores instructions and data even when the electronic system 1200 is off. Some embodiments of the invention use a mass-storage device (such as a magnetic or optical disk and its corresponding disk drive) as the permanent storage device 1235. - Other embodiments use a removable storage device (such as a floppy disk, flash drive, etc.) as the permanent storage device. Like the
permanent storage device 1235, the system memory 1225 is a read-and-write memory device. However, unlike storage device 1235, the system memory is a volatile read-and-write memory, such as random access memory. The system memory stores some of the instructions and data that the processor needs at runtime. In some embodiments, the invention's processes are stored in the system memory 1225, the permanent storage device 1235, and/or the read-only memory 1230. From these various memory units, the processing unit(s) 1210 retrieve instructions to execute and data to process in order to execute the processes of some embodiments. - The
bus 1205 also connects to the input and output devices 1240 and 1245. The input devices 1240 include alphanumeric keyboards and pointing devices (also called "cursor control devices"). The output devices 1245 display images generated by the electronic system. The output devices include printers and display devices, such as cathode ray tubes (CRT) or liquid crystal displays (LCD). Some embodiments include devices such as a touchscreen that function as both input and output devices. - Finally, as shown in
FIG. 12, bus 1205 also couples electronic system 1200 to a network 1265 through a network adapter (not shown). In this manner, the computer can be a part of a network of computers (such as a local area network ("LAN"), a wide area network ("WAN"), or an Intranet), or a network of networks, such as the Internet. Any or all components of electronic system 1200 may be used in conjunction with the invention. - Some embodiments include electronic components, such as microprocessors, storage and memory that store computer program instructions in a machine-readable or computer-readable medium (alternatively referred to as computer-readable storage media, machine-readable media, or machine-readable storage media). Some examples of such computer-readable media include RAM, ROM, read-only compact discs (CD-ROM), recordable compact discs (CD-R), rewritable compact discs (CD-RW), read-only digital versatile discs (e.g., DVD-ROM, dual-layer DVD-ROM), a variety of recordable/rewritable DVDs (e.g., DVD-RAM, DVD-RW, DVD+RW, etc.), flash memory (e.g., SD cards, mini-SD cards, micro-SD cards, etc.), magnetic and/or solid state hard drives, read-only and recordable Blu-Ray® discs, ultra density optical discs, any other optical or magnetic media, and floppy disks. The computer-readable media may store a computer program that is executable by at least one processing unit and includes sets of instructions for performing various operations. Examples of computer programs or computer code include machine code, such as is produced by a compiler, and files including higher-level code that are executed by a computer, an electronic component, or a microprocessor using an interpreter.
- While the above discussion primarily refers to microprocessor or multi-core processors that execute software, some embodiments are performed by one or more integrated circuits, such as application specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs). In some embodiments, such integrated circuits execute instructions that are stored on the circuit itself.
- As used in this specification, the terms “computer”, “server”, “processor”, and “memory” all refer to electronic or other technological devices. These terms exclude people or groups of people. For the purposes of the specification, the terms display or displaying means displaying on an electronic device. As used in this specification, the terms “computer readable medium,” “computer readable media,” and “machine readable medium” are entirely restricted to tangible, physical objects that store information in a form that is readable by a computer. These terms exclude any wireless signals, wired download signals, and any other ephemeral signals.
- This specification refers throughout to computational and network environments that include virtual machines (VMs). However, virtual machines are merely one example of data compute nodes (DCNs) or data compute end nodes, also referred to as addressable nodes. DCNs may include non-virtualized physical hosts, virtual machines, containers that run on top of a host operating system without the need for a hypervisor or separate operating system, and hypervisor kernel network interface modules.
- VMs, in some embodiments, operate with their own guest operating systems on a host machine using resources of the host machine virtualized by virtualization software (e.g., a hypervisor, virtual machine monitor, etc.). The tenant (i.e., the owner of the VM) can choose which applications to operate on top of the guest operating system. Some containers, on the other hand, are constructs that run on top of a host operating system without the need for a hypervisor or separate guest operating system. In some embodiments, the host operating system uses name spaces to isolate the containers from each other and therefore provides operating-system level segregation of the different groups of applications that operate within different containers. This segregation is akin to the VM segregation that is offered in hypervisor-virtualized environments that virtualize system hardware, and thus can be viewed as a form of virtualization that isolates different groups of applications that operate in different containers. Such containers are more lightweight than VMs.
- A hypervisor kernel network interface module, in some embodiments, is a non-VM DCN that includes a network stack with a hypervisor kernel network interface and receive/transmit threads. One example of a hypervisor kernel network interface module is the vmknic module that is part of the ESXi™ hypervisor of VMware, Inc.
- It should be understood that while the specification refers to VMs, the examples given could be any type of DCNs, including physical hosts, VMs, non-VM containers, and hypervisor kernel network interface modules. In fact, the example networks could include combinations of different types of DCNs in some embodiments.
- While the invention has been described with reference to numerous specific details, one of ordinary skill in the art will recognize that the invention can be embodied in other specific forms without departing from the spirit of the invention. In addition, a number of the figures (including
FIGS. 2 and 4-6) conceptually illustrate processes. The specific operations of these processes may not be performed in the exact order shown and described. The specific operations may not be performed in one continuous series of operations, and different specific operations may be performed in different embodiments. Furthermore, the process could be implemented using several sub-processes, or as part of a larger macro process. Thus, one of ordinary skill in the art would understand that the invention is not to be limited by the foregoing illustrative details, but rather is to be defined by the appended claims.
Claims (21)
1-19. (canceled)
20. A method for performing a service at an edge router comprising first and second interfaces, the method comprising:
configuring the edge router to send, through the first interface, packet flows to a service machine to perform the service and to receive, from the second interface, the serviced packet flows;
configuring the edge router to send, from the first interface, a first set of heartbeat data messages to the service machine and to process a second set of heartbeat data messages received from the service machine along the second interface; and
configuring the edge router to determine that the service machine has failed based on a period of time associated with receiving heartbeat data messages in the second set of heartbeat data messages.
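Claim 20 describes a liveness check built from two heartbeat streams: the edge router emits heartbeats out the first interface, expects them back (serviced) on the second interface, and declares the service machine failed when no heartbeat returns within some window. The following Python sketch illustrates that detection loop under stated assumptions: the interface names, the 1-second send period, the 3-second failure window, and the experimental EtherType are illustrative choices, not values taken from the claims, and it requires a Linux host with raw-socket privileges.

```python
import socket
import threading
import time

HEARTBEAT_INTERVAL = 1.0   # assumed send period (seconds)
FAILURE_WINDOW = 3.0       # assumed silence before declaring failure (seconds)
ETHERTYPE = b"\x88\xb5"    # IEEE local-experimental EtherType, used as a marker
ETH_P_ALL = 0x0003         # receive all Ethernet protocols

last_heartbeat_seen = time.monotonic()

def send_heartbeats(tx_ifname: str) -> None:
    """Emit broadcast heartbeat frames out the first interface."""
    tx = socket.socket(socket.AF_PACKET, socket.SOCK_RAW)
    tx.bind((tx_ifname, 0))
    src_mac = tx.getsockname()[4][:6]
    dst_mac = b"\xff\xff\xff\xff\xff\xff"  # broadcast destination MAC (cf. claim 24)
    while True:
        tx.send(dst_mac + src_mac + ETHERTYPE + b"HEARTBEAT")
        time.sleep(HEARTBEAT_INTERVAL)

def receive_heartbeats(rx_ifname: str) -> None:
    """Record arrival times of serviced heartbeats on the second interface."""
    global last_heartbeat_seen
    rx = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
    rx.bind((rx_ifname, 0))
    while True:
        frame = rx.recv(2048)
        if frame[12:14] == ETHERTYPE and frame[14:23] == b"HEARTBEAT":
            last_heartbeat_seen = time.monotonic()

def monitor() -> None:
    """Declare the service machine failed when heartbeats stop returning."""
    while True:
        if time.monotonic() - last_heartbeat_seen > FAILURE_WINDOW:
            print("service machine failed: no heartbeat within window")
        time.sleep(HEARTBEAT_INTERVAL)

# Illustrative wiring; "eth1" and "eth2" stand in for the router's first
# and second interfaces.
threading.Thread(target=send_heartbeats, args=("eth1",), daemon=True).start()
threading.Thread(target=receive_heartbeats, args=("eth2",), daemon=True).start()
monitor()
```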
21. The method of claim 20, wherein data messages in the first set of heartbeat data messages traverse a datapath comprising a first switch through which the first set of heartbeat data messages traverses from the first interface to the service machine, and a second switch through which the second set of heartbeat data messages traverses from the service machine to the second interface.
22. The method of claim 21, wherein each switch associates a port of the switch with media access control (MAC) addresses used as source addresses for data messages received at the port.
23. The method of claim 22, wherein the period between data messages in each of the first and second sets of heartbeat data messages is less than a time period for a timeout of a learned media access control (MAC) address.
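Claim 23's timing constraint is easy to state concretely: each heartbeat must refresh the switches' learned-MAC entries before they age out, otherwise the switches revert to flooding between heartbeats. A minimal sanity check under assumed values (300 seconds is a common default MAC aging time, but both numbers here are illustrative):

```python
MAC_AGING_TIME = 300.0      # assumed learned-MAC timeout on the switches (seconds)
HEARTBEAT_INTERVAL = 1.0    # assumed heartbeat period (seconds)

# Claim 23's constraint: the heartbeat period must be shorter than the MAC
# aging timeout, so each heartbeat re-learns the entry before it expires.
assert HEARTBEAT_INTERVAL < MAC_AGING_TIME, (
    "heartbeat period must be less than the MAC aging timeout"
)
```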
24. The method of claim 20, wherein the service machine is part of a cluster of service machines that perform the service, and the data messages of the first set of heartbeat data messages use a broadcast destination media access control (MAC) address in order to reach the service machine that is the active service machine in the cluster to perform the service.
25. The method of claim 24, wherein the service machine provides the service for packet flows without changing source and destination media access control (MAC) addresses of the packet flows.
26. The method of claim 24, wherein the cluster of service machines determines which service machine in the cluster is the active service machine independently of the heartbeat data messages sent to and from the edge router.
27. The method of claim 24, wherein
data messages in the first set of heartbeat data messages traverse a datapath comprising a first switch through which the first set of heartbeat data messages traverses from the first interface to the service machine, and a second switch through which the second set of heartbeat data messages traverses from the service machine to the second interface,
the first switch associates a MAC address of the first interface with a first port to which the first interface is connected based on a first heartbeat data message being received from the first interface, and
the second switch associates the MAC address of the first interface with a second port to which the active service machine is connected based on the first heartbeat data message being received from the active service machine.
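The mechanics behind claim 27 are ordinary source-MAC learning. Because the service machine forwards frames without rewriting addresses (claim 25), the first switch learns the router interface's MAC on its router-facing port, while the second switch learns the same source MAC on the port facing the active service machine. The toy learning table below is purely illustrative; the class, the port numbers, and the address label "MAC1" are hypothetical.

```python
import time

class LearningSwitch:
    """Toy L2 switch: learns source MACs against ingress ports, with aging."""

    def __init__(self, aging_time: float = 300.0) -> None:
        self.aging_time = aging_time
        self.table: dict[str, tuple[int, float]] = {}  # MAC -> (port, learned_at)

    def learn(self, src_mac: str, ingress_port: int) -> None:
        # (Re)learn the source MAC at the ingress port; a heartbeat arriving
        # within the aging time refreshes the entry (cf. claim 23).
        self.table[src_mac] = (ingress_port, time.monotonic())

    def lookup(self, dst_mac: str) -> int | None:
        entry = self.table.get(dst_mac)
        if entry is None:
            return None  # unknown unicast: the switch would flood
        port, learned_at = entry
        if time.monotonic() - learned_at > self.aging_time:
            del self.table[dst_mac]  # aged out: back to flooding
            return None
        return port

# The same source MAC ends up on different ports of the two switches, as
# claim 27 describes; the port numbers are arbitrary examples.
first_switch, second_switch = LearningSwitch(), LearningSwitch()
first_switch.learn("MAC1", ingress_port=1)   # port facing the first interface
second_switch.learn("MAC1", ingress_port=7)  # port facing the active service machine
```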
28. The method of claim 20 further comprising dropping data messages that require the service after determining that the service machine has failed.
29. The method of claim 20, wherein the service is one of a firewall operation, a network address translation operation, and a load balancing operation, and the service machine is a service virtual machine or a service appliance.
30. A non-transitory machine readable medium storing a program which, when executed by at least one processing unit, configures an edge router comprising first and second interfaces to provide a service for a plurality of packet flows processed by the edge router, the program comprising sets of instructions for:
configuring the edge router to send, through the first interface, packet flows to a service machine to perform the service and to receive, from the second interface, the serviced packet flows;
configuring the edge router to send, from the first interface, a first set of heartbeat data messages to the service machine and to process a second set of heartbeat data messages received from the service machine along the second interface; and
configuring the edge router to determine that the service machine has failed based on a period of time associated with receiving heartbeat data messages in the second set of heartbeat data messages.
31. The non-transitory machine readable medium of claim 30, wherein data messages in the first set of heartbeat data messages traverse a datapath comprising a first switch through which the first set of heartbeat data messages traverses from the first interface to the service machine, and a second switch through which the second set of heartbeat data messages traverses from the service machine to the second interface.
32. The non-transitory machine readable medium of claim 31, wherein each switch associates a port of the switch with media access control (MAC) addresses used as source addresses for data messages received at the port.
33. The non-transitory machine readable medium of claim 32, wherein the period between data messages in each of the first and second sets of heartbeat data messages is less than a time period for a timeout of a learned media access control (MAC) address.
34. The non-transitory machine readable medium of claim 30, wherein the service machine is part of a cluster of service machines that perform the service, and the data messages of the first set of heartbeat data messages use a broadcast destination media access control (MAC) address in order to reach the service machine that is the active service machine in the cluster to perform the service.
35. The non-transitory machine readable medium of claim 34, wherein the service machine provides the service for packet flows without changing source and destination media access control (MAC) addresses of the packet flows.
36. The non-transitory machine readable medium of claim 34, wherein the cluster of service machines determines which service machine in the cluster is the active service machine independently of the heartbeat data messages sent to and from the edge router.
37. The non-transitory machine readable medium of claim 34, wherein
data messages in the first set of heartbeat data messages traverse a datapath comprising a first switch through which the first set of heartbeat data messages traverses from the first interface to the service machine, and a second switch through which the second set of heartbeat data messages traverses from the service machine to the second interface,
the first switch associates a MAC address of the first interface with a first port to which the first interface is connected based on a first heartbeat data message being received from the first interface, and
the second switch associates the MAC address of the first interface with a second port to which the active service machine is connected based on the first heartbeat data message being received from the active service machine.
38. The non-transitory machine readable medium of claim 30, wherein the program further comprises a set of instructions for dropping data messages that require the service after determining that the service machine has failed.
39. The non-transitory machine readable medium of claim 30, wherein the service is one of a firewall operation, a network address translation operation, and a load balancing operation, and the service machine is a service virtual machine or a service appliance.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US18/370,006 US20240015086A1 (en) | 2018-03-27 | 2023-09-19 | Detecting failure of layer 2 service using broadcast messages |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/937,621 US10805192B2 (en) | 2018-03-27 | 2018-03-27 | Detecting failure of layer 2 service using broadcast messages |
US16/945,868 US11038782B2 (en) | 2018-03-27 | 2020-08-01 | Detecting failure of layer 2 service using broadcast messages |
US17/346,255 US11805036B2 (en) | 2018-03-27 | 2021-06-13 | Detecting failure of layer 2 service using broadcast messages |
US18/370,006 US20240015086A1 (en) | 2018-03-27 | 2023-09-19 | Detecting failure of layer 2 service using broadcast messages |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/346,255 Continuation US11805036B2 (en) | 2018-03-27 | 2021-06-13 | Detecting failure of layer 2 service using broadcast messages |
Publications (1)
Publication Number | Publication Date |
---|---|
US20240015086A1 (en) | 2024-01-11
Family
ID=68057399
Family Applications (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/937,621 Active 2038-09-06 US10805192B2 (en) | 2018-03-27 | 2018-03-27 | Detecting failure of layer 2 service using broadcast messages |
US16/945,868 Active US11038782B2 (en) | 2018-03-27 | 2020-08-01 | Detecting failure of layer 2 service using broadcast messages |
US17/346,255 Active 2038-08-10 US11805036B2 (en) | 2018-03-27 | 2021-06-13 | Detecting failure of layer 2 service using broadcast messages |
US18/370,006 Pending US20240015086A1 (en) | 2018-03-27 | 2023-09-19 | Detecting failure of layer 2 service using broadcast messages |
Family Applications Before (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/937,621 Active 2038-09-06 US10805192B2 (en) | 2018-03-27 | 2018-03-27 | Detecting failure of layer 2 service using broadcast messages |
US16/945,868 Active US11038782B2 (en) | 2018-03-27 | 2020-08-01 | Detecting failure of layer 2 service using broadcast messages |
US17/346,255 Active 2038-08-10 US11805036B2 (en) | 2018-03-27 | 2021-06-13 | Detecting failure of layer 2 service using broadcast messages |
Country Status (1)
Country | Link |
---|---|
US (4) | US10805192B2 (en) |
Families Citing this family (36)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9225638B2 (en) | 2013-05-09 | 2015-12-29 | Vmware, Inc. | Method and system for service switching using service tags |
US10135737B2 (en) | 2014-09-30 | 2018-11-20 | Nicira, Inc. | Distributed load balancing systems |
US9935827B2 (en) | 2014-09-30 | 2018-04-03 | Nicira, Inc. | Method and apparatus for distributing load among a plurality of service nodes |
US11296930B2 (en) | 2014-09-30 | 2022-04-05 | Nicira, Inc. | Tunnel-enabled elastic service model |
US10609091B2 (en) | 2015-04-03 | 2020-03-31 | Nicira, Inc. | Method, apparatus, and system for implementing a content switch |
US10805181B2 (en) | 2017-10-29 | 2020-10-13 | Nicira, Inc. | Service operation chaining |
US11012420B2 (en) | 2017-11-15 | 2021-05-18 | Nicira, Inc. | Third-party service chaining using packet encapsulation in a flow-based forwarding element |
US10797910B2 (en) | 2018-01-26 | 2020-10-06 | Nicira, Inc. | Specifying and utilizing paths through a network |
US10805192B2 (en) | 2018-03-27 | 2020-10-13 | Nicira, Inc. | Detecting failure of layer 2 service using broadcast messages |
US10728174B2 (en) | 2018-03-27 | 2020-07-28 | Nicira, Inc. | Incorporating layer 2 service between two interfaces of gateway device |
US10942788B2 (en) | 2018-06-15 | 2021-03-09 | Vmware, Inc. | Policy constraint framework for an sddc |
US10812337B2 (en) | 2018-06-15 | 2020-10-20 | Vmware, Inc. | Hierarchical API for a SDDC |
US11086700B2 (en) | 2018-08-24 | 2021-08-10 | Vmware, Inc. | Template driven approach to deploy a multi-segmented application in an SDDC |
US10944673B2 (en) | 2018-09-02 | 2021-03-09 | Vmware, Inc. | Redirection of data messages at logical network gateway |
US11595250B2 (en) | 2018-09-02 | 2023-02-28 | Vmware, Inc. | Service insertion at logical network gateway |
US11042397B2 (en) | 2019-02-22 | 2021-06-22 | Vmware, Inc. | Providing services with guest VM mobility |
US11115342B2 (en) * | 2019-04-16 | 2021-09-07 | Hewlett Packard Enterprise Development Lp | Using BFD packets in a virtualized device |
US11283717B2 (en) | 2019-10-30 | 2022-03-22 | Vmware, Inc. | Distributed fault tolerant service chain |
US11140218B2 (en) | 2019-10-30 | 2021-10-05 | Vmware, Inc. | Distributed service chain across multiple clouds |
US11223494B2 (en) | 2020-01-13 | 2022-01-11 | Vmware, Inc. | Service insertion for multicast traffic at boundary |
US11659061B2 (en) | 2020-01-20 | 2023-05-23 | Vmware, Inc. | Method of adjusting service function chains to improve network performance |
US11153406B2 (en) | 2020-01-20 | 2021-10-19 | Vmware, Inc. | Method of network performance visualization of service function chains |
CN115380514B (en) | 2020-04-01 | 2024-03-01 | 威睿有限责任公司 | Automatic deployment of network elements for heterogeneous computing elements |
US11212356B2 (en) | 2020-04-06 | 2021-12-28 | Vmware, Inc. | Providing services at the edge of a network using selected virtual tunnel interfaces |
US11258711B2 (en) * | 2020-06-04 | 2022-02-22 | Vmware, Inc. | Split-brain prevention in a high availability system during workload migration |
US11803408B2 (en) | 2020-07-29 | 2023-10-31 | Vmware, Inc. | Distributed network plugin agents for container networking |
US11863352B2 (en) | 2020-07-30 | 2024-01-02 | Vmware, Inc. | Hierarchical networking for nested container clusters |
US11997064B2 (en) * | 2020-08-21 | 2024-05-28 | Arrcus Inc. | High availability network address translation |
US11343328B2 (en) * | 2020-09-14 | 2022-05-24 | Vmware, Inc. | Failover prevention in a high availability system during traffic congestion |
US11734043B2 (en) | 2020-12-15 | 2023-08-22 | Vmware, Inc. | Providing stateful services in a scalable manner for machines executing on host computers |
US11611625B2 (en) | 2020-12-15 | 2023-03-21 | Vmware, Inc. | Providing stateful services in a scalable manner for machines executing on host computers |
US11606254B2 (en) | 2021-06-11 | 2023-03-14 | Vmware, Inc. | Automatic configuring of VLAN and overlay logical switches for container secondary interfaces |
US11902245B2 (en) | 2022-01-14 | 2024-02-13 | VMware LLC | Per-namespace IP address management method for container networks |
US11848910B1 (en) | 2022-11-11 | 2023-12-19 | Vmware, Inc. | Assigning stateful pods fixed IP addresses depending on unique pod identity |
US11831511B1 (en) | 2023-01-17 | 2023-11-28 | Vmware, Inc. | Enforcing network policies in heterogeneous systems |
US12101244B1 (en) | 2023-06-12 | 2024-09-24 | VMware LLC | Layer 7 network security for container workloads |
Family Cites Families (660)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US3266261A (en) | 1964-11-27 | 1966-08-16 | James H Anderson | Method and apparatus for evaporating liquefied gases |
DE1277655B (en) | 1967-02-13 | 1968-09-12 | Windmoeller & Hoelscher | Device for separating stacked tube pieces made of paper or plastic film |
US6154448A (en) * | 1997-06-20 | 2000-11-28 | Telefonaktiebolaget Lm Ericsson (Publ) | Next hop loopback |
US6006264A (en) | 1997-08-01 | 1999-12-21 | Arrowpoint Communications, Inc. | Method and system for directing a flow between a client and a server |
US6104700A (en) | 1997-08-29 | 2000-08-15 | Extreme Networks | Policy based quality of service |
US6154488A (en) * | 1997-09-23 | 2000-11-28 | Hunt Technologies, Inc. | Low frequency bilateral communication over distributed power lines |
US6128279A (en) | 1997-10-06 | 2000-10-03 | Web Balance, Inc. | System for balancing loads among network servers |
US6779030B1 (en) | 1997-10-06 | 2004-08-17 | Worldcom, Inc. | Intelligent network |
US6665702B1 (en) | 1998-07-15 | 2003-12-16 | Radware Ltd. | Load balancing |
US8234477B2 (en) | 1998-07-31 | 2012-07-31 | Kom Networks, Inc. | Method and system for providing restricted access to a storage medium |
US6826694B1 (en) | 1998-10-22 | 2004-11-30 | At&T Corp. | High resolution access control |
US6760775B1 (en) | 1999-03-05 | 2004-07-06 | At&T Corp. | System, method and apparatus for network service load and reliability management |
US6970913B1 (en) | 1999-07-02 | 2005-11-29 | Cisco Technology, Inc. | Load balancing using distributed forwarding agents with application based feedback for different virtual machines |
US7013389B1 (en) | 1999-09-29 | 2006-03-14 | Cisco Technology, Inc. | Method and apparatus for creating a secure communication channel among multiple event service nodes |
WO2001040903A2 (en) | 1999-12-06 | 2001-06-07 | Warp Solutions, Inc. | System and method for enhancing operation of a web server cluster |
US6880089B1 (en) | 2000-03-31 | 2005-04-12 | Avaya Technology Corp. | Firewall clustering for multiple network servers |
US8538843B2 (en) | 2000-07-17 | 2013-09-17 | Galactic Computing Corporation Bvi/Bc | Method and system for operating an E-commerce service provider |
US20030050932A1 (en) | 2000-09-01 | 2003-03-13 | Pace Charles P. | System and method for transactional deployment of J2EE web components, enterprise java bean components, and application data over multi-tiered computer networks |
US7389358B1 (en) | 2000-09-13 | 2008-06-17 | Fortinet, Inc. | Distributed virtual system to support managed, network-based services |
US6985956B2 (en) | 2000-11-02 | 2006-01-10 | Sun Microsystems, Inc. | Switching system |
US7296291B2 (en) | 2000-12-18 | 2007-11-13 | Sun Microsystems, Inc. | Controlled information flow between communities via a firewall |
US6697206B2 (en) | 2000-12-19 | 2004-02-24 | Imation Corp. | Tape edge monitoring |
US7280540B2 (en) | 2001-01-09 | 2007-10-09 | Stonesoft Oy | Processing of data packets within a network element cluster |
US7002967B2 (en) | 2001-05-18 | 2006-02-21 | Denton I Claude | Multi-protocol networking processor with data traffic support spanning local, regional and wide area networks |
US6944678B2 (en) | 2001-06-18 | 2005-09-13 | Transtech Networks Usa, Inc. | Content-aware application switch and methods thereof |
US7493369B2 (en) | 2001-06-28 | 2009-02-17 | Microsoft Corporation | Composable presence and availability services |
US20030105812A1 (en) | 2001-08-09 | 2003-06-05 | Gigamedia Access Corporation | Hybrid system architecture for secure peer-to-peer-communications |
US7209977B2 (en) | 2001-10-01 | 2007-04-24 | International Business Machines Corporation | Method and apparatus for content-aware web switching |
US8095668B2 (en) | 2001-11-09 | 2012-01-10 | Rockstar Bidco Lp | Middlebox control |
TW544601B (en) | 2001-11-20 | 2003-08-01 | Ind Tech Res Inst | Method and structure for forming web server cluster by conversion and dispatching of web page documents |
US7379465B2 (en) | 2001-12-07 | 2008-05-27 | Nortel Networks Limited | Tunneling scheme optimized for use in virtual private networks |
US7239639B2 (en) | 2001-12-27 | 2007-07-03 | 3Com Corporation | System and method for dynamically constructing packet classification rules |
US8156216B1 (en) | 2002-01-30 | 2012-04-10 | Adobe Systems Incorporated | Distributed data collection and aggregation |
US7088718B1 (en) | 2002-03-19 | 2006-08-08 | Cisco Technology, Inc. | Server load balancing using IP option field approach to identify route to selected server |
US20030236813A1 (en) | 2002-06-24 | 2003-12-25 | Abjanic John B. | Method and apparatus for off-load processing of a message stream |
US7086061B1 (en) | 2002-08-01 | 2006-08-01 | Foundry Networks, Inc. | Statistical tracking of global server load balancing for selecting the best network address from ordered list of network addresses based on a set of performance metrics |
US8077681B2 (en) | 2002-10-08 | 2011-12-13 | Nokia Corporation | Method and system for establishing a connection via an access network |
US7480737B2 (en) | 2002-10-25 | 2009-01-20 | International Business Machines Corporation | Technique for addressing a cluster of network servers |
US20040215703A1 (en) | 2003-02-18 | 2004-10-28 | Xiping Song | System supporting concurrent operation of multiple executable application operation sessions |
US7388842B1 (en) | 2003-03-13 | 2008-06-17 | At&T Corp. | Method and apparatus for efficient routing of variable traffic |
US20050022017A1 (en) | 2003-06-24 | 2005-01-27 | Maufer Thomas A. | Data structures and state tracking for network protocol processing |
US20090299791A1 (en) | 2003-06-25 | 2009-12-03 | Foundry Networks, Inc. | Method and system for management of licenses |
US7483374B2 (en) | 2003-08-05 | 2009-01-27 | Scalent Systems, Inc. | Method and apparatus for achieving dynamic capacity and high availability in multi-stage data networks using adaptive flow-based routing |
US7315693B2 (en) | 2003-10-22 | 2008-01-01 | Intel Corporation | Dynamic route discovery for optical switched networks |
US7447775B1 (en) | 2003-11-07 | 2008-11-04 | Cisco Technology, Inc. | Methods and apparatus for supporting transmission of streaming data |
US7496955B2 (en) | 2003-11-24 | 2009-02-24 | Cisco Technology, Inc. | Dual mode firewall |
US7962914B2 (en) | 2003-11-25 | 2011-06-14 | Emc Corporation | Method and apparatus for load balancing of distributed processing units based on performance metrics |
US8572249B2 (en) | 2003-12-10 | 2013-10-29 | Aventail Llc | Network appliance for balancing load and platform services |
US7370100B1 (en) | 2003-12-10 | 2008-05-06 | Foundry Networks, Inc. | Method and apparatus for load balancing based on packet header content |
GB0402739D0 (en) | 2004-02-09 | 2004-03-10 | Saviso Group Ltd | Methods and apparatus for routing in a network |
US8223634B2 (en) | 2004-02-18 | 2012-07-17 | Fortinet, Inc. | Mechanism for implementing load balancing in a network |
US8484348B2 (en) | 2004-03-05 | 2013-07-09 | Rockstar Consortium Us Lp | Method and apparatus for facilitating fulfillment of web-service requests on a communication network |
EP1732272B1 (en) | 2004-03-30 | 2014-03-19 | Panasonic Corporation | Communication device and communication system |
US8923292B2 (en) | 2004-04-06 | 2014-12-30 | Rockstar Consortium Us Lp | Differential forwarding in address-based carrier networks |
JP2005311863A (en) | 2004-04-23 | 2005-11-04 | Hitachi Ltd | Traffic distribution control method, controller and network system |
GB2418110B (en) | 2004-09-14 | 2006-09-06 | 3Com Corp | Method and apparatus for controlling traffic between different entities on a network |
US7805517B2 (en) | 2004-09-15 | 2010-09-28 | Cisco Technology, Inc. | System and method for load balancing a communications network |
US8145908B1 (en) | 2004-10-29 | 2012-03-27 | Akamai Technologies, Inc. | Web content defacement protection system |
US7475274B2 (en) | 2004-11-17 | 2009-01-06 | Raytheon Company | Fault tolerance and recovery in a high-performance computing (HPC) system |
US8028334B2 (en) | 2004-12-14 | 2011-09-27 | International Business Machines Corporation | Automated generation of configuration elements of an information technology system |
CA2594020C (en) | 2004-12-22 | 2014-12-09 | Wake Forest University | Method, systems, and computer program products for implementing function-parallel network firewall |
US20060155862A1 (en) | 2005-01-06 | 2006-07-13 | Hari Kathi | Data traffic load balancing based on application layer messages |
JP4394590B2 (en) | 2005-02-22 | 2010-01-06 | 株式会社日立コミュニケーションテクノロジー | Packet relay apparatus and communication bandwidth control method |
US7499463B1 (en) | 2005-04-22 | 2009-03-03 | Sun Microsystems, Inc. | Method and apparatus for enforcing bandwidth utilization of a virtual serialization queue |
US8738702B1 (en) | 2005-07-13 | 2014-05-27 | At&T Intellectual Property Ii, L.P. | Method and system for a personalized content dissemination platform |
WO2007052285A2 (en) | 2005-07-22 | 2007-05-10 | Yogesh Chunilal Rathod | Universal knowledge management and desktop search system |
US7721299B2 (en) | 2005-08-05 | 2010-05-18 | Red Hat, Inc. | Zero-copy network I/O for virtual hosts |
US8270413B2 (en) | 2005-11-28 | 2012-09-18 | Cisco Technology, Inc. | Method and apparatus for self-learning of VPNS from combination of unidirectional tunnels in MPLS/VPN networks |
US9686183B2 (en) | 2005-12-06 | 2017-06-20 | Zarbaña Digital Fund Llc | Digital object routing based on a service request |
US7660296B2 (en) | 2005-12-30 | 2010-02-09 | Akamai Technologies, Inc. | Reliable, high-throughput, high-performance transport and routing mechanism for arbitrary data flows |
US8856862B2 (en) | 2006-03-02 | 2014-10-07 | British Telecommunications Public Limited Company | Message processing methods and systems |
US20070260750A1 (en) | 2006-03-09 | 2007-11-08 | Microsoft Corporation | Adaptable data connector |
US20070214282A1 (en) | 2006-03-13 | 2007-09-13 | Microsoft Corporation | Load balancing via rotation of cluster identity |
US20070248091A1 (en) | 2006-04-24 | 2007-10-25 | Mohamed Khalid | Methods and apparatus for tunnel stitching in a network |
US7702843B1 (en) | 2006-04-27 | 2010-04-20 | Vmware, Inc. | Determining memory conditions in a virtual machine |
US8838756B2 (en) | 2009-07-27 | 2014-09-16 | Vmware, Inc. | Management and implementation of enclosed local networks in a virtual lab |
US8892706B1 (en) | 2010-06-21 | 2014-11-18 | Vmware, Inc. | Private ethernet overlay networks over a shared ethernet in a virtual environment |
DE102006022046B4 (en) | 2006-05-05 | 2008-06-12 | Nokia Siemens Networks Gmbh & Co.Kg | A method of enabling quality of service control and / or service charging in telecommunications services |
US7693985B2 (en) | 2006-06-09 | 2010-04-06 | Cisco Technology, Inc. | Technique for dispatching data packets to service control engines |
US7761596B2 (en) | 2006-06-30 | 2010-07-20 | Telefonaktiebolaget L M Ericsson (Publ) | Router and method for server load balancing |
WO2008018969A1 (en) | 2006-08-04 | 2008-02-14 | Parallel Computers Technology, Inc. | Apparatus and method of optimizing database clustering with zero transaction loss |
US7580417B2 (en) | 2006-08-07 | 2009-08-25 | Cisco Technology, Inc. | Method and apparatus for load balancing over virtual network links |
US8707383B2 (en) | 2006-08-16 | 2014-04-22 | International Business Machines Corporation | Computer workload management with security policy enforcement |
US8312120B2 (en) | 2006-08-22 | 2012-11-13 | Citrix Systems, Inc. | Systems and methods for providing dynamic spillover of virtual servers based on bandwidth |
GB2443229B (en) | 2006-08-23 | 2009-10-14 | Cramer Systems Ltd | Capacity management for data networks |
US8204982B2 (en) | 2006-09-14 | 2012-06-19 | Quova, Inc. | System and method of middlebox detection and characterization |
US8649264B2 (en) | 2006-10-04 | 2014-02-11 | Qualcomm Incorporated | IP flow-based load balancing over a plurality of wireless network links |
JP2008104027A (en) | 2006-10-19 | 2008-05-01 | Fujitsu Ltd | Apparatus and program for collecting packet information |
US8185893B2 (en) | 2006-10-27 | 2012-05-22 | Hewlett-Packard Development Company, L.P. | Starting up at least one virtual machine in a physical machine by a load balancer |
US8849746B2 (en) | 2006-12-19 | 2014-09-30 | Teradata Us, Inc. | High-throughput extract-transform-load (ETL) of program events for subsequent analysis |
US20160277261A9 (en) | 2006-12-29 | 2016-09-22 | Prodea Systems, Inc. | Multi-services application gateway and system employing the same |
US20080189769A1 (en) | 2007-02-01 | 2008-08-07 | Martin Casado | Secure network switching infrastructure |
US7865614B2 (en) | 2007-02-12 | 2011-01-04 | International Business Machines Corporation | Method and apparatus for load balancing with server state change awareness |
WO2008102195A1 (en) | 2007-02-22 | 2008-08-28 | Telefonaktiebolaget Lm Ericsson (Publ) | Consistent and fault tolerant distributed hash table (dht) overlay network |
US20080225714A1 (en) | 2007-03-12 | 2008-09-18 | Telefonaktiebolaget Lm Ericsson (Publ) | Dynamic load balancing |
US8144709B2 (en) | 2007-04-06 | 2012-03-27 | International Business Machines Corporation | Method, system and computer processing an IP packet, routing a structured data carrier, preventing broadcast storms, load-balancing and converting a full broadcast IP packet |
US8230493B2 (en) | 2007-05-02 | 2012-07-24 | Cisco Technology, Inc. | Allowing differential processing of encrypted tunnels |
US7898959B1 (en) | 2007-06-28 | 2011-03-01 | Marvell Israel (Misl) Ltd. | Method for weighted load-balancing among network interfaces |
US20090003375A1 (en) | 2007-06-29 | 2009-01-01 | Martin Havemann | Network system having an extensible control plane |
US8000329B2 (en) | 2007-06-29 | 2011-08-16 | Alcatel Lucent | Open platform architecture for integrating multiple heterogeneous network functions |
US7843914B2 (en) | 2007-06-29 | 2010-11-30 | Alcatel-Lucent | Network system having an extensible forwarding plane |
US8898331B2 (en) | 2007-07-09 | 2014-11-25 | Hewlett-Packard Development Company, L.P. | Method, network and computer program for processing a content request |
US7895425B2 (en) | 2007-08-03 | 2011-02-22 | Cisco Technology, Inc. | Operation, administration and maintenance (OAM) in a service insertion architecture (SIA) |
US20090063706A1 (en) | 2007-08-30 | 2009-03-05 | International Business Machines Corporation | Combined Layer 2 Virtual MAC Address with Layer 3 IP Address Routing |
US8201219B2 (en) | 2007-09-24 | 2012-06-12 | Bridgewater Systems Corp. | Systems and methods for server load balancing using authentication, authorization, and accounting protocols |
US8874789B1 (en) | 2007-09-28 | 2014-10-28 | Trend Micro Incorporated | Application based routing arrangements and method thereof |
US8553537B2 (en) | 2007-11-09 | 2013-10-08 | International Business Machines Corporation | Session-less load balancing of client traffic across servers in a server group |
US7855982B2 (en) | 2007-11-19 | 2010-12-21 | Rajesh Ramankutty | Providing services to packet flows in a network |
US8411564B2 (en) | 2007-12-17 | 2013-04-02 | Indian Institute Of Technology, Bombay | Architectural framework of communication network and a method of establishing QOS connection |
EP2248003A1 (en) | 2007-12-31 | 2010-11-10 | Netapp, Inc. | System and method for automatic storage load balancing in virtual server environments |
US9043862B2 (en) | 2008-02-06 | 2015-05-26 | Qualcomm Incorporated | Policy control for encapsulated data flows |
US8175863B1 (en) | 2008-02-13 | 2012-05-08 | Quest Software, Inc. | Systems and methods for analyzing performance of virtual environments |
US8521879B1 (en) | 2008-03-11 | 2013-08-27 | United Services Automobile Assocation (USAA) | Systems and methods for a load balanced interior gateway protocol intranet |
US7808919B2 (en) | 2008-03-18 | 2010-10-05 | Cisco Technology, Inc. | Network monitoring using a proxy |
US20090249471A1 (en) | 2008-03-27 | 2009-10-01 | Moshe Litvin | Reversible firewall policies |
US9762692B2 (en) | 2008-04-04 | 2017-09-12 | Level 3 Communications, Llc | Handling long-tail content in a content delivery network (CDN) |
US20110035494A1 (en) | 2008-04-15 | 2011-02-10 | Blade Network Technologies | Network virtualization for a virtualized server data center environment |
US9749404B2 (en) | 2008-04-17 | 2017-08-29 | Radware, Ltd. | Method and system for load balancing over a cluster of authentication, authorization and accounting (AAA) servers |
US8339959B1 (en) | 2008-05-20 | 2012-12-25 | Juniper Networks, Inc. | Streamlined packet forwarding using dynamic filters for routing and security in a shared forwarding plane |
US8849971B2 (en) | 2008-05-28 | 2014-09-30 | Red Hat, Inc. | Load balancing in cloud-based networks |
US8160063B2 (en) | 2008-06-09 | 2012-04-17 | Microsoft Corporation | Data center interconnect and traffic engineering |
US8996683B2 (en) | 2008-06-09 | 2015-03-31 | Microsoft Technology Licensing, Llc | Data center without structural bottlenecks |
US8108467B2 (en) | 2008-06-26 | 2012-01-31 | International Business Machines Corporation | Load balanced data processing performed on an application message transmitted between compute nodes of a parallel computer |
US8578483B2 (en) | 2008-07-31 | 2013-11-05 | Carnegie Mellon University | Systems and methods for preventing unauthorized modification of an operating system |
US20100036903A1 (en) | 2008-08-11 | 2010-02-11 | Microsoft Corporation | Distributed load balancer |
US8706878B1 (en) | 2008-08-21 | 2014-04-22 | United Services Automobile Association | Preferential loading in data centers |
US8873399B2 (en) | 2008-09-03 | 2014-10-28 | Nokia Siemens Networks Oy | Gateway network element, a method, and a group of load balanced access points configured for load balancing in a communications network |
US8228929B2 (en) | 2008-10-24 | 2012-07-24 | Juniper Networks, Inc. | Flow consistent dynamic load balancing |
US8171124B2 (en) | 2008-11-25 | 2012-05-01 | Citrix Systems, Inc. | Systems and methods for GSLB remote service monitoring |
US8078903B1 (en) | 2008-11-25 | 2011-12-13 | Cisco Technology, Inc. | Automatic load-balancing and seamless failover of data flows in storage media encryption (SME) |
CN102326366A (en) | 2008-12-22 | 2012-01-18 | 瑞典爱立信有限公司 | Method and device for handling of connections between a client and a server via a communication network |
US8442043B2 (en) | 2008-12-29 | 2013-05-14 | Cisco Technology, Inc. | Service selection mechanism in service insertion architecture data plane |
US8224885B1 (en) | 2009-01-26 | 2012-07-17 | Teradici Corporation | Method and system for remote computing session management |
US7948986B1 (en) | 2009-02-02 | 2011-05-24 | Juniper Networks, Inc. | Applying services within MPLS networks |
US20100223364A1 (en) | 2009-02-27 | 2010-09-02 | Yottaa Inc | System and method for network traffic management and load balancing |
US20100235915A1 (en) | 2009-03-12 | 2010-09-16 | Nasir Memon | Using host symptoms, host roles, and/or host reputation for detection of host infection |
US8094575B1 (en) | 2009-03-24 | 2012-01-10 | Juniper Networks, Inc. | Routing protocol extension for network acceleration service-aware path selection within computer networks |
JP4811489B2 (en) | 2009-03-27 | 2011-11-09 | 日本電気株式会社 | Server system, collective server device, and MAC address management method |
US20100254385A1 (en) | 2009-04-07 | 2010-10-07 | Cisco Technology, Inc. | Service Insertion Architecture (SIA) in a Virtual Private Network (VPN) Aware Network |
CN101873572B (en) | 2009-04-27 | 2012-08-29 | 中国移动通信集团公司 | Data transmission method, system and relevant network equipment based on PMIPv6 |
US8261266B2 (en) | 2009-04-30 | 2012-09-04 | Microsoft Corporation | Deploying a virtual machine having a virtual hardware configuration matching an improved hardware profile with respect to execution of an application |
US8578076B2 (en) | 2009-05-01 | 2013-11-05 | Citrix Systems, Inc. | Systems and methods for establishing a cloud bridge between virtual storage resources |
US9479358B2 (en) | 2009-05-13 | 2016-10-25 | International Business Machines Corporation | Managing graphics load balancing strategies |
CN101594358B (en) | 2009-06-29 | 2012-09-05 | 北京航空航天大学 | Method, device, system and host for three-layer switching |
US8352561B1 (en) | 2009-07-24 | 2013-01-08 | Google Inc. | Electronic communication reminder technology |
US20110040893A1 (en) | 2009-08-14 | 2011-02-17 | Broadcom Corporation | Distributed Internet caching via multiple node caching management |
US20110055845A1 (en) | 2009-08-31 | 2011-03-03 | Thyagarajan Nandagopal | Technique for balancing loads in server clusters |
CN102025608B (en) | 2009-09-17 | 2013-03-20 | 中兴通讯股份有限公司 | Communication method, data message forwarding method in communication process as well as communication nodes |
CN102025702B (en) | 2009-09-17 | 2014-11-05 | 中兴通讯股份有限公司 | Network based on identity and position separation frame, and backbone network and network element thereof |
US8451735B2 (en) | 2009-09-28 | 2013-05-28 | Symbol Technologies, Inc. | Systems and methods for dynamic load balancing in a wireless network |
US8811412B2 (en) | 2009-10-15 | 2014-08-19 | International Business Machines Corporation | Steering data communications packets among service applications with server selection modulus values |
JPWO2011049135A1 (en) | 2009-10-23 | 2013-03-14 | 日本電気株式会社 | Network system, control method therefor, and controller |
CN101729412B (en) | 2009-11-05 | 2012-03-14 | 北京超图软件股份有限公司 | Distributed level cluster method and system of geographic information service |
JP2013511240A (en) | 2009-11-16 | 2013-03-28 | インターデイジタル パテント ホールディングス インコーポレイテッド | Silent period adjustment for Dynamic Spectrum Manager (DSM) |
CN101714916B (en) * | 2009-11-26 | 2013-06-05 | 华为数字技术(成都)有限公司 | Method, equipment and system for backing up |
US8832683B2 (en) | 2009-11-30 | 2014-09-09 | Red Hat Israel, Ltd. | Using memory-related metrics of host machine for triggering load balancing that migrate virtual machine |
US8615009B1 (en) | 2010-01-25 | 2013-12-24 | Juniper Networks, Inc. | Interface for extending service capabilities of a network device |
JP5648926B2 (en) | 2010-02-01 | 2015-01-07 | 日本電気株式会社 | Network system, controller, and network control method |
CN102158386B (en) | 2010-02-11 | 2015-06-03 | 威睿公司 | Distributed load balance for system management program |
US8320399B2 (en) | 2010-02-26 | 2012-11-27 | Net Optics, Inc. | Add-on module and methods thereof |
US8996610B1 (en) | 2010-03-15 | 2015-03-31 | Salesforce.Com, Inc. | Proxy system, method and computer program product for utilizing an identifier of a request to route the request to a networked device |
US8971345B1 (en) | 2010-03-22 | 2015-03-03 | Riverbed Technology, Inc. | Method and apparatus for scheduling a heterogeneous communication flow |
EP2553901B1 (en) | 2010-03-26 | 2016-04-27 | Citrix Systems, Inc. | System and method for link load balancing on a multi-core device |
US8243598B2 (en) | 2010-04-26 | 2012-08-14 | International Business Machines Corporation | Load-balancing via modulus distribution and TCP flow redirection due to server overload |
US8504718B2 (en) | 2010-04-28 | 2013-08-06 | Futurewei Technologies, Inc. | System and method for a context layer switch |
US8811398B2 (en) | 2010-04-30 | 2014-08-19 | Hewlett-Packard Development Company, L.P. | Method for routing data packets using VLANs |
US8533337B2 (en) | 2010-05-06 | 2013-09-10 | Citrix Systems, Inc. | Continuous upgrading of computers in a load balanced environment |
US8499093B2 (en) | 2010-05-14 | 2013-07-30 | Extreme Networks, Inc. | Methods, systems, and computer readable media for stateless load balancing of network traffic flows |
US20110317708A1 (en) | 2010-06-28 | 2011-12-29 | Alcatel-Lucent Usa, Inc. | Quality of service control for mpls user access |
US8281033B1 (en) | 2010-06-29 | 2012-10-02 | Emc Corporation | Techniques for path selection |
WO2012006190A1 (en) | 2010-06-29 | 2012-01-12 | Huawei Technologies Co., Ltd. | Delegate gateways and proxy for target hosts in large layer 2 and address resolution with duplicated internet protocol addresses |
JP5716302B2 (en) | 2010-06-30 | 2015-05-13 | ソニー株式会社 | Information processing apparatus, content providing method, and program |
JP5668342B2 (en) | 2010-07-07 | 2015-02-12 | 富士通株式会社 | Content conversion program, content conversion system, and content conversion server |
US20120195196A1 (en) | 2010-08-11 | 2012-08-02 | Rajat Ghai | SYSTEM AND METHOD FOR QoS CONTROL OF IP FLOWS IN MOBILE NETWORKS |
US8745128B2 (en) | 2010-09-01 | 2014-06-03 | Edgecast Networks, Inc. | Optimized content distribution based on metrics derived from the end user |
JP5476261B2 (en) | 2010-09-14 | 2014-04-23 | 株式会社日立製作所 | Multi-tenant information processing system, management server, and configuration management method |
US8838830B2 (en) | 2010-10-12 | 2014-09-16 | Sap Portals Israel Ltd | Optimizing distributed computer networks |
US8681661B2 (en) * | 2010-10-25 | 2014-03-25 | Force10 Networks, Inc. | Limiting MAC address learning on access network switches |
US8533285B2 (en) | 2010-12-01 | 2013-09-10 | Cisco Technology, Inc. | Directing data flows in data centers with clustering services |
US8699499B2 (en) | 2010-12-08 | 2014-04-15 | At&T Intellectual Property I, L.P. | Methods and apparatus to provision cloud computing network elements |
US8755283B2 (en) | 2010-12-17 | 2014-06-17 | Microsoft Corporation | Synchronizing state among load balancer components |
US8804720B1 (en) | 2010-12-22 | 2014-08-12 | Juniper Networks, Inc. | Pass-through multicast admission control signaling |
IL210897A (en) | 2011-01-27 | 2017-12-31 | Verint Systems Ltd | Systems and methods for flow table management |
US10225335B2 (en) | 2011-02-09 | 2019-03-05 | Cisco Technology, Inc. | Apparatus, systems and methods for container based service deployment |
US9191327B2 (en) | 2011-02-10 | 2015-11-17 | Varmour Networks, Inc. | Distributed service processing of network gateways using virtual machines |
US8737210B2 (en) | 2011-03-09 | 2014-05-27 | Telefonaktiebolaget L M Ericsson (Publ) | Load balancing SCTP associations using VTAG mediation |
US8676980B2 (en) | 2011-03-22 | 2014-03-18 | Cisco Technology, Inc. | Distributed load balancer in a virtual machine environment |
US8875240B2 (en) | 2011-04-18 | 2014-10-28 | Bank Of America Corporation | Tenant data center for establishing a virtual machine in a cloud environment |
US8743885B2 (en) | 2011-05-03 | 2014-06-03 | Cisco Technology, Inc. | Mobile service routing in a network environment |
US20120303809A1 (en) | 2011-05-25 | 2012-11-29 | Microsoft Corporation | Offloading load balancing packet modification |
US9104460B2 (en) | 2011-05-31 | 2015-08-11 | Red Hat, Inc. | Inter-cloud live migration of virtualization systems |
US9134945B2 (en) | 2011-06-07 | 2015-09-15 | Clearcube Technology, Inc. | Zero client device with integrated serial bandwidth augmentation and support for out-of-band serial communications |
US9298910B2 (en) | 2011-06-08 | 2016-03-29 | Mcafee, Inc. | System and method for virtual partition monitoring |
US8923294B2 (en) | 2011-06-28 | 2014-12-30 | Polytechnic Institute Of New York University | Dynamically provisioning middleboxes |
US11233709B2 (en) | 2011-07-15 | 2022-01-25 | Inetco Systems Limited | Method and system for monitoring performance of an application system |
US20130021942A1 (en) | 2011-07-18 | 2013-01-24 | Cisco Technology, Inc. | Granular Control of Multicast Delivery Services for Layer-2 Interconnect Solutions |
US9424144B2 (en) | 2011-07-27 | 2016-08-23 | Microsoft Technology Licensing, Llc | Virtual machine migration to minimize packet loss in virtualized network |
JP6080313B2 (en) | 2011-08-04 | 2017-02-15 | ミドクラ エスエーアールエル | System and method for implementing and managing virtual networks |
EP3605969B1 (en) | 2011-08-17 | 2021-05-26 | Nicira Inc. | Distributed logical l3 routing |
US8856518B2 (en) | 2011-09-07 | 2014-10-07 | Microsoft Corporation | Secure and efficient offloading of network policies to network interface cards |
US9319459B2 (en) | 2011-09-19 | 2016-04-19 | Cisco Technology, Inc. | Services controlled session based flow interceptor |
US10200493B2 (en) | 2011-10-17 | 2019-02-05 | Microsoft Technology Licensing, Llc | High-density multi-tenant distributed cache as a service |
US9104497B2 (en) | 2012-11-07 | 2015-08-11 | Yahoo! Inc. | Method and system for work load balancing |
TWI625048B (en) | 2011-10-24 | 2018-05-21 | 內數位專利控股公司 | Methods, systems and apparatuses for machine-to-machine (m2m) communications between service layers |
US8717934B2 (en) | 2011-10-25 | 2014-05-06 | Cisco Technology, Inc. | Multicast source move detection for layer-2 interconnect solutions |
EP2748714B1 (en) | 2011-11-15 | 2021-01-13 | Nicira, Inc. | Connection identifier assignment and source network address translation |
US8767737B2 (en) | 2011-11-30 | 2014-07-01 | Industrial Technology Research Institute | Data center network system and packet forwarding method thereof |
US20130159487A1 (en) | 2011-12-14 | 2013-06-20 | Microsoft Corporation | Migration of Virtual IP Addresses in a Failover Cluster |
US20130160024A1 (en) | 2011-12-20 | 2013-06-20 | Sybase, Inc. | Dynamic Load Balancing for Complex Event Processing |
US8830834B2 (en) | 2011-12-21 | 2014-09-09 | Cisco Technology, Inc. | Overlay-based packet steering |
WO2013101765A1 (en) | 2011-12-27 | 2013-07-04 | Cisco Technology, Inc. | System and method for management of network-based services |
EP2792113B1 (en) | 2011-12-28 | 2016-04-27 | Huawei Technologies Co., Ltd. | A service router architecture |
US8914406B1 (en) | 2012-02-01 | 2014-12-16 | Vorstack, Inc. | Scalable network security with fast response protocol |
US8868711B2 (en) | 2012-02-03 | 2014-10-21 | Microsoft Corporation | Dynamic load balancing in a scalable environment |
US8553552B2 (en) | 2012-02-08 | 2013-10-08 | Radisys Corporation | Stateless load balancer in a multi-node system for transparent processing with packet preservation |
US8954964B2 (en) | 2012-02-27 | 2015-02-10 | Ca, Inc. | System and method for isolated virtual image and appliance communication within a cloud environment |
EP2820648A4 (en) | 2012-02-28 | 2016-03-02 | Ten Eight Technology Inc | Automated voice-to-reporting/ management system and method for voice call-ins of events/crimes |
US8955093B2 (en) | 2012-04-11 | 2015-02-10 | Varmour Networks, Inc. | Cooperative network security inspection |
US9331938B2 (en) | 2012-04-13 | 2016-05-03 | Nicira, Inc. | Extension of logical networks across layer 3 virtual private networks |
US9106508B2 (en) | 2012-04-30 | 2015-08-11 | International Business Machines Corporation | Providing services to virtual overlay network traffic |
US8825867B2 (en) | 2012-05-04 | 2014-09-02 | Telefonaktiebolaget L M Ericsson (Publ) | Two level packet distribution with stateless first level packet distribution to a group of servers and stateful second level packet distribution to a server within the group |
US8867367B2 (en) | 2012-05-10 | 2014-10-21 | Telefonaktiebolaget L M Ericsson (Publ) | 802.1aq support over IETF EVPN |
US9325562B2 (en) | 2012-05-15 | 2016-04-26 | International Business Machines Corporation | Overlay tunnel information exchange protocol |
US8862883B2 (en) | 2012-05-16 | 2014-10-14 | Cisco Technology, Inc. | System and method for secure cloud service delivery with prioritized services in a network environment |
EP2853066B1 (en) | 2012-05-23 | 2017-02-22 | Brocade Communications Systems, Inc. | Layer-3 overlay gateways |
US8908691B2 (en) | 2012-06-05 | 2014-12-09 | International Business Machines Corporation | Virtual ethernet port aggregation (VEPA)-enabled multi-tenant overlay network |
US9898317B2 (en) | 2012-06-06 | 2018-02-20 | Juniper Networks, Inc. | Physical path determination for virtual network packet flows |
US8488577B1 (en) | 2012-06-06 | 2013-07-16 | Google Inc. | Apparatus for controlling the availability of internet access to applications |
US9304801B2 (en) | 2012-06-12 | 2016-04-05 | TELEFONAKTIEBOLAGET L M ERRICSSON (publ) | Elastic enforcement layer for cloud security using SDN |
CN104769864B (en) | 2012-06-14 | 2018-05-04 | 艾诺威网络有限公司 | It is multicasted to unicast conversion technology |
US8913507B2 (en) | 2012-06-21 | 2014-12-16 | Breakingpoint Systems, Inc. | Virtual data loopback and/or data capture in a computing system |
US8948001B2 (en) | 2012-06-26 | 2015-02-03 | Juniper Networks, Inc. | Service plane triggered fast reroute protection |
US9143557B2 (en) | 2012-06-27 | 2015-09-22 | Juniper Networks, Inc. | Feedback loop for service engineered paths |
EP2861038B1 (en) | 2012-06-29 | 2019-12-18 | Huawei Technologies Co., Ltd. | Information processing method, forwarding plane apparatus and control plane apparatus |
US9325569B2 (en) | 2012-06-29 | 2016-04-26 | Hewlett Packard Enterprise Development Lp | Implementing a software defined network using event records that are transmitted from a network switch |
CN103975625B (en) | 2012-06-30 | 2019-05-24 | 华为技术有限公司 | A kind of management method of control and the forwarding surface tunnel resource under forwarding decoupling framework |
US9237098B2 (en) | 2012-07-03 | 2016-01-12 | Cisco Technologies, Inc. | Media access control (MAC) address summation in Datacenter Ethernet networking |
US9668161B2 (en) | 2012-07-09 | 2017-05-30 | Cisco Technology, Inc. | System and method associated with a service flow router |
US9608901B2 (en) | 2012-07-24 | 2017-03-28 | Telefonaktiebolaget Lm Ericsson (Publ) | System and method for enabling services chaining in a provider network |
US9065677B2 (en) | 2012-07-25 | 2015-06-23 | Qualcomm Incorporated | Forwarding tables for hybrid communication networks |
US9071631B2 (en) | 2012-08-09 | 2015-06-30 | International Business Machines Corporation | Service management roles of processor nodes in distributed node service management |
US9678801B2 (en) | 2012-08-09 | 2017-06-13 | International Business Machines Corporation | Service management modes of operation in distributed node service management |
US8989192B2 (en) | 2012-08-15 | 2015-03-24 | Futurewei Technologies, Inc. | Method and system for creating software defined ordered service patterns in a communications network |
US8825851B2 (en) | 2012-08-17 | 2014-09-02 | Vmware, Inc. | Management of a virtual machine in a storage area network environment |
US10397074B2 (en) | 2012-08-24 | 2019-08-27 | Red Hat, Inc. | Providing message flow analysis for an enterprise service bus |
US10203972B2 (en) | 2012-08-27 | 2019-02-12 | Vmware, Inc. | Framework for networking and security services in virtual networks |
US9104492B2 (en) | 2012-09-04 | 2015-08-11 | Wisconsin Alumni Research Foundation | Cloud-based middlebox management system |
KR102286388B1 (en) | 2012-09-12 | 2021-08-04 | 아이이엑스 그룹, 인크. | Transmission latency leveling apparatuses, methods and systems |
US9843484B2 (en) | 2012-09-25 | 2017-12-12 | A10 Networks, Inc. | Graceful scaling in software driven networks |
US9106561B2 (en) | 2012-12-06 | 2015-08-11 | A10 Networks, Inc. | Configuration of a virtual service network |
US9036476B2 (en) | 2012-09-28 | 2015-05-19 | Juniper Networks, Inc. | Maintaining load balancing after service application with a network device |
US9178715B2 (en) | 2012-10-01 | 2015-11-03 | International Business Machines Corporation | Providing services to virtual overlay network traffic |
US9148367B2 (en) | 2012-10-02 | 2015-09-29 | Cisco Technology, Inc. | System and method for binding flows in a service cluster deployment in a network environment |
US8855127B2 (en) | 2012-10-02 | 2014-10-07 | Lsi Corporation | Method and system for intelligent deep packet buffering |
US10044596B2 (en) | 2012-10-05 | 2018-08-07 | Carl D. Ostrom | Devices, methods, and systems for packet reroute permission based on content parameters embedded in packet header or payload |
US9071609B2 (en) | 2012-10-08 | 2015-06-30 | Google Technology Holdings LLC | Methods and apparatus for performing dynamic load balancing of processing resources |
US20140101656A1 (en) | 2012-10-10 | 2014-04-10 | Zhongwen Zhu | Virtual firewall mobility |
CN105190557B (en) | 2012-10-16 | 2018-09-14 | 思杰系统有限公司 | For by multistage API set in the public system and method bridged between private clound |
US9571507B2 (en) | 2012-10-21 | 2017-02-14 | Mcafee, Inc. | Providing a virtual security appliance architecture to a virtual cloud infrastructure |
SG11201502776XA (en) | 2012-11-02 | 2015-06-29 | Silverlake Mobility Ecosystem Sdn Bhd | Method of processing requests for digital services |
CN103229468B (en) | 2012-11-19 | 2016-05-25 | 华为技术有限公司 | Packet-switched resources distribution method and equipment |
US10713183B2 (en) | 2012-11-28 | 2020-07-14 | Red Hat Israel, Ltd. | Virtual machine backup using snapshots and current configuration |
US9338225B2 (en) | 2012-12-06 | 2016-05-10 | A10 Networks, Inc. | Forwarding policies on a virtual service network |
US20140164477A1 (en) | 2012-12-06 | 2014-06-12 | Gary M. Springer | System and method for providing horizontal scaling of stateful applications |
US9203748B2 (en) | 2012-12-24 | 2015-12-01 | Huawei Technologies Co., Ltd. | Software defined network-based data processing method, node, and system |
US9197549B2 (en) | 2013-01-23 | 2015-11-24 | Cisco Technology, Inc. | Server load balancer traffic steering |
US20150372911A1 (en) | 2013-01-31 | 2015-12-24 | Hitachi, Ltd. | Communication path management method |
US10375155B1 (en) | 2013-02-19 | 2019-08-06 | F5 Networks, Inc. | System and method for achieving hardware acceleration for asymmetric flow connections |
US10484334B1 (en) | 2013-02-26 | 2019-11-19 | Zentera Systems, Inc. | Distributed firewall security system that extends across different cloud computing networks |
US20140269724A1 (en) | 2013-03-04 | 2014-09-18 | Telefonaktiebolaget L M Ericsson (Publ) | Method and devices for forwarding ip data packets in an access network |
US9210072B2 (en) | 2013-03-08 | 2015-12-08 | Dell Products L.P. | Processing of multicast traffic in computer networks |
US9049127B2 (en) | 2013-03-11 | 2015-06-02 | Cisco Technology, Inc. | Methods and devices for providing service clustering in a trill network |
US9300627B2 (en) | 2013-03-14 | 2016-03-29 | Time Warner Cable Enterprises Llc | System and method for automatic routing of dynamic host configuration protocol (DHCP) traffic |
US9477500B2 (en) | 2013-03-15 | 2016-10-25 | Avi Networks | Managing and controlling a distributed network service platform |
US10356579B2 (en) | 2013-03-15 | 2019-07-16 | The Nielsen Company (Us), Llc | Methods and apparatus to credit usage of mobile devices |
US9621581B2 (en) | 2013-03-15 | 2017-04-11 | Cisco Technology, Inc. | IPV6/IPV4 resolution-less forwarding up to a destination |
US9509636B2 (en) | 2013-03-15 | 2016-11-29 | Vivint, Inc. | Multicast traffic management within a wireless mesh network |
US9619542B2 (en) | 2013-04-06 | 2017-04-11 | Citrix Systems, Inc. | Systems and methods for application-state distributed replication table hunting |
US9497281B2 (en) | 2013-04-06 | 2016-11-15 | Citrix Systems, Inc. | Systems and methods to cache packet steering decisions for a cluster of load balancers |
WO2014169251A1 (en) | 2013-04-12 | 2014-10-16 | Huawei Technologies Co., Ltd. | Service chain policy for distributed gateways in virtual overlay networks |
US10069903B2 (en) | 2013-04-16 | 2018-09-04 | Amazon Technologies, Inc. | Distributed load balancer |
US10038626B2 (en) | 2013-04-16 | 2018-07-31 | Amazon Technologies, Inc. | Multipath routing in a distributed load balancer |
US10075470B2 (en) | 2013-04-19 | 2018-09-11 | Nicira, Inc. | Framework for coordination between endpoint security and network security services |
US9178828B2 (en) | 2013-04-26 | 2015-11-03 | Cisco Technology, Inc. | Architecture for agentless service insertion |
US9794379B2 (en) | 2013-04-26 | 2017-10-17 | Cisco Technology, Inc. | High-efficiency service chaining with agentless service nodes |
US9407540B2 (en) | 2013-09-06 | 2016-08-02 | Cisco Technology, Inc. | Distributed service chaining in a network environment |
US9225638B2 (en) | 2013-05-09 | 2015-12-29 | Vmware, Inc. | Method and system for service switching using service tags |
US9246799B2 (en) | 2013-05-10 | 2016-01-26 | Cisco Technology, Inc. | Data plane learning of bi-directional service chains |
US9160666B2 (en) | 2013-05-20 | 2015-10-13 | Telefonaktiebolaget L M Ericsson (Publ) | Encoding a payload hash in the DA-MAC to facilitate elastic chaining of packet processing elements |
US9826025B2 (en) | 2013-05-21 | 2017-11-21 | Cisco Technology, Inc. | Chaining service zones by way of route re-origination |
CN104322019B (en) | 2013-05-23 | 2017-11-07 | 华为技术有限公司 | Service routing system, apparatus and method |
US9503378B2 (en) | 2013-06-07 | 2016-11-22 | The Florida International University Board Of Trustees | Load-balancing algorithms for data center networks |
US9444675B2 (en) | 2013-06-07 | 2016-09-13 | Cisco Technology, Inc. | Determining the operations performed along a service path/service chain |
US9495296B2 (en) | 2013-06-12 | 2016-11-15 | Oracle International Corporation | Handling memory pressure in an in-database sharded queue |
EP2813945A1 (en) | 2013-06-14 | 2014-12-17 | Tocario GmbH | Method and system for enabling access of a client device to a remote desktop |
CN105379196B (en) | 2013-06-14 | 2019-03-22 | 微软技术许可有限责任公司 | Method, system and computer storage medium for the routing of fault-tolerant and load balance |
US9621642B2 (en) | 2013-06-17 | 2017-04-11 | Telefonaktiebolaget Lm Ericsson (Publ) | Methods of forwarding data packets using transient tables and related load balancers |
US20140372616A1 (en) | 2013-06-17 | 2014-12-18 | Telefonaktiebolaget L M Ericsson (Publ) | Methods of forwarding/receiving data packets using unicast and/or multicast communications and related load balancers and servers |
US9137165B2 (en) | 2013-06-17 | 2015-09-15 | Telefonaktiebolaget L M Ericsson (Publ) | Methods of load balancing using primary and stand-by addresses and related load balancers and servers |
WO2014207725A1 (en) | 2013-06-28 | 2014-12-31 | Telefonaktiebolaget L M Ericsson (Publ) | Method for enabling services chaining in a provider network |
US9686192B2 (en) | 2013-06-28 | 2017-06-20 | Nicira, Inc. | Network service slotting
US9350657B2 (en) | 2013-07-08 | 2016-05-24 | Nicira, Inc. | Encapsulating data packets using an adaptive tunnelling protocol |
US9755963B2 (en) | 2013-07-09 | 2017-09-05 | Nicira, Inc. | Using headerspace analysis to identify flow entry reachability |
CN104283979B (en) | 2013-07-11 | 2017-11-17 | Huawei Technologies Co., Ltd. | Method, apparatus and system for message transmission in a multicast domain name system
US9755959B2 (en) | 2013-07-17 | 2017-09-05 | Cisco Technology, Inc. | Dynamic service path creation |
US9509615B2 (en) | 2013-07-22 | 2016-11-29 | Vmware, Inc. | Managing link aggregation traffic in a virtual environment |
US9231863B2 (en) | 2013-07-23 | 2016-01-05 | Dell Products L.P. | Systems and methods for a data center architecture facilitating layer 2 over layer 3 communication |
US9331941B2 (en) | 2013-08-12 | 2016-05-03 | Cisco Technology, Inc. | Traffic flow redirection between border routers using routing encapsulation |
US9952885B2 (en) | 2013-08-14 | 2018-04-24 | Nicira, Inc. | Generation of configuration files for a DHCP module executing within a virtualized container |
US9887960B2 (en) | 2013-08-14 | 2018-02-06 | Nicira, Inc. | Providing services for logical networks |
US20160197831A1 (en) | 2013-08-16 | 2016-07-07 | Interdigital Patent Holdings, Inc. | Method and apparatus for name resolution in software defined networking |
US20160277294A1 (en) | 2013-08-26 | 2016-09-22 | Nec Corporation | Communication apparatus, communication method, control apparatus, and management apparatus in a communication system |
US9203765B2 (en) | 2013-08-30 | 2015-12-01 | Cisco Technology, Inc. | Flow based network service insertion using a service chain identifier |
US9577845B2 (en) | 2013-09-04 | 2017-02-21 | Nicira, Inc. | Multiple active L3 gateways for logical networks |
US9680748B2 (en) | 2013-09-15 | 2017-06-13 | Nicira, Inc. | Tracking prefixes of values associated with different rules to generate flows |
US9917745B2 (en) | 2013-09-27 | 2018-03-13 | Futurewei Technologies, Inc. | Validation of chained network services |
US10091276B2 (en) | 2013-09-27 | 2018-10-02 | Transvoyant, Inc. | Computer-implemented systems and methods of analyzing data in an ad-hoc network for predictive decision-making |
US9755960B2 (en) | 2013-09-30 | 2017-09-05 | Juniper Networks, Inc. | Session-aware service chaining within computer networks |
US9258742B1 (en) | 2013-09-30 | 2016-02-09 | Juniper Networks, Inc. | Policy-directed value-added services chaining |
US10148484B2 (en) | 2013-10-10 | 2018-12-04 | Nicira, Inc. | Host side method of using a controller assignment list |
US9264330B2 (en) | 2013-10-13 | 2016-02-16 | Nicira, Inc. | Tracing host-originated logical network packets |
US9385950B2 (en) | 2013-10-14 | 2016-07-05 | Cisco Technology, Inc. | Configurable service proxy local identifier mapping |
US9304804B2 (en) | 2013-10-14 | 2016-04-05 | Vmware, Inc. | Replicating virtual machines across different virtualization platforms |
CN103516807B (en) | 2013-10-14 | 2016-09-21 | China United Network Communications Group Co., Ltd. | Cloud computing platform server load balancing system and method
US9264313B1 (en) | 2013-10-31 | 2016-02-16 | Vmware, Inc. | System and method for performing a service discovery for virtual networks |
US20150124622A1 (en) | 2013-11-01 | 2015-05-07 | Movik Networks, Inc. | Multi-Interface, Multi-Layer State-full Load Balancer For RAN-Analytics Deployments In Multi-Chassis, Cloud And Virtual Server Environments |
US9578141B2 (en) | 2013-11-03 | 2017-02-21 | Ixia | Packet flow modification |
US9363180B2 (en) | 2013-11-04 | 2016-06-07 | Telefonaktiebolaget L M Ericsson (Publ) | Service chaining in a cloud environment using Software Defined Networking
US9397946B1 (en) | 2013-11-05 | 2016-07-19 | Cisco Technology, Inc. | Forwarding to clusters of service nodes |
US9634938B2 (en) | 2013-11-05 | 2017-04-25 | International Business Machines Corporation | Adaptive scheduling of data flows in data center networks for efficient resource utilization |
US9300585B2 (en) | 2013-11-15 | 2016-03-29 | Cisco Technology, Inc. | Shortening of service paths in service chains in a communications network |
US9392025B2 (en) | 2013-11-21 | 2016-07-12 | Cisco Technology, Inc. | Subscriber dependent redirection between a mobile packet core proxy and a cell site proxy in a network environment |
US9231871B2 (en) | 2013-11-25 | 2016-01-05 | Versa Networks, Inc. | Flow distribution table for packet flow load balancing |
US10104169B1 (en) | 2013-12-18 | 2018-10-16 | Amazon Technologies, Inc. | Optimizing a load balancer configuration |
WO2015094296A1 (en) | 2013-12-19 | 2015-06-25 | Nokia Solutions And Networks Oy | A method and apparatus for performing flexible service chaining |
US9548896B2 (en) * | 2013-12-27 | 2017-01-17 | Big Switch Networks, Inc. | Systems and methods for performing network service insertion |
CN104767629B (en) | 2014-01-06 | 2017-12-12 | Tencent Technology (Shenzhen) Co., Ltd. | Method, apparatus and system for distributing service nodes
US9825856B2 (en) | 2014-01-06 | 2017-11-21 | Futurewei Technologies, Inc. | Service function chaining in a packet network |
CN104811326A (en) | 2014-01-24 | 2015-07-29 | 中兴通讯股份有限公司 | Service chain management method, service chain management system, and devices |
US9992103B2 (en) | 2014-01-24 | 2018-06-05 | Cisco Technology, Inc. | Method for providing sticky load balancing |
US9514018B2 (en) | 2014-01-28 | 2016-12-06 | Software Ag | Scaling framework for querying |
CN105684505B (en) | 2014-01-29 | 2019-08-23 | Huawei Technologies Co., Ltd. | Communication network, equipment and control method
US9467382B2 (en) | 2014-02-03 | 2016-10-11 | Cisco Technology, Inc. | Elastic service chains |
US9967175B2 (en) | 2014-02-14 | 2018-05-08 | Futurewei Technologies, Inc. | Restoring service functions after changing a service chain instance path |
US9215214B2 (en) | 2014-02-20 | 2015-12-15 | Nicira, Inc. | Provisioning firewall rules on a firewall enforcing device |
US9880826B2 (en) | 2014-02-25 | 2018-01-30 | Red Hat, Inc. | Installing of application resources in a multi-tenant platform-as-a-service (PaS) system |
CN103795805B (en) | 2014-02-27 | 2017-08-25 | Suzhou Institute for Advanced Study, University of Science and Technology of China | Distributed server load-balancing method based on SDN
US10284478B2 (en) | 2014-03-04 | 2019-05-07 | Nec Corporation | Packet processing device, packet processing method and program |
CN109101318B (en) | 2014-03-12 | 2022-04-05 | Huawei Technologies Co., Ltd. | Virtual machine migration control method and device
US9344337B2 (en) | 2014-03-13 | 2016-05-17 | Cisco Technology, Inc. | Service node originated service chains in a network environment |
EP3117561B1 (en) | 2014-03-14 | 2018-10-17 | Nicira, Inc. | Route advertisement by managed gateways
US9590901B2 (en) | 2014-03-14 | 2017-03-07 | Nicira, Inc. | Route advertisement by managed gateways |
EP2922252B1 (en) | 2014-03-21 | 2017-09-13 | Juniper Networks, Inc. | Selectable service node resources |
CN104954274B (en) | 2014-03-25 | 2018-03-16 | Huawei Technologies Co., Ltd. | Method, controller and service delivery function for generating forwarding information
US9787559B1 (en) | 2014-03-28 | 2017-10-10 | Juniper Networks, Inc. | End-to-end monitoring of overlay networks providing virtualized network services |
US9602380B2 (en) | 2014-03-28 | 2017-03-21 | Futurewei Technologies, Inc. | Context-aware dynamic policy selection for load balancing behavior |
US9215210B2 (en) | 2014-03-31 | 2015-12-15 | Nicira, Inc. | Migrating firewall connection state for a firewall service virtual machine |
US9473410B2 (en) | 2014-03-31 | 2016-10-18 | Sandvine Incorporated Ulc | System and method for load balancing in computer networks |
US10264071B2 (en) | 2014-03-31 | 2019-04-16 | Amazon Technologies, Inc. | Session management in distributed storage systems |
US9009289B1 (en) | 2014-03-31 | 2015-04-14 | Flexera Software Llc | Systems and methods for assessing application usage |
US9686200B2 (en) | 2014-03-31 | 2017-06-20 | Nicira, Inc. | Flow cache hierarchy |
US9503427B2 (en) | 2014-03-31 | 2016-11-22 | Nicira, Inc. | Method and apparatus for integrating a service virtual machine |
CN107342952B (en) | 2014-04-01 | 2022-03-01 | Huawei Technologies Co., Ltd. | Service link selection control method and equipment
US10178181B2 (en) | 2014-04-02 | 2019-01-08 | Cisco Technology, Inc. | Interposer with security assistant key escrow |
CN104980348A (en) | 2014-04-04 | 2015-10-14 | ZTE Corporation | Service chain routing method, service chain routing system and device in the system
US9363183B2 (en) | 2014-04-10 | 2016-06-07 | Cisco Technology, Inc. | Network address translation offload to network infrastructure for service chains in a network environment |
US9634867B2 (en) | 2014-05-02 | 2017-04-25 | Futurewei Technologies, Inc. | Computing service chain-aware paths |
US10164894B2 (en) | 2014-05-05 | 2018-12-25 | Nicira, Inc. | Buffered subscriber tables for maintaining a consistent network state |
US9917781B2 (en) | 2014-06-05 | 2018-03-13 | KEMP Technologies Inc. | Methods for intelligent data traffic steering |
US9722927B2 (en) | 2014-06-05 | 2017-08-01 | Futurewei Technologies, Inc. | Service chain topology map construction |
TW201546649A (en) | 2014-06-05 | 2015-12-16 | Cavium Inc | Systems and methods for cloud-based WEB service security management based on hardware security module |
US9413655B2 (en) | 2014-06-13 | 2016-08-09 | Cisco Technology, Inc. | Providing virtual private service chains in a network environment |
JP2017518710A (en) | 2014-06-17 | 2017-07-06 | Huawei Technologies Co., Ltd. | Service flow processing method, apparatus, and device
US10013276B2 (en) | 2014-06-20 | 2018-07-03 | Google Llc | System and method for live migration of a virtualized networking stack |
US9602308B2 (en) | 2014-06-23 | 2017-03-21 | International Business Machines Corporation | Servicing packets in a virtual network and a software-defined network (SDN) |
US10261814B2 (en) | 2014-06-23 | 2019-04-16 | Intel Corporation | Local service chaining with virtual machines and virtualized containers in software defined networking |
US9634936B2 (en) | 2014-06-30 | 2017-04-25 | Juniper Networks, Inc. | Service chaining across multiple networks |
US9419897B2 (en) | 2014-06-30 | 2016-08-16 | Nicira, Inc. | Methods and systems for providing multi-tenancy support for Single Root I/O Virtualization |
US9692698B2 (en) | 2014-06-30 | 2017-06-27 | Nicira, Inc. | Methods and systems to offload overlay network packet encapsulation to hardware |
US10747888B2 (en) | 2014-06-30 | 2020-08-18 | Nicira, Inc. | Method and apparatus for differently encrypting data messages for different logical networks |
US9455908B2 (en) | 2014-07-07 | 2016-09-27 | Cisco Technology, Inc. | Bi-directional flow stickiness in a network environment |
US10003530B2 (en) | 2014-07-22 | 2018-06-19 | Futurewei Technologies, Inc. | Service chain header and metadata transport |
CN105453493B (en) | 2014-07-23 | 2019-02-05 | Huawei Technologies Co., Ltd. | Service message retransmission method and device
US9774533B2 (en) | 2014-08-06 | 2017-09-26 | Futurewei Technologies, Inc. | Mechanisms to support service chain graphs in a communication network |
US20160057687A1 (en) | 2014-08-19 | 2016-02-25 | Qualcomm Incorporated | Inter/intra radio access technology mobility and user-plane split measurement configuration |
US20160065503A1 (en) | 2014-08-29 | 2016-03-03 | Extreme Networks, Inc. | Methods, systems, and computer readable media for virtual fabric routing |
US9442752B1 (en) | 2014-09-03 | 2016-09-13 | Amazon Technologies, Inc. | Virtual secure execution environments |
EP3192213A1 (en) | 2014-09-12 | 2017-07-19 | Voellmy, Andreas R. | Managing network forwarding configurations using algorithmic policies |
JP6430634B2 (en) | 2014-09-19 | 2018-11-28 | Nokia Solutions and Networks Oy | Chaining network service functions in communication networks
EP3198795A1 (en) | 2014-09-23 | 2017-08-02 | Nokia Solutions and Networks Oy | Control of communication using service function chaining |
US9804797B1 (en) | 2014-09-29 | 2017-10-31 | EMC IP Holding Company LLC | Using dynamic I/O load differential for load balancing |
US11296930B2 (en) | 2014-09-30 | 2022-04-05 | Nicira, Inc. | Tunnel-enabled elastic service model |
US10135737B2 (en) | 2014-09-30 | 2018-11-20 | Nicira, Inc. | Distributed load balancing systems |
EP3190750B1 (en) | 2014-09-30 | 2020-11-25 | Huawei Technologies Co., Ltd. | Method and apparatus for generating service path |
US9935827B2 (en) | 2014-09-30 | 2018-04-03 | Nicira, Inc. | Method and apparatus for distributing load among a plurality of service nodes |
US10469342B2 (en) | 2014-10-10 | 2019-11-05 | Nicira, Inc. | Logical network traffic analysis |
US9548919B2 (en) | 2014-10-24 | 2017-01-17 | Cisco Technology, Inc. | Transparent network service header path proxies |
US10602000B2 (en) | 2014-10-29 | 2020-03-24 | Nokia Of America Corporation | Policy decisions based on offline charging rules when service chaining is implemented |
US9590902B2 (en) | 2014-11-10 | 2017-03-07 | Juniper Networks, Inc. | Signaling aliasing capability in data centers |
US9256467B1 (en) | 2014-11-11 | 2016-02-09 | Amazon Technologies, Inc. | System for managing and scheduling containers |
US9705775B2 (en) | 2014-11-20 | 2017-07-11 | Telefonaktiebolaget Lm Ericsson (Publ) | Passive performance measurement for inline service chaining |
US9838286B2 (en) | 2014-11-20 | 2017-12-05 | Telefonaktiebolaget L M Ericsson (Publ) | Passive performance measurement for inline service chaining |
US10855791B2 (en) | 2014-11-25 | 2020-12-01 | Netapp, Inc. | Clustered storage system path quiescence analysis |
WO2016082167A1 (en) | 2014-11-28 | 2016-06-02 | Huawei Technologies Co., Ltd. | Service processing apparatus and method
US20160164826A1 (en) | 2014-12-04 | 2016-06-09 | Cisco Technology, Inc. | Policy Implementation at a Network Element based on Data from an Authoritative Source |
US9866472B2 (en) | 2014-12-09 | 2018-01-09 | Oath Inc. | Systems and methods for software defined networking service function chaining |
CN107005478B (en) | 2014-12-09 | 2020-05-08 | Huawei Technologies Co., Ltd. | Adaptive flow table processing method and device
US9571405B2 (en) | 2015-02-25 | 2017-02-14 | Cisco Technology, Inc. | Metadata augmentation in a service function chain |
US9660909B2 (en) | 2014-12-11 | 2017-05-23 | Cisco Technology, Inc. | Network service header metadata for load balancing |
CN105743822B (en) | 2014-12-11 | 2019-04-19 | Huawei Technologies Co., Ltd. | Method and device for processing messages
CA2963580C (en) | 2014-12-17 | 2020-04-21 | Huawei Technologies Co., Ltd. | Data forwarding method, device, and system in software-defined networking |
WO2016096002A1 (en) | 2014-12-17 | 2016-06-23 | Telefonaktiebolaget Lm Ericsson (Publ) | Method and arrangement for relocating packet processing functions |
US9094464B1 (en) | 2014-12-18 | 2015-07-28 | Limelight Networks, Inc. | Connection digest for accelerating web traffic |
US9998954B2 (en) | 2014-12-19 | 2018-06-12 | Telefonaktiebolaget Lm Ericsson (Publ) | Method and apparatus for relocating packet processing functions |
US9462084B2 (en) | 2014-12-23 | 2016-10-04 | Intel Corporation | Parallel processing of service functions in service function chains |
US9680762B2 (en) | 2015-01-05 | 2017-06-13 | Futurewei Technologies, Inc. | Method and system for providing QoS for in-band control traffic in an openflow network |
GB2525701B (en) | 2015-01-08 | 2016-11-30 | Openwave Mobility Inc | A software defined network and a communication network comprising the same |
US20160212048A1 (en) | 2015-01-15 | 2016-07-21 | Hewlett Packard Enterprise Development Lp | Openflow service chain data packet routing using tables |
JP2016134700A (en) | 2015-01-16 | 2016-07-25 | 富士通株式会社 | Management server, communication system, and path management method |
US10341188B2 (en) | 2015-01-27 | 2019-07-02 | Huawei Technologies Co., Ltd. | Network virtualization for network infrastructure |
US10129180B2 (en) | 2015-01-30 | 2018-11-13 | Nicira, Inc. | Transit logical switch within logical router |
US10812632B2 (en) | 2015-02-09 | 2020-10-20 | Avago Technologies International Sales Pte. Limited | Network interface controller with integrated network flow processing |
CN106465230B (en) | 2015-02-13 | 2019-07-23 | Huawei Technologies Co., Ltd. | Devices, systems, and methods for controlling access
WO2016134752A1 (en) | 2015-02-24 | 2016-09-01 | Nokia Solutions And Networks Oy | Integrated services processing for mobile networks |
US9749225B2 (en) | 2015-04-17 | 2017-08-29 | Huawei Technologies Co., Ltd. | Software defined network (SDN) control signaling for traffic engineering to enable multi-type transport in a data plane |
US10116464B2 (en) | 2015-03-18 | 2018-10-30 | Juniper Networks, Inc. | EVPN inter-subnet multicast forwarding |
US10609091B2 (en) | 2015-04-03 | 2020-03-31 | Nicira, Inc. | Method, apparatus, and system for implementing a content switch |
US10038628B2 (en) | 2015-04-04 | 2018-07-31 | Nicira, Inc. | Route server mode for dynamic routing between logical and physical networks |
US10135789B2 (en) | 2015-04-13 | 2018-11-20 | Nicira, Inc. | Method and system of establishing a virtual private network in a cloud service for branch networking |
US10785130B2 (en) | 2015-04-23 | 2020-09-22 | Hewlett Packard Enterprise Development Lp | Network infrastructure device to implement pre-filter rules |
US9515993B1 (en) | 2015-05-13 | 2016-12-06 | International Business Machines Corporation | Automated migration planning for moving into a setting of multiple firewalls |
US9762402B2 (en) | 2015-05-20 | 2017-09-12 | Cisco Technology, Inc. | System and method to facilitate the assignment of service functions for service chains in a network environment |
US10021216B2 (en) | 2015-05-25 | 2018-07-10 | Juniper Networks, Inc. | Monitoring services key performance indicators using TWAMP for SDN and NFV architectures |
CN106302206B (en) | 2015-05-28 | 2020-04-24 | ZTE Corporation | Message forwarding processing method, device and system
US9985869B2 (en) | 2015-06-09 | 2018-05-29 | International Business Machines Corporation | Support for high availability of service appliances in a software-defined network (SDN) service chaining infrastructure |
EP3300317B1 (en) | 2015-06-10 | 2020-08-26 | Huawei Technologies Co., Ltd. | Method, device and system for realizing service link |
JP6097467B1 (en) | 2015-06-10 | 2017-03-15 | Soracom, Inc. | Communication system and communication method for providing wireless terminal with access to IP network
US10742544B2 (en) | 2015-06-15 | 2020-08-11 | Telefonaktiebolaget Lm Ericsson (Publ) | Method and network nodes for scalable mapping of tags to service function chain encapsulation headers |
US10042722B1 (en) | 2015-06-23 | 2018-08-07 | Juniper Networks, Inc. | Service-chain fault tolerance in service virtualized environments |
US10554484B2 (en) | 2015-06-26 | 2020-02-04 | Nicira, Inc. | Control plane integration with hardware switches |
WO2016209275A1 (en) | 2015-06-26 | 2016-12-29 | Hewlett Packard Enterprise Development Lp | Server load balancing |
US10609122B1 (en) | 2015-06-29 | 2020-03-31 | Amazon Technologies, Inc. | Instance backed building or place |
US9755903B2 (en) | 2015-06-30 | 2017-09-05 | Nicira, Inc. | Replicating firewall policy across multiple data centers |
US11204791B2 (en) | 2015-06-30 | 2021-12-21 | Nicira, Inc. | Dynamic virtual machine network policy for ingress optimization |
US9749229B2 (en) | 2015-07-01 | 2017-08-29 | Cisco Technology, Inc. | Forwarding packets with encapsulated service chain headers |
CN106330714B (en) | 2015-07-02 | 2020-05-29 | ZTE Corporation | Method and device for realizing service function chain
US10313235B2 (en) | 2015-07-13 | 2019-06-04 | Futurewei Technologies, Inc. | Internet control message protocol enhancement for traffic carried by a tunnel over internet protocol networks |
US9929945B2 (en) | 2015-07-14 | 2018-03-27 | Microsoft Technology Licensing, Llc | Highly available service chains for network services |
US20170019303A1 (en) | 2015-07-14 | 2017-01-19 | Microsoft Technology Licensing, Llc | Service Chains for Network Services |
US10367728B2 (en) | 2015-07-15 | 2019-07-30 | Netsia, Inc. | Methods for forwarding rule hopping based secure communications |
US10637889B2 (en) | 2015-07-23 | 2020-04-28 | Cisco Technology, Inc. | Systems, methods, and devices for smart mapping and VPN policy enforcement |
US10069639B2 (en) | 2015-07-28 | 2018-09-04 | Ciena Corporation | Multicast systems and methods for segment routing |
US9923984B2 (en) | 2015-10-30 | 2018-03-20 | Oracle International Corporation | Methods, systems, and computer readable media for remote authentication dial in user service (RADIUS) message loop detection and mitigation |
US9894188B2 (en) | 2015-08-28 | 2018-02-13 | Nicira, Inc. | Packet data restoration for flow-based forwarding element |
US9906561B2 (en) | 2015-08-28 | 2018-02-27 | Nicira, Inc. | Performing logical segmentation based on remote device attributes |
US10432520B2 (en) | 2015-08-28 | 2019-10-01 | Nicira, Inc. | Traffic forwarding between geographically dispersed sites |
EP3311528A1 (en) | 2015-08-31 | 2018-04-25 | Huawei Technologies Co., Ltd. | Redirection of service or device discovery messages in software-defined networks |
US9591582B1 (en) | 2015-09-10 | 2017-03-07 | Qualcomm Incorporated | Smart co-processor for optimizing service discovery power consumption in wireless service platforms |
US9667518B2 (en) | 2015-09-11 | 2017-05-30 | Telefonaktiebolaget L M Ericsson (Publ) | Method and system for delay measurement of a traffic flow in a software-defined networking (SDN) system |
CN106533935B (en) | 2015-09-14 | 2019-07-12 | Huawei Technologies Co., Ltd. | Method and apparatus for obtaining service chain information in a cloud computing system
US20170093698A1 (en) | 2015-09-30 | 2017-03-30 | Huawei Technologies Co., Ltd. | Method and apparatus for supporting service function chaining in a communication network |
US9948577B2 (en) | 2015-09-30 | 2018-04-17 | Nicira, Inc. | IP aliases in logical networks with hardware switches |
US10853111B1 (en) | 2015-09-30 | 2020-12-01 | Amazon Technologies, Inc. | Virtual machine instance migration feedback |
US10116553B1 (en) | 2015-10-15 | 2018-10-30 | Cisco Technology, Inc. | Application identifier in service function chain metadata |
CN108141376B (en) | 2015-10-28 | 2020-12-01 | Huawei Technologies Co., Ltd. | Network node, communication network and method in communication network
US10095535B2 (en) | 2015-10-31 | 2018-10-09 | Nicira, Inc. | Static route types for logical routers |
EP3361684B1 (en) | 2015-10-31 | 2020-07-29 | Huawei Technologies Co., Ltd. | Route determining method and corresponding apparatus and system |
US10078527B2 (en) | 2015-11-01 | 2018-09-18 | Nicira, Inc. | Securing a managed forwarding element that operates within a data compute node |
US9912788B2 (en) | 2015-11-10 | 2018-03-06 | Telefonaktiebolaget L M Ericsson (Publ) | Systems and methods of an enhanced state-aware proxy device |
US9860079B2 (en) | 2015-11-20 | 2018-01-02 | Oracle International Corporation | Redirecting packets for egress from an autonomous system using tenant specific routing and forwarding tables |
CN106788911A (en) | 2015-11-25 | 2017-05-31 | Huawei Technologies Co., Ltd. | Method and apparatus for message retransmission
US10067803B2 (en) | 2015-11-25 | 2018-09-04 | International Business Machines Corporation | Policy based virtual machine selection during an optimization cycle |
US10084703B2 (en) | 2015-12-04 | 2018-09-25 | Cisco Technology, Inc. | Infrastructure-exclusive service forwarding |
US10404791B2 (en) | 2015-12-04 | 2019-09-03 | Microsoft Technology Licensing, Llc | State-aware load balancing of application servers |
US9948611B2 (en) | 2015-12-14 | 2018-04-17 | Nicira, Inc. | Packet tagging for improved guest system security |
US20170170990A1 (en) | 2015-12-15 | 2017-06-15 | Microsoft Technology Licensing, Llc | Scalable Tenant Networks |
US10171336B2 (en) | 2015-12-16 | 2019-01-01 | Telefonaktiebolaget Lm Ericsson (Publ) | Openflow configured horizontally split hybrid SDN nodes |
CN106936939B (en) | 2015-12-31 | 2020-06-02 | Huawei Technologies Co., Ltd. | Message processing method, related device and NVO3 network system
US10075393B2 (en) | 2015-12-31 | 2018-09-11 | Fortinet, Inc. | Packet routing using a software-defined networking (SDN) switch |
US10063468B2 (en) | 2016-01-15 | 2018-08-28 | Cisco Technology, Inc. | Leaking routes in a service chain |
US11044203B2 (en) | 2016-01-19 | 2021-06-22 | Cisco Technology, Inc. | System and method for hosting mobile packet core and value-added services using a software defined network and service chains |
US20170214627A1 (en) | 2016-01-21 | 2017-07-27 | Futurewei Technologies, Inc. | Distributed Load Balancing for Network Service Function Chaining |
US10216467B2 (en) | 2016-02-03 | 2019-02-26 | Google Llc | Systems and methods for automatic content verification |
US10412048B2 (en) | 2016-02-08 | 2019-09-10 | Cryptzone North America, Inc. | Protecting network devices by a firewall |
US10547692B2 (en) | 2016-02-09 | 2020-01-28 | Cisco Technology, Inc. | Adding cloud service provider, cloud service, and cloud tenant awareness to network service chains |
US10158568B2 (en) | 2016-02-12 | 2018-12-18 | Huawei Technologies Co., Ltd. | Method and apparatus for service function forwarding in a service domain |
WO2017144957A1 (en) | 2016-02-26 | 2017-08-31 | Telefonaktiebolaget Lm Ericsson (Publ) | Dynamic re-route in a redundant system of a packet network |
US10003660B2 (en) | 2016-02-29 | 2018-06-19 | Cisco Technology, Inc. | System and method for data plane signaled packet capture in a service function chaining network |
CN107204941A (en) | 2016-03-18 | 2017-09-26 | ZTE Corporation | Method and apparatus for establishing a flexible Ethernet path
US10187306B2 (en) | 2016-03-24 | 2019-01-22 | Cisco Technology, Inc. | System and method for improved service chaining |
ES2716657T3 (en) | 2016-04-07 | 2019-06-13 | Telefonica Sa | A method to ensure the correct route of data packets through a particular path of a network |
US10320681B2 (en) | 2016-04-12 | 2019-06-11 | Nicira, Inc. | Virtual tunnel endpoints for congestion-aware load balancing |
US10931793B2 (en) | 2016-04-26 | 2021-02-23 | Cisco Technology, Inc. | System and method for automated rendering of service chaining |
US10171350B2 (en) | 2016-04-27 | 2019-01-01 | Cisco Technology, Inc. | Generating packets in a reverse direction of a service function chain |
US20170317936A1 (en) | 2016-04-28 | 2017-11-02 | Cisco Technology, Inc. | Selective steering network traffic to virtual service(s) using policy |
US10333849B2 (en) | 2016-04-28 | 2019-06-25 | Nicira, Inc. | Automatic configuration of logical routers on edge nodes |
US10491688B2 (en) | 2016-04-29 | 2019-11-26 | Hewlett Packard Enterprise Development Lp | Virtualized network function placements |
EP3278513B1 (en) | 2016-04-29 | 2020-03-18 | Hewlett-Packard Enterprise Development LP | Transforming a service packet from a first domain to a second domain |
US10841273B2 (en) | 2016-04-29 | 2020-11-17 | Nicira, Inc. | Implementing logical DHCP servers in logical networks |
US10469320B2 (en) | 2016-04-29 | 2019-11-05 | Deutsche Telekom Ag | Versioning system for network states in a software-defined network |
US10355983B2 (en) | 2016-05-09 | 2019-07-16 | Cisco Technology, Inc. | Traceroute to return aggregated statistics in service chains |
US10097402B2 (en) | 2016-05-11 | 2018-10-09 | Hewlett Packard Enterprise Development Lp | Filter tables for management functions |
KR102541641B1 (en) | 2016-06-07 | 2023-06-08 | Electronics and Telecommunications Research Institute | Distributed Service Function Forwarding system and method
US10284390B2 (en) | 2016-06-08 | 2019-05-07 | Cisco Technology, Inc. | Techniques for efficient service chain analytics |
US20170366605A1 (en) | 2016-06-16 | 2017-12-21 | Alcatel-Lucent Usa Inc. | Providing data plane services for applications |
US10275272B2 (en) | 2016-06-20 | 2019-04-30 | Vmware, Inc. | Virtual machine recovery in shared memory architecture |
US20170364794A1 (en) | 2016-06-20 | 2017-12-21 | Telefonaktiebolaget Lm Ericsson (Publ) | Method for classifying the payload of encrypted traffic flows |
US10382596B2 (en) | 2016-06-23 | 2019-08-13 | Cisco Technology, Inc. | Transmitting network overlay information in a service function chain |
US10063415B1 (en) | 2016-06-29 | 2018-08-28 | Juniper Networks, Inc. | Network services using pools of pre-configured virtualized network functions and service chains |
US10318737B2 (en) | 2016-06-30 | 2019-06-11 | Amazon Technologies, Inc. | Secure booting of virtualization managers |
US10237176B2 (en) | 2016-06-30 | 2019-03-19 | Juniper Networks, Inc. | Auto discovery and auto scaling of services in software-defined network environment |
EP3468117B1 (en) | 2016-07-01 | 2023-05-24 | Huawei Technologies Co., Ltd. | Service function chaining (sfc)-based packet forwarding method, device and system |
CN111884933B (en) | 2016-07-01 | 2021-07-09 | Huawei Technologies Co., Ltd. | Method, device and system for forwarding message in Service Function Chain (SFC)
US9843898B1 (en) | 2016-07-21 | 2017-12-12 | International Business Machines Corporation | Associating multiple user devices with a single user |
US20180026911A1 (en) | 2016-07-25 | 2018-01-25 | Cisco Technology, Inc. | System and method for providing a resource usage advertising framework for sfc-based workloads |
CN107666438B (en) | 2016-07-27 | 2021-10-22 | ZTE Corporation | Message forwarding method and device
US10142356B2 (en) | 2016-07-29 | 2018-11-27 | ShieldX Networks, Inc. | Channel data encapsulation system and method for use with client-server data channels |
US10225270B2 (en) | 2016-08-02 | 2019-03-05 | Cisco Technology, Inc. | Steering of cloned traffic in a service function chain |
US10608928B2 (en) | 2016-08-05 | 2020-03-31 | Huawei Technologies Co., Ltd. | Service-based traffic forwarding in virtual networks |
EP3494682B1 (en) | 2016-08-05 | 2022-06-22 | Alcatel Lucent | Security-on-demand architecture |
US10972437B2 (en) | 2016-08-08 | 2021-04-06 | Talari Networks Incorporated | Applications and integrated firewall design in an adaptive private network (APN) |
US11989332B2 (en) | 2016-08-11 | 2024-05-21 | Intel Corporation | Secure public cloud with protected guest-verified host control |
US10303899B2 (en) | 2016-08-11 | 2019-05-28 | Intel Corporation | Secure public cloud with protected guest-verified host control |
WO2018037266A1 (en) | 2016-08-26 | 2018-03-01 | Telefonaktiebolaget Lm Ericsson (Publ) | Improving sf proxy performance in sdn networks |
US10193749B2 (en) | 2016-08-27 | 2019-01-29 | Nicira, Inc. | Managed forwarding element executing in public cloud data compute node without overlay network |
WO2018044341A1 (en) | 2016-08-27 | 2018-03-08 | Nicira, Inc. | Extension of network control system into public cloud |
US10419340B2 (en) | 2016-08-29 | 2019-09-17 | Vmware, Inc. | Stateful connection optimization over stretched networks using specific prefix routes |
US10361969B2 (en) | 2016-08-30 | 2019-07-23 | Cisco Technology, Inc. | System and method for managing chained services in a network environment |
US11277338B2 (en) | 2016-09-26 | 2022-03-15 | Juniper Networks, Inc. | Distributing service function chain data and service function instance data in a network |
WO2018058677A1 (en) | 2016-09-30 | 2018-04-05 | Huawei Technologies Co., Ltd. | Message processing method, computing device, and message processing apparatus
US10938668B1 (en) | 2016-09-30 | 2021-03-02 | Amazon Technologies, Inc. | Safe deployment using versioned hash rings |
US20180102965A1 (en) | 2016-10-07 | 2018-04-12 | Alcatel-Lucent Usa Inc. | Unicast branching based multicast |
US11824863B2 (en) | 2016-11-03 | 2023-11-21 | Nicira, Inc. | Performing services on a host |
US10616100B2 (en) | 2016-11-03 | 2020-04-07 | Parallel Wireless, Inc. | Traffic shaping and end-to-end prioritization |
US11055273B1 (en) | 2016-11-04 | 2021-07-06 | Amazon Technologies, Inc. | Software container event monitoring systems |
US10187263B2 (en) | 2016-11-14 | 2019-01-22 | Futurewei Technologies, Inc. | Integrating physical and virtual network functions in a service-chained network environment |
US9906401B1 (en) | 2016-11-22 | 2018-02-27 | Gigamon Inc. | Network visibility appliances for cloud computing architectures |
US10609160B2 (en) | 2016-12-06 | 2020-03-31 | Nicira, Inc. | Performing context-rich attribute-based services on a host |
US10129186B2 (en) | 2016-12-07 | 2018-11-13 | Nicira, Inc. | Service function chain (SFC) data communications with SFC data in virtual local area network identifier (VLAN ID) data fields |
GB2558205B (en) | 2016-12-15 | 2019-07-03 | Arm Ip Ltd | Enabling communications between devices |
US10623309B1 (en) | 2016-12-19 | 2020-04-14 | International Business Machines Corporation | Rule processing of packets |
EP3340581B1 (en) | 2016-12-20 | 2022-02-23 | InterDigital CE Patent Holdings | Method for managing service chaining at a network equipment, corresponding network equipment |
US10212071B2 (en) | 2016-12-21 | 2019-02-19 | Nicira, Inc. | Bypassing a load balancer in a return path of network traffic |
US10802857B2 (en) | 2016-12-22 | 2020-10-13 | Nicira, Inc. | Collecting and processing contextual attributes on a host |
US10574652B2 (en) | 2017-01-12 | 2020-02-25 | Zscaler, Inc. | Systems and methods for cloud-based service function chaining using security assertion markup language (SAML) assertion |
US20180203736A1 (en) | 2017-01-13 | 2018-07-19 | Red Hat, Inc. | Affinity based hierarchical container scheduling |
US10243835B2 (en) | 2017-02-02 | 2019-03-26 | Fujitsu Limited | Seamless service function chaining across domains |
US10892978B2 (en) | 2017-02-06 | 2021-01-12 | Silver Peak Systems, Inc. | Multi-level learning for classifying traffic flows from first packet data |
US10673785B2 (en) | 2017-02-16 | 2020-06-02 | Netscout Systems, Inc. | Flow and time based reassembly of fragmented packets by IP protocol analyzers |
US10243856B2 (en) | 2017-03-24 | 2019-03-26 | Intel Corporation | Load balancing systems, devices, and methods |
US10244034B2 (en) | 2017-03-29 | 2019-03-26 | Ca, Inc. | Introspection driven monitoring of multi-container applications |
US10698714B2 (en) | 2017-04-07 | 2020-06-30 | Nicira, Inc. | Application/context-based management of virtual networks using customizable workflows |
US10462047B2 (en) | 2017-04-10 | 2019-10-29 | Cisco Technology, Inc. | Service-function chaining using extended service-function chain proxy for service-function offload |
US10623264B2 (en) | 2017-04-20 | 2020-04-14 | Cisco Technology, Inc. | Policy assurance for service chaining |
US10158573B1 (en) | 2017-05-01 | 2018-12-18 | Barefoot Networks, Inc. | Forwarding element with a data plane load balancer |
US10587502B2 (en) | 2017-05-16 | 2020-03-10 | Ribbon Communications Operating Company, Inc. | Communications methods, apparatus and systems for providing scalable media services in SDN systems |
US10333822B1 (en) | 2017-05-23 | 2019-06-25 | Cisco Technology, Inc. | Techniques for implementing loose hop service function chains
US10348638B2 (en) | 2017-05-30 | 2019-07-09 | At&T Intellectual Property I, L.P. | Creating cross-service chains of virtual network functions in a wide area network |
CN107105061B (en) | 2017-05-31 | 2020-09-29 | Beijing Zhongdian Puhua Information Technology Co., Ltd. | Service registration method and device
US10628236B2 (en) | 2017-06-06 | 2020-04-21 | Huawei Technologies Canada Co., Ltd. | System and method for inter-datacenter communication |
US10506083B2 (en) | 2017-06-27 | 2019-12-10 | Cisco Technology, Inc. | Segment routing gateway storing segment routing encapsulating header used in encapsulating and forwarding of returned native packet |
US10567360B2 (en) | 2017-06-29 | 2020-02-18 | Vmware, Inc. | SSH key validation in a hyper-converged computing environment |
US10757138B2 (en) | 2017-07-13 | 2020-08-25 | Nicira, Inc. | Systems and methods for storing a security parameter index in an options field of an encapsulation header |
US10432513B2 (en) | 2017-07-14 | 2019-10-01 | Nicira, Inc. | Asymmetric network elements sharing an anycast address |
US10673698B2 (en) | 2017-07-21 | 2020-06-02 | Cisco Technology, Inc. | Service function chain optimization using live testing |
US11296984B2 (en) | 2017-07-31 | 2022-04-05 | Nicira, Inc. | Use of hypervisor for active-active stateful network service cluster |
EP3673627B1 (en) | 2017-08-27 | 2023-09-13 | Nicira, Inc. | Performing in-line service in public cloud |
US10637828B2 (en) | 2017-09-17 | 2020-04-28 | Mellanox Technologies, Ltd. | NIC with stateful connection tracking |
US10721095B2 (en) | 2017-09-26 | 2020-07-21 | Oracle International Corporation | Virtual interface system and method for multi-tenant cloud networking |
EP3688595A1 (en) | 2017-09-30 | 2020-08-05 | Oracle International Corporation | Binding, in an api registry, backend services endpoints to api functions |
US10637750B1 (en) | 2017-10-18 | 2020-04-28 | Juniper Networks, Inc. | Dynamically modifying a service chain based on network traffic information |
US11120125B2 (en) | 2017-10-23 | 2021-09-14 | L3 Technologies, Inc. | Configurable internet isolation and security for laptops and similar devices |
US10805181B2 (en) | 2017-10-29 | 2020-10-13 | Nicira, Inc. | Service operation chaining |
US20190140863A1 (en) | 2017-11-06 | 2019-05-09 | Cisco Technology, Inc. | Dataplane signaled bidirectional/symmetric service chain instantiation for efficient load balancing |
US11012420B2 (en) | 2017-11-15 | 2021-05-18 | Nicira, Inc. | Third-party service chaining using packet encapsulation in a flow-based forwarding element |
US10708229B2 (en) | 2017-11-15 | 2020-07-07 | Nicira, Inc. | Packet induced revalidation of connection tracker |
US10757077B2 (en) | 2017-11-15 | 2020-08-25 | Nicira, Inc. | Stateful connection policy filtering |
US10938716B1 (en) | 2017-11-29 | 2021-03-02 | Riverbed Technology, Inc. | Preserving policy with path selection |
US11095617B2 (en) | 2017-12-04 | 2021-08-17 | Nicira, Inc. | Scaling gateway to gateway traffic using flow hash |
US11075888B2 (en) | 2017-12-04 | 2021-07-27 | Nicira, Inc. | Scaling gateway to gateway traffic using flow hash |
WO2019138415A1 (en) | 2018-01-12 | 2019-07-18 | Telefonaktiebolaget Lm Ericsson (Publ) | Mechanism for control message redirection for sdn control channel failures |
US11888899B2 (en) | 2018-01-24 | 2024-01-30 | Nicira, Inc. | Flow-based forwarding element configuration |
US10536285B2 (en) | 2018-01-25 | 2020-01-14 | Juniper Networks, Inc. | Multicast join message processing by multi-homing devices in an ethernet VPN |
US10659252B2 (en) | 2018-01-26 | 2020-05-19 | Nicira, Inc. | Specifying and utilizing paths through a network
US10797910B2 (en) | 2018-01-26 | 2020-10-06 | Nicira, Inc. | Specifying and utilizing paths through a network |
WO2019147316A1 (en) | 2018-01-26 | 2019-08-01 | Nicira, Inc. | Specifying and utilizing paths through a network |
CN110113291B (en) | 2018-02-01 | 2020-10-13 | Nokia Shanghai Bell Co., Ltd. | Method and apparatus for interworking between service function chain domains
CN110166409B (en) | 2018-02-13 | 2021-12-28 | Huawei Technologies Co., Ltd. | Device access method, related platform and computer storage medium
CN111801654A (en) | 2018-03-01 | 2020-10-20 | Google LLC | High availability multi-tenant service
US10785157B2 (en) | 2018-03-13 | 2020-09-22 | Juniper Networks, Inc. | Adaptive load-balancing over a multi-point logical interface |
US10860367B2 (en) | 2018-03-14 | 2020-12-08 | Microsoft Technology Licensing, Llc | Opportunistic virtual machine migration |
US10896160B2 (en) | 2018-03-19 | 2021-01-19 | Secure-24, Llc | Discovery and migration planning techniques optimized by environmental analysis and criticality |
US10805192B2 (en) | 2018-03-27 | 2020-10-13 | Nicira, Inc. | Detecting failure of layer 2 service using broadcast messages |
US10728174B2 (en) | 2018-03-27 | 2020-07-28 | Nicira, Inc. | Incorporating layer 2 service between two interfaces of gateway device |
US10749751B2 (en) | 2018-05-02 | 2020-08-18 | Nicira, Inc. | Application of profile setting groups to logical network entities |
US20190362004A1 (en) | 2018-05-23 | 2019-11-28 | Microsoft Technology Licensing, Llc | Data platform fabric |
US11283676B2 (en) | 2018-06-11 | 2022-03-22 | Nicira, Inc. | Providing shared memory for access by multiple network service containers executing on single service machine |
US20190377604A1 (en) | 2018-06-11 | 2019-12-12 | Nuweba Labs Ltd. | Scalable function as a service platform |
US10897392B2 (en) | 2018-06-11 | 2021-01-19 | Nicira, Inc. | Configuring a compute node to perform services on a host |
US10819571B2 (en) | 2018-06-29 | 2020-10-27 | Cisco Technology, Inc. | Network traffic optimization using in-situ notification system |
US11316900B1 (en) | 2018-06-29 | 2022-04-26 | FireEye Security Holdings Inc. | System and method for automatically prioritizing rules for cyber-threat detection and mitigation |
US10997177B1 (en) | 2018-07-27 | 2021-05-04 | Workday, Inc. | Distributed real-time partitioned MapReduce for a data fabric |
US10645201B2 (en) | 2018-07-31 | 2020-05-05 | Vmware, Inc. | Packet handling during service virtualized computing instance migration |
US11445335B2 (en) | 2018-08-17 | 2022-09-13 | Huawei Technologies Co., Ltd. | Systems and methods for enabling private communication within a user equipment group |
US11184397B2 (en) | 2018-08-20 | 2021-11-23 | Vmware, Inc. | Network policy migration to a public cloud |
US10986017B2 (en) | 2018-08-23 | 2021-04-20 | Agora Lab, Inc. | Large-scale real-time multimedia communications |
US10977111B2 (en) | 2018-08-28 | 2021-04-13 | Amazon Technologies, Inc. | Constraint solver execution service and infrastructure therefor |
US10944673B2 (en) | 2018-09-02 | 2021-03-09 | Vmware, Inc. | Redirection of data messages at logical network gateway |
CN112673596B (en) | 2018-09-02 | 2023-05-02 | VMware, Inc. | Service insertion method, device and system at logical gateway
US11595250B2 (en) | 2018-09-02 | 2023-02-28 | Vmware, Inc. | Service insertion at logical network gateway |
US10917340B2 (en) | 2018-09-11 | 2021-02-09 | Cisco Technology, Inc. | In-situ passive performance measurement in a network environment |
US11032190B2 (en) | 2018-09-12 | 2021-06-08 | Corsa Technology Inc. | Methods and systems for network security universal control point |
CN109213573A (en) | 2018-09-14 | 2019-01-15 | Zhuhai Guoxinyun Technology Co., Ltd. | Device blocking method and device for container-based virtual desktop
US10834004B2 (en) | 2018-09-24 | 2020-11-10 | Netsia, Inc. | Path determination method and system for delay-optimized service function chaining |
US10979347B2 (en) | 2018-10-27 | 2021-04-13 | Cisco Technology, Inc. | Software version aware networking |
US11012353B2 (en) | 2018-11-02 | 2021-05-18 | Cisco Technology, Inc. | Using in-band operations data to signal packet processing departures in a network |
US11398983B2 (en) | 2018-11-04 | 2022-07-26 | Cisco Technology, Inc. | Processing packets by an offload platform adjunct to a packet switching device |
US10944630B2 (en) | 2018-11-20 | 2021-03-09 | Cisco Technology, Inc. | Seamless automation of network device migration to and from cloud managed systems |
US10963282B2 (en) | 2018-12-11 | 2021-03-30 | Amazon Technologies, Inc. | Computing service with configurable virtualization control levels and accelerated launches |
US11463511B2 (en) | 2018-12-17 | 2022-10-04 | At&T Intellectual Property I, L.P. | Model-based load balancing for network data plane |
US10855588B2 (en) | 2018-12-21 | 2020-12-01 | Juniper Networks, Inc. | Facilitating flow symmetry for service chains in a computer network |
US10749787B2 (en) | 2019-01-03 | 2020-08-18 | Citrix Systems, Inc. | Method for optimal path selection for data traffic undergoing high processing or queuing delay |
US11042397B2 (en) | 2019-02-22 | 2021-06-22 | Vmware, Inc. | Providing services with guest VM mobility |
US11012351B2 (en) | 2019-02-22 | 2021-05-18 | Vmware, Inc. | Service path computation for service insertion |
US10951691B2 (en) | 2019-03-05 | 2021-03-16 | Cisco Technology, Inc. | Load balancing in a distributed system |
US20200344088A1 (en) | 2019-04-29 | 2020-10-29 | Vmware, Inc. | Network interoperability support for non-virtualized entities |
US10965592B2 (en) | 2019-05-31 | 2021-03-30 | Juniper Networks, Inc. | Inter-network service chaining |
US11184274B2 (en) | 2019-05-31 | 2021-11-23 | Microsoft Technology Licensing, Llc | Multi-cast support for a virtual network |
US11025545B2 (en) | 2019-06-06 | 2021-06-01 | Cisco Technology, Inc. | Conditional composition of serverless network functions using segment routing |
DE102020113346A1 (en) | 2019-07-02 | 2021-01-07 | Hewlett Packard Enterprise Development Lp | PROVISION OF SERVICE CONTAINERS IN AN ADAPTER DEVICE |
US20210011816A1 (en) | 2019-07-10 | 2021-01-14 | Commvault Systems, Inc. | Preparing containerized applications for backup using a backup services container in a container-orchestration pod |
US11108643B2 (en) | 2019-08-26 | 2021-08-31 | Vmware, Inc. | Performing ingress side control through egress side limits on forwarding elements |
LU101361B1 (en) | 2019-08-26 | 2021-03-11 | Microsoft Technology Licensing Llc | Computer device including nested network interface controller switches |
US12073364B2 (en) | 2019-09-10 | 2024-08-27 | Alawi Global Holdings LLC | Computer implemented system and associated methods for management of workplace incident reporting |
US20210120080A1 (en) | 2019-10-16 | 2021-04-22 | Vmware, Inc. | Load balancing for third party services |
US11200081B2 (en) | 2019-10-21 | 2021-12-14 | ForgeRock, Inc. | Systems and methods for tuning containers in a high availability environment |
WO2021086462A1 (en) | 2019-10-30 | 2021-05-06 | Vmware, Inc. | Distributed service chain across multiple clouds |
US11140218B2 (en) | 2019-10-30 | 2021-10-05 | Vmware, Inc. | Distributed service chain across multiple clouds |
US20210136140A1 (en) | 2019-10-30 | 2021-05-06 | Vmware, Inc. | Using service containers to implement service chains |
US11283717B2 (en) | 2019-10-30 | 2022-03-22 | Vmware, Inc. | Distributed fault tolerant service chain |
US11388228B2 (en) | 2019-10-31 | 2022-07-12 | Keysight Technologies, Inc. | Methods, systems and computer readable media for self-replicating cluster appliances |
US11157304B2 (en) | 2019-11-01 | 2021-10-26 | Dell Products L.P. | System for peering container clusters running on different container orchestration systems |
US11223494B2 (en) | 2020-01-13 | 2022-01-11 | Vmware, Inc. | Service insertion for multicast traffic at boundary |
US11153406B2 (en) | 2020-01-20 | 2021-10-19 | Vmware, Inc. | Method of network performance visualization of service function chains |
US11659061B2 (en) | 2020-01-20 | 2023-05-23 | Vmware, Inc. | Method of adjusting service function chains to improve network performance |
US11836158B2 (en) | 2020-02-03 | 2023-12-05 | Microstrategy Incorporated | Deployment of container-based computer environments |
US11522836B2 (en) | 2020-02-25 | 2022-12-06 | Uatc, Llc | Deterministic container-based network configurations for autonomous vehicles |
US11422900B2 (en) | 2020-03-02 | 2022-08-23 | Commvault Systems, Inc. | Platform-agnostic containerized application data protection |
US11627124B2 (en) | 2020-04-02 | 2023-04-11 | Vmware, Inc. | Secured login management to container image registry in a virtualized computer system |
US11372668B2 (en) | 2020-04-02 | 2022-06-28 | Vmware, Inc. | Management of a container image registry in a virtualized computer system |
US11212356B2 (en) | 2020-04-06 | 2021-12-28 | Vmware, Inc. | Providing services at the edge of a network using selected virtual tunnel interfaces |
US11336567B2 (en) | 2020-04-20 | 2022-05-17 | Cisco Technology, Inc. | Service aware virtual private network for optimized forwarding in cloud native environment |
US11467886B2 (en) | 2020-05-05 | 2022-10-11 | Red Hat, Inc. | Migrating virtual machines between computing environments |
US11902050B2 (en) | 2020-07-28 | 2024-02-13 | VMware LLC | Method for providing distributed gateway service at host computer |
US20220060467A1 (en) | 2020-08-24 | 2022-02-24 | Just One Technologies LLC | Systems and methods for phone number certification and verification |
WO2022132308A1 (en) | 2020-12-15 | 2022-06-23 | Vmware, Inc. | Providing stateful services in a scalable manner for machines executing on host computers
US11611625B2 (en) | 2020-12-15 | 2023-03-21 | Vmware, Inc. | Providing stateful services in a scalable manner for machines executing on host computers |
US11734043B2 (en) | 2020-12-15 | 2023-08-22 | Vmware, Inc. | Providing stateful services in a scalable manner for machines executing on host computers |
US11528213B2 (en) | 2020-12-30 | 2022-12-13 | Juniper Networks, Inc. | Sharing routes using an in-memory data store in a distributed network system |
US11153190B1 (en) | 2021-01-21 | 2021-10-19 | Zscaler, Inc. | Metric computation for traceroute probes using cached data to prevent a surge on destination servers |
2018
- 2018-03-27 US US15/937,621 patent/US10805192B2/en active Active
2020
- 2020-08-01 US US16/945,868 patent/US11038782B2/en active Active
2021
- 2021-06-13 US US17/346,255 patent/US11805036B2/en active Active
2023
- 2023-09-19 US US18/370,006 patent/US20240015086A1/en active Pending
Also Published As
Publication number | Publication date |
---|---|
US20190306036A1 (en) | 2019-10-03 |
US10805192B2 (en) | 2020-10-13 |
US11038782B2 (en) | 2021-06-15 |
US20200366584A1 (en) | 2020-11-19 |
US11805036B2 (en) | 2023-10-31 |
US20210306240A1 (en) | 2021-09-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11805036B2 (en) | 2023-10-31 | Detecting failure of layer 2 service using broadcast messages |
US10728174B2 (en) | 2020-07-28 | Incorporating layer 2 service between two interfaces of gateway device |
US11134008B2 (en) | | Asymmetric network elements sharing an anycast address |
US11296984B2 (en) | 2022-04-05 | Use of hypervisor for active-active stateful network service cluster |
US20230179474A1 (en) | | Service insertion at logical network gateway |
US11496392B2 (en) | | Provisioning logical entities in a multidatacenter environment |
US10944673B2 (en) | 2021-03-09 | Redirection of data messages at logical network gateway |
US11223494B2 (en) | 2022-01-11 | Service insertion for multicast traffic at boundary |
US11153122B2 (en) | | Providing stateful services deployed in redundant gateways connected to asymmetric network |
WO2020046686A1 (en) | | Service insertion at logical network gateway |
US11588682B2 (en) | | Common connection tracker across multiple logical switches |
US11570092B2 (en) | | Methods for active-active stateful network service cluster |
US10411948B2 (en) | | Cooperative active-standby failover between network systems |
US10938594B1 (en) | | Transparent demilitarized zone providing stateful service between physical and logical networks |
US11411777B2 (en) | | Port mapping for bonded interfaces of ECMP group |
US10951584B2 (en) | | Methods for active-active stateful network service cluster |
US10250493B2 (en) | | Asymmetric network elements sharing an anycast address |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION
| STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED