CN117178535A - On-demand routing grid for routing data packets through SD-WAN edge forwarding nodes in SD-WAN - Google Patents


Info

Publication number
CN117178535A
Authority
CN
China
Prior art keywords
forwarding node, hub, site, forwarding, wan
Prior art date
Legal status (assumed; not a legal conclusion)
Pending
Application number
CN202180097262.2A
Other languages
Chinese (zh)
Inventor
N. K. Ramaswamy
G. Kumar
Current Assignee
VMware LLC
Original Assignee
VMware LLC
Priority date
Priority claimed from US17/351,342 external-priority patent/US11509571B1/en
Application filed by VMware LLC filed Critical VMware LLC
Priority claimed from PCT/US2021/065168 external-priority patent/WO2022235303A1/en
Publication of CN117178535A publication Critical patent/CN117178535A/en

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

Some embodiments of the invention provide a method of facilitating routing through a software-defined wide area network (SD-WAN) defined for an entity. A first edge forwarding node is located at a first multi-machine site of the entity; this site is at a first physical location and comprises a first set of machines. The node acts as an edge forwarding node for the first set of machines by forwarding data packets between the first set of machines and other machines associated with the entity via other forwarding nodes in the SD-WAN. The first edge forwarding node receives configuration data specifying that the first edge forwarding node act as a hub forwarding node for forwarding a set of data packets from a second set of machines associated with the entity and operating at a second multi-machine site at a second physical location to a third set of machines associated with the entity and operating at a third multi-machine site at a third physical location. The first edge forwarding node then acts as a hub forwarding node to forward the set of data packets from the second set of machines to the third set of machines.

Description

On-demand routing grid for routing data packets through SD-WAN edge forwarding nodes in SD-WAN
Background
Today, software-defined wide area networks (SD-WANs) provide secure access to applications hosted in cloud and enterprise data centers. Typical SD-WAN deployments require transit nodes through which application traffic flows to reach its destination (e.g., cloud applications involve branch-to-branch flows via gateways). However, certain challenges arise in handling critical application traffic, such as path degradation between the source edge node and the transit node, which may lead to application degradation. Additionally, critical applications with SOS properties also suffer from path instability and may eventually experience partial or complete outage conditions, which lead to undesirable consequences.
Disclosure of Invention
Some embodiments of the present invention provide a method of routing data packets through a software-defined wide area network (SD-WAN) defined for an entity. A first edge forwarding node, located at a first multi-machine site of the entity that is at a first physical location and includes a first set of machines, acts as an edge forwarding node for the first set of machines by forwarding data packets between the first set of machines and other machines associated with the entity via other forwarding nodes in the SD-WAN. The first edge forwarding node receives configuration data specifying that the first edge forwarding node act as a hub forwarding node for forwarding a set of data packets from a second set of machines associated with the entity and operating at a second multi-machine site at a second physical location to a third set of machines associated with the entity and operating at a third multi-machine site at a third physical location. The first edge forwarding node then acts as a hub forwarding node to forward the set of data packets to the third set of machines at the third multi-machine site.
In some embodiments, the first edge forwarding node receives the set of data packets from the second edge forwarding node via a first tunnel between the first and second edge forwarding nodes, and forwards the data packets via a second tunnel between the first edge forwarding node and the next hop on the data packets' path to their destination. Before forwarding the set of data packets via the second tunnel, in some embodiments, the first edge forwarding node removes a first tunnel header identifier associated with the first tunnel and inserts a second tunnel header identifier associated with the second tunnel. In some embodiments, the first and second tunnels are secure tunnels (e.g., Virtual Private Network (VPN) tunnels).
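The tunnel-header swap described above can be sketched as follows. This is a minimal illustration under assumed names (packets modeled as dictionaries, tunnel identifiers as strings); a real edge node would re-encapsulate VPN-tunneled packets rather than edit dictionaries.

```python
def relay_packet(packet: dict, ingress_tunnel_id: str, egress_tunnel_id: str) -> dict:
    """Swap tunnel header identifiers on a packet relayed through an
    edge node acting as an on-demand hub."""
    if packet.get("tunnel_id") != ingress_tunnel_id:
        raise ValueError("packet did not arrive on the expected ingress tunnel")
    relayed = dict(packet)
    relayed.pop("tunnel_id")                 # remove first tunnel header identifier
    relayed["tunnel_id"] = egress_tunnel_id  # insert second tunnel header identifier
    return relayed

# A packet arriving from the second site, re-encapsulated toward the third site.
pkt = {"tunnel_id": "tun-site2-to-site1", "payload": b"app-data"}
out = relay_packet(pkt, "tun-site2-to-site1", "tun-site1-to-site3")
```

The payload is left untouched; only the tunnel identifier changes, mirroring the header removal and insertion described in the text.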
In some embodiments, the configuration data is received by the first edge forwarding node from a controller of the SD-WAN. In some embodiments, the controller is a centralized controller, while in other embodiments the controller is a distributed controller (with controller agents executing on devices in the SD-WAN (e.g., on forwarding nodes)), while in still other embodiments the controller is a cloud gateway that performs the controller functions. Further, in some embodiments, the controller and cloud gateway share controller functionality.
The configuration data in some embodiments includes a route record specifying multiple sets of routes, under which the first edge forwarding node acts as an edge forwarding node only for the first multi-machine site and acts as a hub forwarding node for other multi-machine sites (such as the second multi-machine site). In some embodiments, the controller provides different route records specifying different route subsets to different edge forwarding nodes in the SD-WAN.
In some embodiments, the route record is generated by the controller based on routes identified in a routing graph (e.g., a routing mesh topology model) generated by the controller, which shows the connections between forwarding nodes in the SD-WAN. In some embodiments, the controller uses the generated routing graph to identify edge forwarding nodes that can act as hub forwarding nodes for the SD-WAN, in order to provide alternate routes between a source forwarding node and a destination forwarding node when the source forwarding node experiences certain conditions while forwarding packets to other sites. For example, according to some embodiments, a particular forwarding node may be unable to connect to a hub forwarding node due to link degradation, congestion at the hub forwarding node caused by another tenant, and so on. In another example, in some embodiments, the controller (or cloud gateway) detects these conditions by pinging (e.g., sending ICMP messages to) a hub forwarding node or a group of hub forwarding nodes and detecting a slow response. In some embodiments, the controller proactively provides the route records to the edge forwarding nodes in order to allow the edge forwarding nodes to react quickly when experiencing these conditions. Alternatively or conjunctively, in some embodiments, the controller provides the route record reactively, after detecting that a particular forwarding node is experiencing these conditions (e.g., by receiving a notification from the forwarding node).
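The controller-side probing just described can be sketched as follows. All names and the threshold are invented for illustration: a hub is treated as degraded when its probe is lost or its response is slow, and the route records naming an alternate on-demand hub are selected for distribution.

```python
SLOW_RTT_MS = 200.0  # assumed threshold for a "slow response"

def pick_alternate_routes(hub_rtts_ms: dict, alternate_routes: dict) -> dict:
    """Return {hub: route record} for every hub whose probe was lost
    (RTT is None) or whose response exceeded the slowness threshold."""
    pushed = {}
    for hub, rtt in hub_rtts_ms.items():
        if rtt is None or rtt > SLOW_RTT_MS:
            pushed[hub] = alternate_routes[hub]
    return pushed

# One hub responds slowly; only its alternate route record is pushed.
routes = pick_alternate_routes(
    {"hub-dc": 350.0, "hub-cloud": 40.0},
    {"hub-dc": ["edge-1 as hub"], "hub-cloud": ["edge-2 as hub"]},
)
```

In the proactive variant described above, the records would be distributed in advance regardless of probe results; this sketch corresponds to the reactive variant.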
In some embodiments, these conditions relate to a degraded operational state of the hub forwarding node (i.e., the transit node) and are associated with a specified threshold. In some embodiments, the degraded operational state of the hub forwarding node is caused by degradation of performance attributes such as latency, bandwidth, and throughput. According to some embodiments, these conditions and their associated thresholds are defined as policy-based routing (PBR) rules that are distributed by the controller to the forwarding nodes. In some embodiments, the forwarding nodes include a metric generator for generating the metrics used to resolve the PBR rules and select alternate routes.
In some embodiments, each edge forwarding node in the SD-WAN is associated with a set of SD-WAN profiles, each of which identifies a set of shared parameters. For example, in some embodiments, an SD-WAN profile may identify a shared set of security parameters, service parameters, and/or policy parameters. In some embodiments, the controller uses these SD-WAN profiles when performing path searches on the routing graph to identify edge forwarding nodes in the routing graph that can take on the auxiliary role of hub forwarding node for the SD-WAN.
In some embodiments, the controller uses the routing graph to compute costs associated with several different paths through the routing graph. In some embodiments, the computed costs are link weight scores (i.e., cost scores) computed for the different links between forwarding nodes in the routing graph. In some embodiments, a weight score is computed as a weighted combination of several computed and provider-specific values, such as (1) the link's computed delay value, (2) its computed loss value, (3) the provider's network-connection cost, and (4) the provider's compute cost. In some embodiments, different links may have more than one associated cost. For example, in some embodiments, the link cost associated with using an edge forwarding node in its primary role as an edge forwarding node is less than the link cost associated with using that node in its secondary role as a hub forwarding node. According to some embodiments, the PBR rules used by the forwarding nodes are defined based on the computed weight scores (e.g., when the delay is greater than N ms, a higher-cost link is used).
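The weight-score computation can be illustrated as a simple weighted sum of the four values named above. The weights, the inputs, and the higher provider costs charged for a node in its secondary hub role are all assumed for illustration; the text does not fix any of them.

```python
def link_weight(delay_ms: float, loss_pct: float,
                net_cost: float, compute_cost: float,
                weights=(0.4, 0.3, 0.2, 0.1)) -> float:
    """Weighted combination of (1) computed delay, (2) computed loss,
    (3) provider network-connection cost, and (4) provider compute cost."""
    w1, w2, w3, w4 = weights
    return w1 * delay_ms + w2 * loss_pct + w3 * net_cost + w4 * compute_cost

# Same link metrics, but higher (assumed) provider costs when the edge
# node is used in its secondary role as a hub, yielding a higher score.
primary = link_weight(20.0, 0.1, net_cost=1.0, compute_cost=1.0)
secondary = link_weight(20.0, 0.1, net_cost=3.0, compute_cost=3.0)
```

A PBR rule of the kind mentioned above would then compare a measured value (e.g., delay) against its threshold and fall back to the higher-scored link only when the threshold is crossed.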
In some embodiments, the routing graph is one of a set of routing graphs that includes at least a first routing graph with no direct links between a particular edge forwarding node and any edge forwarding node in a set of edge forwarding nodes associated with the entity, and a second routing graph that does include such direct links. In some embodiments, to identify an edge forwarding node that can act as a hub forwarding node for the SD-WAN, the controller determines which routing graph yields a better routing metric than each of the other routing graphs in the set.
In some embodiments, to act as a hub forwarding node that sends the set of data packets from the second set of machines to the third set of machines, the first edge forwarding node receives the data packets from the second edge forwarding node of the second site through a first tunnel established between the first and second sites, and forwards the data packets to a third edge forwarding node at the third site through a second tunnel between the first and third sites. In some embodiments, before forwarding the data packets through the second tunnel, the first edge forwarding node removes a first tunnel header identifier associated with the first tunnel from the data packets, inserts a second tunnel header identifier associated with the second tunnel, and then forwards the data packets to the third edge forwarding node.
In some embodiments, the first edge forwarding node acts as a hub forwarding node for only a temporary period of time in order to forward the set of data packets. In some embodiments, the first edge forwarding node does not continue to act as a hub forwarding node for other communication sessions between the second set of machines and the third set of machines at the second and third sites, while in other embodiments, the first edge forwarding node acts as a hub forwarding node for all or some of the communication sessions between the second and third sites (and potentially other sites of the entity).
In some embodiments, the first, second, and third edge forwarding nodes act as spoke nodes in a hub-and-spoke architecture that uses an assigned hub forwarding node located at a data center site associated with the entity. Thus, after the first edge forwarding node begins to operate as a hub forwarding node for the second site, the SD-WAN in some embodiments has two hubs: a first hub, at the first multi-machine site (also referred to herein as a multi-user computing site), serving the second multi-machine site, and a second hub, at the data center site, serving edge forwarding nodes at several of the entity's multi-machine sites. In some embodiments, the first edge forwarding node acts as a hub forwarding node for a particular multi-machine site of the entity that establishes several tunnels with the first edge forwarding node, each tunnel for a communication session between a machine at the particular multi-machine site and a machine at another multi-machine site of the entity.
In some embodiments, the first multi-machine site of the entity is a first branch site of several branch sites of the entity, and in some embodiments the first physical location is one of several geographically dispersed physical locations. In some embodiments, a branch site (e.g., a multi-user computing site) is a location that has multiple user computers and/or other user-operated devices, which act as source computers and devices for communicating with other computers and devices at other sites (e.g., other branch sites, data center sites, etc.). In some embodiments, a branch site may also include servers that are not operated by users. In some embodiments, the second multi-machine site is a multi-tenant data center, such as a data center of a software-as-a-service (SaaS) provider. When the multi-tenant data center is a data center of a SaaS provider, in some embodiments, the second edge forwarding node is a multi-tenant gateway forwarding node.
In some embodiments, edge forwarding nodes associated with an SD-WAN may include edge forwarding nodes associated with branch sites of the SD-WAN, gateway forwarding nodes for private data centers, multi-tenant gateway forwarding nodes associated with public clouds, multi-tenant gateway forwarding nodes associated with SaaS provider clouds, and hub forwarding nodes that provide connectivity between spoke edge forwarding nodes in a hub-spoke configuration (hub-and-spoke configuration) of the SD-WAN.
The foregoing summary is intended to serve as a brief introduction to some embodiments of the invention. It is not meant to be an introduction or overview of all inventive subject matter disclosed in this document. The following detailed description and the drawings referred to in it further describe the embodiments described in this summary as well as other embodiments. Accordingly, a full review of the summary, detailed description, drawings, and claims is required in order to understand all embodiments described herein. Furthermore, the claimed subject matter is not limited to the illustrative details in the summary, detailed description, and drawings.
Drawings
The novel features of the invention are set forth in the appended claims. However, for purposes of explanation, several embodiments of the invention are set forth in the following figures.
Fig. 1 illustrates an example of a virtual network created for a particular entity using a hub deployed in a public cloud data center of a public cloud provider, in accordance with some embodiments.
Fig. 2 illustrates an example of a virtual network in which a particular edge forwarding node has been assigned as a hub forwarding node to facilitate a communication session between two edge forwarding nodes, in accordance with some embodiments.
Fig. 3 illustrates a process performed by a forwarding node in a virtual network when attempting to establish a communication session with a particular destination, in accordance with some embodiments.
Fig. 4 illustrates a process performed by an edge forwarding node at a branch site when attempting to establish communication with a destination in some embodiments.
Fig. 5 illustrates an example of a virtual network in which a particular edge forwarding node has been assigned as a hub forwarding node to facilitate a communication session between the particular edge forwarding node and a SaaS data center, in accordance with some embodiments.
Fig. 6 illustrates a process performed by a forwarding node in a virtual network acting as a hub forwarding node to facilitate a communication session between a particular source and a particular destination, in accordance with some embodiments.
Fig. 7 illustrates an example of a virtual network in which a particular edge forwarding node has been assigned as a hub forwarding node for other edge forwarding nodes in the virtual network, in accordance with some embodiments.
Fig. 8 illustrates a process performed by a controller or cluster of controllers in a virtual network to identify potential edge forwarding nodes that can act as hub forwarding nodes to provide optimal routing for other forwarding nodes in the virtual network, in accordance with some embodiments.
Fig. 9 illustrates an example in which a particular edge forwarding node in a first SD-WAN has been assigned as a hub forwarding node to facilitate a communication session between another edge forwarding node in the first SD-WAN and an edge forwarding node in a second SD-WAN, in accordance with some embodiments.
Fig. 10 illustrates a process performed by a controller or cluster of controllers in a virtual network to identify potential edge forwarding nodes that can act as hub forwarding nodes to provide optimal routing for other forwarding nodes in the virtual network in response to a failed communication attempt detected by another forwarding node in the virtual network, in accordance with some embodiments.
Figs. 11A-11G illustrate examples of routing graphs generated by a controller to identify all possible routes between a source and a destination, according to some embodiments.
Fig. 12 illustrates an example of two routing graphs, generated for an SD-WAN, that treat one edge node differently.
FIG. 13 conceptually illustrates a computer system with which some embodiments of the invention are implemented.
Detailed Description
In the following detailed description of the present invention, numerous details, examples, and embodiments of the present invention are set forth and described. It will be apparent, however, to one skilled in the art that the invention is not limited to the illustrated embodiments and that the invention may be practiced without some of the specific details and examples that are discussed.
Some embodiments of the present invention provide a method of routing data packets through a software-defined wide area network (SD-WAN) defined for an entity by enhancing the roles of SD-WAN devices. Examples of roles for SD-WAN devices include SD-WAN edge forwarding node, SD-WAN hub forwarding node, and SD-WAN gateway forwarding node. In some embodiments, the role of an SD-WAN device may include primary and secondary functions, with the secondary functions either always resident or invoked on demand. In some embodiments, these roles are context-based. For example, in some embodiments, a controller or cluster of controllers may associate SD-WAN forwarding nodes with heuristic metrics (such as geographic location, number of paths to a hub, and path metrics).
For example, in the primary function of its role as an edge forwarding node, a first SD-WAN edge forwarding node of an entity, located at a first multi-machine site (also referred to herein as a multi-user computing site) that is at a first physical location and includes a first set of machines, may forward data packets from the first set of machines operating at the first multi-machine site to other forwarding nodes in the SD-WAN for forwarding to other machines associated with the entity. Based on configuration data (i.e., route records) from the controller, the first SD-WAN edge forwarding node may then operate in its secondary function as a hub for a second multi-machine site and relay a set of data packets from a second set of machines operating at the second multi-machine site to a third set of machines associated with the entity.
Fig. 1 illustrates an example of a virtual network 100 created for a particular entity using SD-WAN forwarding elements deployed at branch sites, data centers, and public clouds. Examples of public clouds are those provided by Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, etc., while examples of entities include companies (e.g., corporations, partnerships, etc.), organizations (e.g., schools, non-profit organizations, government entities, etc.), etc.
In Fig. 1, the SD-WAN forwarding elements include cloud gateway 105 and SD-WAN forwarding elements 130, 132, 134, and 136. In some embodiments, the cloud gateway (CGW) is a forwarding element in a private or public data center 110. In some embodiments, CGW 105 has secure connection links (e.g., tunnels) with edge forwarding elements (e.g., SD-WAN edge forwarding elements (FEs) 130, 132, 134, and 136) at a particular entity's multi-machine sites (e.g., SD-WAN edge sites 120, 122, and 124), such as multi-user computing sites (e.g., branch offices or other physical locations with multiple user computers and other user-operated devices that act as sources of communications to machines at other sites), data centers (e.g., locations hosting servers), etc. These multi-machine sites are typically in different physical locations (e.g., different buildings, different cities, different states, etc.).
Four multi-machine sites 120-126 are illustrated in Fig. 1, three of which are branch sites 120-124 and one of which is data center 126. Each branch site is shown as including an edge forwarding node 130-134, while data center site 126 is shown as including hub forwarding node 136. Data center SD-WAN forwarding node 136 is referred to as a hub node because in some embodiments this forwarding node can be used to connect to the edge forwarding nodes of branch sites 120-124. In some embodiments, a hub node uses or has one or more service engines to perform services (e.g., middlebox services) on packets that the node forwards from one branch site to another. In some embodiments, when an edge forwarding node assumes the role of a hub forwarding node (e.g., based on a route record provided by the controller cluster), the controller cluster provides service rules to the edge forwarding node to enable this node (or a service engine used by this node) to perform the service operations expected of a hub forwarding node. The hub node also provides access to data center resources 156, as described further below.
Each edge forwarding element (e.g., SD-WAN edge FEs 130-134) exchanges data packets with one or more cloud gateways 105 over one or more connection links 115 (e.g., a plurality of connection links available at the edge forwarding element). In some embodiments, these connection links include both secure and non-secure connection links, while in other embodiments, they include only secure connection links. As shown by edge node 134 and gateway 105, multiple secure connection links (e.g., multiple secure tunnels established over multiple physical links) may be established between one edge node and gateway.
When several such links are defined between an edge node and a gateway, in some embodiments each secure connection link is associated with a different physical network link between the edge node and an external network. For example, to access external networks, in some embodiments the edge node has one or more commercial broadband internet links (e.g., cable modems, fiber-optic links) to access the internet, an MPLS (multiprotocol label switching) link to access external networks through an MPLS provider's network, wireless cellular links (e.g., 5G LTE networks), etc. In some embodiments, the different physical links between edge node 134 and cloud gateway 105 are the same type of link (e.g., different MPLS links).
In some embodiments, one edge forwarding node 130-134 may also have multiple direct links 115 (e.g., secure connection links established over multiple physical links) to another edge forwarding node 130-134 and/or data center hub node 136. Also, in some embodiments different links may use different types of physical links or the same type of physical link. Further, in some embodiments, a first edge forwarding node of a first branch site may be connected to a second edge forwarding node of a second branch site by: either (1) directly through one or more links 115, (2) through a cloud gateway or data center hub to which the first edge forwarding node is connected through two or more links 115, or (3) through another edge forwarding node of another branch site that may enhance its role to that of the hub forwarding node, as will be described in more detail below. Thus, in some embodiments, a first edge forwarding node (e.g., 134) of a first branch site (e.g., 124) may reach a second edge forwarding node (e.g., 130) of a second branch site (e.g., 120) or a hub forwarding node 136 of a data center site 126 using multiple SD-WAN links 115.
In some embodiments, cloud gateway 105 is used to connect two SD-WAN forwarding nodes 130-136 through at least two secure connection links 115 between gateway 105 and two forwarding elements at two SD-WAN sites (e.g., branch sites 120-124 or data center site 126). In some embodiments, cloud gateway 105 also provides network data from one multi-machine site to another multi-machine site (e.g., provides an accessible subnet of one site to another site). Similar to cloud gateway 105, hub forwarding element 136 of data center 126 may be used in some embodiments to connect two SD-WAN forwarding nodes 130-134 of two branch sites through hub 136 and at least two secure connection links 115 between the two forwarding elements at the two branch sites 120-124.
In some embodiments, each secure connection link between two SD-WAN forwarding nodes (i.e., CGW 105 and edge forwarding nodes 130-136) is formed as a VPN tunnel between the two forwarding nodes. In this example, the SD-WAN forwarding nodes (e.g., forwarding elements 130-136 and gateway 105) and the set of secure connections 115 between the forwarding nodes form a virtual network 100 for a particular entity that spans at least the public or private cloud data center 110 to connect branch sites and data center sites 120-126.
In some embodiments, secure connection links are defined between gateways in different public cloud data centers to allow paths through the virtual network to pass from one public cloud data center to another, while in other embodiments such links are not defined. Further, in some embodiments, gateway 105 is a multi-tenant gateway that is used to define other virtual networks for other entities (e.g., other companies, organizations, etc.). Some such embodiments use the tenant identifier to create a tunnel between the gateway and the edge forwarding element of a particular entity, and then use the tunnel identifier of the created tunnel to allow the gateway to distinguish the data packet flow it receives from the edge forwarding element of one entity from the data packet flows it receives along the other tunnels of other entities. In other embodiments, the gateway is single tenant and is specifically deployed for use by only one entity.
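The multi-tenant bookkeeping just described can be sketched as follows, with a hypothetical gateway object and invented identifiers: the gateway records the tenant for each tunnel it creates, then recovers the tenant from the tunnel identifier carried by each received packet.

```python
class MultiTenantGateway:
    """Toy model of a gateway distinguishing tenants by tunnel identifier."""

    def __init__(self):
        self._tenant_by_tunnel = {}

    def create_tunnel(self, tenant_id: str, tunnel_id: str) -> None:
        # At tunnel-creation time, the tenant identifier is bound to the tunnel.
        self._tenant_by_tunnel[tunnel_id] = tenant_id

    def tenant_for_packet(self, packet: dict) -> str:
        # On receipt, the tunnel identifier alone recovers the tenant.
        return self._tenant_by_tunnel[packet["tunnel_id"]]

gw = MultiTenantGateway()
gw.create_tunnel("tenant-A", "tun-17")
gw.create_tunnel("tenant-B", "tun-42")
```

A single-tenant gateway, by contrast, would need no such mapping, since every flow it handles belongs to the one entity it serves.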
Fig. 1 illustrates a controller cluster 140 that acts as a central point for managing (e.g., defining and modifying) configuration data that is provided to edge nodes and/or gateways to configure some or all of the operations. In some embodiments, this controller cluster 140 is in one or more public cloud data centers, while in other embodiments it is in one or more private data centers. In some embodiments, controller cluster 140 has a set of manager servers that define and modify configuration data, and a set of controller servers that distribute configuration data to edge Forwarding Elements (FEs), hubs, and/or gateways. In some embodiments, controller cluster 140 instructs the edge forwarding elements and hubs to use certain gateways (i.e., assigns gateways to edge forwarding elements and hubs). In some embodiments, some or all of the functions of the controller cluster are performed by a cloud gateway (e.g., cloud gateway 105).
In some embodiments, controller cluster 140 also provides next-hop forwarding rules and load balancing criteria. As described above, in some embodiments, the controller cluster 140 also provides service rules to edge forwarding nodes that can act as hub forwarding nodes, in order to enable these nodes, or the service engines they use, to perform the service operations expected of hub forwarding nodes. In some embodiments, the controller cluster proactively provides configuration data (e.g., route records, forwarding rules, etc.) to the edge forwarding nodes in order to allow them to react quickly when experiencing conditions that require the use of alternate routes. Alternatively or conjunctively, in some embodiments, the controller provides configuration data reactively, after detecting that a particular forwarding node is experiencing such conditions (e.g., by receiving a notification from that forwarding node).
In some embodiments, these conditions relate to a degraded operational state of the hub forwarding node and are associated with thresholds defined in forwarding rules (e.g., policy-based routing (PBR) rules). In some embodiments, the degraded operational state of the hub forwarding node may be due to issues with latency, bandwidth, and/or throughput. For example, when the throughput of the assigned hub forwarding node used by a first edge forwarding node at a first site does not fall within a threshold defined in the forwarding rules, the forwarding rules may specify that the first edge forwarding node should use a second edge forwarding node at a second site to forward a set of data packets to a third site.
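A forwarding rule of the kind described above might be evaluated as in this sketch, with an assumed rule shape: when the assigned hub's measured throughput falls below the rule's threshold, the alternate next hop (an edge node acting as a hub) is chosen.

```python
def select_next_hop(rule: dict, measured_throughput_mbps: float) -> str:
    """Pick the next hop per a throughput-threshold forwarding rule."""
    if measured_throughput_mbps < rule["min_throughput_mbps"]:
        return rule["alternate_hop"]   # edge node at the second site, acting as hub
    return rule["primary_hop"]         # the assigned hub forwarding node

# Illustrative rule: fall back to the second site's edge node below 100 Mbps.
rule = {"min_throughput_mbps": 100.0,
        "primary_hop": "hub-datacenter",
        "alternate_hop": "edge-site-2"}
```

The same shape extends to latency or bandwidth conditions by swapping the measured attribute and the comparison direction.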
In some embodiments, the controller detects the degraded operational state of the hub forwarding node and signals the first edge forwarding node to use an alternate route through the second edge forwarding node, while in other embodiments the first edge forwarding node automatically uses the second edge forwarding node as the hub based on the route record. Figs. 3, 4, 6, 8, and 10 illustrate processes that facilitate routing by assigning edge forwarding nodes as hub forwarding nodes in some embodiments, and are described below with reference to Figs. 2, 5, 7, and 9.
Fig. 2 illustrates another example of a virtual network 200 created for a particular entity using SD-WAN forwarding elements deployed at branch sites, data centers, and public clouds in some embodiments. Similar to virtual network 100, the SD-WAN forwarding elements of virtual network 200 include SD-WAN edge forwarding nodes 230-234, cloud gateway 205, and SD-WAN hub forwarding node 236. In some embodiments, cloud gateway 205 is a forwarding element in private or public data center 210. In some embodiments, cloud gateway 205 has secure connection links (e.g., tunnels) with edge forwarding elements at different branch sites and data centers of an entity. In this example, edge forwarding nodes 230-234 are located at respective branch sites 220-224, while hub forwarding node 236 is located at data center 226.
While cloud gateway 205 and hub forwarding node 236 may provide forwarding services for branch sites 220-224, in some embodiments these connections may sometimes experience problems caused by heavy traffic loads from other sites in the SD-WAN. Thus, in some embodiments, edge forwarding nodes 230-234 are requested or directed to act as hub forwarding nodes to facilitate communication sessions between other sites in the SD-WAN.
Fig. 3 illustrates a process performed by an edge forwarding node at a branch site when attempting to establish communication with a destination (e.g., any destination device based on a route) in some embodiments. Process 300 begins at 305 by attempting to establish a communication session with a particular forwarding node at a particular site via one or more hubs or gateways. For example, in virtual network 200, edge forwarding node 230 at branch site 220 may attempt to communicate with edge forwarding node 232 at branch site 222 through cloud gateway 205 and hub forwarding node 236 via connection link 260.
Next, at 310, the edge forwarding node determines whether the attempt to establish the communication session failed. In some embodiments, for example, when different branch sites of the same or different entities/tenants are sending large amounts of data via the hub or gateway forwarding node, connections (e.g., links) with other branch sites become less reliable (i.e., deteriorate). Furthermore, in some embodiments, the connection attempt fails because the hub or gateway forwarding node may be experiencing planned or unplanned downtime (e.g., for maintenance).
When the edge forwarding node determines at 310 that the attempt has not failed, the process transitions to 315 to send the communication (i.e., the data packet) via the successful route. The process then ends. Otherwise, when the edge forwarding node determines at 310 that the attempt did fail, the process transitions to 320 to determine if the threshold number of attempts has been exceeded. In some embodiments, the threshold number of attempts is predefined by a user (e.g., a network administrator) and implemented as a fault tolerance policy, or as a PBR rule, as will be described below with respect to fig. 4.
When the edge forwarding node determines that the threshold for failed attempts has not been exceeded, the process returns to 305 to continue attempting to establish a communication session via the hub and gateway forwarding nodes. Alternatively, when the edge forwarding node determines at 320 that the threshold number of failed attempts has been exceeded, the process transitions to 325 to establish a communication session with an intermediate edge forwarding node at another branch site. For example, edge forwarding node 230 may establish a communication session with edge forwarding node 234 (which acts as a hub for the communication session between edge forwarding node 230 and edge forwarding node 232).
Next, at 330, the edge forwarding node begins forwarding the data packet to the intermediate edge forwarding node, which now acts as a hub forwarding node, for delivery to the particular forwarding node at the particular site. For example, in virtual network 200, edge forwarding node 230 is shown sending data packets 275 along route 270, which route 270 passes through edge forwarding node 234 for delivery to edge forwarding node 232. The process then ends.
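The retry-and-fallback logic of process 300 can be sketched as follows. This is a minimal illustration, not the patent's implementation; the route names and the `try_connect` callback are assumptions introduced here.

```python
# Sketch of process 300: retry the primary hub/gateway route and, after a
# threshold number of failed attempts, fall back to an intermediate edge
# forwarding node acting as a hub. Names are illustrative only.
def establish_session(try_connect, primary_route, fallback_route, max_attempts=3):
    """Return the route over which the session was established."""
    for _ in range(max_attempts):
        if try_connect(primary_route):   # steps 305/310: attempt the primary route
            return primary_route          # step 315: send via the successful route
    # steps 320/325: threshold exceeded; use the edge node acting as a hub
    if try_connect(fallback_route):
        return fallback_route             # step 330: forward via the edge-as-hub route
    raise ConnectionError("no route to destination")
```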
Fig. 4 illustrates another process performed by an edge forwarding node at a branch site when attempting to establish communication with a destination (e.g., any destination device based on a route) in some embodiments. Process 400 begins at 410 when an edge forwarding node of a branch site receives a data packet for forwarding across an SD-WAN to a second site (e.g., from a source machine at the branch site).
Next, at 420, the process identifies a set of one or more PBR rules that apply to the data packet and that identify two next hops for two routes to the destination, one route using the assigned hub forwarding node at the data center site and the other route using another edge forwarding node at another site. In some embodiments, the edge forwarding node identifies applicable PBR rules based on five tuple identifiers associated with the data packet (e.g., source and destination addresses, source and destination port numbers, and protocols) and based on performance attributes of the assigned hub forwarding node (e.g., latency, bandwidth, and throughput).
After identifying the set of PBR rules, the process evaluates the conditions specified by the set of PBR rules at 430 to select one of the next hops identified by the set of PBR rules. For example, in some embodiments, the PBR rules specify thresholds for performance attributes. For example, the PBR rules may specify that when an assigned hub forwarding node has a delay greater than N ms, another edge forwarding node acting as a hub forwarding node should be the next hop for forwarding the packet. In another example, the PBR rules may specify that when an edge forwarding node experiences more than N failed attempts to connect to an assigned hub forwarding node, another edge forwarding node that is a hub forwarding node should be the next hop for forwarding the packet. The process then forwards the received data packet to the selected next hop at 440. After 440, process 400 ends.
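Process 400's rule lookup and next-hop selection might be sketched as follows, assuming an invented five-tuple matching scheme and a single latency threshold; none of the field names come from the patent.

```python
# Hypothetical sketch of steps 420-430: match a packet's five-tuple against a
# rule table, then pick between the rule's two next hops based on the assigned
# hub's measured performance.
from dataclasses import dataclass

@dataclass
class PbrRule:
    five_tuple: tuple      # (src, dst, sport, dport, proto); None = wildcard
    hub_next_hop: str      # route via the assigned hub forwarding node
    edge_next_hop: str     # route via an edge node acting as a hub
    max_latency_ms: float  # threshold on the assigned hub's latency

def lookup_rule(rules, packet_tuple):
    """Step 420: find the first rule whose pattern matches the packet."""
    for rule in rules:
        if all(p is None or p == v for p, v in zip(rule.five_tuple, packet_tuple)):
            return rule
    return None

def select_next_hop(rule, hub_latency_ms):
    """Step 430: fall back to the edge-as-hub route when the threshold is exceeded."""
    if hub_latency_ms > rule.max_latency_ms:
        return rule.edge_next_hop
    return rule.hub_next_hop
```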
Fig. 5 illustrates a virtual network 500 that includes a controller cluster 540, a plurality of branch sites (e.g., 520, 522, and 524), each having a respective SD-WAN edge forwarding node (e.g., 530, 532, and 534), and a set of resources (e.g., 550, 552, and 554). Virtual network 500 also includes data center 528 (public or private) with resources 558 and SD-WAN hub forwarding node 538, cloud gateway 505 in public cloud 510, and SaaS data center 526 with resources 556 and SD-WAN gateway forwarding node 536. Fig. 5 will be described below with reference to fig. 6.
Fig. 6 illustrates a process 600 performed in some embodiments by an edge forwarding node of a branch site that acts as a hub forwarding node to relay communications between other sites inside and outside of a virtual network. Process 600 begins at 610 when an edge forwarding node of a branch site (i.e., a first edge forwarding node) receives an instruction from a controller to act as a hub forwarding node to relay a set of data packets from a particular forwarding node at a particular site to a destination. For example, in some embodiments, in virtual network 500, edge forwarding node 530 at branch site 520 may establish a communication session with edge forwarding node 534 at branch site 524, such that edge forwarding node 534 acts as a hub for relaying a set of data packets from edge forwarding node 530 to gateway forwarding node 536 after a number of failed attempts to communicate through cloud gateway 505. In some embodiments, the instructions include a route record generated by the controller based on routes identified in one or more route maps for the SD-WAN.
After receipt of the instruction at 610, the edge forwarding node establishes a first tunnel with the particular forwarding node at the particular site and a second tunnel with the next hop on the path to the destination at 620 to relay the set of data packets from the particular forwarding node to the destination. For example, edge forwarding node 534 may establish a tunnel with edge forwarding node 530 via link 570 and with gateway forwarding node 536 (i.e., the destination) over link 572.
The edge forwarding node next receives the data packet from the particular forwarding node along the first tunnel, removes the identifier of the first tunnel from the data packet, and inserts the identifier of the second tunnel at 630. The edge forwarding node then forwards the data packet with the identifier of the second tunnel to the destination through the second tunnel at 640. For example, edge forwarding node 534 may receive packets from edge forwarding node 530 and forward the packets to destination gateway forwarding node 536 along the path shown by dashed line 574.
Next, at 650, the edge forwarding node determines whether there are additional packets in the set of packets to be forwarded. When the edge forwarding node determines that there are additional data packets to forward in the communication session (i.e., the session has not yet terminated), the process returns to 630 to receive the data packets from the particular forwarding node.
Otherwise, when the edge forwarding node determines that there are no additional data packets to forward (i.e., the communication session has terminated), the process transitions to 660 to terminate the first tunnel and the second tunnel and cease to assume the role of hub in accordance with the received instructions. For example, in some embodiments, an edge forwarding node operating in a hub role is configured to remain in the role as a hub only for the length of time it takes to relay a set of packets for which a tunnel was originally established, while in other embodiments, the edge forwarding node continues to operate in a hub role for a particular set of communication sessions, or in still other embodiments, the edge forwarding node operates in a hub role until it receives additional instructions (e.g., from a controller) to stop. After 660, the process ends.
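The per-packet relay of steps 630-640 can be sketched as below, under the assumption that a packet is modeled as a dict carrying a `tunnel_id` field; the patent does not specify a packet format.

```python
# Minimal sketch of steps 630-640: the edge node acting as a hub strips the
# ingress tunnel's identifier from each packet and re-tags it with the egress
# tunnel's identifier before forwarding. The dict packet model is assumed.
def relay_packet(packet, ingress_tunnel_id, egress_tunnel_id):
    if packet.get("tunnel_id") != ingress_tunnel_id:
        raise ValueError("packet did not arrive on the expected tunnel")
    relayed = dict(packet)                   # copy; do not mutate the original
    relayed["tunnel_id"] = egress_tunnel_id  # step 630: swap tunnel identifiers
    return relayed                           # step 640: send over the second tunnel
```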
Fig. 7 illustrates another example of a virtual network in some embodiments. Virtual network 700 includes a controller cluster 740, a set of branch sites (e.g., 720, 722, and 724) each including a set of resources (e.g., 750, 752, and 754) and SD-WAN edge forwarding nodes (e.g., 730, 732, and 734), and a data center 728 including resources 758 and hub forwarding nodes 738. Hub forwarding node 738 at data center 728 is used to connect each of branch sites 720-724 to gateway forwarding node 736 of external SaaS data center 726 to allow the branch sites to access resource 756 of the SaaS data center. Fig. 7 will be described in more detail below with reference to fig. 8.
Fig. 8 illustrates a process performed by a controller or cluster of controllers in some embodiments. In some embodiments, process 800 is performed as a reactive process (i.e., performed in response to detection of an adverse condition in the SD-WAN), while in other embodiments the process is a proactive process (i.e., performed before any adverse condition is detected). Process 800 begins at 810 when the controller generates a routing graph (e.g., a routing mesh topology model) based on profile settings of the SD-WAN forwarding nodes to identify available routes between devices inside and outside the SD-WAN. For example, controller cluster 740 may identify all connections 760 between branch sites 720-724 and data center 728, as well as connections 765 between branch sites. Example routing graphs will be described below with reference to figs. 11A-11G.
Next, at 820, the controller analyzes the routing graph to identify spoke SD-WAN edge forwarding nodes. In virtual network 700, controller cluster 740 may identify each of edge forwarding nodes 730-734 as spokes around hub forwarding node 738. Based on this analysis, the controller determines at 830 that a particular spoke SD-WAN edge forwarding node should act as an SD-WAN hub forwarding node for a set of SD-WAN edge forwarding nodes. For example, although each of edge forwarding nodes 730-734 has been identified as a spoke, in some embodiments controller cluster 740 may determine that the best route for edge forwarding nodes 730 and 734 (e.g., in the event that these nodes cannot reach hub forwarding node 738 directly) will pass through edge forwarding node 732, as indicated by bold routes 770 and 775.
After determining that a particular spoke edge forwarding node should act as a hub forwarding node for a group of spoke edge forwarding nodes, the controller instructs the particular spoke edge forwarding node to act as a hub forwarding node for the group of SD-WAN edge forwarding nodes and instructs the group to use the particular spoke edge forwarding node as a hub forwarding node at 840. For example, controller cluster 740 may send respective instructions to each of edge forwarding nodes 730-734 using connection 780. In some embodiments, the controller instructs the set of edge forwarding nodes to use the assigned hub forwarding node only for a specified amount of time (e.g., for a particular set of communication sessions).
In some embodiments, the instructions include a routing record generated by the controller that identifies different paths using the particular spoke edge forwarding node as a hub forwarding node. In some embodiments, the routing records include two different sets of routing records generated based on first and second routing graphs, wherein the first set includes routes in which the particular spoke edge forwarding node acts only as an edge forwarding node, and the second set includes routes in which the particular spoke edge forwarding node acts as an edge forwarding node as well as a hub forwarding node. Alternatively or conjunctively, in some embodiments, the routing records include two different sets of routing records based on one routing graph generated by the controller, wherein the first set of routing records is further based on a first set of routes associated with a first cost when using the particular spoke edge forwarding node as an edge forwarding node, and the second set of routing records is further based on a second set of routes associated with a second cost when using the particular spoke edge forwarding node as a hub forwarding node. In some embodiments, the controller also sends, along with the routing records, a list of nodes identified in the routing graph as nodes that can act as hubs to forwarding nodes in the SD-WAN. After providing instructions to the particular spoke edge forwarding node, process 800 ends.
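The first scheme above (two route-record sets distinguished by whether a spoke edge node is pressed into hub duty) can be illustrated by classifying candidate paths according to their intermediate nodes. The path lists and node names below are invented.

```python
# Illustrative split of candidate paths into two route-record sets: one where
# every intermediate node plays its primary role, and one where some spoke edge
# node serves as a hub. Node names are hypothetical.
def split_route_records(paths, edge_nodes):
    edge_only, with_edge_hubs = [], []
    for path in paths:
        if any(n in edge_nodes for n in path[1:-1]):  # check intermediates only
            with_edge_hubs.append(path)               # an edge node acts as a hub
        else:
            edge_only.append(path)                    # primary roles only
    return edge_only, with_edge_hubs
```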
Fig. 9 illustrates an example of a communication session between sites in different SD-WANs relayed by an edge forwarding node. The first SD-WAN 901 includes a set of branch sites (e.g., 920, 921, and 922), each having a set of resources (e.g., 950, 951, and 952) and an edge forwarding node (e.g., 930, 931, and 932), and a cloud gateway 905 in the public cloud 910. The second SD-WAN 902 includes a pair of branch sites (e.g., 924 and 925), each having a set of resources (e.g., 954 and 955) and an edge forwarding node (e.g., 934 and 935). The first SD-WAN 901 and the second SD-WAN 902 are connected by a common data center 926 that includes a set of resources 956 and a hub forwarding node 936. Fig. 9 will be further described below with reference to fig. 10.
Fig. 10 illustrates a process performed by a controller or cluster of controllers in some embodiments to facilitate routing between forwarding nodes inside and outside of an SD-WAN. Process 1000 begins at 1010 when a controller detects performance degradation at an assigned hub forwarding node between a source site and a destination site. For example, a controller cluster (not shown) of SD-WAN 901 may detect performance degradation of cloud gateway 905 and/or link 960 between edge forwarding node 931 and cloud gateway 905. In some embodiments, the controller cluster detects such network events by receiving notifications about the network events from different forwarding elements (e.g., edge forwarding nodes 931, cloud gateways 905, etc.).
Next, at 1020, the controller uses a routing graph (e.g., the routing graph generated in process 800) to perform a path search to identify an alternate route between the source site and the destination site across the SD-WAN that uses a particular edge forwarding node at a particular site (i.e., rather than the assigned hub forwarding node) as the next hop for the set of data packets. For example, a controller cluster (not shown) of SD-WAN 901 may identify edge forwarding node 930 as a spoke edge forwarding node capable of acting as a hub for a communication session between edge forwarding node 931 in SD-WAN 901 and a destination, such as edge forwarding node 934 of branch site 924 in SD-WAN 902.
The controller cluster then selects the best route from the identified alternative routes for forwarding the set of data packets from the source site to the destination site at 1030. In some embodiments, each route has an associated cost, and the best route selected is the route with the lowest cost, while in other embodiments, the best route is not the route with the lowest cost.
At 1040, the controller cluster instructs a particular edge forwarding node that acts as the next hop in the selected route to act as a hub forwarding node to forward a set of packets from the source site to the destination site. For example, a controller cluster (not shown) may instruct the edge forwarding node 930 to act as a hub for the edge forwarding node 931 to have the edge forwarding node 931 forward a set of data packets to the edge forwarding node 934 of the second SD-WAN 902 such that the data packets are forwarded from the edge forwarding node 930 acting as a hub to the hub forwarding node 936 of the data center 926 and finally to the edge forwarding node 934.
Additionally, at 1050, the controller cluster instructs the edge forwarding node at the source site to use the particular edge forwarding node at the particular site as a next hop for forwarding a set of data packets. In some embodiments, the controller cluster instructs the edge forwarding node to use the particular edge forwarding node as the next hop for only the group of packets, while in other embodiments the controller cluster instructs the edge forwarding node to use the particular edge forwarding node as the next hop for additional groups of packets. The process then ends.
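Steps 1020-1030 can be sketched as a filter-then-select over candidate routes, assuming the controller already holds a list of (cost, path) pairs; the candidates below are invented for illustration.

```python
# Sketch of steps 1020-1030: once a hub or gateway is flagged as degraded,
# discard candidate routes that traverse it and keep the lowest-cost survivor.
def pick_alternate_route(candidates, degraded_node):
    """candidates: iterable of (cost, path) pairs; returns the chosen path."""
    viable = [(cost, path) for cost, path in candidates
              if degraded_node not in path]
    return min(viable)[1] if viable else None
```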
In some embodiments, the cloud gateway 905 performs some or all of the functions of the controller clusters described above. For example, in some embodiments, the cloud gateway is responsible for collecting network event related data from other forwarding elements connected by the SD-WAN and providing this data to the controller cluster, while in other embodiments, the cloud gateway collects this data, analyzes the data to detect any problems, and provides a solution (e.g., by providing alternative routes for forwarding data packets).
Although the processes in fig. 3-10 are described with reference to the elements in fig. 2-9, the specific destination for each of these processes may be any of the following: an SD-WAN edge forwarding node at a branch site, an SD-WAN gateway forwarding node of a private data center, a multi-tenant SD-WAN gateway forwarding node associated with a public cloud, a multi-tenant SD-WAN gateway forwarding node associated with a SaaS provider cloud, or an SD-WAN hub forwarding node providing connectivity between spoke SD-WAN edge forwarding nodes in a hub-spoke configuration of the SD-WAN.
As described above, in some embodiments, a controller or cluster of controllers actively or passively creates and examines a routing graph to determine routes for data packets between SD-WAN edge forwarding nodes. In some embodiments, the controller generates one or more routing graphs to perform a path search to identify routes between SD-WAN sites that are sources and destinations of data packet flows that traverse SD-WAN forwarding nodes (e.g., edge nodes, hub nodes, cloud gateway nodes, etc.). In some embodiments, the controller also provides each forwarding node in the SD-WAN with a list of forwarding nodes that may act as hub forwarding nodes. Additional details regarding generating routing diagrams and performing path searches on those routing diagrams to identify paths through the SD-WAN can be found in U.S. patent No. 11,005,684.
Figs. 11A-11G illustrate examples of routing graphs generated by a controller, along with a subset of potential desired paths superimposed on the routing graphs, from which the controller can select one or more edge forwarding nodes to operate as hub forwarding nodes in an auxiliary function. Although the routing graph generation and analysis are described below as being performed by a controller, in some embodiments some or all of these functions are performed by a cloud gateway.
Fig. 11A illustrates a routing graph 1100 generated by a controller to identify nodes in a virtual network and the connections between them. The routing graph 1100 includes five edge forwarding nodes (e.g., 1110, 1112, 1114, 1116, and 1118), gateway forwarding node 1120, and hub forwarding node 1122. Additionally, the routing graph includes a node 1138, representing an external corporate compute node (e.g., a branch office or data center) or SaaS provider accessible through edge forwarding node 1118, and a set of nodes 1130, 1132, 1134, and 1136 representing machines or groups of machines at branch sites served by edge forwarding nodes 1110-1118. For example, nodes 1130 and 1132 represent machines accessible through edge forwarding node 1110, node 1134 represents machines accessible through edge forwarding node 1114, and node 1136 represents machines accessible through edge forwarding node 1116.
The routing graph 1100 also illustrates the connections between these forwarding nodes, including links 1140 between edge forwarding nodes, links 1142 between edge forwarding nodes and gateway forwarding node 1120, links 1144 between edge forwarding nodes and hub forwarding node 1122, and link 1146 between gateway forwarding node 1120 and hub forwarding node 1122. In some embodiments, the controller removes any identified bad links prior to generating the routing graph 1100.
As described with respect to process 800, the controller may analyze the routing graph to identify spoke SD-WAN edge forwarding nodes (such as spoke edge forwarding nodes 1110-1118) and determine whether any of the identified spoke edge forwarding nodes should act as hub forwarding nodes for other edge forwarding nodes. For example, edge forwarding node 1112 has a connection to hub forwarding node 1122 via a link 1144, so if the connection link 1142 between edge forwarding node 1110 and gateway forwarding node 1120 becomes unreliable, edge forwarding node 1112 may act as a hub forwarding node for edge forwarding node 1110. Each node that appears as a hub or spoke in the routing graph 1100 is also labeled with a cost label that indicates the cost of using that node in its primary role (e.g., edge forwarding node 1110 has an associated cost of 1 ("E1-C1") and edge forwarding node 1116 has an associated cost of 1 ("E4-C1")).
In some embodiments, for one or more links in the routing graph, the controller calculates a link weight score (cost score) as a weighted combination of several computed values and provider-specific values. In some embodiments, the weight score is a weighted combination of the link's (1) computed delay value, (2) computed loss value, (3) provider network-connection cost, and (4) provider compute cost. In some embodiments, provider compute cost is accounted for because the managed forwarding nodes connected by the links are machines (e.g., virtual machines or containers) executing on hosts of the public cloud data center(s). In some embodiments, these weight scores may be used to determine which edge forwarding nodes are best suited to act as hub forwarding nodes in their auxiliary functions.
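A minimal sketch of such a link weight score is shown below; the linear form and the particular weights are assumed placeholders, since the text does not specify how the four factors are combined.

```python
# Hypothetical link weight score: a weighted combination of the four factors
# named above. The weights, and the linear combination itself, are assumptions.
def link_weight(latency_ms, loss_pct, provider_net_cost, provider_compute_cost,
                weights=(0.4, 0.3, 0.2, 0.1)):
    w1, w2, w3, w4 = weights
    return (w1 * latency_ms + w2 * loss_pct
            + w3 * provider_net_cost + w4 * provider_compute_cost)
```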
For example, fig. 11B illustrates the routing graph 1100 with weight scores added to the links, for use by the controller in determining the most desirable route between a source node (e.g., emphasized edge forwarding node 1110) and a destination node (e.g., emphasized node 1136). For example, the link between edge forwarding node 1110 and gateway forwarding node 1120 has a weight value of L-C1 (i.e., link cost 1), while the link between edge forwarding node 1110 and edge forwarding node 1112 has a weight value of L-C2 (i.e., link cost 2). It may be assumed that in some embodiments the link cost (e.g., L-C1) between an edge forwarding node and an assigned gateway is normally less than the link cost (e.g., L-C2) between a first edge forwarding node and a second edge forwarding node operating in a hub forwarding node role.
In addition to the weight value associated with each link and the initial cost score for each node, edge forwarding nodes 1112, 1114, and 1118 also include an auxiliary cost score (e.g., edge forwarding node 1112 includes auxiliary cost E2-H-C1) that represents the cost of using each of these particular edge forwarding nodes as hub forwarding nodes in its auxiliary function. In some embodiments, it may be assumed that the cost score of an edge forwarding node is smaller when the node operates as an edge forwarding node in its primary function than when the node operates as a hub forwarding node in its secondary function. In some such embodiments, it may also be assumed that under normal operating conditions, the cost score when an edge forwarding node operates as a hub forwarding node in its auxiliary function is greater than the cost score associated with the assigned hub forwarding node.
Fig. 11C illustrates the routing graph 1100 on which a first desired path between edge forwarding node 1110 and edge forwarding node 1116 is superimposed, the first desired path being represented by emphasized and labeled links. In this example, each of the forwarding nodes that traffic will traverse has a cost associated with that forwarding node's primary function (i.e., none of the edge forwarding nodes in this example operates as a hub forwarding node). Thus, in some embodiments, the cost of using this particular path is less than the cost of other potential paths.
Fig. 11D illustrates the routing graph 1100 on which a second desired path between edge forwarding node 1110 and edge forwarding node 1116 is superimposed. In this example, one edge forwarding node (i.e., edge forwarding node 1118) operates as a hub forwarding node to pass traffic from gateway 1120 to edge forwarding node 1116.
In some embodiments, the decision to elevate the role of an edge forwarding node is based on conditions faced by another forwarding node that prevent that forwarding node from forwarding traffic to the intended next hop. For example, according to some embodiments, a particular forwarding node may be unable to connect to a hub forwarding node due to link degradation, congestion at the hub forwarding node caused by another tenant, and so on. In another example, in some embodiments, the controller (or cloud gateway) may detect these conditions by pinging (e.g., sending ICMP messages to) a hub forwarding node or a set of hub forwarding nodes and detecting a slow response.
In some embodiments, the conditions faced by a forwarding node are associated with specified thresholds, such as a bandwidth threshold, a connection-attempt threshold (i.e., the number of failed attempts by the forwarding node to connect to another forwarding node), a response-time threshold (i.e., how quickly the forwarding node responds to ICMP messages), etc. For example, in some embodiments, the decision to elevate the role of edge forwarding node 1118 so that it operates as a hub forwarding node is based on a threshold number of failed connection attempts being exceeded when gateway forwarding node 1120 attempts to connect to hub forwarding node 1122. As described above, in some embodiments, the failed attempts may be due to congestion caused by heavy traffic from other tenants using hub forwarding node 1122.
As a result of the congestion (or another condition), in some embodiments, the controller determines that the cost of using hub forwarding node 1122 has become much greater than the cost of using edge forwarding node 1118 as a hub forwarding node to deliver traffic to the destination. Alternatively or conjunctively, according to some embodiments, forwarding nodes experiencing these conditions select their own alternate routes using routing records provided by the controller (or cloud gateway).
In some embodiments, forwarding nodes make their selections according to policy-based routing (PBR) rules. In some such embodiments, the forwarding node includes a metric generator that generates metrics used to evaluate the PBR rules. For example, a PBR rule may specify for a source (e.g., a Los Angeles branch) that if the traffic destination is X (e.g., a San Francisco branch), then the next hop is Y (e.g., a Fresno branch) if the delay of Y is within 80% of a specified ideal range, and otherwise the next hop is Z (e.g., a Las Vegas branch). Thus, if an edge forwarding node located at the source site determines that the delay of Y is not within that range, the edge forwarding node will use Z as its next hop.
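One plausible encoding of this example rule is sketched below, reading "within 80% of the specified ideal range" as the measured delay of Y being at most 80% of an ideal maximum; that reading, and the branch names, are assumptions.

```python
# Hypothetical evaluation of the example PBR rule: prefer next hop Y while its
# measured delay stays within 80% of the ideal maximum, otherwise use Z.
def choose_next_hop(delay_y_ms, ideal_max_ms,
                    hop_y="fresno-branch", hop_z="las-vegas-branch"):
    return hop_y if delay_y_ms <= 0.8 * ideal_max_ms else hop_z
```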
Fig. 11E illustrates the routing graph 1100 on which a third desired path between edge forwarding node 1110 and edge forwarding node 1116 is superimposed. As in the example of fig. 11D, this example includes one edge forwarding node, this time edge forwarding node 1112, operating as a hub forwarding node to pass traffic from edge forwarding node 1110 to hub forwarding node 1122 for final delivery to destination 1116. Since gateway 1120 is a multi-tenant forwarding node, as is hub forwarding node 1122, heavy traffic from another tenant can bog down gateway forwarding node 1120, creating a need for an alternative route and offsetting the generally higher cost of using edge forwarding node 1112 in place of gateway 1120.
Fig. 11F illustrates the routing graph 1100 on which a fourth desired path between edge forwarding node 1110 and edge forwarding node 1116 is superimposed. This fourth path uses the auxiliary hub functions of both edge forwarding node 1112 and edge forwarding node 1118. Unlike the example paths of figs. 11C-11E, the example path of fig. 11F includes additional nodes for traffic to traverse. It may be inferred that, according to some embodiments, the cost of two edge forwarding nodes operating as hub forwarding nodes in their auxiliary functions, plus the cost of traversing the additional nodes, is now less than the cost of using the direct link between edge forwarding node 1110 and gateway 1120 plus the cost of traversing hub forwarding node 1122.
Finally, fig. 11G illustrates the routing graph 1100 on which a fifth desired path (the least desirable of the five shown) between edge forwarding node 1110 and edge forwarding node 1116 is superimposed. In this example, two edge forwarding nodes (e.g., 1114 and 1118) in turn operate as hub forwarding nodes.
In some embodiments, the cost of using the path shown in fig. 11F may be equal to the cost of the path shown in fig. 11G, and means other than cost may be used to determine the best path. For example, in some embodiments, the controller may associate forwarding nodes with heuristic metrics (such as geographic location, number of paths to the hub, and other path metrics). In some embodiments, the path in fig. 11F may be more desirable and less costly than the path in fig. 11G based on the possibility of additional traffic to edge forwarding node 1114 (which provides access to node 1134). While the example paths provided above are limited, in some embodiments, the controller identifies each potential path between the source and destination and selects the best path.
As described above, different embodiments generate and utilize routing graphs differently. For example, some embodiments define only one routing graph, but allow an edge node to act as an edge forwarding node or a hub forwarding node by providing two different costs for each such edge node for two different capabilities under which it may operate (i.e., a first cost when it operates as an edge forwarding node and a second cost when it operates as a hub forwarding node). These embodiments then perform a path search on this common routing graph to identify, in combination, the following routes for pairs of sites to which the SD-WAN is connected: (1) A route that uses only a particular edge node as an edge forwarding element, and (2) a route that also uses a particular edge node as a hub forwarding element.
Other embodiments, on the other hand, define two routing graphs: one that does not treat any edge forwarding node as a hub node, and another that allows edge nodes to serve as both edge forwarding nodes and hub forwarding nodes for some or all other edge nodes. These embodiments perform a path search on each routing graph to identify the best route between each pair of sites connected by the SD-WAN. Fig. 12 illustrates an example of two routing graphs 1200a and 1200b generated for an SD-WAN that treat one edge node 1212 ("E2") differently. In graph 1200a, edge node 1212 is assigned only the acronym EFE to identify that it operates solely as an edge forwarding element. Thus, in this routing graph, node 1212 cannot be used to define a route from node 1210 to node 1214 (i.e., via link 1240); instead, all possible routes must pass through hub node 1220 and/or cloud gateway node 1222, as highlighted by the overlapping example routes 1250a and 1250b.
In the second graph 1200b, edge node 1212 is assigned the acronyms EFE and HFE to identify that it may operate as an edge forwarding element and a hub forwarding element. Thus, in this routing graph, node 1212 may be used to define a route from node 1210 to node 1214 (i.e., via link 1240), as highlighted by overlapping routes 1252a and 1252b shown from node 1210 through node 1212 to node 1214. In some embodiments, different costs are associated with node 1212 acting as an EFE or HFE, as described above with reference to fig. 11A-11G.
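The two-graph alternative described with fig. 12 can be sketched by building the graphs separately and searching each: in the first graph, E2 terminates traffic but cannot transit it; in the second, E2 may also forward onward, at a hub surcharge. All names, costs, and the surcharge value are assumptions for illustration.

```python
import heapq

def dijkstra(graph, src, dst):
    """Least-cost path over an adjacency-list graph of (neighbor, cost) pairs."""
    pq, seen = [(0, src, [src])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, c in graph.get(node, []):
            heapq.heappush(pq, (cost + c, nxt, path + [nxt]))
    return None

# Graph A (like 1200a): E2 is spoke-only, so it has no outgoing transit links
# and any E1-to-E3 route must pass through hub H.
graph_a = {
    "E1": [("H", 5), ("E2", 2)],
    "H": [("E1", 5), ("E2", 5), ("E3", 5)],
    "E2": [],  # terminates traffic only (EFE)
    "E3": [("H", 5)],
}
# Graph B (like 1200b): same topology, but E2 may forward onward as a hub,
# at its link cost of 2 plus an assumed hub surcharge of 3.
graph_b = dict(graph_a, E2=[("E1", 2 + 3), ("E3", 2 + 3)])

assert dijkstra(graph_a, "E1", "E3")[1] == ["E1", "H", "E3"]   # forced via H
assert dijkstra(graph_b, "E1", "E3")[1] == ["E1", "E2", "E3"]  # via E2 as hub
```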
Many of the above-described features and applications are implemented as software processes that are specified as a set of instructions recorded on a computer-readable storage medium (also referred to as a computer-readable medium). When these instructions are executed by one or more processing units (e.g., one or more processors, cores of processors, or other processing units), they cause the processing unit(s) to perform the actions indicated in the instructions. Examples of computer-readable media include, but are not limited to, CD-ROMs, flash drives, RAM chips, hard drives, EPROMs, etc. The computer-readable media do not include carrier waves and electronic signals passing wirelessly or over wired connections.
In this specification, the term "software" is meant to include firmware residing in read-only memory or applications stored in magnetic storage, which can be read into memory for processing by a processor. Also, in some embodiments, multiple software inventions can be implemented as sub-parts of a larger program while remaining distinct software inventions. In some embodiments, multiple software inventions can also be implemented as separate programs. Finally, any combination of separate programs that together implement a software invention described here is within the scope of the invention. In some embodiments, the software programs, when installed to operate on one or more electronic systems, define one or more specific machine implementations that execute and perform the operations of the software programs.
Fig. 13 conceptually illustrates a computer system 1300 with which some embodiments of the invention are implemented. Computer system 1300 can be used to implement any of the above-described hosts, controllers, gateways, and edge forwarding elements. As such, it can be used to execute any of the above-described processes. This computer system includes various types of non-transitory machine-readable media and interfaces for various other types of machine-readable media. Computer system 1300 includes a bus 1305, processing unit(s) 1310, a system memory 1325, a read-only memory 1330, a persistent storage device 1335, input devices 1340, and output devices 1345.
Bus 1305 generally represents all of the system, peripherals, and chipset buses that communicatively connect the many internal devices of computer system 1300. For example, bus 1305 communicatively connects processing unit(s) 1310 with read-only memory 1330, system memory 1325, and persistent storage 1335.
From these various memory units, processing unit(s) 1310 retrieve instructions to be executed and data to be processed in order to perform the processes of the present invention. In various embodiments, the processing unit(s) may be a single processor or a multi-core processor. Read Only Memory (ROM) 1330 stores static data and instructions required by processing unit(s) 1310 and other modules of the electronic system. Persistent storage 1335, on the other hand, is a read-write memory device. This device 1335 is a non-volatile memory unit that stores instructions and data even when the computer system 1300 is turned off. Some embodiments of the invention use a mass storage device (such as a magnetic or optical disk and its corresponding disk drive) as the permanent storage device 1335.
Other embodiments use removable storage devices (such as floppy disks, flash memory drives, etc.) as the permanent storage device 1335. Like persistent storage 1335, system memory 1325 is a read-write memory device. However, unlike storage device 1335, the system memory is volatile read-write memory, such as random access memory. The system memory stores some of the instructions and data that the processor needs at runtime. In some embodiments, the processes of the present invention are stored in system memory 1325, persistent storage 1335 and/or read-only memory 1330. From among these various memory units, processing unit(s) 1310 retrieve instructions to be executed and data to be processed in order to perform the processes of some embodiments.
Bus 1305 is also connected to input and output devices 1340 and 1345. The input device enables a user to communicate information and selection commands to the electronic system. Input devices 1340 include an alphanumeric keyboard and pointing device (also referred to as a "cursor control device"). An output device 1345 displays the image generated by the electronic system. The output devices include printers and display devices, such as Cathode Ray Tubes (CRTs) or Liquid Crystal Displays (LCDs). Some embodiments include devices that function as both input and output devices, such as touch screens.
Finally, as shown in FIG. 13, bus 1305 also couples computer system 1300 to a network 1365 through a network adapter (not shown). In this manner, the computer can be a part of a network of computers (such as a local area network ("LAN"), a wide area network ("WAN"), or an intranet), or a network of networks (such as the Internet). Any or all components of computer system 1300 may be used in conjunction with the invention.
Some embodiments include electronic components, such as microprocessors, storage, and memory, that store computer program instructions in a machine-readable or computer-readable medium (alternatively referred to as computer-readable storage media, machine-readable media, or machine-readable storage media). Some examples of such computer-readable media include RAM, ROM, read-only compact discs (CD-ROM), recordable compact discs (CD-R), rewritable compact discs (CD-RW), read-only digital versatile discs (e.g., DVD-ROM, dual-layer DVD-ROM), a variety of recordable/rewritable DVDs (e.g., DVD-RAM, DVD-RW, DVD+RW, etc.), flash memory (e.g., SD cards, mini-SD cards, micro-SD cards, etc.), magnetic and/or solid state hard drives, read-only and recordable Blu-Ray® discs, ultra-density optical discs, any other optical or magnetic media, and floppy disks. The computer-readable media may store a computer program that is executable by at least one processing unit and includes sets of instructions for performing various operations. Examples of computer programs or computer code include machine code, such as is produced by a compiler, and files including higher-level code that are executed by a computer, an electronic component, or a microprocessor using an interpreter.
While the above discussion primarily refers to a microprocessor or multi-core processor executing software, some embodiments are performed by one or more integrated circuits, such as an Application Specific Integrated Circuit (ASIC) or a Field Programmable Gate Array (FPGA). In some embodiments, such integrated circuits execute instructions stored on the circuit itself.
As used in this specification, the terms "computer," "server," "processor," and "memory" all refer to electronic or other technological devices. These terms exclude people or groups of people. For the purposes of this specification, the term "display" means displaying on an electronic device. As used in this specification, the terms "computer-readable medium" and "machine-readable medium" are entirely restricted to tangible, physical objects that store information in a form that is readable by a computer. These terms exclude any wireless signals, wired download signals, and any other ephemeral or transitory signals.
Although the invention has been described with reference to numerous specific details, one of ordinary skill in the art will recognize that the invention can be practiced in other specific forms without departing from the spirit of the invention. For example, several of the above embodiments deploy gateways in public cloud data centers. However, in other embodiments, the gateway is deployed at a third party's virtual private cloud data center (e.g., a data center that the third party uses to deploy cloud gateways for different entities in order to deploy virtual networks for these entities). It is therefore to be understood that the invention is not to be limited by the foregoing illustrative details, but is to be defined by the appended claims.

Claims (62)

1. A method of routing data packets across a software defined wide area network (SD-WAN) defined for an entity, the method comprising:
performing, at a first edge forwarding node located at a first multi-machine site of the entity, the first multi-machine site being at a first physical location and comprising a first set of machines:
acting as an edge forwarding node for the first set of machines by forwarding data packets between the first set of machines and other machines associated with the entity via other forwarding nodes in the SD-WAN;
receiving configuration data specifying that the first edge forwarding node act as a hub forwarding node for forwarding a set of data packets from a second set of machines associated with the entity and operating at a second multi-machine site at a second physical location to a third set of machines associated with the entity and operating at a third multi-machine site at a third physical location; and
acting as a hub forwarding node to forward the set of data packets from the second set of machines to the third set of machines.
2. The method of claim 1, wherein acting as the hub forwarding node to forward the set of data packets from the second set of machines to the third set of machines comprises:
receiving the set of data packets from a second edge forwarding node of the second multi-machine site; and
forwarding the set of data packets to a third edge forwarding node of the third multi-machine site for delivery to the third set of machines.
3. The method of claim 2, wherein the set of data packets is received from the second edge forwarding node through a first tunnel between the first edge forwarding node and the second edge forwarding node and is forwarded to the third edge forwarding node through a second tunnel between the first edge forwarding node and the third edge forwarding node.
4. The method of claim 3, wherein forwarding the set of data packets to the third edge forwarding node through the second tunnel further comprises: for each data packet in the set, (i) removing a first tunnel header identifier associated with the first tunnel, and (ii) inserting a second tunnel header identifier associated with the second tunnel.
5. The method of claim 2, wherein the configuration data is received from a controller of the SD-WAN after the controller detects that the second edge forwarding node has exceeded a threshold number of failed attempts to connect to a hub forwarding node, located at a data center site associated with the entity, for forwarding the set of data packets to the third edge forwarding node.
6. The method of claim 5, wherein prior to receiving the configuration data, first, second, and third edge forwarding nodes act as spoke nodes in a hub-spoke architecture using the assigned hub forwarding node at the data center.
7. The method of claim 6, wherein the controller generates a routing mesh topology of connections between forwarding nodes and uses the generated routing mesh topology to identify edge forwarding nodes capable of acting as hub forwarding nodes to provide an alternative route between a source forwarding node and a destination forwarding node when the source forwarding node exceeds a threshold number of failed attempts to connect to the assigned hub forwarding node.
8. The method of claim 6, wherein the entity is a first tenant, the SD-WAN is a first SD-WAN, and the assigned hub forwarding node belongs to a set of assigned hub forwarding nodes, wherein the set of assigned hub forwarding nodes includes a set of multi-tenant hub forwarding nodes that act as hub forwarding nodes for a plurality of SD-WANs defined for a plurality of tenants.
9. The method of claim 8, wherein communication between the second forwarding node and the third forwarding node through the assigned set of hub forwarding nodes fails due to network activity of at least a second tenant of the plurality of tenants.
10. The method of claim 6, wherein after the first edge forwarding node begins to operate as a hub forwarding node between the second multi-machine site and the third multi-machine site, the SD-WAN has two hub forwarding nodes, comprising: (i) the first edge forwarding node at the first multi-machine site, which acts as a hub forwarding node between the second multi-machine site and the third multi-machine site, and (ii) the assigned hub forwarding node at the data center site, which acts as a hub forwarding node for a plurality of edge forwarding nodes at a plurality of multi-machine sites of the entity.
11. The method of claim 10, wherein the configuration data further specifies that the first edge forwarding node act as a hub forwarding node between a particular spoke edge forwarding node executing at a particular multi-machine site of the entity and a set of other multi-machine sites of the entity, wherein the particular spoke edge forwarding node establishes a plurality of tunnels to the first edge forwarding node acting as a hub forwarding node, each tunnel being used for a communication session between a machine at the particular multi-machine site and a machine at another multi-machine site of the set of multi-machine sites of the entity.
12. The method of claim 1, wherein the received configuration data specifies that the first edge forwarding node ceases to act as the hub forwarding node between the second site and the third site after the first edge forwarding node has completed forwarding the set of data packets.
13. The method of claim 1, wherein the first multi-machine site of the entity is a first branch site of a plurality of branch sites of the entity, and the first physical location is a first physical location of a plurality of geographically dispersed physical locations.
14. The method of claim 13, wherein the second multi-machine site comprises a multi-tenant data center.
15. The method of claim 14, wherein the multi-tenant data center is a data center of a software as a service (SaaS) provider, and the second forwarding node comprises a multi-tenant SD-WAN gateway FE.
16. The method of claim 1, wherein the set of data packets traverse an assigned hub forwarding node accessible from the first and second multimachine sites but not from the third multimachine site.
17. The method of claim 16, wherein the assigned hub forwarding node is an SD-WAN gateway FE providing access to a network external to the SD-WAN, wherein a second multimachine site is external to the SD-WAN.
18. A method of dynamically adjusting roles of edge forwarding nodes in a set of edge forwarding nodes of a software-defined wide area network (SD-WAN) defined for an entity, the SD-WAN comprising (i) at least one data center site comprising a hub forwarding node and a plurality of server machines, and (ii) two or more multi-user computing sites, each multi-user computing site comprising an edge forwarding node to connect a plurality of user machines at its respective site to the SD-WAN, the method comprising:
collecting network event data from the at least one data center site and the two or more multi-user computing sites;
detecting that a first edge forwarding node at a first multi-user computing site of the two or more multi-user computing sites is experiencing a problem of forwarding a particular set of data packets;
based on the detected problem, generating a set of route records identifying a set of alternative routes for use by the first edge forwarding node, the set of alternative routes including at least one alternative route in which a second edge forwarding node at the second multi-user computing site acts as a hub forwarding node for forwarding the particular set of data packets; and
distributing the different subsets of the generated set of routing records to forwarding nodes in the SD-WAN for forwarding the particular set of data packets.
19. The method of claim 18, wherein the generated set of routing records is a second generated set of routing records that is an updated version of the first generated set of routing records provided to the forwarding node prior to detecting that the first edge forwarding node is experiencing a problem.
20. The method of claim 18, wherein the detected problem comprises one of: congestion at a particular forwarding node, slow response of the particular forwarding node, and downtime experienced by the particular forwarding node.
21. The method of claim 20, wherein the detected problem is associated with a specified threshold.
22. The method of claim 20, wherein
The entity is a first tenant of a plurality of tenants;
the hub forwarding node at the datacenter site is a multi-tenant hub forwarding node serving the plurality of tenants;
the detected problem includes congestion at the multi-tenant hub forwarding node; and is also provided with
The congestion is caused by heavy traffic of a second tenant using the multi-tenant hub forwarding node.
23. The method of claim 18, wherein the method is performed by a controller of the SD-WAN, wherein collecting network event data from the at least one data center and the two or more multi-user computing sites comprises: a set of links connecting the controller to forwarding nodes in the SD-WAN is monitored for notifications from the forwarding nodes regarding problems experienced by the forwarding nodes.
24. The method of claim 23, wherein detecting that the first edge forwarding node is experiencing the problem of forwarding the particular set of data packets comprises analyzing notifications received from the first edge forwarding node.
25. The method of claim 23, wherein at least a subset of the functions of the controller are performed by a cloud gateway.
26. The method of claim 23, wherein the controller is a cloud gateway.
27. The method of claim 18, wherein generating the set of routing records comprises:
generating at least one routing graph for identifying a plurality of routes between forwarding nodes for forwarding the particular set of data packets;
performing a set of path searches using the at least one routing graph to identify the set of alternative routes; and
generating the set of route records to implement the identified set of alternative routes.
28. The method of claim 27, wherein each edge forwarding node in the SD-WAN is associated with a set of SD-WAN profiles, each SD-WAN profile identifying at least one of a set of shared security, service, and policy parameters.
29. The method of claim 28, wherein performing the set of path searches using the at least one routing graph comprises: the set of SD-WAN profiles is used to identify one or more edge forwarding nodes to act as hub forwarding nodes for the SD-WAN.
30. The method of claim 18, wherein the second edge forwarding node acts as a hub forwarding node for only the first multi-user computing site to forward the particular set of data packets.
31. The method of claim 18, wherein the second edge forwarding node acts as a hub forwarding node for the first multi-user computing site and at least one other multi-user computer site in the SD-WAN.
32. The method of claim 18, wherein the second edge forwarding node establishes at least one tunnel with the first edge forwarding node, the tunnel being used to receive the particular set of data packets for forwarding to a site different from the first multi-user computing site and the second multi-user computing site.
33. The method of claim 32, wherein the tunnel is a secure tunnel.
34. The method of claim 18, wherein each route in the set of alternative routes is associated with a respective cost, wherein a route using the second edge forwarding node as a hub forwarding node has a first cost and a route using the second edge forwarding node as an edge forwarding node has a second cost different from the first cost.
35. The method of claim 18, wherein the set of edge forwarding nodes includes at least one of: an edge forwarding node associated with a branch site of an SD-WAN, an SD-WAN gateway FE for a private data center, a multi-tenant SD-WAN gateway FE associated with a public cloud, a multi-tenant SD-WAN gateway FE associated with a software as a service (SaaS) provider cloud, and a hub forwarding node providing connectivity between spoke edge forwarding nodes in a hub-spoke configuration of the SD-WAN.
36. The method of claim 18, wherein each edge forwarding node in the set of edge forwarding nodes acts as a spoke node in a hub-spoke architecture that uses hub forwarding nodes at the data center site.
37. A method of facilitating routing through a software defined wide area network (SD-WAN) defined for an entity, the method comprising:
at a first edge forwarding node located at a first multi-user computing site of the entity comprising a first set of machines:
determining whether a hub forwarding node at a data center site of the entity is in a degraded operational state;
using a first route from the first site to a second multi-user site of the entity through the hub forwarding node when the hub forwarding node is not in a degraded operational state; and
using, when the hub forwarding node is in a degraded operational state, a second route that passes through an edge forwarding node at a third multi-user computing site, the edge forwarding node acting as a hub node for data packets from the first site to the second site.
38. The method of claim 37, wherein determining whether the hub forwarding node at the data center site is in a degraded operational state comprises: an Internet Control Message Protocol (ICMP) message is sent to the hub forwarding node and a response time of the hub forwarding node is determined to exceed a threshold specified by one or more policy-based routing (PBR) rules of a set of PBR rules.
39. The method of claim 38, wherein the set of PBR rules is provided by a controller of the SD-WAN as a policy-based routing record to the first edge forwarding node, wherein the policy-based routing record includes a set of routes including at least the first route and the second route.
40. The method of claim 39, wherein the policy-based routing record specifies that: (i) the first route is to be used when the hub forwarding node is determined not to be in a degraded operational state based on a set of performance characteristics, quantifying a quality of connection between the first site and the data center site, meeting a set of threshold criteria, and (ii) the second route is to be used when the hub forwarding node is determined to be in a degraded operational state based on the set of performance characteristics not meeting the set of threshold criteria.
41. The method of claim 40, wherein the set of performance characteristics includes at least delay, throughput, and bandwidth.
42. The method of claim 39, wherein the controller generates the policy-based routing record based on a route identified in a routing graph generated by the controller, the routing graph identifying connections between forwarding nodes in the SD-WAN.
43. The method of claim 42, wherein the controller uses the generated routing graph to identify edge forwarding nodes that can act as hub forwarding nodes to provide alternative routes between source forwarding nodes and destination forwarding nodes when the hub forwarding nodes at the data center site have degraded performance.
44. The method of claim 43, wherein:
the entity is a first tenant, the SD-WAN is a first SD-WAN, and the hub forwarding node belongs to a set of multi-tenant hub forwarding nodes that act as hub forwarding nodes for a plurality of SD-WANs defined for a plurality of tenants; and is also provided with
The degraded performance of the set of multi-tenant hub forwarding nodes is caused by network activity of at least a second tenant of the set of multi-tenant hub forwarding nodes services.
45. The method of claim 42, wherein each edge forwarding node in the SD-WAN is associated with a set of SD-WAN profiles, each SD-WAN profile identifying at least one of a set of shared security, service, and policy parameters, wherein the controller generates the routing graph based on the set of SD-WAN profiles.
46. The method of claim 37, wherein each edge forwarding node at each of the two or more multi-user sites acts as a spoke node in a hub-spoke architecture that uses the hub forwarding nodes at the data center site.
47. The method of claim 37, wherein the first route is associated with a first cost and the second route is associated with a second cost different from the first cost.
48. The method of claim 37, wherein using the second route for the flow of data packets from the first site to the second site through the third site comprises establishing at least one tunnel between the first site and the third site, and at least one tunnel between the third site and the second site.
49. A method of routing through a software defined wide area network (SD-WAN) defined for an entity, the SD-WAN including (i) at least one data center site including a hub forwarding node and a plurality of server machines, and (ii) two or more multi-user computing sites, each multi-user computing site including an edge forwarding node to connect a plurality of user machines at its respective site to the SD-WAN, the method comprising:
At a first edge forwarding node located at a first multi-user computing site of the entity comprising a first set of machines:
receiving two sets of routes to a second multi-user computing site of the entity, the first set of routes including a first route through a hub forwarding node at a data center site of the entity, and the second set of routes including a second route through an edge forwarding node at a third multi-user computing site, the edge forwarding node at the third multi-user computing site being used as a hub node for data packets from the first site to the second site when the hub forwarding node at the data center site has degraded performance;
when the hub forwarding node does not have degraded performance, using the first route for a flow of data packets from the first site to the second site through the data center site; and
when the hub forwarding node has degraded performance, using the second route for the flow of data packets from the first site to the second site through the third site.
50. The method of claim 49, wherein using the first route and the second route comprises using a set of one or more policy-based routing records specifying that: (i) the first route is used when a set of performance characteristics quantifying connection quality between the first site and the data center site meets a set of threshold criteria, and (ii) the second route is used when the set of performance characteristics does not meet the set of threshold criteria.
51. The method of claim 50, wherein the set of performance characteristics includes at least delay, throughput, and bandwidth.
52. The method of claim 50, wherein each route of the first and second sets of routes is associated with a respective cost, wherein the first route is associated with a first cost and the second route is associated with a second cost different from the first cost.
53. The method of claim 49, wherein each edge forwarding node at each of the two or more multi-user sites acts as a spoke node in a hub-spoke architecture that uses the hub forwarding nodes at the data center site.
54. The method of claim 53, wherein the two sets of routes are received from a controller of the SD-WAN, wherein the controller generates a routing graph of connections between forwarding nodes in the SD-WAN, and uses the generated routing graph to identify edge forwarding nodes capable of acting as hub forwarding nodes to provide alternative routes between source forwarding nodes and destination forwarding nodes when the hub forwarding nodes at the data center site have degraded performance.
55. The method of claim 54, wherein the entity is a first tenant, the SD-WAN is a first SD-WAN, and the hub forwarding node belongs to a set of multi-tenant hub forwarding nodes that act as hub forwarding nodes for a plurality of SD-WANs defined for a plurality of tenants.
56. The method of claim 55, wherein the degraded performance of the set of multi-tenant hub forwarding nodes is caused by network activity of at least a second tenant served by the set of multi-tenant hub forwarding nodes.
57. The method of claim 54, wherein each edge forwarding node in the SD-WAN is associated with a set of SD-WAN profiles, each SD-WAN profile identifying at least one of a set of shared security, service, and policy parameters, wherein the controller generates the routing graph based on the set of SD-WAN profiles.
58. The method of claim 49, wherein using the second route for the flow of data packets from the first site to the second site through the third site comprises establishing at least one tunnel between the first site and the third site, and at least one tunnel between the third site and the second site.
59. A machine readable medium storing a program which when executed by at least one processing unit implements the method of any one of claims 1-58.
60. An electronic device, comprising:
a set of processing units; and
a machine readable medium storing a program which when executed by at least one of the processing units implements the method of any of claims 1-58.
61. A system comprising means for implementing the method of any one of claims 1-58.
62. A computer program product comprising instructions which, when executed by a computer, cause the computer to perform the method of any of claims 1-58.
CN202180097262.2A 2021-05-03 2021-12-24 On-demand routing grid for routing data packets through SD-WAN edge forwarding nodes in SD-WAN Pending CN117178535A (en)

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
IN202141020149 2021-05-03
US17/351,333 2021-06-18
US17/351,342 2021-06-18
US17/351,342 US11509571B1 (en) 2021-05-03 2021-06-18 Cost-based routing mesh for facilitating routing through an SD-WAN
US17/351,340 2021-06-18
US17/351,345 2021-06-18
US17/351,327 2021-06-18
PCT/US2021/065168 WO2022235303A1 (en) 2021-05-03 2021-12-24 On demand routing mesh for routing packets through sd-wan edge forwarding nodes in an sd-wan

Publications (1)

Publication Number Publication Date
CN117178535A true CN117178535A (en) 2023-12-05

Family

ID=88939861

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180097262.2A Pending CN117178535A (en) 2021-05-03 2021-12-24 On-demand routing grid for routing data packets through SD-WAN edge forwarding nodes in SD-WAN

Country Status (1)

Country Link
CN (1) CN117178535A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190104035A1 (en) * 2017-10-02 2019-04-04 Nicira, Inc. Three tiers of saas providers for deploying compute and network infrastructure in the public cloud
CN112585910A (en) * 2020-09-15 2021-03-30 香港应用科技研究院有限公司 Method and apparatus for establishing secure, low-latency, optimized paths in a wide area network
US20210112034A1 (en) * 2019-10-15 2021-04-15 Cisco Technology, Inc. Dynamic discovery of peer network devices across a wide area network


Similar Documents

Publication Publication Date Title
US11381499B1 (en) Routing meshes for facilitating routing through an SD-WAN
WO2022235303A1 (en) On demand routing mesh for routing packets through sd-wan edge forwarding nodes in an sd-wan
US11025543B2 (en) Route advertisement by managed gateways
US10389634B2 (en) Multiple active L3 gateways for logical networks
US11611507B2 (en) Managing forwarding elements at edge nodes connected to a virtual network
US11729065B2 (en) Methods for application defined virtual network service among multiple transport in SD-WAN
US20220232411A1 (en) Proactive optimization across network segments to maintain end-to-end performance
US11803408B2 (en) Distributed network plugin agents for container networking
US9225597B2 (en) Managed gateways peering with external router to attract ingress packets
US20230261974A1 (en) On demand routing mesh for routing packets through sd-wan edge forwarding nodes in an sd-wan
US11588682B2 (en) Common connection tracker across multiple logical switches
US10938594B1 (en) Transparent demilitarized zone providing stateful service between physical and logical networks
US20200322181A1 (en) Scalable cloud switch for integration of on premises networking infrastructure with networking services in the cloud
CN117178535A (en) On-demand routing grid for routing data packets through SD-WAN edge forwarding nodes in SD-WAN
US20240022499A1 (en) Dns-based gslb-aware sd-wan for low latency saas applications
US20240031273A1 (en) Method for modifying an sd-wan using metric-based heat maps
CN117223270A (en) Method for micro-segmentation in SD-WAN of virtual network
WO2024019853A1 (en) Method for modifying an sd-wan using metric-based heat maps

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
Country or region after: U.S.A.
Address after: California, USA
Applicant after: VMware LLC
Address before: California, USA
Applicant before: VMWARE, Inc.
Country or region before: U.S.A.