CN115941580A - Scalable SD-WAN topology and routing automation - Google Patents

Scalable SD-WAN topology and routing automation

Info

Publication number
CN115941580A
Authority
CN
China
Prior art keywords
network
network infrastructure
vpnc
overlay
devices
Prior art date
Legal status
Pending
Application number
CN202111265048.1A
Other languages
Chinese (zh)
Inventor
D. Gupta
H. Maganmane
S. Su
Current Assignee
Hewlett Packard Enterprise Development LP
Original Assignee
Hewlett Packard Enterprise Development LP
Priority date
Filing date
Publication date
Application filed by Hewlett Packard Enterprise Development LP
Publication of CN115941580A

Classifications

    • H04L 45/76: Routing in software-defined topologies, e.g. routing between virtual machines
    • H04L 12/4633: Interconnection of networks using encapsulation techniques, e.g. tunneling
    • H04L 45/12: Shortest path evaluation
    • H04L 45/48: Routing tree calculation
    • H04L 45/64: Routing or path finding of packets in data switching networks using an overlay routing layer

Abstract

An example network infrastructure device of a software-defined wide area network (SD-WAN) includes processing circuitry and memory, the memory including instructions that cause the network infrastructure device to: advertise a set of SD-WAN overlay tunnels terminated at the network infrastructure device; receive a network connectivity graph including a group of network infrastructure devices classified as members of an advertisement area and the links between the group of network infrastructure devices; receive data traffic intended for a destination device in the group of network infrastructure devices; determine a preferred path to the destination device based on the network connectivity graph; and send the data traffic via the interface associated with the preferred path.

Description

Scalable SD-WAN topology and routing automation
Technical Field
Implementations and embodiments of the invention relate to scalable SD-WAN topology and routing automation.
Background
A Wide Area Network (WAN) may extend across multiple network sites (e.g., geographic, logical). The sites of the WAN are interconnected so that devices at one site can access resources at another site. In some topologies, many services and resources are installed at a core site (e.g., data center, headquarters), and many branch sites (e.g., regional offices, retail stores) connect client devices (e.g., laptops, smart phones, internet of things devices) to the WAN. Enterprises often use these types of topologies when establishing their corporate networks.
Each network site has its own local area network (LAN) that connects to the LANs of other sites to form the WAN. Network infrastructure, such as switches and routers, forwards network traffic through each LAN, across the WAN, and between the WAN and the Internet. The LAN of each network site is connected to the broader network (e.g., to the WAN, to the Internet) through a gateway router. A branch gateway (BG) connects a branch site to the broader network, and a head-end gateway (also known as a VPN concentrator) connects a core site to the broader network.
WANs are often implemented using software-defined wide area network (SD-WAN) technology. The SD-WAN decouples the control aspects of switching and routing from the physical routing of network traffic (either logically or physically). In some SD-WAN implementations, each gateway controls certain aspects of the routing of its respective LAN, but the network orchestrator controls the overall switching and routing across the WAN via the overlay network.
Drawings
For a more complete understanding of this disclosure, examples in accordance with the various features described herein may be more readily understood by reference to the following detailed description when taken in conjunction with the accompanying drawings, wherein like reference numerals designate like structural elements, and in which:
FIG. 1A is an illustration of an example SD-WAN topology focused on a single advertisement area;
FIG. 1B is a data flow diagram illustrating the operation of the example SD-WAN topology of FIG. 1A;
FIG. 2 is a flow diagram illustrating an example method for configuring an SD-WAN topology;
FIG. 3 is an illustration of an example network infrastructure device;
FIG. 4 is an illustration of an example SD-WAN network including multiple advertisement areas.
Some examples have additional or alternative features relative to those shown in the figures referenced above. For clarity, some labels may be omitted from some figures.
Detailed Description
Typically, SD-WANs are either highly manually configured or follow relatively simple design rules (e.g., hub and spoke, full mesh, or a combination of both). This paradigm has been accepted by small and medium-sized enterprises with a limited number of network sites and a limited number of network services. Enterprises with large numbers of IT employees have also accepted this paradigm because they can devote a great deal of time and effort to manually configuring large networks. However, reducing the amount of manual work required to set up and maintain a large-scale SD-WAN is a particularly effective way to reduce IT costs for an organization.
One of the biggest obstacles to reducing the manual effort of configuring large SD-WANs is that common routing methods (e.g., BGP, OSPF, proprietary alternatives) only scale to thousands of devices before routing tables grow beyond the capabilities of some network infrastructure devices, network convergence takes minutes rather than milliseconds, and the overhead of flooding routing advertisements through the network begins to consume a significant portion of the network bandwidth. Thus, SD-WAN providers set an upper limit on the number of overlay-aware network infrastructure devices that can be provisioned on a particular SD-WAN at once.
While all customers may benefit from improved topology and routing handling, certain types of enterprises are most affected by this limitation. In particular, an enterprise with a small number of large corporate offices and a large number of small (often customer-facing) sites finds it difficult to manage the myriad devices used to establish the SD-WAN. IT personnel are often geographically centralized, while IT issues are geographically dispersed. In such networks, thousands of sites make network management using current methods functionally impossible. Each existing solution has its own limitations and disadvantages, forcing a trade-off that balances network characteristics against the size of the network.
In examples consistent with the present disclosure, the network orchestrator receives route advertisements from all branch gateways (BGs) and VPN concentrators (VPNCs) in the SD-WAN. From the route advertisements, the network orchestrator is able to determine the status, type, and termination points of all overlay tunnels in the SD-WAN. With this information, the network orchestrator can automatically partition the SD-WAN into advertisement areas. An advertisement area is a region of the SD-WAN that is substantially interconnected internally but substantially disconnected from other advertisement areas. For example, an advertisement area may include overlay tunnels between member devices that allow data traffic to be routed from any member device to any other member device, while including no overlay tunnels to any device outside of the advertisement area. In another example, there may be some overlay tunnels to devices outside of the advertisement area, but those tunnels may be low capacity, rarely used, configured as backups, or otherwise have characteristics such that they are not considered substantive connections. Likewise, certain member devices of an advertisement area may not be interconnected via an overlay tunnel, but due to other characteristics of the devices or the network, those devices may still be classified as members of the advertisement area.
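For illustration only, and not as a limitation of the present disclosure, the following sketch shows one way such a partition could be computed: model overlay-aware devices as graph nodes, keep only tunnels that qualify as substantive connections, and take connected components. The capacity threshold and the is_backup flag are assumptions made for this example; the disclosure does not prescribe a specific algorithm.

```python
# A minimal sketch (illustrative, not prescribed by this disclosure) of
# partitioning overlay-aware devices into advertisement areas.
from collections import defaultdict

def partition_advertisement_areas(devices, tunnels, min_capacity_mbps=50):
    """tunnels: iterable of (device_a, device_b, capacity_mbps, is_backup)."""
    adjacency = defaultdict(set)
    for a, b, capacity_mbps, is_backup in tunnels:
        # Low-capacity or backup tunnels are not substantive connections and
        # therefore do not merge two regions into one advertisement area.
        if capacity_mbps >= min_capacity_mbps and not is_backup:
            adjacency[a].add(b)
            adjacency[b].add(a)

    areas, seen = [], set()
    for device in devices:
        if device in seen:
            continue
        # Iterative traversal collects one connected component (one area).
        area, frontier = set(), [device]
        while frontier:
            node = frontier.pop()
            if node in area:
                continue
            area.add(node)
            frontier.extend(adjacency[node] - area)
        seen |= area
        areas.append(area)
    return areas
```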
The network orchestrator also divides the VPNCs into connection sites. A connection site is a logical site comprising part of a physical site and/or multiple physical sites aggregated for the purpose of sharing network services. For example, a network administrator may configure settings in a network orchestrator dashboard to treat a first data center and a second data center as connected. The network orchestrator then translates that intent (that the first and second data centers are connected) into an action: adding the VPNCs of the first data center and the VPNCs of the second data center to the connection site representing the combined first and second data centers.
By dividing the network into advertisement areas and connection sites, the network orchestrator may generate a series of network connectivity graphs, one graph for each advertisement area. The network orchestrator then provides each VPNC and BG with the appropriate network connectivity graph in view of its advertisement area membership. A BG, which is typically not computationally powerful, may receive a network connectivity graph accompanied by a routing list that includes the next overlay hop for each route. VPNCs, which typically have more computational power and are often each connected to many BGs, are better suited to compute their own routes; therefore, the routing list from the network orchestrator provides the destination device for each respective route, and the VPNC computes a preferred path for each route based on the network connectivity graph. For example, the VPNC may use Dijkstra's algorithm to determine the lowest-cost path to reach the destination device using the overlay tunnels.
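As a non-limiting sketch of the per-route path computation just described, the following applies Dijkstra's algorithm over a connectivity graph represented as an adjacency map with orchestrator-assigned link costs. The graph representation and device names are assumptions made for this example.

```python
# A sketch of lowest-cost overlay path selection via Dijkstra's algorithm.
import heapq

def preferred_path(graph, source, destination):
    """graph: dict mapping device -> list of (neighbor, link_cost) pairs."""
    best = {source: 0}
    heap = [(0, source, [source])]
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == destination:
            return cost, path              # lowest-cost overlay path
        if cost > best.get(node, float("inf")):
            continue                       # stale queue entry
        for neighbor, link_cost in graph.get(node, []):
            next_cost = cost + link_cost
            if next_cost < best.get(neighbor, float("inf")):
                best[neighbor] = next_cost
                heapq.heappush(heap, (next_cost, neighbor, path + [neighbor]))
    return None  # destination unreachable within this advertisement area

# Illustrative usage with made-up costs:
# preferred_path({"VPNC 104a": [("VPNC 104b", 10)],
#                 "VPNC 104b": [("BG 102b", 5)]},
#                "VPNC 104a", "BG 102b")
# returns (15, ["VPNC 104a", "VPNC 104b", "BG 102b"])
```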
This solution combines the benefits of centralized topology management and distributed computation, and thus allows the network to expand without unacceptable performance degradation. In one example implementation consistent with the present disclosure, a network with over 16,000 overlay-enabled network infrastructure devices improved from a network convergence time of 33 minutes to 125 milliseconds. The features of the present disclosure not only increase the speed at which the network is ready after setup or a topology change, but also reduce the manual effort required to manage a large number of SD-WAN devices and allow more complex deployment topologies (e.g., hub mesh) to be more fully automated.
FIG. 1A is an illustration of an example SD-WAN topology focused on a single advertisement area. Although FIG. 1A may represent the entire SD-WAN 100, it may instead represent a partial view of SD-WAN 100 that focuses only on a single advertisement area, or a portion of a single advertisement area, of SD-WAN 100. For clarity, not all devices and connections are shown in FIG. 1A. As one of ordinary skill in the art can appreciate, the features taught via the simplified illustration of FIG. 1A may also be applied to SD-WANs of different sizes, complexities, and topologies.
SD-WAN 100 includes network infrastructure devices 102 and 104, which are of two types: branch gateways (BGs) 102 and virtual private network concentrators (VPNCs) 104. In some examples, BGs 102a and 102b are less computationally powerful devices deployed in branches, home offices, and retail stores, while VPNCs 104a and 104b are more computationally powerful devices deployed in core sites (such as headquarters and data centers). BG 102a is coupled to a client device 110a via LAN 108a. LAN 108a may be as simple as an Ethernet cable coupling BG 102a to client device 110a, or as complex as a multi-tiered network for a large branch office campus. Likewise, BG 102b is coupled to a client device 110b via LAN 108b. VPNCs 104a and 104b may also be coupled to respective LANs, although such LANs are outside the scope of this disclosure.
The network infrastructure devices 102 and 104 are connected to the WAN 112 via uplinks (not shown). The uplinks may be wired or wireless, public or private, and are part of the underlay network of SD-WAN 100. A network orchestrator 116 is also connected to the WAN 112, and each network infrastructure device 102 and 104 is connected to the network orchestrator 116 (connections not shown). Since SD-WAN 100 cannot be guaranteed to consist entirely of private connections, and some data may be transmitted across the public Internet, overlay tunnels 114 are used, in part to simplify routing and to improve security.
The VPNCs 104 and BGs 102 periodically advertise their terminated tunnels to the network orchestrator 116. Upon receiving the advertisements, the network orchestrator 116 stitches them together to obtain a more holistic view of the topology. For example, network orchestrator 116 may pair tunnel advertisements from the respective end devices that are identified by the same UUID. The network orchestrator 116 may also receive additional information in the advertisements from the network infrastructure devices, such as tunnel type (e.g., MPLS, INET, LTE), tunnel operational status (e.g., up, down), tunnel capacity (e.g., 100 Mbps), and other characteristics. The network orchestrator 116 may also group tunnels whose end devices are the same, treating them as alternative links between the two devices. In some examples, network orchestrator 116 may aggregate all tunnels between a pair of devices and treat them as one logical link. The network orchestrator 116 determines a link cost for each link and associates these costs with the links. The network orchestrator 116 may use intents (not shown) provided by a network administrator to set link costs in order to ensure that data traffic flows between the desired devices using the desired links.
For example, the network orchestrator 116 identifies that the overlay tunnel 114a advertised by the BG 102a is the same tunnel as the overlay tunnel 114b advertised by the VPNC 104a. Using this information, the network orchestrator 116 determines that a tunnel exists between the BG 102a and the VPNC 104a. For example, if both BG 102a and VPNC 104a advertise their tunnels using a common UUID of "102a104a," then network orchestrator 116 may pair the two advertisements to identify the single tunnel. Similarly, the network orchestrator 116 identifies overlay tunnels 114c and 114d as tunnels connecting VPNC 104a to VPNC 104b and overlay tunnels 114e and 114f as tunnels connecting VPNC 104b to BG 102b.
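A minimal sketch of this stitching step follows, assuming each advertisement arrives as a simple record keyed by a shared tunnel UUID. The field names are illustrative assumptions; the disclosure does not prescribe a schema.

```python
# Pair half-advertisements that share a tunnel UUID into logical links.
from collections import defaultdict

def stitch_tunnels(advertisements):
    """advertisements: iterable of dicts such as
    {"device": "BG 102a", "tunnel_uuid": "102a104a", "type": "INET",
     "state": "up", "capacity_mbps": 100}."""
    by_uuid = defaultdict(list)
    for adv in advertisements:
        by_uuid[adv["tunnel_uuid"]].append(adv)

    links = []
    for uuid, halves in by_uuid.items():
        if len(halves) != 2:
            continue  # unmatched half: the far-end device has not advertised
        a, b = halves
        links.append({
            "endpoints": (a["device"], b["device"]),
            "type": a["type"],
            # Usable only if both terminating devices report the tunnel "up".
            "state": "up" if a["state"] == b["state"] == "up" else "down",
            "capacity_mbps": min(a["capacity_mbps"], b["capacity_mbps"]),
        })
    return links
```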
A network administrator's intents may include an intent to conform SD-WAN 100 to a certain topology (e.g., a hub mesh topology), an intent to aggregate certain network sites into logical network sites, an intent to split certain network sites into separate logical sites, an intent to designate certain VPNCs 104 as primary VPNCs in sites where multiple VPNCs 104 operate redundantly, and other intents regarding the general operation and topology of SD-WAN 100.
The network orchestrator 116 then uses the link information and the network administrator's intents to build a network connectivity profile. The network orchestrator 116 then analyzes the network connectivity profile for overlay interconnection. The network orchestrator 116 groups devices into advertisement areas based on the overlay interconnection relationships of the devices within each advertisement area 118. For example, the network infrastructure devices 102 and 104 within a particular advertisement area 118 may be substantially interconnected, while network infrastructure devices in different advertisement areas may be substantially unconnected.
After dividing SD-WAN 100 into advertisement areas, network orchestrator 116 creates a network connectivity graph for each advertisement area 118. In some examples, each network connectivity graph may be a portion of the network connectivity profile. In some examples, the network connectivity graph is a link state database. In other examples, the network connectivity graph is a modified link state database that includes additional characteristics about the nodes or links and/or includes additional modifications (such as splitting links into directional link pairs), not all of which may be advertised. For example, a link (not shown) between VPNC 104b and BG 102a may be advertised only in the VPNC 104b to BG 102a direction, so that BG 102a does not use VPNC 104b as a transit link for traffic from LAN 108a, while other devices on SD-WAN 100 may still access devices on LAN 108a via the VPNC 104b to BG 102a link.
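As an illustration of the directional-link modification just described, a single physical tunnel can be represented as two directed edges with asymmetric costs, suppressing the direction that would let BG 102a use VPNC 104b as a transit link. The names and costs below are assumptions for the example.

```python
# One tunnel between VPNC 104b and BG 102a becomes two directed edges; the
# BG-to-VPNC direction is given an infinite cost so it is never selected.
INF = float("inf")

directed_links = {
    ("VPNC 104b", "BG 102a"): 10,   # advertised: others can reach LAN 108a
    ("BG 102a", "VPNC 104b"): INF,  # suppressed: BG 102a must not transit here
}

def usable(link):
    """A directed edge is usable only if its cost is finite."""
    return directed_links.get(link, INF) < INF
```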
The network orchestrator 116 then sends ("advertises") the network connectivity graphs to the network infrastructure devices 102 and 104. In the particular example of FIG. 1A, the network orchestrator 116 sends the network connectivity graph for the advertisement area 118 to BG 102a, BG 102b, VPNC 104a, and VPNC 104b. However, the network connectivity graph is not sent to devices on the LANs 108, because those LAN-side devices are not managed as part of the SD-WAN. It is possible to manage LAN-side devices in the same way as SD-WAN devices through a technique such as software-defined branch (SD-Branch), and the features of the present disclosure would apply equally to LAN-side software-defined branch devices. In some examples, the network connectivity graph is sent only to the VPNCs 104, and the BGs 102 do not receive the network connectivity graph. As described in more detail below, the BGs 102 may instead receive fully computed routing tables from the network orchestrator 116, eliminating the need for a network connectivity graph. This division serves two purposes. First, each BG 102 typically has relatively few (about 10 or fewer) overlay tunnels connected to VPNCs 104, so the time required for the network orchestrator 116 to compute routing tables even for a large number of BGs 102 remains quite low. Second, because BGs typically have low computational power, relying on each BG 102 to compute its own routing table could overburden its CPU, memory, or other capacity. VPNCs 104, by contrast, are better suited to compute their own routing tables. First, each VPNC 104 typically has many (hundreds or thousands of) overlay tunnels, so the time required for the network orchestrator 116 to compute routing tables even for a relatively small number of VPNCs 104 would be quite long. Second, because VPNCs typically have higher computational power, relying on each VPNC 104 to compute its own routing table is unlikely to overburden its computing resources, and the benefit of distributing the computational workload is a reduction in the overall time for network convergence.
The network orchestrator 116 also creates connection sites, such as connection site 120. Using information from the network connectivity profile and network administrator intents (such as an intent to connect a pair of data centers), the network orchestrator 116 divides the VPNCs into connection sites. In some examples, a connection site is created to include all VPNCs interconnected by overlay links. In some examples, each VPNC 104 becomes a member of a connection site even if the connection site is simply a wrapper around that single VPNC. In some other examples, a connection site is created only when VPNCs in separate network sites are intended to operate together, or only when multiple VPNCs in a single network site are intended to operate separately. In FIG. 1A, VPNC 104a and VPNC 104b are both members of connection site 120. In some examples, VPNC 104a and VPNC 104b are located in a first data center and a second data center, respectively, and a network administrator has provided an intent to treat the first and second data centers collectively. VPNC 104a could be a member of any number of connection sites, but for clarity this disclosure shows and describes connection sites as disjoint sets. As will be clear to one of ordinary skill in the art, the features of the present disclosure are also applicable to overlapping connection sites.
The network orchestrator 116 also advertises routes for SD-WAN 100. As with the network connectivity graphs, the advertised routes are limited in scope to reduce the load on the network infrastructure devices 102 and 104 and to reduce the computational load on the network orchestrator 116. The routes advertised to a particular network infrastructure device 102 or 104 are limited to routes within that device's corresponding advertisement area 118. Further, routes advertised to a VPNC 104 that is a member of a connection site 120 may not include routes to VPNCs that are members of different connection sites. Thus, each BG 102 in the advertisement area 118 receives information from the network orchestrator 116 to populate pre-computed next-hop routes to all BGs 102 and VPNCs 104 in the advertisement area 118, but each VPNC 104 in the advertisement area 118 receives only destination device information for routes to each BG 102, plus next-hop routes to the VPNCs 104 in the same connection site 120. For example, VPNC 104a would not be provided with a next-hop route to a VPNC (not shown) in a different connection site but still in the advertisement area 118. Continuing with the example, VPNC 104a would be provided with destination-device routes for BGs (not shown) that are connected (in the overlay) only to VPNCs in different connection sites, and VPNC 104a would calculate routes to those BGs based on the network connectivity graph.
When BG 102a receives a request to forward data traffic from client device 110a to client device 110b, BG 102a looks up the identification information (e.g., IP address) of client device 110b in a routing table populated based on the route advertisements from the network orchestrator 116. The request to forward data traffic may be, among other things, a request to form a session between client device 110a and client device 110b, an ARP request, or any other suitable data unit for initiating communication between the client devices 110. The routing table includes a next overlay hop field that corresponds to VPNC 104a (via overlay tunnel 114a). BG 102a then forwards the request to VPNC 104a. In the example shown in FIG. 1A, VPNC 104a, upon receiving the request, looks up client device 110b and finds a next overlay hop field corresponding to VPNC 104b. This is because VPNC 104a and VPNC 104b are in the same connection site 120, and routes between VPNCs of the same connection site are advertised. In another example, where VPNC 104a and VPNC 104b are in different connection sites, VPNC 104a would instead obtain only the overlay destination device (BG 102b) from the routing table and determine that BG 102b is not directly reachable via any interface of VPNC 104a. VPNC 104a would then compute a route using a path computation algorithm, such as Dijkstra's algorithm, to determine a preferred path to BG 102b (e.g., a next overlay hop to VPNC 104b via overlay tunnel 114c). Upon receiving the request, VPNC 104b looks up the route in its routing table, finds a direct connection to BG 102b via overlay tunnel 114e, and forwards the request accordingly. BG 102b, which is the overlay destination device of the route, then forwards the request across LAN 108b to client device 110b.
FIG. 1B illustrates an example data flow across SD-WAN 100 in more detail. Although the data flows are shown in FIG. 1B in a particular order, it is contemplated that they may occur in a different order or in different configurations depending on the characteristics of SD-WAN 100 or of the various devices.
Over a period of time, advertisements 122 are sent from the network infrastructure devices 102 and 104 to the network orchestrator 116. An advertisement 122 may be a typical routing advertisement that is forwarded to the network orchestrator 116 rather than being flooded across the network. In some examples, the advertisement 122 may be modified to include additional or different information than is typically included in a routing advertisement. For example, the advertisement 122 may include a list of IP subnets reachable via the advertising device, a list of overlay tunnels terminated at the advertising device, and additional information related to those overlay tunnels (such as tunnel IDs, tunnel types, and tunnel states). Although FIG. 1B illustrates a single advertisement 122 sent from each device, multiple advertisements 122 may be sent from each device, divided by topic (e.g., IP subnets in one advertisement and overlay tunnels in another). As the network changes, additional advertisements 122 may be sent from each device over time, with the updated advertisements 122 reflecting the network changes. Advertisements 122 may also be sent periodically from each device.
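For concreteness, the fields listed above might be modeled as follows. The wire format of advertisement 122 is not specified in this disclosure, so this structure and its field names are assumptions made for illustration.

```python
# A sketch of the contents of a single advertisement 122: reachable subnets,
# terminated tunnels, and per-tunnel ID, type, and state.
from dataclasses import dataclass, field

@dataclass
class TunnelInfo:
    tunnel_uuid: str   # shared by both devices terminating the tunnel
    tunnel_type: str   # e.g., "MPLS", "INET", "LTE"
    state: str         # e.g., "up" or "down"

@dataclass
class Advertisement:
    device_id: str                                                  # advertising device
    reachable_subnets: list[str] = field(default_factory=list)      # e.g., ["10.1.0.0/24"]
    terminated_tunnels: list[TunnelInfo] = field(default_factory=list)
```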
Upon receiving the advertisements 122, the network orchestrator 116 builds a network connectivity profile. The network connectivity profile may include all areas of SD-WAN 100, spanning many different advertisement areas. The network connectivity profile may be represented as a graph data structure connecting nodes (e.g., overlay-enabled network infrastructure devices) via links (e.g., overlay links). Link costs may be set on various links of the network connectivity profile to encode routing preferences, capture link capabilities (e.g., link type, link operational status, link termination points), and otherwise reflect the characteristics and preferences of SD-WAN 100 in the network connectivity profile. From the network connectivity profile, the network orchestrator 116 derives a network connectivity graph for each advertisement area. The network orchestrator 116 may also use information included in the network connectivity profile, along with information received in the advertisements 122 from the network infrastructure devices 102 and 104, to generate a routing list for each network infrastructure device of the advertisement area 118. In some examples, the advertisement 122 is a single advertisement from each network infrastructure device 102 or 104 that includes both tunnel information and routing information. In some other examples, each network infrastructure device 102 or 104 sends multiple advertisements 122, with one or more advertisements including tunnel information and one or more advertisements including routing information.
The routing list generated by the network orchestrator 116 may vary depending on the intended network infrastructure device 102 or 104 and on whether that device is a branch gateway or a VPNC. For example, the routing list for a branch gateway 102 may include routes to all other branch gateways 102 of the advertisement area 118 and to all accessible VPNCs 104 of the advertisement area 118. The routing list for a VPNC 104, on the other hand, may only include routes to the BGs 102 in the advertisement area 118 and to the other VPNCs 104 in its connection site 120. In some examples, the routing list for a BG 102 may include a next overlay hop parameter specifying the overlay tunnel through which traffic destined for an address in the subnet of the relevant route is sent, while the routing list for a VPNC 104 may include a final overlay destination parameter specifying the final overlay device that traffic destined for an address in the subnet of the relevant route should reach before final underlay routing. In such examples, a VPNC 104 may take different actions than a BG 102 to forward traffic, based on the information provided in each respective type of routing list, as sketched below.
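The contrast between the two routing-list entry types might be modeled as follows. This is a sketch under assumed field names; the disclosure does not define a concrete schema.

```python
# BG entries carry a pre-computed next overlay hop; VPNC entries carry only
# the final overlay destination, and the VPNC computes the path itself
# (e.g., with a path computation algorithm such as Dijkstra's).
from dataclasses import dataclass

@dataclass
class BgRouteEntry:
    subnet: str             # e.g., "10.0.0.0/24"
    next_overlay_hop: str   # overlay tunnel/device to forward through

@dataclass
class VpncRouteEntry:
    subnet: str
    overlay_destination: str  # final overlay device before underlay routing

# A BG forwards directly using next_overlay_hop; a VPNC must first resolve
# overlay_destination to a next hop via path computation over its graph.
```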
The network orchestrator 116 sends the network connectivity graphs and routing lists 124 to the network infrastructure devices 102 and 104. As described above, the network connectivity graphs and routing lists 124a sent to the VPNCs 104a and 104b may differ in nature from the network connectivity graphs and routing lists 124b sent to the BGs 102a and 102b. Although each network connectivity graph is shown coupled with a routing list, in some examples the graph and list may be sent together, separately, or even combined into a single data structure. As mentioned above, the network connectivity graph and routing list sent to each device may differ from those sent to every other device. For example, the network connectivity graph and routing list 124a sent to VPNC 104a may be different from the network connectivity graph and routing list 124a sent to VPNC 104b. As an example of why this may occur, VPNC 104b does not require overlay routes to the client devices attached to it: such routes terminate at VPNC 104b, so overlay routing does not apply. However, VPNC 104a needs to know the overlay routes that terminate at VPNC 104b in order to route the relevant traffic toward VPNC 104b.
Although no additional activity is shown in FIG. 1B between the network orchestrator 116 sending the network connectivity graphs and routing lists 124 and BG 102a receiving the request 126, additional actions may occur within the network. For example, the network infrastructure devices 102 and 104 may integrate the network connectivity graphs and routing lists into their internal data structures, the VPNCs 104 may pre-compute next hops for some routes, and updated advertisements 122 may be sent from the network infrastructure devices 102 and 104 to the network orchestrator 116.
A request 126 is received at BG 102a from client device 110a, which is attached via LAN 108a. Request 126 may be a request to initiate a connection from client device 110a to another "final destination" device (e.g., client device 110b) on SD-WAN 100. In response to receiving the request 126, BG 102a looks up 128 the final destination device by IP address in its routing list (e.g., a routing table that includes the information provided in routing list 124b) to determine which route applies. Routing lists often do not list individual IP addresses; rather, they specify routes by subnet, or even aggregate multiple subnets together when the next hop is the same. BG 102a may employ various techniques to match an IP address to a particular route in the routing list, one common example of which is sketched below. BG 102a identifies 130 a next overlay hop (e.g., VPNC 104a) for routing the request 126 to client device 110b according to the identified route associated with the request 126. As mentioned previously, in some examples the network orchestrator 116 has pre-computed the next overlay hop for each route in the routing list sent to BG 102a. After identifying 130 the next overlay hop, BG 102a may need to determine the specific link or interface through which to reach the next overlay hop. For example, information from network connectivity graph 124b may indicate that the next overlay hop, VPNC 104a, is reachable via overlay tunnel 114a, which terminates at BG 102a.
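One common matching technique is longest-prefix matching: the most specific subnet containing the destination address wins. The sketch below uses made-up subnets and device names; the disclosure does not limit a BG 102 to this particular technique.

```python
# An illustrative longest-prefix-match lookup over a routing list.
import ipaddress

def lookup_route(routes, destination_ip):
    """routes: list of (subnet, next_overlay_hop) pairs."""
    ip = ipaddress.ip_address(destination_ip)
    matches = [(ipaddress.ip_network(subnet), hop)
               for subnet, hop in routes
               if ip in ipaddress.ip_network(subnet)]
    if not matches:
        return None  # no route: traffic may be dropped or default-routed
    # The most specific (longest-prefix) subnet containing the address wins.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

routes = [("10.2.0.0/16", "VPNC 104a"),   # aggregate route
          ("10.2.5.0/24", "VPNC 104b")]   # more specific route
assert lookup_route(routes, "10.2.5.9") == "VPNC 104b"
assert lookup_route(routes, "10.2.7.9") == "VPNC 104a"
```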
BG 102a sends the request 126 across overlay tunnel 114a to VPNC 104a. VPNC 104a participates in a process similar to that of BG 102a, but with some differences. VPNC 104a first looks up 132 the IP address of the final destination device in its routing list. VPNC 104a then identifies 134 the overlay destination device. Note that "final destination device" refers in this disclosure to the client device 110b that is to receive the request 126, while the "overlay destination device" (or "destination device") is the final overlay hop before any further routing of the request 126 is completed via the underlay. The overlay destination device for reaching client device 110b is BG 102b, which is not directly coupled to VPNC 104a via an overlay tunnel 114. As a result, VPNC 104a cannot immediately select an interface through which to forward request 126. Instead, VPNC 104a calculates 136 a preferred path using the received network connectivity graph 124a. In some examples, VPNC 104a may use a route computation algorithm, such as Dijkstra's algorithm, to determine the lowest-cost route from VPNC 104a to BG 102b. As mentioned previously, even though routes are computed by each VPNC 104 individually, the network connectivity graph 124a has link costs embedded in it to ensure that traffic is routed correctly according to the network administrator's intent. When processing capacity is available for such tasks, VPNCs 104 may pre-compute their routes in order to reduce latency in forwarding requests 126, but the details of such pre-computation are beyond the scope of this disclosure. In the example where pre-computed routing lists are sent from the network orchestrator 116 to the BGs 102 but not to the VPNCs 104, the two types of network infrastructure devices are treated differently for several reasons. In large networks, where the features of the present disclosure are expected to provide disproportionate benefit, a network topology like hub mesh results in relatively few overlay connections (e.g., 2) per BG 102 and relatively many overlay connections (e.g., 10,000) per VPNC 104. The computational complexity of calculating routes for a large number of devices (BGs 102) that each have a small number of connections is relatively low, and such calculation can therefore reasonably be handled by the network orchestrator 116. Furthermore, many BGs 102 are compute-constrained devices that may not compute routes as quickly as the network orchestrator 116 can. Conversely, when each device has a relatively large number of overlay connections, it can be very complex for a single network orchestrator 116 to compute routes for even a small number of such devices (VPNCs 104). Rather than spending a significant amount of time (sometimes many minutes) pre-computing paths for all VPNCs 104, the network orchestrator 116 provides the VPNCs 104 with enough information to compute or pre-compute their own paths. Since VPNCs 104 tend to be computationally powerful devices, any loss of computation speed on a single route is offset by much faster (sometimes millisecond-scale) network convergence.
VPNC 104a identifies VPNC 104b as the next hop based on the calculated 136 preferred path and forwards the request 126 to VPNC 104b via overlay tunnel 114c.
VPNC 104b operates similarly to VPNC 104a, with some key differences. Note that VPNC 104b is the penultimate hop in the overlay path, and there is a direct overlay interface between VPNC 104b and BG 102b (the overlay destination device). Like VPNC 104a, VPNC 104b looks up 132 the route in its routing list corresponding to the final destination device identified in the request 126, and identifies 134 the overlay destination device (BG 102b) of the request 126. Unlike VPNC 104a, however, VPNC 104b determines 138 that the next overlay hop device is the same as the overlay destination device, namely BG 102b. There is a direct overlay link between VPNC 104b and BG 102b, so no path computation is required, and VPNC 104b forwards the request 126 to BG 102b. In some examples, path computation may still occur when there is a direct overlay link between the forwarding device and the overlay destination device, such as when multiple overlay links connect the two devices.
Upon receiving the request 126, BG 102b routes it to client device 110b via the underlay network, LAN 108b. The specifics of underlay routing are beyond the scope of this disclosure.
While FIG. 1B illustrates a unidirectional flow of data traffic, it is understood that connections are generally bidirectional. As can be appreciated, the return path of traffic from client device 110b to client device 110a may take a different path through the overlay than the initial path followed by the request 126. However, the network orchestrator 116 may be configured to avoid this behavior (known as fishtailing) and to ensure that the return path is "pinned" to the request route by adjusting link costs in the network connectivity graph. A pinned return path may prevent a network security appliance from improperly blocking data traffic along the return path due to the traffic taking an unexpected path.
It is to be understood that SD-WAN 100 is not required to be a fully on-premises network. Any device may be implemented as a cloud device or cloud service, depending on the particular network topology. For example, the network orchestrator 116 may be provided as a cloud service, and the client device 110b may be a cloud device deployed on a public or private cloud ecosystem.
FIG. 2 is a flow diagram illustrating an example method for configuring an SD-WAN topology. Method 200 may be stored as instructions in a non-transitory computer-readable medium and executed on processing circuitry of a device such as a network orchestrator.
In block 202, advertisements are received from a set of network infrastructure devices of a software-defined wide area network (SD-WAN), each advertisement including information about SD-WAN overlay tunnels terminated at the respective network infrastructure device. In some examples, different information is received in a series of advertisements from each network infrastructure device. For example, tunnel information from each network infrastructure device may be received in a first advertisement, and routing information from each network infrastructure device may be received in a second advertisement. In some examples, advertisements are updated and/or renewed periodically, or when changes to the network cause previous advertisements to no longer be accurate. In some examples, the network orchestrator may send a request for an advertisement, while in other examples an advertisement may be provided by a network infrastructure device without a request from the network orchestrator.
In block 204, the overlay tunnels of the SD-WAN, including their types, operational states, and termination points, are identified based in part on the received advertisements. In some examples, the received advertisements include a universally unique identifier (UUID) for each advertised tunnel, and a tunnel advertised by each of its terminating network infrastructure devices is identified as a single tunnel based on the matching UUIDs. In some other examples, other characteristics advertised to the network orchestrator are used to identify the overlay tunnels.
In block 206, a set of advertisement areas of the SD-WAN is determined based in part on the identified overlay tunnels. In determining the set of advertisement areas, each network infrastructure device in the set of network infrastructure devices is classified as a member of an advertisement area. In some examples, an advertisement area is a collection of network infrastructure devices whose SD-WAN overlay tunnels interconnect to form a contiguous SD-WAN overlay network. In some examples, the set of network infrastructure devices includes BGs and VPNCs. In some examples, each advertisement area is substantially interconnected internally by overlay tunnels but is not substantially connected to other advertisement areas by overlay tunnels.
In block 208, a set of connection sites is determined based in part on the identified overlay tunnels. In some examples, each network infrastructure device in a subset of the set of network infrastructure devices is classified into at least one connection site of the set of connection sites. In some examples, each connection site of the set of connection sites is a collection of network infrastructure devices that operate together based on an intent provided by a network administrator. The intent provided by the network administrator may include an intent to connect network sites associated with one or more network infrastructure devices of the set of network infrastructure devices. In some examples, the subset of network infrastructure devices are VPNCs, and the network infrastructure devices in the set but not in the subset are BGs. The connection sites may be used to group interconnected VPNCs so as to limit the size of the routing table in each VPNC to include only routes reachable by that particular VPNC. Due to the types of topology used in large networks, such as hub mesh, a BG's routing table does not grow particularly large, because the BG has relatively few overlay connections and is able to aggregate subnets in its routing table, reducing the number of entries (as sketched below).
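As an illustration of such subnet aggregation, using Python's standard ipaddress module and made-up addresses, four adjacent /24 subnets that share the same next overlay hop collapse into a single /22 routing entry:

```python
# Adjacent subnets sharing a next overlay hop collapse into one entry,
# which keeps the BG's routing table small.
import ipaddress

subnets = [ipaddress.ip_network(s) for s in
           ("10.0.0.0/24", "10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24")]
aggregated = list(ipaddress.collapse_addresses(subnets))
print(aggregated)  # [IPv4Network('10.0.0.0/22')] -> a single routing entry
```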
In block 210, a set of network connectivity graphs is constructed based in part on the identified overlay tunnels, the set of advertisement areas, and the set of connection sites. In some examples, each network connectivity graph is associated with a respective advertisement area of the set of advertisement areas. In some examples, the set of network connectivity graphs provides information regarding overlay tunnel connectivity between the network infrastructure devices of the SD-WAN.
In some examples, a first set of routing lists is constructed. Each routing list in the first set of routing lists indicates a destination device for each route from a particular network infrastructure device. In some examples, each routing list of the first set of routing lists corresponds to a respective VPNC of the SD-WAN. The destination device may be the final overlay device used to reach the final destination device, and the final overlay device may separately route data traffic to the final destination device via underlay links.
In some examples, a second set of routing lists is constructed. Each routing list in the second set of routing lists indicates a next-hop device for each route from a particular network infrastructure device. In some examples, each routing list of the second set of routing lists corresponds to a respective BG of the SD-WAN. The next-hop device may be a next-hop overlay device connected to the respective BG via a network overlay tunnel.
In some examples, the network connectivity graphs include routes indicating to which network infrastructure devices data traffic associated with the respective routes should be forwarded. In some other examples, routes are created separately from the network connectivity graphs, and a route may include a next overlay hop or an overlay destination device, depending on which network infrastructure device will receive that particular route. In some examples, the network orchestrator generates a network connectivity profile that includes the overlay devices and links of the SD-WAN, as well as link costs and other relevant information. From the network connectivity profile, the network connectivity graph and routing list for each advertisement area and for each receiving network infrastructure device may be constructed.
In block 212, the network connectivity graph is sent to a first subset of network infrastructure devices. The network connectivity graph sent to each network infrastructure device in the first subset of network infrastructure devices is a network connectivity graph associated with an advertisement area of which the respective network infrastructure device is a member. In some examples, the first subset of network infrastructure devices is a VPNC.
In block 214, the first set of routing lists is sent to each respective network infrastructure device. The first set of routing lists is sent to a network infrastructure device that is classified to one of the set of connection sites. In some examples, each routing list in the first set of routing lists is sent to the VPNC. In some examples, each route list in the first set of route lists includes an overlay destination device for each route.
In block 216, the network connectivity graph is sent to a second subset of network infrastructure devices. The network connectivity graph sent to each network infrastructure device in the second subset is the network connectivity graph associated with the advertisement area of which the respective network infrastructure device is a member. In some examples, the second subset of network infrastructure devices are BGs. In some examples, the network connectivity graph sent to the second subset of network infrastructure devices is substantially the same, per advertisement area, as the network connectivity graph sent to the first subset of network infrastructure devices.
In block 218, the second set of routing lists is sent to each respective network infrastructure device. The second set of routing lists is sent to the network infrastructure devices that are not classified into any of the set of connection sites, and not to other network infrastructure devices. In some examples, each routing list in the second set of routing lists is sent to a BG. In some examples, each routing list in the second set of routing lists includes a next-hop device for each route.
Fig. 3 is an illustration of an example network infrastructure device. The network infrastructure device 300 may be a physical device, a virtualized device, a cloud service, or any other computing device or combination of computing devices. Network infrastructure device 300 includes processing circuitry 302, memory 304, and interface 308. The memory 304 contains instructions 306 that are executed by the processing circuitry to cause the network infrastructure device 300 to take certain actions. The instructions 306 may be executed in a different order or in parallel and still implement features of the present disclosure. Additional instructions 306f represent additional instructions for implementing features of the present disclosure. As will be apparent to those of ordinary skill in the art, even more additional instructions may be present in memory 304 to cause network infrastructure device 300 to take actions that do not directly implement features of the present disclosure. Such additional instructions are not within the scope of the present disclosure. The network infrastructure device 300 may be a Virtual Private Network Concentrator (VPNC).
The instructions 306a to advertise cause the network infrastructure device 300 to advertise, to the network orchestrator, a set of SD-WAN overlay tunnels that terminate at the network infrastructure device 300, specifically at the interfaces 308. The advertisement may be sent to the network orchestrator via an interface 308. The advertisement may include information about the overlay tunnels terminated at the network infrastructure device 300, including tunnel type, tunnel status, and tunnel UUID. This information may be collected for each tunnel by querying the respective interface 308 associated with that tunnel. In some examples, additional information may be collected, including tunnel health parameters.
The instructions 306b to receive a network connectivity graph cause the network infrastructure device 300 to receive the network connectivity graph from the network orchestrator. The network connectivity graph includes a group of network infrastructure devices classified as members of an advertisement area and the links between the group of network infrastructure devices. Each network infrastructure device in a subset of the group of network infrastructure devices is classified into at least one connection site of a set of connection sites. In some examples, the advertisement area is a collection of network infrastructure devices whose SD-WAN overlay tunnels interconnect to form a contiguous SD-WAN overlay network. In some examples, each connection site in the set of connection sites is a collection of network infrastructure devices that operate together based on an intent provided by a network administrator. The intent provided by the network administrator may include an intent to connect network sites associated with one or more network infrastructure devices in the group of network infrastructure devices.
In some examples, the additional instructions 306f include instructions to receive a routing list from the network orchestrator. In some examples, each route of the routing list includes an overlay destination device. The routing list may include routes to destination devices connected to the network infrastructure device via tunnels, routes to destination devices classified into the same connection site as the network infrastructure device, and routes to destination devices connected via tunnels to at least one device classified into the same connection site as the network infrastructure device.
The instructions 306c to receive data traffic cause the network infrastructure device 300 to receive, via an interface 308, data traffic intended for an overlay destination device in the group of network infrastructure devices. In some examples, the data traffic includes an address of the final destination device, and the final overlay device that forwards the data traffic is the overlay destination device. Upon receiving the data traffic, the network infrastructure device 300 may not yet know which device is the overlay destination device. In some examples, the network infrastructure device 300 identifies the overlay destination device by reference to a routing list received from the network orchestrator.
The instructions 306d to determine a preferred path cause the network infrastructure device 300 to determine a preferred path to the overlay destination device via a device classified into the same connection site as the network infrastructure device 300. For example, the overlay destination device may be a BG that is not directly connected to the network infrastructure device 300 via a tunnel terminating at an interface 308. However, using the network connectivity graph and a route computation algorithm, the network infrastructure device 300 may determine a preferred path to the overlay destination device via a next-hop device that is classified into the same connection site as the network infrastructure device 300. An example of this is the topology of FIG. 1A, where a request received from a BG (e.g., BG 102a) directly linked to the network infrastructure device 300 (e.g., VPNC 104a) cannot be forwarded directly to the overlay destination device (e.g., BG 102b), but must be forwarded via another device (e.g., VPNC 104b) in the same connection site. Determining the preferred path may include identifying the destination device in a routing list received from the network orchestrator, calculating a lowest-cost path from the network infrastructure device to the destination device based on the network connectivity graph, and selecting that lowest-cost path as the preferred path.
The instructions 306e to send data traffic cause the network infrastructure device 300 to send the data traffic via the interface 308 associated with the preferred path. Referring back to the example topology of fig. 1A, the interface 308 associated with the preferred path may correspond to, for example, the overlay tunnel 114 c.
FIG. 4 is an illustration of an example SD-WAN network including multiple advertisement areas. FIG. 4 is a simplified diagram for describing the general operation of an SD-WAN having multiple advertisement areas; it is not intended to fully illustrate and describe the operation of a complete SD-WAN. SD-WAN 400 includes a network orchestrator 402, BGs 404a-e, VPNCs 406a-h, advertisement areas 408a-b, and connection sites 410a-d. For the purposes of this figure, VPNCs arranged immediately adjacent vertically (e.g., VPNC 406a and VPNC 406b) are VPNCs in the same network site. The lines between the network infrastructure devices represent overlay tunnels.
Although this figure is simplified, it illustrates some of the challenges of a large-scale hub mesh network. Hub mesh networks combine features of full mesh networks and hub-and-spoke networks to avoid the scaling problems of either architecture. It can be appreciated that a VPNC manages many times more tunnels and routes than a BG does in a hub mesh topology. For example, VPNC 406b manages 5 tunnels, while each BG manages 2 tunnels. If each BG shown in FIG. 4 instead represented 1,000 BGs, each BG would still manage only 2 tunnels, but VPNC 406b would now manage more than 2,000 tunnels. This imbalance in load growth under expansion means that VPNCs 406 should be treated differently from BGs 404.
When network orchestrator 402 collects information about the topology of SD-WAN 400 from the advertisements, orchestrator 402 can determine which devices are interconnected via overlay tunnels and which devices are not connected to each other. This is the basis for determining the advertisement areas 408. For example, all devices in advertisement area 408a are interconnected via overlay tunnels, but no device in advertisement area 408a is connected to any device of advertisement area 408b via an overlay tunnel. Based in part on those connections, and on the absence of connections, the network orchestrator 402 builds the advertisement areas 408. In some networks, the partitioning may not be so "clean": there may be a limited number of overlay tunnels between advertisement areas. However, thresholds and other logic may be employed to determine when such limited overlay tunnels are considered "insubstantial" and still allow the advertisement areas to be partitioned. One such example would be an inactive tunnel connecting VPNC 406d to VPNC 406g for disaster recovery purposes.
The network connectivity graph advertised by the network orchestrator 402 shows all overlay-aware network infrastructure devices within the corresponding advertisement area 408. For example, the network connectivity graph sent to VPNC 406b would contain information that, if rendered in visual form, would appear substantially similar to the portion of FIG. 4 enclosed within the dashed box of advertisement area 408a. Additional information, such as link costs, link types, and other characteristics of the network, may also be included in the network connectivity graph.
The network orchestrator 402 collects routing information from the advertisements from the network infrastructure devices. From this routing information, along with the network connectivity graph, the network orchestrator 402 builds a routing list for each BG 404 and each VPNC 406. A routing list is a list of routes that pairs IP address subnets with associated overlay devices. The associated overlay device differs between BG 404 and VPNC 406 routing lists. Focusing first on advertisement area 408b, the routing lists are fairly simple. An example routing list for BG 404e may contain information similar to the following:
Destination subnet      Next overlay hop
10.0.0.0/24             VPNC 406g
10.0.1.0/24             VPNC 406h
10.1.0.0/24             (underlay: local router 10.1.0.1)
Traffic with a destination IP address in the range of 10.0.0.1 to 10.0.0.255 is sent to VPNC 406g. Traffic with a destination IP address in the range of 10.0.1.1 to 10.0.1.255 is sent to VPNC 406h. Traffic with a destination IP address in the range of 10.1.0.1 to 10.1.0.255 is sent to the local router at 10.1.0.1 (not shown) via the underlay. In some examples, BG 404e may be the only router for its branch, and the routing table may include additional entries for local devices.
There is a difference when looking at the routing list of VPNC 406g. Rather than capturing the next overlay hop, the routing list includes the overlay destination device. Since advertisement area 408b is so small, the next overlay hop for any route will likely also be the overlay destination device. To show the difference more clearly, however, assume that the preferred path from VPNC 406g to BG 404e is through VPNC 406h. An example routing list for VPNC 406g may contain information similar to the following:
    Destination subnet         Overlay destination device
    10.0.1.1 - 10.0.1.255      VPNC 406h
    10.1.0.1 - 10.1.0.255      BG 404e
Notably, because the routing list includes an overlay destination device instead of an overlay next-hop device, subnet 10.1.0.0/24 is still associated with BG 404e even though the preferred path has VPNC 406h as the next overlay hop. In response to receiving traffic with a destination IP in the 10.1.0.0/24 subnet (or prior to receiving it, in the case of pre-computed routes), VPNC 406g determines the next hop by performing a path computation, which concludes that the path VPNC 406g -> VPNC 406h -> BG 404e is the lowest-cost overlay path (the preferred path). From this preferred path, VPNC 406g knows to tunnel the traffic to VPNC 406h.
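The path computation itself may be sketched, for example, as Dijkstra's algorithm over the network connectivity graph. The disclosure does not mandate a particular algorithm, and the graph and link costs below are assumptions for illustration; the direct 406g -> 404e link is given a high cost so that the preferred path runs through VPNC 406h, as in the example above:

    # Minimal sketch of the path computation step: resolve an overlay
    # destination device from the routing list into a next overlay hop by
    # finding the lowest-cost path on the network connectivity graph.
    import heapq

    GRAPH = {  # node -> {neighbor: link cost}
        "VPNC406g": {"VPNC406h": 1, "BG404e": 10},
        "VPNC406h": {"VPNC406g": 1, "BG404e": 1},
        "BG404e":   {"VPNC406g": 10, "VPNC406h": 1},
    }

    def lowest_cost_path(graph, src, dst):
        frontier, visited = [(0, src, [src])], set()
        while frontier:
            cost, node, path = heapq.heappop(frontier)
            if node == dst:
                return path
            if node in visited:
                continue
            visited.add(node)
            for nbr, link_cost in graph[node].items():
                if nbr not in visited:
                    heapq.heappush(frontier, (cost + link_cost, nbr, path + [nbr]))
        return None

    path = lowest_cost_path(GRAPH, "VPNC406g", "BG404e")
    print(path)     # ['VPNC406g', 'VPNC406h', 'BG404e'] -- the preferred path
    print(path[1])  # 'VPNC406h' -- so VPNC 406g tunnels the traffic there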
Turning to advertisement area 408a, additional differences in how the BGs 404 and VPNCs 406 are handled begin to appear. For example, a BG 404 in advertisement area 408a would receive, in its routing list, routes for all other BGs 404 in advertisement area 408a and for all VPNCs 406 in advertisement area 408a. However, in some examples, a VPNC 406 only receives routes in its routing list for the BGs 404 and other VPNCs 406 in the same connection site 410. As an example, VPNC 406a receives routes for devices on BGs 404a-d and VPNCs 406a-d. However, routes for devices on VPNCs 406e-f are not in the list for VPNC 406a. One reason for this difference relates to the previously discussed imbalance in load growth between VPNCs and BGs as the network scales. Even when routes for many subnets are included, a BG can aggregate many subnets into a few routes because it has only a few overlay connections. A VPNC, having relatively many overlay connections, must compute many routes separately, so removing unavailable routes reduces its processing burden. It is worth noting that a path may exist between devices even when the route between them is treated as unavailable. For example, even though the route between VPNC 406a and VPNC 406e is treated as unavailable, the path VPNC 406a -> VPNC 406b -> BG 404b -> VPNC 406e connects the two VPNCs. This is an unavailable route because it is undesirable to transport VPNC-to-VPNC traffic across a BG (such as BG 404b): BGs are typically not computationally powerful enough to handle both their local traffic and transit traffic between VPNCs (which are often located in data centers). With the BG 404b path disregarded, traffic from VPNC 406a cannot be routed to VPNC 406e across the overlay.
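The constraint described above may be sketched by modifying the path computation so that a BG may appear only as a path endpoint, never as a transit hop; a route for which no such path exists is pruned as unavailable. The device roles, names, and costs below are assumptions for illustration:

    # Minimal sketch of the transit constraint: BGs are never expanded as
    # intermediate hops, so a VPNC-to-VPNC route whose only paths cross a BG
    # is treated as unavailable and pruned from the VPNC's routing list.
    import heapq

    def lowest_cost_path_no_bg_transit(graph, src, dst, is_vpnc):
        frontier, visited = [(0, src, [src])], set()
        while frontier:
            cost, node, path = heapq.heappop(frontier)
            if node == dst:
                return path
            if node in visited:
                continue
            visited.add(node)
            if node != src and not is_vpnc(node):
                continue  # a BG may be an endpoint, but never a transit hop
            for nbr, link_cost in graph[node].items():
                if nbr not in visited:
                    heapq.heappush(frontier, (cost + link_cost, nbr, path + [nbr]))
        return None  # no usable path: the route is unavailable

    # VPNC 406a can reach VPNC 406e only across BG 404b, so no route results:
    MESH = {
        "VPNC406a": {"VPNC406b": 1},
        "VPNC406b": {"VPNC406a": 1, "BG404b": 1},
        "BG404b":   {"VPNC406b": 1, "VPNC406e": 1},
        "VPNC406e": {"BG404b": 1},
    }
    print(lowest_cost_path_no_bg_transit(
        MESH, "VPNC406a", "VPNC406e", lambda n: n.startswith("VPNC")))  # None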
The VPNCs may be in a variety of configurations within connection sites. Several examples are highlighted in FIG. 4. For example, VPNCs 406g and 406h are collocated, connected VPNCs, which are collectively considered connection site 410d. It is also possible to divide a single network site into multiple connection sites. For example, VPNCs 406e and 406f are in the same network site, but they are not connected, and they are members of connection sites 410b and 410c, respectively. This may occur, for example, when two separate systems are hosted at the same site or on the same cloud service (such as a payroll system and a development strategy knowledge base). Multiple network sites may also be treated as a single connection site, as is the case with connection site 410a. The first network site (including VPNCs 406a and 406b) is interconnected with the second network site (including VPNCs 406c and 406d), and they are all members of connection site 410a. This may occur, for example, when two data centers host different enterprise-wide services and are interconnected to provide the ability for the services to interact.
Network convergence is the process by which network infrastructure devices gain an understanding of the topology of the network and how data traffic is routed across the network.
An underlay is the network of physical devices and links that form the network. The underlay may include network infrastructure devices, the interfaces of those devices, and physical interconnections such as wired and wireless connections.
An overlay is an abstract logical network that sits on top of an underlay network. Overlays in an SD-WAN abstract the complex underlay paths through the Internet that are outside the administrative domain of the SD-WAN administrator. The overlay may use overlay tunnels (encrypted VPN tunnels) to connect the nodes (network infrastructure devices) of the overlay network.
An overlay tunnel is an encrypted connection established between overlay-aware devices of the SD-WAN. The encrypted connection may be a VPN tunnel, such as an IPsec tunnel. Data traffic may be forwarded between the terminating devices of the tunnel without intermediary devices being able to identify the contents of the traffic.
An overlay tunnel is said to terminate at a device when the tunnel connection is established at an interface of that device. The device is capable of encrypting data traffic to be sent through the tunnel and decrypting data traffic received from the tunnel.
A continuous overlay network is one in which all devices are directly or indirectly interconnected via overlay tunnels. In some cases, some devices of a continuous overlay network may not be able to route traffic to other devices of the network, but those limitations are not due to a lack of an overlay path between the devices.
A hub mesh topology is a network topology in which BGs are connected to VPNCs much like in the hub-and-spoke model: many BGs connect to each VPNC, and each BG connects to relatively few VPNCs. The VPNCs (hubs) within the same connection site are meshed together.
A routing list is a list of routes suitable for insertion into a routing table. In general, a routing list may include IP subnets and, for each subnet, an associated device for receiving data traffic destined for that subnet.
A path is a succession of links and devices throughout the network that carries data traffic from one device to another. "Route" and "path" are often used as synonyms. In this disclosure, a "route" refers to a logical end-to-end connection between devices across an SDN overlay network, while a "path" refers to a series of physical or logical links and devices through which data traffic is forwarded, whether in the SDN overlay network or in the physical underlay network.
A network advertisement is a message sent from a network device to one or more other network devices that provides information about the sending device useful in operating the network.
A network connectivity graph is a graph data structure that captures characteristics of the network, including the links between nodes and the costs of those links.
Network infrastructure devices are said to be operating together when data traffic having a destination associated with one of the network infrastructure devices can be received at any of the devices operating together and routed to the destination.
A destination device, as used in this disclosure, is the final overlay-aware network infrastructure device that data traffic passes through before reaching its ultimate destination.
An interface is a logical and physical component of a network device that manages connections to a network.
A cloud service is an application or other executable file that executes as a service in the cloud.
A cloud device is a computing device deployed in the cloud.
A branch gateway is a network infrastructure device placed at the edge of a branch LAN. A branch gateway is often a router that interfaces between the LAN and a broader network, whether connected directly to other LANs of the WAN via a dedicated network link (e.g., MPLS), or connected to other LANs of the WAN via the Internet through a link provided by an Internet service provider connection. Many branch gateways can establish multiple uplinks to the WAN, either to multiple other LAN sites, or as redundant uplinks to a single other LAN site. A branch gateway often also includes a network controller for the branch LAN. In such an example, a branch gateway used in the SD-WAN may include a network controller logically partitioned from the included router. The network controller may control the infrastructure devices of the branch LAN and may receive routing commands from the network orchestrator.
A head-end gateway (sometimes referred to as a VPN concentrator) is a network infrastructure device placed at the edge of a core site LAN. A head-end gateway is often a router that interfaces between the LAN and the broader network, whether connected directly to other LANs of the WAN via a dedicated network link (e.g., MPLS), or connected to other LANs of the WAN via the Internet through a link provided by an Internet service provider connection. Many head-end gateways can establish multiple uplinks to the WAN, either to multiple other LAN sites, or as redundant uplinks to a single other LAN site. A head-end gateway often also includes a network controller for the core site LAN. In such an example, a head-end gateway used in the SD-WAN may include a network controller logically partitioned from the included router. The network controller may control the infrastructure devices of the core site LAN and may receive routing commands from the network orchestrator.
A network orchestrator is a service executing on a computing device (e.g., instructions stored in a non-transitory, computer-readable medium and executed by processing circuitry) that orchestrates switching and routing across SD-WANs. In some examples, the network orchestrator executes on a computing device in a core site LAN of the SD-WAN. In some other examples, the network orchestrator executes on a cloud computing device. The network orchestrator may be provided as a service (aaS) to the SD-WAN. The network orchestrator collects network operation information, including network traffic load information, network topology information, network usage information, etc., from the various network infrastructure devices of the SD-WAN. The network orchestrator then sends commands to the various network infrastructure devices of the SD-WAN to alter the network topology and network routing in order to achieve various network efficiency and efficacy goals.
A network administrator is a person, network service, or combination thereof that has administrative access to network infrastructure devices and configures the devices to conform to the network topology.
A client device is a computing device operated or accessed by a network user. Client devices include laptop and desktop computers, tablets, phones, PDAs, servers, Internet of Things devices, sensors, and the like.
A network infrastructure device is a device that receives network traffic and forwards the network traffic to a destination. Network infrastructure devices may include controllers, access points, switches, routers, bridges, and gateways, among other devices. Some network infrastructure devices may have SDN capabilities and thus may receive network commands from a controller or orchestrator and adjust operations based on the received network commands. Some network infrastructure devices perform packet services, such as application classification and deep packet inspection, on certain network traffic received at the network infrastructure device. Some network infrastructure devices monitor load parameters of various physical and logical resources of the network infrastructure device and report the load information to a controller or orchestrator.
A processing circuit is a circuit that receives instructions and data and executes the instructions. The processing circuitry may include an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), a microcontroller (uC), a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a microprocessor, or any other suitable circuitry capable of receiving instructions and data and executing the instructions. The processing circuitry may comprise one processor or a plurality of processors. The processing circuitry may include a cache. The processing circuitry may interface with other components of the device, including memory, network interfaces, peripherals, support circuits, a data bus, or any other suitable components. The processors of the processing circuit may communicate with each other through a shared cache, inter-processor communication, or any other suitable technique.
The memory is one or more non-transitory computer-readable media capable of storing instructions and data. The memory may include Random Access Memory (RAM), Read Only Memory (ROM), processor cache, removable media (e.g., CD-ROM, USB flash drives), storage drives (e.g., Hard Disk Drives (HDDs), Solid State Drives (SSDs)), network storage (e.g., Network Attached Storage (NAS)), and/or cloud storage. In this disclosure, unless otherwise specified, all references to memory and to instructions and data stored in memory may refer to the instructions and data stored in any non-transitory computer-readable medium or any combination of such non-transitory computer-readable media capable of storing the instructions and data.
A Software Defined Network (SDN) is a network overlaying a physical network that allows devices, such as network orchestrators, to dynamically configure the topology of the SDN overlay by assigning flows through the underlying physical network to specific routes. Dynamic configuration may include changes to the network topology based on a number of factors, including network health and performance, data type, application type, quality of service restrictions (e.g., service level agreements), device load, available bandwidth, cost of service, and other factors.
A software defined wide area network (SD-WAN) is an SDN that controls the interaction of the various sites of the WAN. Each site may have one or more LANs, and the LANs are connected to each other via WAN uplinks. Some WAN uplinks are dedicated lines (e.g., MPLS), while others are shared routes through the internet (e.g., DSL, T1, LTE, 5G, etc.). The SD-WAN dynamically configures the WAN uplink and data traffic through the WAN uplink to efficiently use resources of the WAN uplink.
The features of the present disclosure may be implemented using a variety of specific devices that incorporate a variety of different technologies and features. As an example, features that include instructions to be executed by processing circuitry may store those instructions in a cache of the processing circuitry, Random Access Memory (RAM), a hard disk drive, a removable drive (e.g., a CD-ROM), a Field Programmable Gate Array (FPGA), Read Only Memory (ROM), or any other non-transitory computer-readable medium, as appropriate to the particular device and the particular example implementation. As will be apparent to persons of ordinary skill in the art, the features of the present disclosure are not altered by the technologies, whether known or unknown, or by the characteristics of the specific devices on which they are implemented. Any modifications or alterations required to implement the features of the present disclosure on a particular device or in a particular example will be apparent to persons of ordinary skill in the relevant art.
Although the present disclosure has been described in detail, it should be understood that various changes, substitutions, and alterations can be made hereto without departing from the spirit and scope of the present disclosure. With regard to features of the present disclosure, any use of the words "may" or "can" indicates that some examples include the feature and some other examples do not, depending on context. With regard to features of the present disclosure, any use of the word "or" indicates that an example may include any combination of the listed features, depending on context.
Phrases and parentheticals beginning with "e.g." or "i.e." are used merely to provide examples for clarity. The present disclosure is not intended to be limited by the examples provided in these phrases and parentheticals. The scope and understanding of the present disclosure may include examples not disclosed in such phrases and parentheticals.

Claims (20)

1. A network infrastructure device of a software defined wide area network, SD-WAN, comprising:
a processing circuit; and
a memory comprising instructions that, when executed by the processing circuit, cause the network infrastructure device to:
advertising, to a network orchestrator, a set of SD-WAN overlay tunnels that terminate at the network infrastructure device;
receiving a network connectivity graph from the network orchestrator, the network connectivity graph comprising a categorized set of network infrastructure devices that are members of an advertised area and links between the set of network infrastructure devices, wherein each network infrastructure device in a subset of the set of network infrastructure devices is categorized to at least one connection site in a set of connection sites;
receiving, from a client device, data traffic intended for a destination device in the set of network infrastructure devices;
determining a preferred path to the destination device via a device classified to the same connection site as the network infrastructure device based on the network connectivity graph; and
sending the data traffic via an interface associated with the preferred path.
2. The network infrastructure device of claim 1, wherein the advertised area is a collection of network infrastructure devices between which SD-WAN overlay tunnels are interconnected to form a continuous SD-WAN overlay network.
3. The network infrastructure device of claim 1, wherein each connection site of the set of connection sites is a collection of network infrastructure devices that operate together based on an intent provided by a network administrator.
4. The network infrastructure device of claim 3, wherein the intent provided by the network administrator comprises: an intent to connect to a network site associated with one or more network infrastructure devices in the set of network infrastructure devices.
5. The network infrastructure device of claim 1, wherein determining the preferred path comprises: identifying a destination device in a routing list received from the network orchestrator; calculating a lowest cost path from the network infrastructure device to the destination device based on the network connectivity graph; and selecting the lowest cost path as the preferred path.
6. The network infrastructure device of claim 5, wherein the routing list comprises: a route to a destination device connected to the network infrastructure device via a tunnel, a route to a destination device classified to a same connection site as the network infrastructure device, and a route to a destination device connected to at least one device classified to a same connection site as the network infrastructure device via a tunnel.
7. The network infrastructure device of claim 1, wherein the network infrastructure device is a virtual private network concentrator, VPNC.
8. A method, comprising:
receiving, at a network orchestrator, advertisements from a set of network infrastructure devices of a software defined wide area network, SD-WAN, each advertisement including information about SD-WAN coverage tunnels terminated at the respective network infrastructure device;
identifying overlay tunnels for the SD-WAN, including a type, an operational state, and a termination point of each, based in part on the received advertisements;
determining a set of advertised areas of the SD-WAN based in part on the identified overlay tunnels, wherein each network infrastructure device of the set of network infrastructure devices is classified as a member of an advertised area;
determining a set of connection sites based in part on the identified overlay tunnels, wherein each network infrastructure device in a subset of the set of network infrastructure devices is classified to at least one connection site in the set of connection sites;
constructing a set of network connectivity graphs based in part on the identified overlay tunnels, the set of advertised areas, and the set of connection sites, each network connectivity graph associated with a respective advertised area of the set of advertised areas;
constructing a first set of routing lists, each routing list in the first set of routing lists indicating destination devices for each route from a particular network infrastructure device;
constructing a second set of routing lists, each routing list in the second set of routing lists indicating a next hop device for each route from a particular network infrastructure device; and
transmitting the network connectivity graphs to each network infrastructure device in the set of network infrastructure devices,
wherein the network connectivity graph sent to each network infrastructure device is the network connectivity graph associated with the advertised area of which the respective network infrastructure device is a member; and
sending the first set of routing lists and the second set of routing lists to respective network infrastructure devices, the first set of routing lists being sent to network infrastructure devices classified to one of the set of connection sites, and the second set of routing lists being sent to other network infrastructure devices.
9. The method of claim 8, wherein the advertised area is a set of network infrastructure devices between which SD-WAN overlay tunnels are interconnected to form a continuous SD-WAN overlay network.
10. The method of claim 8, wherein each connection site of the set of connection sites is a collection of network infrastructure devices that collectively operate based on an intent provided by a network administrator.
11. The method of claim 10, wherein the intent provided by the network administrator comprises: an intent to connect to a network site associated with one or more network infrastructure devices in the set of network infrastructure devices.
12. The method of claim 8, wherein the network infrastructure devices classified to one of the set of connection sites are virtual private network concentrators, VPNCs, and the other network infrastructure devices are branch gateways, BGs.
13. The method of claim 12, wherein the network connectivity graph sent to a BG comprises: a route indicating which network infrastructure device forwards data traffic associated with the route.
14. The method of claim 12, wherein the network connectivity graph sent to a VPNC comprises: a route that does not indicate which network infrastructure device forwards data traffic associated with the route.
15. The method of claim 12, wherein each connection site comprises a plurality of VPNCs that the network administrator intends to operate together.
16. The method of claim 8, wherein the set of network connectivity graphs provides information regarding overlay tunnel connectivity between network infrastructure devices of the SD-WAN, and wherein each advertised area is substantially interconnected by overlay tunnels but is not substantially connected to other advertised areas by overlay tunnels.
17. A system, comprising:
a network orchestrator comprising a memory, the memory comprising instructions that cause the network orchestrator to:
receiving, from a first VPNC, a second VPNC, a first BG, and a second BG, advertisements including information about SD-WAN overlay tunnels terminated at each respective device;
identifying overlay tunnels for the SD-WAN, including a type, an operational state, and a termination point of each, based in part on the received advertisements;
determining a set of advertised areas of the SD-WAN based in part on the identified overlay tunnels, wherein the first VPNC, the second VPNC, the first BG, and the second BG are members of a first advertised area;
determining a set of connection sites based in part on the identified overlay tunnels, wherein the first VPNC and the second VPNC are members of a first connection site;
constructing a network connectivity graph for the first advertised area based in part on the identified overlay tunnels, the set of advertised areas, and the set of connection sites;
sending a network connectivity graph having a first route list to the first VPNC and the second VPNC, the first route list indicating destination devices for each route; and
sending a network connectivity graph with a second route list to the first BG and the second BG, the second route list indicating a next hop device for each route; and
the first BG connected to the first VPNC via one or more overlay tunnels, the first BG including a memory including instructions that cause the first BG to:
receiving the network connectivity graph with the second route list from the network orchestrator;
receiving, from a first client device, a request to forward data traffic to a second client device connected to the second BG;
determining, based on the network connectivity graph, that the first VPNC is selected as a next overlay hop to forward the data traffic; and
forwarding the data traffic to the first VPNC;
the first VPNC comprising a memory comprising instructions that cause the first VPNC to:
receiving the network connectivity graph with the first routing list from the network orchestrator;
receiving the request originating from the first client device to forward the data traffic to the second client device;
determining, based on the network connectivity graph, that the second BG is an overlay destination device for the data traffic;
determining a preferred path from the first VPNC to the second BG via the second VPNC using a path computation algorithm; and
forwarding the data traffic along the preferred path; and
the second BG connected to the second VPNC via one or more overlay tunnels, the second BG including a memory including instructions that cause the second BG to:
receiving the data traffic from the second VPNC; and
forwarding the data traffic to the second client device.
18. The system of claim 17, wherein the first VPNC is in a first data center and the second VPNC is in a second data center, and an intent provided by a network administrator connects the first data center and the second data center as a single connection site.
19. The system of claim 17, wherein the network orchestrator is a cloud service and the second client device is a cloud device.
20. The system of claim 17, wherein the network orchestrator further comprises instructions to send an updated network connectivity graph comprising changes to an original network connectivity graph.
CN202111265048.1A 2021-08-12 2021-10-28 Scalable SD-WAN topology and routing automation Pending CN115941580A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202117400961A 2021-08-12 2021-08-12
US17/400,961 2021-08-12

Publications (1)

Publication Number Publication Date
CN115941580A (en) 2023-04-07

Family

ID=85039894

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111265048.1A Pending CN115941580A (en) 2021-08-12 2021-10-28 Scalable SD-WAN topology and routing automation

Country Status (2)

Country Link
CN (1) CN115941580A (en)
DE (1) DE102021127361A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115987794B (en) * 2023-03-17 2023-05-12 深圳互联先锋科技有限公司 Intelligent shunting method based on SD-WAN

Also Published As

Publication number Publication date
DE102021127361A1 (en) 2023-02-16

Similar Documents

Publication Publication Date Title
US11025543B2 (en) Route advertisement by managed gateways
US10742556B2 (en) Tactical traffic engineering based on segment routing policies
US11588683B2 (en) Stitching enterprise virtual private networks (VPNs) with cloud virtual private clouds (VPCs)
US9225597B2 (en) Managed gateways peering with external router to attract ingress packets
CN111478852B (en) Route advertisement for managed gateways
CN104685838B (en) Virtualized using abstract and interface the software defined network of particular topology is serviced
US9876685B2 (en) Hybrid control/data plane for packet brokering orchestration
US20040034702A1 (en) Method and apparatus for exchanging intra-domain routing information between VPN sites
WO2019108148A2 (en) System and method for convergence of software defined network (sdn) and network function virtualization (nfv)
US20180041396A1 (en) System and method for topology discovery in data center networks
US10033628B2 (en) Application controlled path selection over different transit providers
US10567252B1 (en) Network connection service high availability evaluation for co-location facilities
Ibrahim et al. A multi-objective routing mechanism for energy management optimization in SDN multi-control architecture
US11855893B2 (en) Tag-based cross-region segment management
Hua et al. Topology-preserving traffic engineering for hierarchical multi-domain SDN
Aibin Dynamic routing algorithms for cloud-ready elastic optical networks
CN104994019B (en) A kind of horizontal direction interface system for SDN controllers
CN115941580A (en) Scalable SD-WAN topology and routing automation
US11799755B2 (en) Metadata-based cross-region segment routing
US11570094B1 (en) Scaling border gateway protocol services
Kouicem et al. An enhanced path computation for wide area networks based on software defined networking
US11546257B1 (en) Scalable SD-WAN topology and route automation
Iqbal et al. On the design of network control and management plane
Lin et al. D 2 ENDIST-FM: Flow migration in routing of OpenFlow-based cloud networks
Fioccola et al. A PCE-based architecture for green management of virtual infrastructures

Legal Events

Date Code Title Description
PB01 Publication