WO2022073583A1 - Distributed traffic engineering at edge devices in a computer network - Google Patents

Distributed traffic engineering at edge devices in a computer network

Info

Publication number
WO2022073583A1
Authority
WO
WIPO (PCT)
Prior art keywords
tunnels
edge devices
link
network
information
Prior art date
Application number
PCT/EP2020/077895
Other languages
French (fr)
Inventor
Youcef MAGNOUCHE
Jeremie Leguay
Tran Anh Quang PHAM
Xu GONG
Feng ZENG
Wei Chen
Original Assignee
Huawei Technologies Co., Ltd.
Application filed by Huawei Technologies Co., Ltd.
Priority to PCT/EP2020/077895 (WO2022073583A1)
Priority to EP20789044.3A (EP4211884A1)
Publication of WO2022073583A1

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 - Traffic control in data switching networks
    • H04L 47/10 - Flow control; Congestion control
    • H04L 47/26 - Flow control; Congestion control using explicit feedback to the source, e.g. choke packets
    • H04L 47/263 - Rate modification at the source after receiving feedback
    • H04L 45/00 - Routing or path finding of packets in data switching networks
    • H04L 45/02 - Topology update or discovery
    • H04L 45/50 - Routing or path finding of packets in data switching networks using label swapping, e.g. multi-protocol label switch [MPLS]
    • H04L 45/64 - Routing or path finding of packets in data switching networks using an overlay routing layer

Definitions

  • the disclosure relates generally to traffic management in a computer network; and more specifically to an edge device that is configured to control one or more data rates in a computer network, for example a plurality of data rates of a corresponding plurality of tunnels over multiple paths. Moreover, the disclosure relates to a method for operating a device as an edge device of a computer network for controlling the one or more data rates in the computer network.
  • IP internet protocol
  • load balancing in IP networks is implemented inside network devices, such as switches or routers, using two techniques: 1) a hash-based splitting technique, where a hash is calculated over significant fields of the packet headers and used to select the outgoing paths in a uniform or an uneven manner, and 2) a weighted cost multi pathing (WCMP) technique, where load balancing weights are used to make decisions.
  • WCMP weighted cost multi pathing
  • both the above-mentioned techniques are used to ensure that the network traffic on each of the outgoing paths meets a certain target ratio, and once a decision is taken for a flow, all the packets from the flow follow the same decision (e.g., the same path).
  • rate control in IP networks is achieved using traffic schedulers in a traffic control layer to ensure a minimum amount of bandwidth per flow or per flow aggregate.
  • Such functionality is crucial to satisfy quality of service (QoS) requirements and to allocate resources to heterogeneous traffic classes (e.g., real-time critical, elastic critical, elastic non-critical) that share the same given bandwidth resource.
  • QoS quality of service
  • load balancing and rate control are realized using a centralized controller, e.g., software-defined networking (SDN) controllers or path computation elements (PCE).
  • SDN software-defined networking
  • PCE path computation elements
  • the centralized controller leverages a global view of the computer network to decide whether or not it is necessary to split the traffic flows and to determine the most efficient way to split the traffic flows, given the statistics on network load and traffic flows.
  • the technique of rate control and load balancing using the centralized controller suffers from scalability and resilience issues, and a centralized controller is sometimes simply not available in fully distributed network deployments.
  • the centralized controller is also not desirable in some scenarios, such as, for example, in a software defined wide area network (SD-WAN), where a management entity deployed at a headquarters (i.e. a specific location) is only used for management, and not for real-time traffic engineering.
  • SD-WAN network software defined wide area network
  • Another technique currently used for load balancing and rate control in IP networks includes distributed bandwidth reservation/rate allocation protocols, in which a bandwidth allocation request is sent from a source node and processed by one or more core switches, and the one or more intermediary nodes or core switches are configured to take rate allocation decisions using a specific fairness objective.
  • this technique does not involve the use of utility functions.
  • similar techniques are designed for asynchronous transfer mode (ATM) networks to allocate bandwidth for available bit rate (ABR) traffic.
  • ATM asynchronous transfer mode
  • ABR available bit rate
  • these techniques involve an active participation of the core nodes, which adds complexity to the computer network and also leads to poor resource allocation in the computer network.
  • these techniques may converge to a near-optimal solution only in specific cases (such as single-path routing with maximum-minimum fairness) and lead to scalability issues as intermediary nodes are involved.
  • the disclosure seeks to provide a device that is operable as one of a plurality of edge devices of a computer network, and a method of (namely, a method for) operating the device as one of the plurality of edge devices of the computer network.
  • the present disclosure seeks to provide a solution to the existing problem of load balancing and rate control in an IP network in a fully distributed manner without a centralized controller.
  • An aim of the present disclosure is to provide a solution that overcomes at least partially the problems encountered in prior art, and to provide an efficient technique for load balancing and rate allocation using the edge devices alone.
  • each edge device manages a set of tunnels towards several destinations, such that each tunnel can be routed over multiple paths and the devices (or source nodes) decide how much bandwidth each tunnel should take over each path (e.g., load balancing and rate allocation decision).
  • the disclosure provides a device that is operable as one of a plurality of edge devices of a computer network, the computer network comprising the plurality of edge devices and one or more intermediary nodes, the one or more intermediary nodes being configured to communicatively couple the edge devices to each other.
  • the device is configured to control one or more tunnels and corresponding one or more data rates that use one or more paths between the device and a destination edge device among the plurality of edge devices.
  • the device of the present disclosure provides an efficient technique of load balancing and rate control in IP networks that yields a minimum overhead (i.e. no or limited participation of core nodes).
  • the device of the disclosure converges to an optimal solution and provides anytime feasibility (e.g., feasible bandwidth allocations at each iteration).
  • the device decides a rate allocation for the one or more tunnels iteratively.
  • a centralized controller is not required.
  • the precise knowledge of the number of tunnels sharing a link speeds up convergence. Since the load balancing and rate control decisions are taken locally at the device operating as one of the edge devices, the technique provides a scalable solution.
  • the edge devices collaborate through a small amount of information exchange occurring therebetween, thereby achieving a low data overhead.
  • the device of the disclosure also facilitates a convergence to the optimal solution with anytime feasibility.
  • the device of the disclosure can add new paths (for existing and new tunnels), can remove non-used paths, can modify the path preferences and priorities of tunnels, and is able to know a utility function.
  • owing to the utility function being only known locally at the device (source node), the utility function can be tuned and/or learnt over time.
  • since only the device (source node) performs the load balancing and rate control, other intermediary nodes do not need to be updated about changes in traffic demand.
  • the information associated with the device operating as the edge device is not shared on the network.
  • the device is configured to control the one or more tunnels and corresponding one or more data rates based on information received from the one or more intermediary nodes.
  • the control of the one or more tunnels and corresponding one or more data rates based on information received from the one or more intermediary nodes enables lightweight information to be received from the one or more intermediary nodes, which helps the edge devices to converge faster to an optimal solution.
  • the device is configured to transmit aggregated utility information to one or more of the plurality of source edge devices.
  • the one or more tunnels and corresponding one or more data rates are controlled, iteratively based on the aggregated utility information.
  • the device is configured to repeatedly determine the rate allocation to achieve an optimal convergence.
  • the repeated determination of the rate allocation to achieve an optimal convergence makes it possible to maximize network utility and control load balancing (e.g., Maximum Link Utilization) while preserving feasibility during each step/iteration.
  • the device is configured to receive aggregated utility information from the one or more of the source edge devices, and wherein each rate allocation update is based on the received aggregated utility information.
  • the device is configured to iteratively determine the rate allocation based on the aggregated utility information to maximize network utility and control load balancing (e.g., Maximum Link Utilization).
  • the device is configured to compute a feasible rate allocation and a Lagrangian bound based on the aggregated utility information.
  • the computation of the feasible rate allocation based on the aggregated utility information enables the rate allocation to be iteratively determined based on the aggregated utility information so as to maximize network utility and control load balancing (e.g., Maximum Link Utilization). It is advantageous to compute a Lagrangian bound based on the aggregated utility information, as the Lagrangian bound allows the optimization to be solved without explicit parameterization in terms of the constraints.
  • the information received via the one or more intermediary nodes comprises aggregated utility information, link states generated by the intermediary nodes and a traffic load associated with one or more tunnels that originate from the intermediary node on the computer network.
  • the lightweight and optional participation of intermediary nodes accelerates the convergence.
  • the aggregated utility information and link states are used to iteratively determine the rate allocation to maximize network utility and control load balancing (e.g., Maximum Link Utilization).
  • the link states include load information associated with network links and wherein a load of a particular link is induced by all paths associated with the one or more tunnels that utilize the particular link and background traffic.
  • the information associated with the link states that includes load information associated with network links enables the precise knowledge of the number of tunnels sharing a link to speed up convergence.
  • the link states further include a number of tunnels of the one or more tunnels that utilize the particular link.
  • it is advantageous for the link states to further include a number of tunnels of the one or more tunnels that utilize the particular link, as the precise knowledge of the number of tunnels utilizing a particular link speeds up convergence.
  • an initial transmission of the link state further includes capacity information associated with the particular link.
  • the capacity information facilitates maintaining a capacity constraint for links during the iterative process of determining the rate allocation and load balancing, thereby enabling convergence to optimality and anytime feasibility (e.g., with no capacity constraint violations at intermediate iterations).
  • the data rate is further based on user entered data that includes at least one of tunnel priority, a path preference, and a desired maximum link utilization (MLU) between 0 and 1.
  • MLU desired maximum link utilization
  • determining the one or more data rates based on user entered data makes it possible to customize the load balancing and rate control based on user preferences and priorities.
  • the device is configured to maximize network utility associated with each of the one or more tunnels.
  • the maximizing of network utility associated with each of the one or more tunnels enables optimizing the use of resources available, maximizing throughput, minimizing response time, and avoiding overload of any single resource in the network.
  • the device is configured to maximize network utility associated with each of the one or more tunnels based on determining a sum of (tunnel priority multiplied by a path preference that is multiplied by a data rate) for each of the one or more tunnels that use one or more paths.
  • the maximizing of network utility associated with each of the one or more tunnels, based on determining a sum of (tunnel priority multiplied by a path preference that is multiplied by a data rate) for each of the one or more tunnels that use one or more paths, makes it possible to optimize the use of the available resources, maximize throughput, minimize response time, and avoid overload of any single resource in the network (a notational sketch of this objective is given below).
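As a notational sketch only (the symbols below are assumed for illustration and are not claim language): writing π_k for the priority of tunnel k, w_{k,p} for the preference of path p for tunnel k, and x_{k,p} for the data rate allocated to tunnel k on path p, the maximized quantity described above can be expressed as:

```latex
% Hedged sketch of the utility objective described above; \pi_k (tunnel
% priority), w_{k,p} (path preference) and x_{k,p} (data rate of tunnel k
% on path p) are assumed notation, not taken from the claims.
\max \; U \;=\; \sum_{k \in K} \sum_{p \in P_k} \pi_k \, w_{k,p} \, x_{k,p}
```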
  • the present disclosure provides a method of (namely, a method for) operating a device as one of a plurality of edge devices of a computer network, the computer network comprising the plurality of edge devices and one or more intermediary nodes, the one or more intermediary nodes being configured to communicatively couple the edge devices to each other.
  • the method comprises controlling, by the device, one or more tunnels and corresponding one or more data rates that use one or more paths between the device and a destination edge device among the plurality of edge devices.
  • the method of the present disclosure provides an efficient technique of load balancing and rate control in IP networks that yields a minimum overhead (i.e. no or limited participation of core nodes).
  • the method of the present disclosure converges to an optimal solution and provides anytime feasibility (e.g., feasible bandwidth allocations at each iteration).
  • the method determines a rate allocation for the one or more tunnels iteratively. Since only the device (source node) is aware of the utility function of the one or more tunnels associated with the device, a centralized controller is not required. The precise knowledge of the number of tunnels sharing a link speeds up convergence. Since the load balancing and rate control decisions are taken locally at the device operating as one of the edge devices, the technique provides a scalable solution.
  • the method involves the edge devices collaborating through a small amount of information thereby achieving a low overhead.
  • the method of the present disclosure also facilitates a convergence to the optimal solution with anytime feasibility.
  • the method of the present disclosure can add new paths (for existing and new tunnels), can remove non-used paths, can modify the path preferences and priorities of tunnels, and is able to know the utility function.
  • owing to the utility function being only known locally at the device (source node), the utility function can be tuned and/or learnt over time.
  • since only the device (source node) performs the load balancing and rate control, other intermediary nodes do not need to be updated about changes in traffic demand.
  • the information associated with the device operating as the edge device is not shared on the computer network.
  • the device is configured to control the one or more tunnels and corresponding one or more data rates based on information received from the one or more intermediary nodes.
  • the control of the one or more tunnels and corresponding one or more data rates based on information received from the one or more intermediary nodes makes it possible to receive lightweight information, which helps the edge devices to converge faster to an optimal solution.
  • the data rate is periodically revised based on a continuous improvement of solutions associated with inputs for determining a steady state of the one or more data rates.
  • the aggregated utility information is transmitted to the plurality of source edge devices.
  • the aggregated utility information that is transmitted to one or more of the plurality of source edge devices from the device can be used to control a data rate associated with one or more tunnels iteratively based on the aggregated utility information.
  • the rate allocation is enforced over the one or more paths managed by the device.
  • the rate allocation is enforced over the one or more paths managed by the device for optimizing the use of resources available, maximizing throughput, minimizing response time, and avoiding overload of any single resource in the network.
  • user entered data is received at each of the plurality of source edge devices, wherein the one or more data rates is further based on the user entered data.
  • the information received via the one or more intermediary nodes comprises aggregated utility information, link states generated by the intermediary nodes and a traffic load associated with the one or more tunnels on the computer network.
  • the lightweight and optional participation of intermediary nodes accelerates the convergence. Additionally, the aggregated utility information and link states are used to iteratively determine the rate allocation to maximize network utility and control load balancing (e.g., Maximum Link Utilization).
  • the link states include load information associated with network links and wherein a load of a particular link is induced by all tunnels of the one or more tunnels that utilize the particular link and background traffic.
  • the link states further include a number of tunnels of the one or more tunnels that utilize the particular link.
  • the precise knowledge of the number of tunnels utilizing a particular link speeds up convergence.
  • an initial transmission of the link state further includes capacity information associated with the particular link.
  • the information associated with the particular link, in the form of the initial transmission of the link state, facilitates maintaining a capacity constraint for links during the iterative process of determining the rate allocation and load balancing, thereby enabling convergence to optimality and anytime feasibility (e.g., with no capacity constraint violations at intermediate iterations).
  • the data rate is further based on user entered data that includes at least one of tunnel priority, a path preference, and a desired maximum link utilization (MLU) between 0 and 1.
  • MLU desired maximum link utilization
  • the determining of the data rate based on user entered data makes it possible to customize the load balancing and rate control based on user preferences and priorities.
  • aggregated utility information is received from one or more source edge devices, wherein the one or more data rates are based on a computation of a step size using a Polyak function and the step size is based on the received aggregated utility information.
  • the receiving of aggregated utility information from the one or more source edge devices makes it possible to iteratively determine the rate allocation based on the aggregated utility information so as to maximize network utility and control load balancing (e.g., Maximum Link Utilization). Additionally, the computation of the step size using a Polyak function based on the received aggregated utility information converges faster than other existing techniques.
  • the data rate is based on a computation of step size, where the step size is based on a number of iterations of a computation of the rate allocation.
  • the determining of the one or more data rates using a step size based on a number of iterations of a computation of the rate allocation makes it possible to achieve better speed and accuracy of computation (hedged forms of both step-size rules are sketched below).
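As a hedged illustration of the two step-size options mentioned above (the exact formulas are not reproduced in this text, so these are standard sub-gradient choices assumed for clarity, with UB^i the Lagrangian bound, LB^i the best feasible value, and g^i the sub-gradient at iteration i):

```latex
% Assumed, standard forms of the Polyak and iteration-based step-size rules;
% not verbatim from the disclosure.
s^i_{\mathrm{Polyak}} \;=\; \frac{UB^i - LB^i}{\lVert g^i \rVert^2},
\qquad\qquad
s^i_{\mathrm{diminishing}} \;=\; \frac{s^0}{i}
```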
  • the device maximizes network utility associated with each of the one or more tunnels.
  • the maximizing of network utility associated with each of the one or more tunnels helps in optimizing the use of the available resources, maximizing throughput, minimizing response time, and avoiding overload of any single resource in the network.
  • the network utility associated with each of the one or more tunnels is maximized based on determining a sum of (tunnel priority multiplied by a path preference that is multiplied by a data rate) for each of the one or more tunnels that use one or more paths.
  • the maximizing of network utility associated with each of the one or more tunnels, based on determining a sum of (tunnel priority multiplied by a path preference that is multiplied by a data rate) for each of the one or more tunnels that use one or more paths, makes it possible to optimize the use of the available resources, maximize throughput, minimize response time, and avoid overload of any single resource in the network.
  • the present disclosure provides a computer program comprising executable instructions that when executed by a processor cause the processor to perform a method, the method comprising controlling, by a device, a data rate associated with one or more tunnels that use one or more paths between the device and a destination edge device among a plurality of edge devices, wherein the plurality of edge devices are associated with a computer network comprising the plurality of edge devices and one or more intermediary nodes, the one or more intermediary nodes being configured to communicatively couple the edge devices to each other.
  • the computer program achieves all the advantages and effects of the method of the present disclosure.
  • the device is configured to control the one or more tunnels and corresponding one or more data rates based on information received from the one or more intermediary nodes.
  • the one or more data rates is periodically revised based on a continuous improvement of solutions associated with inputs for determining a steady state of the one or more data rates.
  • the aggregated utility information is transmitted to the plurality of edge devices.
  • the rate allocation is enforced over the one or more paths managed by the device.
  • user entered data is received at each of the plurality of source edge devices, wherein the data rate is further based on the user entered data.
  • the information received via the one or more intermediary nodes comprises aggregated utility information, link states generated by the intermediary nodes and a traffic load associated with the one or more tunnels on the computer network.
  • the link states include load information associated with network links and wherein a load of a particular link is induced by all tunnels of the one or more tunnels that utilize the particular link and background traffic.
  • the link states further include a number of tunnels of the one or more tunnels that utilize the particular link.
  • an initial transmission of the link state further includes capacity information associated with the particular link.
  • the data rate is further based on user entered data that includes at least one of tunnel priority, a path preference, and a desired maximum link utilization (MLU) between 0 and 1.
  • MLU desired maximum link utilization
  • aggregated utility information is received from one or more source edge devices, wherein the data rate is based on a computation of a step size using a Polyak function and the step size is based on the received aggregated utility information.
  • the one or more data rates is based on a computation of step size, where the step size is based on a number of iterations of a computation of the rate allocation.
  • the device maximizes network utility associated with each of the one or more tunnels.
  • the device is configured to maximize network utility associated with each of the one or more tunnels based on determining a sum of (tunnel priority multiplied by a path preference that is multiplied by a data rate) for each of the one or more tunnels that use one or more paths.
  • FIG. 1A is a network environment diagram of a system for distributed traffic engineering at a plurality of edge devices, in accordance with an example of the disclosure;
  • FIG. 1B is a block diagram that illustrates various exemplary components of a device operable as one of the plurality of edge devices, in accordance with an example of the disclosure;
  • FIG. 2 is a network architecture illustrating an exemplary scenario of rate control at a device operating as an edge device in a computer network, in accordance with an example of the disclosure;
  • FIG. 3 is a functional architecture of the device of FIG. 2, in accordance with an example of the disclosure;
  • FIGs. 4A, 4B, and 4C collectively illustrate an exemplary scenario of controlling data rate by a device in a software defined wide area network (SD-WAN), in accordance with an example of the disclosure;
  • SD-WAN software defined wide area network
  • FIGs. 5A-5F illustrate exemplary graphical results obtained by implementing the present technology in an internet protocol radio access network (IPRAN) network, in accordance with an example of the disclosure.
  • IPRAN internet protocol radio access network
  • FIG. 6 is a flowchart of a method for controlling, by a device, one or more tunnels and corresponding one or more data rates, for example each path or tunnel having its corresponding data rate, that use one or more paths between the device and a destination edge device among a plurality of edge devices, in accordance with an example of the disclosure.
  • an underlined number is employed to represent an item over which the underlined number is positioned or an item to which the underlined number is adjacent.
  • a non-underlined number relates to an item identified by a line linking the nonunderlined number to the item.
  • the non-underlined number is used to identify a general item at which the arrow is pointing.
  • FIG. 1A is a network environment diagram of a system for distributed traffic engineering at edge devices, in accordance with an embodiment of the present disclosure. With reference to FIG. 1A, there is shown a system 100.
  • the system 100 includes a computer network 102, a plurality of intermediary nodes, such as a first intermediary node 104A up to an Nth intermediary node 104N, a plurality of edge devices, such as a first edge device (hereinafter simply referred to as device 106A) up to an Nth edge device 106N, and a plurality of wired/wireless end point devices, such as a first wired/wireless end point device 108A up to an Nth wired/wireless end point device 108N.
  • the plurality of intermediary nodes 104A-N are communicatively coupled to one another via the computer network 102.
  • the plurality of edge devices 106A-N are communicatively coupled to the plurality of intermediary nodes 104A-N.
  • the plurality of wired/wireless end point devices 108A-N are connected to the computer network 102 via the plurality of intermediary nodes 104A-N and the plurality of edge devices 106A-N.
  • the computer network 102 may be a wired or wireless communication network.
  • Examples of the computer network 102 may include, but are not limited to, a Wireless Fidelity (Wi-Fi) network, a Local Area Network (LAN), a wireless personal area network (WPAN), a Wireless Local Area Network (WLAN), a wireless wide area network (WWAN), a cloud network, a cellular network, a Metropolitan Area Network (MAN), a software defined wide area network (SD-WAN), and/or the Internet.
  • the plurality of intermediary nodes 104A-N communicatively connect the plurality of edge devices 106A-N to one another and/or to the computer network 102.
  • Examples of the plurality of intermediary nodes 104A-N include switches, wireless access points, routers, firewalls (security) and the like.
  • Each of the plurality of edge devices 106A-N provides an entry point for the plurality of wired/wireless end point devices/users 108A-N into the computer network 102.
  • Examples of the plurality of edge devices 106A-N include routers, routing switches, integrated access devices (IADs), multiplexers, and a variety of metropolitan area network (MAN) or wide area network (WAN) access devices, and the like.
  • Examples of the plurality of wired/wireless end point devices 108A-N include, but are not limited to, user devices such as cellular phones, personal digital assistants (PDAs), handheld devices, laptop computers, personal computers, Internet-of-Things (IoT) devices, smart phones, machine type communication (MTC) devices, computing devices, drones, or any other portable or non-portable electronic devices.
  • a device operable as one of the plurality of edge devices 106A-N is configured to control a data rate associated with one or more tunnels that use one or more paths between the device and a destination edge device among the plurality of edge devices 106A-N.
  • FIG. 1B is a block diagram that illustrates various exemplary components of a device 106A operable as one of the plurality of edge devices 106A-N, in accordance with an embodiment.
  • FIG. 1B is described in conjunction with elements from FIG. 1A.
  • With reference to FIG. 1B, there is shown a block diagram of the device that includes a rate allocation module 110, a routing module 112, a data monitoring module 114, a traffic management module 116, and a database 118.
  • the rate allocation module 110, the routing module 112, the data monitoring module 114, and the traffic management module 116 are communicatively associated with the database 118 and are executable via a processor (not shown).
  • the database 118 stores a set of instructions for execution by the processor (not shown).
  • the device 106A is operable as one of a plurality of edge devices 106A-N of a computer network 102, the computer network 102 comprising the plurality of edge devices 106A-N and one or more intermediary nodes 104A-N, the one or more intermediary nodes 104A-N being configured to communicatively couple the edge devices 106A-N to each other.
  • the device 106A is configured to control a data rate associated with one or more tunnels that use one or more paths between the device 106A and a destination edge device among the plurality of edge devices 106A-N.
  • the device 106A provides an efficient technique of load balancing and rate control in IP networks that yields a minimum overhead (i.e. no or limited participation of core nodes).
  • the device 106A is configured to execute various operations.
  • the rate allocation module 110 of the device 106A is configured to control a data rate associated with one or more tunnels that use one or more paths between the device 106A and a destination edge device among the plurality of edge devices 106A-N.
  • the rate allocation module 110 is configured to repeatedly determine the rate allocation based on the user entered data and the aggregated utility information, for example only in the case where the step size is based on the Polyak function, to achieve an optimal convergence.
  • the rate allocation module 110 maximizes network utility associated with each of the one or more tunnels.
  • the rate allocation module 110 further maximizes a network utility associated with each of the one or more tunnels based on determining a sum of (tunnel priority multiplied by a path preference that is multiplied by a data rate) for each of the one or more tunnels that use one or more paths.
  • the rate allocation module 110 is associated with one or more edge devices 106A-N operating as source nodes (e.g., the plurality of edge devices 106A-N).
  • the rate allocation module 110 is configured to transmit aggregated utility information, for example only in the case where the Polyak step size function is used, to one or more of the plurality of edge devices 106A-N. This is further described in detail, for example, in FIGs. 2 and 3.
  • the device 106A is configured to control the data rate associated with one or more tunnels based on information received from the one or more intermediary nodes 104A-N.
  • the rate allocation module 110 is configured to control the data rate associated with one or more tunnels based on information received from the one or more intermediary nodes 104A-N.
  • the information received via the one or more intermediary nodes 104A-N includes aggregated utility information, link states generated by the intermediary nodes 104A-N and a traffic load associated with one or more tunnels that originate from the intermediary node on the computer network 102.
  • the link states include load information associated with network links and wherein a load of a particular link is induced by all paths associated with the one or more tunnels that utilize the particular link and background traffic.
  • the link states further include a number of tunnels of the one or more tunnels that utilize the particular link.
  • an initial transmission of the link state further includes capacity information associated with the particular link.
  • the device 106A is configured to transmit aggregated utility information to one or more of the plurality of source edge devices (e.g., the plurality of edge devices 106A-N).
  • the rate allocation module 110 transmits the aggregated utility information to one or more of the plurality of source edge devices (e.g. plurality of edge devices 106A-N).
  • the routing module 112 is configured to periodically propagate the link states including, for example, a link load and/or a number of tunnels using links.
  • the device 106A is configured to repeatedly determine the rate allocation to achieve an optimal convergence.
  • the rate allocation module 110 iteratively computes the rate allocation based on the one or more Lagrangian multipliers, the user entered data and/or the aggregated utility information to achieve an optimal convergence.
  • the device 106A is configured to receive aggregated utility information from the one or more of the source edge devices (e.g., the plurality of edge devices 106A-N), wherein each rate allocation update is based on the received aggregated utility information.
  • the rate allocation module 110 is also configured to receive aggregated utility information from the one or more of the source edge devices (e.g., plurality of edge devices 106A-N), wherein each rate allocation update is based on the received aggregated utility information.
  • the device 106A is configured to compute a feasible rate allocation and a Lagrangian bound based on the aggregated utility information.
  • the rate allocation module 110 is also configured to compute a feasible rate allocation and a Lagrangian bound based on the aggregated utility information and/or the user entered data.
  • the information received via the one or more intermediary nodes 104A-N comprises aggregated utility information, link states generated by the intermediary nodes 104A-N and a traffic load associated with one or more tunnels that originate from the intermediary node (e.g., 104A) on the computer network 102.
  • the link states include load information associated with network links and wherein a load of a particular link is induced by all paths associated with the one or more tunnels that utilize the particular link and background traffic.
  • the link states further include a number of tunnels of the one or more tunnels that utilize the particular link.
  • the routing module 112 propagates the link states using a routing protocol such as, for example, open shortest path first (OSPF), intermediate system to intermediate system (IS-IS, also written ISIS), border gateway protocol (BGP). This is further described in detail, for example, in FIGs. 2, and 3.
  • OSPF open shortest path first
  • IS-IS intermediate system to intermediate system
  • BGP border gateway protocol
  • an initial transmission of the link state further includes capacity information associated with the particular link.
  • the data rate is further based on user entered data that includes at least one of tunnel priority, a path preference, and a desired maximum link utilization (MLU) between 0 and 1.
  • the rate allocation module 110 is configured to receive a user entered data that includes at least one of tunnel priority, a path preference, and a desired maximum link utilization (MLU) between 0 and 1.
  • the device 106A is configured to maximize network utility associated with each of the one or more tunnels.
  • the device 106A is further configured to maximize network utility associated with each of the one or more tunnels based on determining a sum of (tunnel priority multiplied by a path preference that is multiplied by a data rate) for each of the one or more tunnels that use one or more paths.
  • the data monitoring module 114 captures traffic demands of the tunnels based on local monitoring in the data plane. In an embodiment, the data monitoring module 114 provides updated throughput information for each outgoing tunnel (e.g., local information at a source node). In an embodiment, the data monitoring module 114 also measures the number of tunnels using each link on the intermediary nodes 104A-N (to accelerate convergence) and provides the measured number of tunnels to the routing module 112. In an embodiment, the measure of the number of tunnels is realized by counting source and destination node pairs (e.g., a netflow solution, or statistical sketches; a small counting sketch is given below). This is further described in detail, for example, in FIGs. 2 and 3.
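A minimal sketch of the counting idea mentioned above, assuming (hypothetically) that per-link traffic samples expose source and destination addresses; the function name and data layout are illustrative only:

```python
# Hedged sketch: estimate the number of tunnels using each link by counting
# distinct (source, destination) pairs observed on that link, as suggested
# above (e.g., NetFlow-style records or statistical sketches).
from collections import defaultdict

def count_tunnels_per_link(samples):
    """samples: iterable of (link_id, src_addr, dst_addr) tuples."""
    pairs_per_link = defaultdict(set)
    for link_id, src, dst in samples:
        pairs_per_link[link_id].add((src, dst))
    # One distinct (src, dst) pair is taken as one tunnel on that link.
    return {link: len(pairs) for link, pairs in pairs_per_link.items()}

# Example: two tunnels share link "e1", one tunnel uses link "e2".
samples = [("e1", "A", "D"), ("e1", "B", "D"), ("e1", "A", "D"), ("e2", "C", "D")]
print(count_tunnels_per_link(samples))  # {'e1': 2, 'e2': 1}
```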
  • the rate allocation module 110 computes a rate allocation and a Lagrangian bound based on the aggregated utility information and/or the user entered data.
  • the algorithm for the computation of the rate allocation and the Lagrangian bound is explained below.
  • the utility of a tunnel k on a path p is U_k(x_p) = d_k · x_p · f(p, k), or an alpha-fairness function (for max throughput, max-min, or proportional fairness).
  • x_p ∈ [0, 1] is a rate allocation of traffic on a path p ∈ P_k for a tunnel k; the rate allocation problem is then given by equations 1 and 2 (a hedged reconstruction is given below).
  • the Lagrangian relaxation method penalizes violations of the inequality constraints using Lagrange multipliers u ∈ ℝ+, which impose a cost on violations.
  • the Lagrangian relaxation problem decomposes into one Lagrangian sub-problem P¹ associated with each tunnel k ∈ K. If U_k is linear, P¹ can be solved in linear time by setting the variable having the maximum weight in the objective function to 1 and all other variables to 0.
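The bodies of equations 1 and 2 are not reproduced in this text; the following is a hedged reconstruction assembled from the surrounding definitions (with d_k the demand of tunnel k, c_e the capacity of link e, and the MLU target omitted for brevity), not a verbatim copy of the claimed formulation:

```latex
% Hedged reconstruction of the rate allocation problem and its Lagrangian
% relaxation, assembled from the surrounding definitions (not verbatim).
% Equation (1): utility to maximize over the path rate fractions x_p.
\max_{x \in [0,1]^{|P|}} \; \sum_{k \in K} \sum_{p \in P_k} U_k(x_p)
% Equation (2): link capacity and per-tunnel constraints.
\text{s.t.} \quad
\sum_{k \in K} \sum_{p \in P_k : e \in p} d_k \, x_p \;\le\; c_e \quad \forall e \in E,
\qquad
\sum_{p \in P_k} x_p \;\le\; 1 \quad \forall k \in K.
% Lagrangian relaxation: the capacity constraints are moved into the objective
% with multipliers u_e >= 0, so the problem decomposes per tunnel k.
L(u) \;=\; \max_{x} \; \sum_{k \in K} \sum_{p \in P_k}
  \Big( U_k(x_p) - d_k \, x_p \sum_{e \in p} u_e \Big)
  \;+\; \sum_{e \in E} u_e \, c_e
```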
  • in order to solve the Lagrangian relaxation while converging to the optimal solution and preserving the feasibility of the solutions, the following distributed algorithm is used by the rate allocation module 110 (a hedged sketch of one iteration is given below):
  • the aggregated utility information exchanged between the edge devices (agents) 106A-N is used to compute the step size s^i using a Polyak function.
  • one or more other known techniques that do not use the aggregated utilities can be used to compute s^i (e.g., a diminishing step size based on the iteration index i of the sub-gradient algorithm, periodically reset). Depending on the method used to compute s^i, the edge devices (agents) 106A-N may exchange both, one, or none of the aggregated utilities.
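A minimal, self-contained sketch of one such iteration as it might run at a source edge device, under the assumptions above (linear utilities, per-link multipliers, a Polyak-style step computed from the exchanged bounds, and uniform down-scaling as the feasibility projection); the data layout and the projection step are illustrative, not the disclosed implementation:

```python
# Hedged sketch of one distributed sub-gradient iteration at a source edge
# device; linear utilities and a Polyak-style step are assumed. Not the exact
# disclosed algorithm.

def iterate(tunnels, links, u, upper_bound, lower_bound):
    """tunnels: {k: {"demand": d_k, "paths": {p: {"weight": w, "links": [...]}}}}
    links: {e: capacity}; u: {e: multiplier}; bounds: exchanged scalars."""
    # 1) Lagrangian sub-problem P1 per tunnel: for a linear utility, set the
    #    path with the largest positive reduced weight to 1, all others to 0.
    x = {}
    for k, t in tunnels.items():
        best_p, best_val = None, 0.0
        for p, info in t["paths"].items():
            reduced = t["demand"] * (info["weight"] - sum(u[e] for e in info["links"]))
            if reduced > best_val:
                best_p, best_val = p, reduced
        x[k] = {p: (1.0 if p == best_p else 0.0) for p in t["paths"]}

    # 2) Load of the relaxed solution and sub-gradient of the capacity constraints.
    load = {e: 0.0 for e in links}
    for k, t in tunnels.items():
        for p, frac in x[k].items():
            for e in t["paths"][p]["links"]:
                load[e] += frac * t["demand"]
    g = {e: load[e] - links[e] for e in links}

    # 3) Anytime feasibility: uniformly scale rates so no link exceeds capacity.
    scale = min([1.0] + [links[e] / load[e] for e in links if load[e] > 0])
    feasible_x = {k: {p: frac * scale for p, frac in paths.items()}
                  for k, paths in x.items()}

    # 4) Polyak step from the exchanged aggregated bounds, projected onto R+.
    norm_sq = sum(v * v for v in g.values()) or 1.0
    step = max(0.0, upper_bound - lower_bound) / norm_sq
    u_next = {e: max(0.0, u[e] + step * g[e]) for e in links}
    return feasible_x, u_next
```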
  • the traffic management module 116 receives the rate allocations from the rate allocation module 110 and enforces the rate allocations in the data plane.
  • the rate allocation module 110, the routing module 112, the data monitoring module 114, and/or the traffic management module 116 are potentially implemented as a software component or a combination of software and circuitry.
  • the device 106A of the present disclosure provides an efficient technique of load balancing and rate control in IP networks (such as computer network 102) that yields a minimum overhead (i.e. no or limited participation of core nodes) and provides autonomy to the edge devices 106A-N.
  • the device 106A of the present disclosure converges to an optimal solution and provides anytime feasibility (e.g., feasible bandwidth allocations at each iteration). Since only the device 106A (source node) is aware of the utility function of the one or more tunnels associated with the device 106A, a centralized controller is not required. The precise knowledge of the number of tunnels sharing a link speeds up convergence.
  • the technique provides a scalable solution. Additionally, the plurality of edge devices 106A-N collaborate through a small amount of information exchange, thereby achieving a low overhead.
  • the device 106A of the present disclosure also facilitates a convergence to the optimal solution with anytime feasibility. Additionally, the device 106A of the present disclosure can add new paths (for one or more existing tunnels and one or more new tunnels), can remove un-used paths, modify the path preferences and priority of tunnels, and is able to know the utility function. Owing to the utility function being only known locally at the device 106A (source node), the utility function can be tuned and/or learnt over time.
  • other intermediary nodes do not need to be updated about changes in traffic demand.
  • the information associated with the device 106A operating as an edge device is not shared on the computer network 102.
  • FIG. 2 depicts a network architecture illustrating an exemplary scenario of rate control at a device (such as a device 106A) operating as an edge device in a computer network (such as computer network 102 of FIG. 1), in accordance with an embodiment of the present disclosure.
  • FIG. 2 is described in conjunction with elements from FIGs. 1A and 1B.
  • a network architecture 200 that includes a device A 106A, a device B 106B, a device C 106C, a device D 106D and an intermediary node 104A communicatively coupled through a computer network (such as computer network 102).
  • Each of the device A 106A, the device B 106B, the device C 106C, and the device D 106D are operable as an edge device.
  • optional candidate paths from the device A 106A to the device D 106D are calculated locally or are installed using a path computation element.
  • the intermediary node 104A monitors a number of tunnels sharing each link and sends it as link states with a link state protocol to each of the devices, namely the device A 106A, the device B 106B, the device C 106C, and the device D 106D.
  • a rate allocation module 110 associated with each of the edge devices, namely the device A 106A, the device B 106B, the device C 106C, and the device D 106D, periodically receives link states and some additional information from the other edge devices 106A-N (i.e., aggregated utilities for the tunnels managed by each edge device, which constitute two scalars per edge device), executes an iteration of a rate control algorithm (described earlier along with FIG. 1B) based on the utility functions of the respective outgoing tunnels (only), and enforces rate allocations for each outgoing tunnel.
  • the rate allocation module 110 of the device A 106A enforces rate allocations over the optional paths for each outgoing tunnel thereby preserving feasibility.
  • the rate allocation module 110 executes an iteration of the rate control algorithm knowing the utility functions of the outgoing tunnels (only), enforces the rate allocation over multiple paths for each outgoing tunnel (preserving feasibility) and communicates information such as aggregated utilities for the tunnels managed by the device A 106A to other three devices (or agents), that is the device B 106B, the device C 106C, and the device D 106D.
  • FIG. 3 is a functional architecture of the device of FIG. 2, in accordance with an embodiment of the disclosure.
  • the functional architecture 300 includes a control plane 302 and a data plane 304 in which the device 106A of FIGs. 1B and 2 operates.
  • the rate allocation module 110 of the device 106A receives a) link states 306, b) aggregated utilities 308 from other devices (agents) of the computer network 102, c) parameters for outgoing tunnels 310, and d) tunnel traffic 312 from the data monitoring module 114 as inputs, and generates aggregated utilities 314 and rate allocations 316 as output.
  • the data monitoring module 114 provides updated throughput information for each outgoing tunnel (local information at the source) and optionally measures a number of tunnels 318 using each link on intermediary nodes (to accelerate convergence).
  • the measure of the number of tunnels 318 can be realized by counting source and destination pairs (e.g., Netflow solution, Statistical sketches, and the like).
  • the link states 306 include the load, capacity, and number of tunnels on the links; the parameters for outgoing tunnels 310 include priority, preferences per path/technology (i.e., the paths are provided) or service level agreement (SLA) requirements (which can be used to automatically tune the preferences based on measurements), and a desired maximum link utilization (MLU) (a data-structure sketch of these inputs is given below).
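A minimal sketch of how the inputs listed above might be structured in software; the field names are assumptions for illustration, not the disclosed data model:

```python
# Hedged sketch of the rate allocation module's inputs (link states 306,
# aggregated utilities 308, tunnel parameters 310, tunnel traffic 312);
# field names are illustrative assumptions.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class LinkState:                        # propagated as link states 320
    load_mbps: float                    # current load on the link
    capacity_mbps: float                # sent with the initial transmission
    n_tunnels: int = 0                  # number of tunnels using the link

@dataclass
class TunnelParams:                     # user entered data for an outgoing tunnel
    priority: float                     # tunnel priority
    path_preference: Dict[str, float]   # preference per candidate path
    paths: Dict[str, List[str]]         # candidate paths as lists of link ids

@dataclass
class RateAllocatorInputs:
    link_states: Dict[str, LinkState]       # (306)
    aggregated_utilities: Dict[str, float]  # per remote edge device (308)
    tunnels: Dict[str, TunnelParams]        # (310)
    tunnel_traffic: Dict[str, float]        # measured throughput per tunnel (312)
    target_mlu: float = 0.9                 # desired maximum link utilization
```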
  • the routing module 112 captures a number of tunnels 318 on links from the data plane 304 and periodically propagates link states 320 including, for example, a link load and/or a number of tunnels using the links.
  • the routing module 112 propagates the link states 320 using a routing protocol such as, for example, open shortest path first (OSPF), intermediate system to intermediate system (IS-IS, also written ISIS), border gateway protocol (BGP).
  • OSPF open shortest path first
  • IS-IS intermediate system to intermediate system
  • BGP border gateway protocol
  • the traffic management module 116 receives the rate allocations 316 from the rate allocation module 110 and enforces the rate allocations 316 in the data plane 304.
  • With reference to FIGs. 4A-4C, there is shown an illustration of an exemplary scenario of controlling a data rate by a device (such as the device 106A) in a software defined wide area network (SD-WAN), in accordance with an embodiment of the disclosure.
  • SD-WAN software defined wide area network
  • the SD-WAN 400 depicted in FIGs. 4A-4C includes three sites, namely a site0 402, a site1 404, and a site2 406, connected to an enterprise network 408 through ports 410, 412, 414, 416, 418, and 420 of edge devices (the edge devices are represented by LB).
  • the ports 410, 414, and 418 of the edge devices connect the site0 402, site1 404, and site2 406, respectively, to the enterprise network 408 via the Internet.
  • the ports 412, 416, and 420 of the edge devices connect the site0 402, site1 404, and site2 406, respectively, to the enterprise network 408 via multiprotocol label switching (MPLS).
  • MPLS multiprotocol label switching
  • the SD-WAN 400 also includes ports 422 and 424 of the edge device (associated with the enterprise network 408) that communicatively associate with the ports 410-420 of the edge devices to the enterprise network 408.
  • the SD-WAN 400 includes three tunnels, namely tunnel 0: Site 0 -> Head, tunnel 1: Site 1 -> Head, and tunnel 2: Site 2 -> Head. Each tunnel has a demand of 100 megabits per second.
  • FIG. 4B depicts a first exemplary scenario
  • FIG. 4C depicts a second exemplary scenario.
  • assuming a link capacity of 180 megabytes and a target MLU provided by a user of 90%, in the first exemplary scenario depicted in FIG. 4B the tunnel preferences include:
  • Tunnel 0: 1 (low priority); Tunnel 1: 1 (low priority); Tunnel 2: 1 (low priority)
  • a value of 598.73 (optimality gap of 0.2%) is obtained by performing 12 iterations via simulation.
  • in the second exemplary scenario depicted in FIG. 4C, the tunnel preferences include:
  • Tunnel 0: 3 (high priority)
  • a value of 1161.11 (optimality gap of 0.7%) is obtained by performing 203 iterations via simulation, and a value of 1161.11 (optimality gap of 3.1%) is obtained by performing 20 iterations via simulation.
  • the iterations can be executed at the rate at which the link states can be received (e.g., 200 milliseconds - 1 second).
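As a rough, hedged sanity check of the scenario above (assuming, which is not stated explicitly, that the 180-unit capacity and the 90% target MLU apply to each of the two head-end links):

```latex
% Hedged arithmetic only; the per-link reading of the capacity and MLU target
% is an assumption about the topology.
180 \times 0.9 = 162 \ \text{usable per link}, \qquad
2 \times 162 = 324 \;\ge\; 3 \times 100 = 300 \ \text{(total demand)}
```

so under this assumption the three tunnels can be fully served by splitting them over the Internet and MPLS paths.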
  • With reference to FIGs. 5A-5F, there are shown exemplary graphical results obtained by implementing the present technology in an internet protocol radio access network (IPRAN), in accordance with an embodiment of the present disclosure.
  • IPRAN internet protocol radio access network
  • FIGs. 5A-5F are described in conjunction with elements from FIGs. 1A, 1B, 2, and 3.
  • the graphical results illustrated in FIGs. 5A-5F correspond to an IPRAN network with 543 links, 477 nodes, 500 tunnels, and an average demand of 147.6 megabytes.
  • FIG. 5A is a graphical representation that illustrates a total traffic volume versus sub-gradient iterations curve 500A obtained by implementing the device of the present technology in an exemplary IPRAN network when a number of tunnels using each link (n_e) is used for computation of a data rate.
  • FIG. 5A is described in conjunction with elements from FIGs. 1A to 3.
  • the X-axis 502A represents sub-gradient iterations and the Y-axis 504A represents the total traffic volume of the IPRAN network.
  • FIG. 5B is a graphical representation that illustrates a total number of paths versus sub-gradient iterations curve 500B obtained by implementing the device of the present technology in an exemplary IPRAN network, when the number of tunnels using each link (n_e) is used for computation of the data rate.
  • the X-axis 502B represents sub-gradient iterations and the Y-axis 504B represents the total number of paths of the IPRAN network, for the number of paths modified by more than 0.1 megabytes.
  • FIG. 5C is a graphical representation that illustrates a total traffic volume versus sub-gradient iterations curve 500C obtained in an exemplary IPRAN network when a number of tunnels using each link (n_e) is not used for computation of the data rate.
  • the X-axis 502C represents sub-gradient iterations and the Y-axis 504C represents the total traffic volume of the IPRAN network.
  • FIG. 5D is a graphical representation that illustrates a total number of paths versus sub-gradient iterations curve 500D corresponding to an exemplary IPRAN network, when the number of tunnels using each link (n_e) is not used for computation of the data rate.
  • the X-axis 502D represents sub-gradient iterations and the Y-axis 504D represents the total number of paths of the IPRAN network, for the number of paths modified by more than 0.1 megabytes.
  • FIG. 5E is a graphical representation that illustrates a first objective value versus sub-gradient iterations curve 502E obtained for a Lagrangian bound and a second objective value versus sub-gradient iterations curve 504E obtained for a feasible solution, by implementing the device 106A of the present technology in an exemplary IPRAN network when a number of tunnels using each link (n_e) is used for computation of a data rate.
  • the X-axis 502E represents sub-gradient iterations and the Y-axis 504E represents the objective values, for the Lagrangian bound and for the feasible solution.
  • FIG. 5F is a graphical representation that illustrates a first objective value versus sub-gradient iterations curve 502F obtained for a Lagrangian bound and a second objective value versus sub-gradient iterations curve 504F obtained for a feasible solution, by implementing the device of the present technology in an exemplary IPRAN network when a number of tunnels using each link (n_e) is not used for computation of a data rate.
  • the X-axis 502F represents sub-gradient iterations and the Y-axis 504F represents the objective values, for the Lagrangian bound and for the feasible solution.
  • assume that each scalar (sent as a type length value (TLV)) consumes 32 bits, that the duration of each iteration is 200 milliseconds (ms), that V_s is the set of source nodes, and that the aggregated utilities are sent through a minimum spanning tree.
  • TLV type length value
  • the data sent by each source at each iteration is given by: aggregated utility U(s), aggregating the U_k(x_p) values: 32 bits; aggregated utility θ(s), aggregating the θ_k values: 32 bits.
  • the data received by each source device (i.e., the device 106A) at each iteration is given by: a) aggregated utilities: 64 × (|V_s| − 1) bits.
  • in one exemplary scenario the overhead for the link states is 0.01728 megabytes/second, in another the link states amount to 0.017376 megabytes per second, and in a third the link state overhead is 0.000896 Mb/second. It can be observed from these values that the overhead for the aggregated utilities and the link states is very low (a small helper reproducing this arithmetic is given below).
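A small, hedged helper reproducing the kind of overhead arithmetic sketched above (two 32-bit aggregated-utility scalars sent per source per 200 ms iteration, and 64 bits received from each of the other |V_s| − 1 sources); the link-state figures quoted above depend on topology details not reproduced here:

```python
# Hedged sketch of the aggregated-utility signalling overhead described above;
# assumes two 32-bit TLV scalars per source per iteration and a 200 ms period.

def aggregated_utility_overhead(num_sources: int, iteration_ms: float = 200.0):
    """Return (sent, received) overhead in megabytes per second per source."""
    bits_sent = 2 * 32                        # U(s) and theta(s): 32 bits each
    bits_received = 64 * (num_sources - 1)    # 64 bits from each other source
    iters_per_second = 1000.0 / iteration_ms
    to_mbytes = lambda bits: bits * iters_per_second / 8 / 1e6
    return to_mbytes(bits_sent), to_mbytes(bits_received)

# Example with 10 source edge devices: roughly 0.00004 MB/s sent and
# 0.00036 MB/s received per source, i.e. a very low overhead.
print(aggregated_utility_overhead(10))
```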
  • FIG. 6 is a flowchart of a method 600 for controlling, by a device, a data rate associated with one or more tunnels that use one or more paths between the device and a destination edge device among the plurality of edge devices, in accordance with an embodiment of the present disclosure.
  • FIG. 6 is described in conjunction with elements from FIGs. 1A, 1B, 2, 3, 4A-4C, and 5A-5F.
  • the present disclosure provides the method 600 for operating a device 106A as one of a plurality of edge devices 106A-N of a computer network 102, the computer network 102 comprising the plurality of edge devices 106A-N and one or more intermediary nodes 104A-N, the one or more intermediary nodes 104A-N being configured to communicatively couple the edge devices 106A-N to each other, wherein the method 600 comprises controlling, by the device 106A, a data rate associated with one or more tunnels that use one or more paths between the device 106A and a destination edge device among the plurality of edge devices 106A-N.
  • the method 600 is executed at the device 106A described in detail, for example, in FIGs. 1A, 2, and 3.
  • the method 600 includes steps 602, 604, 606, and 608.
  • the method 600 comprises receiving by the device 106A a link state, a number of tunnels using each link, and/or aggregated utilities of other edge devices (agents).
  • the device 106A also receives a traffic demand based on a local traffic monitoring by the data monitoring module 114 of the device 106A.
  • the device 106A also receives user entered data.
  • the user entered data includes at least one of tunnel priority, a path preference, and/or a desired maximum link utilization (MLU) between 0 and 1.
  • MLU maximum link utilization
  • the user entered data may be global data or may be associated with each individual link. It is advantageous to determine the data rate based on user entered data to customize the load balancing and rate control based on user preferences and priorities.
  • the device 106A receives, the aggregated utility information, the link states (e.g., generated by the intermediary nodes) and a traffic load associated with the one or more tunnels on the computer network 102 from one or more intermediary nodes 104A-N.
  • the link states include a number of tunnels of the one or more tunnels that utilize a particular link.
  • an initial transmission of the link states further includes a link capacity information associated with the particular link.
  • the link capacity information can be retrieved from a management system.
  • the link capacity is shared as the link states.
  • the device 106A smoothes the received and measured values using a moving average.
  • the link states include a load information associated with network links, where a load of a particular link is induced by all tunnels of the one or more tunnels that utilize the particular link and background traffic.
  • the link states include the link load, the link capacity, and the number of tunnels on the links, while the parameters for the outgoing tunnels include a tunnel priority, preferences for the paths/technologies (i.e. where paths are provided) or service level agreement (SLA) requirements (which can be used to automatically tune preferences based on measurements), and a desired maximum link utilization (MLU).
  • the device 106A captures a number of tunnels on links from the data plane and periodically propagates link states including, for example, a link load and/or a number of tunnels using the links.
  • the device 106A propagates the link states using a routing protocol such as, for example, open shortest path first (OSPF), intermediate system to intermediate system (IS-IS, also written ISIS), or border gateway protocol (BGP).
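  • The link-state fields described above can be pictured with the following minimal sketch; the record layout and names are illustrative, and an actual deployment could carry this information in extensions (e.g., opaque TLVs) of the chosen routing protocol.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LinkState:
    """Link-state record periodically propagated by an intermediary node (illustrative)."""
    link_id: str
    load_bps: float                        # load induced by all tunnels using the link plus background traffic
    num_tunnels: int                       # number of tunnels currently using the link (n e)
    capacity_bps: Optional[float] = None   # included only in the initial transmission

initial = LinkState("link-3", load_bps=4.2e8, num_tunnels=12, capacity_bps=1e9)
update = LinkState("link-3", load_bps=4.6e8, num_tunnels=13)   # subsequent updates omit the capacity
```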
  • the method 600 further comprises updating one or more Lagrangian multipliers using the aggregated utilities, based on the link state and the number of tunnels using each link.
  • the device 106A computes a feasible rate allocation and a Lagrangian bound (or Lagrangian multipliers) based on the aggregated utilities.
  • the Lagrangian bound includes an upper bound and/or a lower bound.
  • the device 106A updates the Lagrangian multipliers using the upper bounds and/or the lower bounds and the link states.
  • the updating of the Lagrangian multipliers and of the feasible rate allocation has been described in detail, for example, in FIGs. 1B and 2 and is hence omitted here for the sake of brevity.
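  • The disclosure does not spell out the multiplier update formula at this point; the following is a minimal sketch of a standard projected sub-gradient update of per-link Lagrangian multipliers, assuming the link load, the link capacity, and a step size are available.

```python
def update_multipliers(multipliers, link_load, link_capacity, step_size):
    """Projected sub-gradient update of the per-link Lagrangian multipliers.

    multipliers, link_load, link_capacity: dicts keyed by link id.
    A multiplier grows when its link is overloaded and shrinks (down to zero)
    when spare capacity remains.
    """
    updated = {}
    for link, lam in multipliers.items():
        violation = link_load[link] - link_capacity[link]   # sub-gradient of the relaxed capacity constraint
        updated[link] = max(0.0, lam + step_size * violation)
    return updated
```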
  • the method 600 further comprises iteratively computing the rate allocation based on the updated one or more Lagrangian multipliers, the user entered data and/or the aggregated utility information to achieve an optimal convergence.
  • the Lagrangian subproblem is solved to compute the rate allocation.
  • a modified Lagrangian sub-problem is solved to compute the rate allocation. The device 106A maximizes network utility associated with each of the one or more tunnels by computing the rate allocation iteratively.
  • a sum of (tunnel priority multiplied by a path preference that is multiplied by a data rate) for each of the one or more tunnels that uses one or more paths is determined by the device 106A so as to maximize a network utility associated with each of the one or more tunnels. Maximizing the network utility on this basis optimizes the use of the available resources, maximizes throughput, minimizes response time, and avoids overload of any single resource in the network, as illustrated by the formulation below.
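  • In illustrative notation (symbols chosen here for clarity, not quoted from the disclosure), the maximized quantity and the link-capacity constraint can be written as:

```latex
% w_k: priority of tunnel k; \pi_{k,p}: preference of path p for tunnel k;
% x_{k,p}: data rate of tunnel k on path p; c_e: capacity of link e;
% \theta: desired maximum link utilization (MLU) between 0 and 1.
\max_{x \ge 0} \;\; \sum_{k \in K} w_k \sum_{p \in P_k} \pi_{k,p}\, x_{k,p}
\qquad \text{subject to} \qquad
\sum_{k \in K} \; \sum_{\substack{p \in P_k \\ e \in p}} x_{k,p} \;\le\; \theta\, c_e
\quad \forall e \in E.
```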
  • the aggregated utility information is transmitted by the device 106A to one or more other edge devices from among the plurality of edge devices (source edge devices) 106A-N.
  • the device 106A uses a multicast tree to transmit the aggregated utilities to the other edge devices 106A-N.
  • the iterative computing of rate allocation has been described in detail, for example, in FIGs. 1B and 2 and hence omitted here for the sake of brevity.
  • the iterative computation of rate allocation maximizes network utility and controls load balancing (e.g., Maximum Link Utilization) while preserving feasibility at each step.
  • the rate allocation is computed using an algorithm based on a sub-gradient method that converges to optimality and provides anytime feasibility (feasible bandwidth allocations at each iteration).
  • a variant of the algorithm that can use extra link-state information, such as the number of tunnels sharing each link, is used to compute the rate allocation; this variant has been described in detail, for example, in FIGs. 1B and 2 and is hence omitted here for the sake of brevity.
  • the iteration can be executed after a minimum number of new data is received or after a maximum idle time.
  • the iterative computation of rate allocation converges to an optimal solution and provides anytime feasibility (feasible bandwidth allocations at each iteration). Additionally, since only the source nodes (e.g., the plurality of edge devices 106A-N) decide rate allocations for their tunnels iteratively, only the source nodes are aware of the utility function of their tunnels and no other external controllers are required. Moreover, the lightweight and optional participation of the intermediary nodes 104A-N, which only measure, and share as link states, the number of tunnels using the links, accelerates convergence. A sketch of one such iteration follows.
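  • The sketch below shows what one iteration at a source edge device could look like, assuming linear utilities, a single global down-scaling step to preserve feasibility, and illustrative data structures; it is a simplified stand-in for the algorithm of the disclosure, and the variant that exploits the number of tunnels per link is omitted.

```python
RATE_CAP = 1.0e8  # hypothetical per-path rate ceiling (bits/s) used in the sub-problem

def run_iteration(tunnels, multipliers, link_capacity, step_size):
    """One illustrative iteration of distributed rate allocation at a source edge device.

    tunnels: list of dicts with keys 'priority' and 'paths', where 'paths'
    maps a path id to a (preference, list_of_link_ids) tuple.
    Returns (feasible per-(tunnel, path) rates, updated multipliers).
    """
    # 1) Lagrangian sub-problem: with linear utilities each path gets either the
    #    cap or nothing, depending on its utility minus the congestion price.
    raw = {}
    for t_idx, tunnel in enumerate(tunnels):
        for path_id, (pref, links) in tunnel["paths"].items():
            price = sum(multipliers[l] for l in links)
            raw[(t_idx, path_id)] = RATE_CAP if tunnel["priority"] * pref > price else 0.0

    # 2) Anytime feasibility: scale all rates down so that no link exceeds its capacity.
    load = {l: 0.0 for l in link_capacity}
    for (t_idx, path_id), rate in raw.items():
        for l in tunnels[t_idx]["paths"][path_id][1]:
            load[l] += rate
    scale = min([link_capacity[l] / load[l] for l in load if load[l] > link_capacity[l]] + [1.0])
    feasible = {key: rate * scale for key, rate in raw.items()}

    # 3) Dual update: raise the multiplier of overloaded links, lower it otherwise.
    new_multipliers = {
        l: max(0.0, multipliers[l] + step_size * (load[l] - link_capacity[l]))
        for l in link_capacity
    }
    return feasible, new_multipliers
```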
  • the method 600 further comprises controlling by the device 106A a data rate associated with one or more tunnels that use one or more paths between the device 106A and a destination edge device from among the plurality of edge devices 106A-N, based on the rate allocation.
  • the data rate is controlled based on the user entered data that includes at least one of tunnel priority, a path preference, and a desired maximum link utilization (MLU) between 0 and 1.
  • the device 106A enforces the rate allocation for each outgoing tunnel from the device, thereby preserving feasibility.
  • the device 106A enforces rate allocation in the data plane.
  • the device 106A executes an iteration of the rate control algorithm knowing the utility functions of the outgoing tunnels (only), enforces the rate allocation over multiple paths for each outgoing tunnel (preserving feasibility) and communicates information such as aggregated utilities for the tunnels managed by the device 106A to other devices 106A-N in the computer network 102.
  • the device 106A uses different flavors of fairness in the rate allocation (e.g., maximum-minimum, proportional, alpha-fairness and the like); one common formulation is sketched below.
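  • As a textbook illustration (not a formula quoted from the disclosure), the alpha-fair utility family covers several of these flavors: alpha = 1 yields proportional fairness and large alpha approaches max-min fairness.

```python
import math

def alpha_fair_utility(rate: float, alpha: float) -> float:
    """Standard alpha-fair utility of a tunnel's rate (rate > 0).

    alpha = 0 gives throughput maximization, alpha = 1 proportional fairness,
    and alpha -> infinity approaches max-min fairness.
    """
    if alpha == 1.0:
        return math.log(rate)
    return rate ** (1.0 - alpha) / (1.0 - alpha)
```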
  • the steps 602 to 608 are repeated periodically to achieve convergence.
  • the method 600 of the disclosure provides an efficient technique of load balancing and rate control in IP networks (such as computer network 102) that yields a minimum overhead (i.e. no or limited participation of core nodes).
  • the method 600 of the present disclosure converges to an optimal solution and provides anytime feasibility (e.g., feasible bandwidth allocations at each iteration).
  • the method 600 determines a rate allocation for the one or more tunnels iteratively. Since only the device (source node) 106A is aware of the utility function of the one or more tunnels associated with the device 106A, a centralized controller is not required. The precise knowledge of the number of tunnels sharing a link speeds up convergence.
  • the technique provides a scalable solution. Additionally, the method 600 involves the edge devices 106A-N collaborating through a small amount of information, thereby achieving a low overhead. The method 600 of the present disclosure also facilitates convergence to the optimal solution with anytime feasibility. Additionally, the method 600 of the present disclosure can add new paths (for existing and new tunnels), can remove non-used paths, can modify the path preferences and priority of tunnels, and knows the utility function. Additionally, owing to the utility function being only known locally at the device (source node) 106A, the utility function can be tuned and/or learnt over time.
  • since only the device (source node) 106A performs the load balancing and rate control, other intermediary nodes do not need to be updated about changes in traffic demand. Moreover, there is no failure risk of a central node in the present technology. Furthermore, the information associated with the device 106A operating as the edge device 106N is not shared on the computer network 102.
  • the data rate associated with one or more tunnels is controlled based on information received from the one or more intermediary nodes 104A-N.
  • the data rate is periodically revised based on a continuous improvement of solutions associated with inputs for determining a steady state of the data rate.
  • the method 600 comprises transmitting aggregated utility information to the plurality of source edge devices (e.g. plurality of edge devices 106A-N).
  • the method 600 comprises enforcing the rate allocation over the one or more paths managed by the device 106A.
  • the method 600 comprises receiving user entered data at each of the plurality of source edge devices 106A-N, wherein the data rate is further based on the user entered data.
  • the information received via the one or more intermediary nodes 104A-N comprises aggregated utility information, link states generated by the intermediary nodes 104A-N and a traffic load associated with the one or more tunnels on the computer network 102.
  • the link states include load information associated with network links and wherein a load of a particular link is induced by all tunnels of the one or more tunnels that utilize the particular link and background traffic.
  • the link states further include a number of tunnels of the one or more tunnels that utilize the particular link.
  • An initial transmission of the link state further includes capacity information associated with the particular link.
  • the data rate is further based on user entered data that includes at least one of tunnel priority, a path preference, and a desired maximum link utilization (MLU) between 0 and 1.
  • the method 600 comprises receiving aggregated utility information from one or more source edge devices 106A-N, wherein the data rate is based on a computation of step size using a Polyak function and the step size is based on the received aggregated utility information.
  • the data rate is based on a computation of a step size, where the step size is based on a number of iterations of a computation of the rate allocation.
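  • The two step-size choices mentioned in the preceding items can be sketched as follows; the function signatures and the scaling parameter are assumptions, with the Polyak-style rule driven by the gap between an upper bound and a lower bound on the optimum.

```python
def polyak_step(upper_bound: float, lower_bound: float,
                subgradient_sq_norm: float, scaling: float = 1.0) -> float:
    """Polyak-style step size driven by the current optimality gap."""
    if subgradient_sq_norm == 0.0:
        return 0.0
    return scaling * (upper_bound - lower_bound) / subgradient_sq_norm

def diminishing_step(iteration: int, base: float = 1.0) -> float:
    """Step size that depends only on the number of iterations performed."""
    return base / (iteration + 1)
```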
  • the device 106A maximizes network utility associated with each of the one or more tunnels.
  • the network utility associated with each of the one or more tunnels is maximized based on determining a sum of (tunnel priority multiplied by a path preference that is multiplied by a data rate) for each of the one or more tunnels that use one or more paths.
  • Various embodiments, operations, and variants disclosed in the device 106A of FIGs. 1A, 1B, 2, and 3 apply mutatis mutandis to the method 600.
  • Various embodiments of the present technology can be implemented in various applications requiring a distributed traffic control via edge devices of a computer network in a scalable and efficient manner. For example, in critical applications with service level agreements (SLA), each vEdge router continuously monitors path performance, adjusts forwarding, and requires a configurable probing interval. Additionally, several app-aware routing policies may require an application path to have, for example, a latency of less than 150 milliseconds (ms), a loss of less than 2%, and a jitter of less than 10 ms, as checked in the sketch below.
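  • Using the example thresholds quoted above, an app-aware policy check might look like the following sketch; the function and measurement names are illustrative.

```python
def path_meets_sla(latency_ms: float, loss_pct: float, jitter_ms: float) -> bool:
    """Check a measured path against the example application-aware SLA policy."""
    return latency_ms < 150.0 and loss_pct < 2.0 and jitter_ms < 10.0

# Example probe result for one path (hypothetical numbers)
print(path_meets_sla(latency_ms=92.0, loss_pct=0.4, jitter_ms=3.5))  # True
```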
  • the distributed load balancing and rate control at edge devices 106A-N as disclosed in the method 600 and the device 106A of the present technology is applicable in such scenarios.
  • the method 600 and device 106A of the present technology are also applicable in a dynamic circuit network (DCN) to implement a distributed solution.
  • several internal solutions from Google® (e.g., bandwidth enforcer (BwE)), Microsoft (Swan) and Facebook® are centralized but there are use cases where no centralized coordination is available and the method and device of the present technology can be used in such scenarios for distributed traffic engineering based on load balancing and rate control at one or more edge devices.
  • the embodiments described herein can include both hardware and software elements.
  • the embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc.
  • the embodiments herein can take the form of a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system.
  • a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
  • the medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device) or a propagation medium.
  • Examples of a computer-readable medium include a semiconductor or solid-state memory, magnetic tape, a removable computer diskette, a random-access memory (RAM), a read-only memory (ROM), a rigid magnetic disk and an optical disk.
  • Current examples of optical disks include compact disk - read only memory (CD-ROM), compact disk - read/write (CD-R/W) and DVD.
  • a data processing system suitable for storing and/or executing program code will include at least one processor coupled directly or indirectly to memory elements through a system bus.
  • the memory elements can include local memory employed during actual execution of the program code, bulk storage, Subscriber Identity Module (SIM) card, and cache memories which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution.
  • Input/output (I/O) devices, including but not limited to keyboards, displays, pointing devices, remote controls, cameras, microphones, temperature sensors, accelerometers, gyroscopes, etc., can be coupled to the system either directly or through intervening I/O controllers.
  • Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks.
  • Modems, cable modem and Ethernet cards are just a few of the currently available types of network adapters.
  • the system, method, computer program product, and propagated signal described in this application may, of course, be embodied in hardware; e.g., within or coupled to a Central Processing Unit (“CPU”), microprocessor, microcontroller, System on Chip (“SOC”), or any other programmable device.
  • the system, method, computer program product, and propagated signal may be embodied in software (e.g., computer readable code, program code, instructions and/or data disposed in any form, such as source, object or machine language) disposed, for example, in a computer usable (e.g., readable) medium configured to store the software.
  • Such software enables the function, fabrication, modeling, simulation, description and/or testing of the apparatus and processes described herein.
  • Such software can be disposed in any known computer usable medium including semiconductor, magnetic disk, optical disc (e.g., CD-ROM, DVD-ROM, and the like) and as a computer data signal embodied in a computer usable (e.g., readable) transmission medium (e.g., carrier wave or any other medium including digital, optical, or analog-based medium).
  • the software can be transmitted over communication networks including the Internet and intranets.
  • a system, method, computer program product, and propagated signal embodied in software may be included in a semiconductor intellectual property core (e.g., embodied in HDL) and transformed to hardware in the production of integrated circuits.
  • a system, method, computer program product, and propagated signal as described herein may be embodied as a combination of hardware and software.
  • a "computer-readable medium” for purposes of embodiments of the present invention may be any medium that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, system or device.
  • the computer readable medium can be, by way of example only but not by limitation, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, system, device, propagation medium, or computer memory.
  • a “processor” or “process” includes any human, hardware and/or software system, mechanism or component that processes data, signals or other information.
  • a processor can include a system with a general-purpose central processing unit, multiple processing units, dedicated circuitry for achieving functionality, or other systems. Processing need not be limited to a geographic location, or have temporal limitations. For example, a processor can perform its functions in "real time,” “offline,” in a “batch mode,” etc. Portions of processing can be performed at different times and at different locations, by different (or the same) processing systems.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

A device operable as one of a plurality of edge devices of a computer network is disclosed, wherein the computer network comprises the plurality of edge devices and one or more intermediary nodes, and wherein the one or more intermediary nodes are configured to communicatively couple the edge devices to each other. The device is configured to control one or more tunnels and corresponding one or more data rates that use one or more paths between the device and a destination edge device among the plurality of edge devices.

Description

DISTRIBUTED TRAFFIC ENGINEERING AT EDGE DEVICES IN A COMPUTER
NETWORK
TECHNICAL FIELD
The disclosure relates generally to traffic management in a computer network; and more specifically to an edge device that is configured to control one or more data rates in a computer network, for example a plurality of data rates of a corresponding plurality of tunnels over multiple paths. Moreover, the disclosure relates to a method for operating a device as an edge device of a computer network for controlling the one or more data rates in the computer network.
BACKGROUND
Generally, in computer networks (e.g., internet protocol (IP) networks) traffic engineering plays a crucial role in improving a network utilization. Typically, splitting the network traffic across multiple paths facilitates a better use of a given network capacity and rate control is key to maximize network utility and fairness in case of congestion. Typically, load balancing in IP networks is implemented inside network devices, such as switches or routers using two techniques including: 1) a hash-based splitting technique, where a hash is calculated over significant fields of packet headers and used to select the outgoing paths in an uniform manner or an uneven manner, and 2) a weighted cost multi pathing (WCMP) technique, where load balancing weights are used to make decisions. Typically, both the above-mentioned techniques are used to ensure that the network traffic on each of the outgoing paths meets a certain target ratio and once a decision is taken for a flow, all the packets from the flow follow the same decision (e.g., a same path).
Currently, rate control in IP networks is achieved using traffic schedulers in a traffic control layer to ensure a minimum amount of bandwidth per flow or per flow aggregate. Such functionality is crucial to satisfy quality of service (QoS) requirements and allocate resources to heterogeneous traffic classes (e.g., real-time critical, elastic critical, elastic non-critical) that share same given bandwidth resource. Typically, load balancing and rate control are realized using a centralized controller, e.g., software-defined networking (SDN) controllers or path computation elements (PCE). The centralized controller leverages on a global view of the computer network to decide whether or not it is necessary to split the traffic flows and to determine a most efficient way to split the traffic flows, given the statistics on network load and traffic flows. However, the technique of rate control and load balancing using the centralized controller suffers from scalability and resilience issues and a centralized controller is sometimes simply not available in some fully-distributed network deployments. Moreover, the centralized controller is also not desirable in some scenarios such as for example, in a software defined wide area network (SD-WAN network), where a management entity deployed at a headquarter (i.e. a specific location) is only used for management, and not for real-time traffic engineering.
Another technique currently used for load balancing and rate control in IP networks includes a distributed bandwidth reservation/rate allocation protocols, in which a bandwidth allocation request is sent from a source node and processed by one or more core switches and one or more intermediary nodes or the core switches are configured to take rate allocation decisions using a specific fairness objective. However, this technique does not involve use of utility functions. Moreover, similar techniques are designed for asynchronous transfer mode (ATM) networks to allocate bandwidth for available bit rate (ABR) traffic. However, these techniques involve an active participation of the core nodes which add complexity to the computer network and also lead to a poor resource allocation in the computer network. Furthermore, these techniques may converge to a near optimal solution in specific cases (such as, a single-path routing and a maximum-minimum fairness) alone and lead to scalability issues as intermediary nodes are involved.
In yet another technique currently used for load balancing and rate control in IP networks includes distributed hop-by-hop multi-path routing in which each node decides split ratios for its outgoing links. Routing is then decided in a hop-by-hop fashion to minimize MLU (it can actually minimize any convex function). However, this technique actively involves the use of intermediary nodes in the decision- making process and thus leads to scalability and resilience issues. Additionally, this technique may only optimize routing. Moreover, several other currently known techniques of load balancing and rate control are based on user-defined rules for path selection and involves performance routing with user defined thresholds (e.g., delay and packet loss) and do not involve any proactive rate control and traffic congestion mitigation techniques. Thus, there exists a technical problem of performing load balancing and rate control in IP networks in a fully distributed manner without any centralized controller so as to achieve scalability and resilience; moreover, there is a need for an efficient technique of load balancing and rate control in IP networks that yields a minimum overhead (i.e. no or limited participation of core nodes) and guarantees convergence to optimality and anytime feasibility (i.e. no capacity constraint violations at intermediate iterations).
Therefore, in light of the foregoing discussion, there exists a need to overcome the aforementioned drawbacks associated with the conventional load balancing and rate control techniques used to manage IP networks.
SUMMARY
The disclosure seeks to provide a device that is operable as one of a plurality of edge devices of a computer network, and a method of (namely, a method for) operating the device as one of the plurality of edge devices of the computer network. The present disclosure seeks to provide a solution to the existing problem of load balancing and rate control in an IP network in a fully distributed manner without a centralized controller. An aim of the present disclosure is to provide a solution that overcomes at least partially the problems encountered in prior art, and to provide an efficient technique for load balancing and rate allocation using the edge devices alone. In the present disclosure, each edge device (or source node) manages a set of tunnels towards several destinations, such that each tunnel can be routed over multiple paths and the devices (or source nodes) decide how much bandwidth each tunnel should take over each path (e.g., load balancing and rate allocation decision).
The object of the disclosure is achieved by the solutions provided in the enclosed independent claims. Advantageous implementations of the present disclosure are further defined in the dependent claims.
In one aspect, the disclosure provides a device that is operable as one of a plurality of edge devices of a computer network, the computer network comprising the plurality of edge devices and one or more intermediary nodes, the one or more intermediary nodes being configured to communicatively couple the edge devices to each other. The device is configured to control one or more tunnels and corresponding one or more data rates that use one or more paths between the device and a destination edge device among the plurality of edge devices.
The device of the present disclosure provides an efficient technique of load balancing and rate control in IP networks that yields a minimum overhead (i.e. no or limited participation of core nodes). The device of the disclosure converges to an optimal solution and provides anytime feasibility (e.g., feasible bandwidth allocations at each iteration). The device decides a rate allocation for the one or more tunnels iteratively. As only the device (source node) is aware of the utility function of the one or more tunnels associated with the device, a centralized controller is not required. The precise knowledge of the number of tunnels sharing a link provides a speedup of convergence. Since, the load balancing and rate control decisions are taken locally at the device operating as one of the edge devices, the technique provides a scalable solution. Additionally, the edge devices collaborate through a small amount of information exchange occurring therebetween, thereby achieving a low data overhead. The device of the disclosure also facilitates a convergence to the optimal solution with anytime feasibility. Additionally, the device of the disclosure can add new paths (for existing and new tunnels), can remove non-used path, can modify the path preferences and priority of tunnels, and is able to know a utility function. Additionally, owing to the utility function being only known locally at the device (source node), the utility function can be tuned and/or learnt over time. Additionally, since only the device (source node) performs the load balancing and rate control, other intermediary nodes do not need to be updated about changes in traffic demand. Moreover, there is no failure risk associated with operation of a central node in the technology of the disclosure. Furthermore, the information associated with the device operating as the edge device is not shared on the network.
In an implementation form, the device is configured to control the one or more tunnels and corresponding one or more data rates based on information received from the one or more intermediary nodes.
The control of the one or more tunnels and corresponding one or more data rates based on information received from the one or more intermediary nodes enables there to be received lightweight information from the one or more intermediary nodes which helps the edge devices to converge faster to achieve an optimal solution.
In a further implementation form, the device is configured to transmit aggregated utility information to one or more of the plurality of source edge devices.
By virtue of the aggregated utility information that is transmitted to one or more of the plurality of source edge devices, the one or more tunnels and corresponding one or more data rates are controlled, iteratively based on the aggregated utility information. In a further implementation form, the device is configured to repeatedly determine the rate allocation to achieve an optimal convergence.
The repeated determination of the rate allocation to achieve an optimal convergence enables to maximize a network utility and control load balancing (e.g. Maximum Link Utilization) while preserving a feasibility during each step/iteration.
In a further implementation form, the device is configured to receive aggregated utility information from the one or more of the source edge devices, and wherein each rate allocation update is based on the received aggregated utility information.
By virtue of the aggregated utility information that is received from the one or more of the source edge devices, the device is configured to iteratively determine the rate allocation based on the aggregated utility information to maximize network utility and control load balancing (e.g., Maximum Link Utilization).
In a further implementation form, the device is configured to compute a feasible rate allocation and a Lagrangian bound based on the aggregated utility information.
The computation of the feasible rate allocation based on the aggregated utility information enables iterative determining of the rate allocation based on the aggregated utility information to maximize network utility and control load balancing (e.g. Maximum Link Utilization). It is advantageous to compute a Lagrangian bound based on the aggregated utility information as the lagrangian bound allows an optimization to be solved without explicit parameterization in terms of any constraints.
In a further implementation form, the information received via the one or more intermediary nodes comprises aggregated utility information, link states generated by the intermediary nodes and a traffic load associated with one or more tunnels that originate from the intermediary node on the computer network.
By virtue of the information received comprising aggregated utility information, link states generated by the intermediary nodes and a traffic load from one or more intermediary nodes at the device, the lightweight and optional participation of intermediary nodes accelerates the convergence. Additionally, the aggregated utility information and link states are used to iteratively determine the rate allocation to maximize network utility and control load balancing (e.g., Maximum Link Utilization). In a further implementation form, the link states include load information associated with network links and wherein a load of a particular link is induced by all paths associated with the one or more tunnels that utilize the particular link and background traffic.
The information associated with the link states that includes load information associated with network links enables the precise knowledge of the number of tunnels sharing a link to speedup convergence.
In a further implementation form, the link states further include a number of tunnels of the one or more tunnels that utilize the particular link.
It is advantageous to use information associated with the link states that further includes a number of tunnels that utilize a particular link as the precise knowledge of the number of tunnels utilizing a particular link speeds-up convergence.
In a further implementation form, an initial transmission of the link state further includes capacity information associated with the particular link.
The capacity information facilitates maintaining a capacity constraint for links during the iterative process of determining the rate allocation and load balancing and thereby enables convergence to optimality and anytime feasibility (e.g., with no capacity constraint violations at intermediate iterations).
In a further implementation form, the data rate is further based on user entered data that includes at least one of tunnel priority, a path preference, and a desired maximum link utilization (MLU) between 0 and 1.
The one or more data rates that is determined based on user entered data enables to customize the load balancing and rate control based on user preferences and priorities.
In a further implementation form, the device is configured to maximize network utility associated with each of the one or more tunnels.
The maximizing of network utility associated with each of the one or more tunnels enables optimizing the use of resources available, maximizing throughput, minimizing response time, and avoiding overload of any single resource in the network. In a further implementation form, the device is configured to maximize network utility associated with each of the one or more tunnels based on determining a sum of (tunnel priority multiplied by a path preference that is multiplied by a data rate) for each of the one or more tunnels that use one or more paths.
The maximizing of network utility associated with each of the one or more tunnels based on determining a sum of (tunnel priority multiplied by a path preference that is multiplied by a data rate) for each of the one or more tunnels that use one or more paths enables to optimize the use of resources available, maximizing throughput, minimizing response time, and avoiding overload of any single resource in the network.
In another aspect, the present disclosure provides a method of (namely, a method for) operating a device as one of a plurality of edge devices of a computer network, the computer network comprising the plurality of edge devices and one or more intermediary nodes, the one or more intermediary nodes being configured to communicatively couple the edge devices to each other. The method comprises controlling, by the device, one or more tunnels and corresponding one or more data rates that use one or more paths between the device and a destination edge device among the plurality of edge devices.
The method of the present disclosure provides an efficient technique of load balancing and rate control in IP networks that yields a minimum overhead (i.e. no or limited participation of core nodes). The method of the present disclosure converges to an optimal solution and provides anytime feasibility (e.g., feasible bandwidth allocations at each iteration). The method determines a rate allocation for the one or more tunnels iteratively. Since only the device (source node) is aware of the utility function of the one or more tunnels associated with the device, a centralized controller is not required. The precise knowledge of the number of tunnels sharing a link speeds-up convergence. Since, the load balancing and rate control decisions are taken locally at the device operating as one of the edge devices, the technique provides a scalable solution. Additionally, the method involves the edge devices collaborating through a small amount of information thereby achieving a low overhead. The method of the present disclosure also facilitates a convergence to the optimal solution with anytime feasibility. Additionally, method of the present disclosure can add new paths (for existing and new tunnels), can remove non-used path, modify the path preferences and priority of tunnels, and are able to know the utility function. Additionally, owing to the utility function being only known locally at the device (source node), the utility function can be tuned and/or learnt over time. Additionally, since only the device (source node) performs the load balancing and rate control other intermediary nodes do not need to be updated about changes in traffic demand. Moreover, there is no failure risk of the central node in the present technology. Furthermore, the information associated with the device operating as the edge device is not shared on the computer network.
In a further implementation form the device is configured to control the one or more tunnels and corresponding one or more data rates based on information received from the one or more intermediary nodes.
The control of the one or more tunnels and corresponding one or more data rates based on information received from the one or more intermediary nodes enables to receive lightweight information which helps the edge devices to converge faster to achieve an optimal solution.
In a further implementation form, the data rate is periodically revised based on a continuous improvement of solutions associated with inputs for determining a steady state of the one or more data rates.
By virtue of periodically revising the one or more data rates based on a continuous improvement of solutions associated with inputs for determining a steady state of the data rate, a convergence to the optimal solution with anytime feasibility is enabled.
In a further implementation form, the aggregated utility information is transmitted to the plurality of source edge devices.
The aggregated utility information that is transmitted to one or more of the plurality of source edge devices from the device can be used to control a data rate associated with one or more tunnels iteratively based on the aggregated utility information.
In a further implementation form, the rate allocation is enforced over the one or more paths managed by the device.
The rate allocation is enforced over the one or more paths managed by the device for optimizing the use of resources available, maximizing throughput, minimizing response time, and avoiding overload of any single resource in the network. In a further implementation form, user entered data is received at each of the plurality of source edge devices, wherein the one or more data rates is further based on the user entered data.
It is advantageous to receive user entered data at each of the plurality of source edge devices and to determine the one or more data rates based on the user entered data to customize the load balancing and rate control based on user preferences and priorities.
In a further implementation form, the information received via the one or more intermediary nodes comprises aggregated utility information, link states generated by the intermediary nodes and a traffic load associated with the one or more tunnels on the computer network.
By virtue of receiving the information comprising aggregated utility information, link states generated by the intermediary nodes and a traffic load from one or more intermediary nodes at the device, the lightweight and optional participation of intermediary nodes accelerates the convergence. Additionally, the aggregated utility information and link states is used to iteratively determine the rate allocation to maximize network utility and control load balancing (e.g., Maximum Link Utilization).
In a further implementation form, the link states include load information associated with network links and wherein a load of a particular link is induced by all tunnels of the one or more tunnels that utilize the particular link and background traffic.
It is advantageous to use an information associated with the link states that includes load information associated with network links as the precise knowledge of the number of tunnels sharing a link speeds-up convergence.
In a further implementation form, the link states further include a number of tunnels of the one or more tunnels that utilize the particular link.
By virtue of using information associated with the link states that further includes a number of tunnels that utilize a particular link, the precise knowledge of the number of tunnels utilizing a particular link speeds-up convergence.
In a further implementation form, an initial transmission of the link state further includes capacity information associated with the particular link. The information associated with the particular link in the form of the initial transmission of link state facilitates maintaining a capacity constraint for links during iterative process of determining the rate allocation and load balancing and thereby enables convergence to optimality and anytime feasibility (e.g., with no capacity constraint violations at intermediate iterations).
In a further implementation form, the data rate is further based on user entered data that includes at least one of tunnel priority, a path preference, and a desired maximum link utilization (MLU) between 0 and 1.
The determining of the data rate based on user entered data enables to customize the load balancing and rate control based on user preferences and priorities.
In a further implementation form, an aggregated utility information is received from one or more source edge devices, wherein the one or more data rates is based on a computation of step size using a Polyak function and the step size is based on the received aggregated utility information.
The receiving of aggregated utility information from the one or more of the source edge devices enables to iteratively determine the rate allocation based on the aggregated utility information to maximize network utility and control load balancing (e.g., Maximum Link Utilization). Additionally, the computation of step size using a Polyak function and based on the received aggregated utility information has a faster convergence compared to other existing techniques.
In a further implementation form, the data rate is based on a computation of step size, where the step size is based on a number of iterations of a computation of the rate allocation.
The determining of the one or more data rates using a step size based on a number of iterations of a computation of the rate allocation enables to achieve better speed and accuracy of computation.
In a further implementation form, the device maximizes network utility associated with each of the one or more tunnels.
The maximizing of network utility associated with each of the one or more tunnels enable in optimizing the use of resources available, maximizing throughput, minimizing response time, and avoiding overload of any single resource in the network. In a further implementation form, the network utility associated with each of the one or more tunnels is maximized based on determining a sum of (tunnel priority multiplied by a path preference that is multiplied by a data rate) for each of the one or more tunnels that use one or more paths.
The maximizing of network utility associated with each of the one or more tunnels based on determining a sum of (tunnel priority multiplied by a path preference that is multiplied by a data rate) for each of the one or more tunnels that use one or more paths enables to optimize the use of resources available, maximizing throughput, minimizing response time, and avoiding overload of any single resource in the network.
In another aspect, the present disclosure provides a computer program comprising executable instructions that when executed by a processor cause the processor to perform a method, the method comprising controlling, by a device, a data rate associated with one or more tunnels that use one or more paths between the device and a destination edge device among a plurality of edge devices, wherein the plurality of edge devices are associated with a computer network comprising the plurality of edge devices and one or more intermediary nodes, the one or more intermediary nodes being configured to communicatively couple the edge devices to each other.
The computer program achieves all the advantages and effects of the method of the present disclosure.
In a further implementation form, the device is configured to control the one or more tunnels and corresponding one or more data rates based on information received from the one or more intermediary nodes.
In a further implementation form, the one or more data rates is periodically revised based on a continuous improvement of solutions associated with inputs for determining a steady state of the one or more data rates.
In a further implementation form, the aggregated utility information is transmitted to the plurality of edge devices.
In a further implementation form, the rate allocation is enforced over the one or more paths managed by the device. In a further implementation form, user entered data is received at each of the plurality of source edge devices, wherein the data rate is further based on the user entered data.
In a further implementation form, the information received via the one or more intermediary nodes comprises aggregated utility information, link states generated by the intermediary nodes and a traffic load associated with the one or more tunnels on the computer network.
In a further implementation form, the link states include load information associated with network links and wherein a load of a particular link is induced by all tunnels of the one or more tunnels that utilize the particular link and background traffic.
In a further implementation form, the link states further include a number of tunnels of the one or more tunnels that utilize the particular link.
In a further implementation form, an initial transmission of the link state further includes capacity information associated with the particular link.
In a further implementation form, the data rate is further based on user entered data that includes at least one of tunnel priority, a path preference, and a desired maximum link utilization (MLU) between 0 and 1.
In a further implementation form, an aggregated utility information is received from one or more source edge devices, wherein the data rate is based on a computation of step size using a Polyak function and the step size is based on the received aggregated utility information.
In a further implementation form, the one or more data rates is based on a computation of step size, where the step size is based on a number of iterations of a computation of the rate allocation.
In a further implementation form, the device maximizes network utility associated with each of the one or more tunnels.
In a further implementation form, the device is configured to maximize network utility associated with each of the one or more tunnels based on determining a sum of (tunnel priority multiplied by a path preference that is multiplied by a data rate) for each of the one or more tunnels that use one or more paths. The various implementation forms of the computer program achieve all the advantages and effects of the corresponding implementation forms of the method of the present disclosure.
It has to be noted that all devices, elements, circuitry, units and means described in the present application could be implemented in the software or hardware elements or any kind of combination thereof. All steps which are performed by the various entities described in the present application as well as the functionalities described to be performed by the various entities are intended to mean that the respective entity is adapted to or configured to perform the respective steps and functionalities. Even if, in the following description of specific embodiments, a specific functionality or step to be performed by external entities is not reflected in the description of a specific detailed element of that entity which performs that specific step or functionality, it should be clear for a skilled person that these methods and functionalities can be implemented in respective software or hardware elements, or any kind of combination thereof. It will be appreciated that features of the present disclosure are susceptible to being combined in various combinations without departing from the scope of the present disclosure as defined by the appended claims.
Additional aspects, advantages, features and objects of the present disclosure would be made apparent from the drawings and the detailed description of the illustrative implementations construed in conjunction with the appended claims that follow.
BRIEF DESCRIPTION OF THE DRAWINGS
The summary above, as well as the following detailed description of illustrative examples, is better understood when read in conjunction with the appended drawings. For the purpose of illustrating the present disclosure, exemplary constructions of the disclosure are shown in the drawings. However, the present disclosure is not limited to specific methods and instrumentalities disclosed herein. Moreover, those in the art will understand that the drawings are not to scale. Wherever possible, like elements have been indicated by identical numbers.
Examples of the present disclosure will now be described, by way of example only, with reference to the following diagrams wherein:
FIG. 1A is a network environment diagram of a system for distributed traffic engineering at a plurality of edge devices, in accordance with an example of the disclosure; FIG. 1B is a block diagram that illustrates various exemplary components of a device operable as one of the plurality of edge devices, in accordance with an example of the disclosure;
FIG. 2 is a network architecture illustrating an exemplary scenario of rate control at a device operating as an edge device in a computer network, in accordance with an example of the disclosure;
FIG. 3 is a functional architecture of the device of FIG. 2, in accordance with an example of the disclosure;
FIGs. 4A, 4B, and 4C, collectively illustrate an exemplary scenario of controlling data rate by a device in a software defined wide area network (SD-WAN), in accordance with an example of the disclosure;
FIGs. 5A-5F illustrate exemplary graphical results obtained by implementing the present technology in an internet protocol radio access network (IPRAN) network, in accordance with an example of the disclosure; and
FIG. 6 is a flowchart of a method for controlling, by a device, one or more tunnels and corresponding one or more data rates, for example each path or tunnel having its corresponding data rate, that use one or more paths between the device and a destination edge device among a plurality of edge devices, in accordance with an example of the disclosure.
In the accompanying drawings, an underlined number is employed to represent an item over which the underlined number is positioned or an item to which the underlined number is adjacent. A non-underlined number relates to an item identified by a line linking the nonunderlined number to the item. When a number is non-underlined and accompanied by an associated arrow, the non-underlined number is used to identify a general item at which the arrow is pointing.
DETAILED DESCRIPTION OF EMBODIMENTS
The following detailed description illustrates embodiments of the present disclosure and ways in which they can be implemented. Although some modes of carrying out the disclosure have been disclosed, those skilled in the art would recognize that other examples for carrying out or practicing the disclosure are also possible. FIG. 1A is a network environment diagram of a system for distributed traffic engineering at edge devices, in accordance with an embodiment of the present disclosure. With reference to FIG. 1A, there is shown a system 100. The system 100 includes a computer network 102, a plurality of intermediary nodes such as a first intermediary node 104A up to an Nth intermediary node 104N, a plurality of edge devices such as a first edge device (hereinafter simply referred to as device 106A) up to an Nth edge device 106N, and a plurality of wired/wireless end point devices such as a first wired/wireless end point device 108A up to an Nth wired/wireless end point device 108N. The plurality of intermediary nodes 104A-N are communicatively coupled to one another via the computer network 102. The plurality of edge devices 106A-N are communicatively coupled to the plurality of intermediary nodes 104A-N. The plurality of wired/wireless end point devices 108A-N are connected to the computer network 102 via the plurality of intermediary nodes 104A-N and the plurality of edge devices 106A-N.
The computer network 102 may be a wired or wireless communication network. Examples of the computer network 102 may include, but are not limited to, a Wireless Fidelity (Wi-Fi) network, a Local Area Network (LAN), a wireless personal area network (WPAN), a Wireless Local Area Network (WLAN), a wireless wide area network (WWAN), a cloud network, a cellular network, a Metropolitan Area Network (MAN), a software defined wide area network (SD-WAN), and/or the Internet. The plurality of intermediary nodes 104A-N communicatively connect the plurality of edge devices 106A-N to one another and/or to the computer network 102. Examples of the plurality of intermediary nodes 104A-N include switches, wireless access points, routers, firewalls (security) and the like. Each of the plurality of edge devices 106A-N provide an entry point for the plurality of wired/wireless end point devices/users 108A-N into the computer network 102. Examples of the plurality of edge devices 106A-N include routers, routing switches, integrated access devices (IADs), multiplexers, and a variety of metropolitan area network (MAN) or wide area network (WAN) access devices, and the like. Examples of the plurality of wired/wireless end point devices 108A-N includes, but is not limited to user devices (such as cellular phones, personal digital assistants (PDAs), handheld devices, laptop computers, personal computers, an Internet-of- Things (loT) device, a smart phone, a machine type communication (MTC) device, a computing device, a drone, or any other portable or nonportable electronic device. In an embodiment, a device operable as one of the plurality of edge devices 106A-N is configured to control a data rate associated with one or more tunnels that use one or more paths between the device and a destination edge device among the plurality of edge devices 106A-N. FIG. IB is a block diagram that illustrates various exemplary components of a device 106A operable as one of the plurality of edge devices 106A-N, in accordance with an embodiment. FIG. IB is described in conjunction with elements from FIGs. 1 A. With reference to FIG. IB, there is shown block diagram of the device that includes a rate allocation module 110, a routing module 112, a data monitoring module 114, a traffic management module 116, and a database 118. In an embodiment, the rate allocation module 110, the routing module 112, the data monitoring module 114, and the traffic management module 116 are communicatively associated with the database 118 and are executable via a processor (not shown). In an embodiment, the database 118 stores a set of instructions for execution by the processor (not shown).
The device 106A is operable as one of a plurality of edge devices 106A-N of a computer network 102, the computer network 102 comprising the plurality of edge devices 106A-N and one or more intermediary nodes 104A-N, the one or more intermediary nodes 104A-N being configured to communicatively couple the edge devices 106A-N to each other. The device 106A is configured to control a data rate associated with one or more tunnels that use one or more paths between the device 106A and a destination edge device among the plurality of edge devices 106A-N. The device 106A provides an efficient technique of load balancing and rate control in IP networks that yields a minimum overhead (i.e. no or limited participation of core nodes).
By virtue of the rate allocation module 110, the routing module 112, the data monitoring module 114, and the traffic management module 116, the device 106A is configured to execute various operations. For example, the rate allocation module 110 of the device 106A, is configured to control a data rate associated with one or more tunnels that use one or more paths between the device 106A and a destination edge device among the plurality of edge devices 106A-N. The rate allocation module 110 is configured to repeatedly determine the rate allocation based on the user entered data and the aggregated utility information, for example only in the case where the step size is based on Polyak function, to achieve an optimal convergence. The rate allocation module 110 maximizes network utility associated with each of the one or more tunnels. The rate allocation module 110 further maximizes a network utility associated with each of the one or more tunnels based on determining a sum of (tunnel priority multiplied by a path preference that is multiplied by a data rate) for each of the one or more tunnels that use one or more paths. In accordance with an embodiment, the rate allocation module 110 associated with one or more edge devices 106A-N operating as source nodes (e.g., plurality of edge devices 106A-N) periodically exchanges aggregated utilities. The rate allocation module 110, is configured to transmit aggregated utility information, for example only in the case where Poliak step size function is used, to one or more of the plurality of edge devices 106A-N. This is further described in detail, for example, in FIGs. 2 and 3.
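As a purely illustrative sketch, and not a description of the actual implementation, the modules named above could be organized in software along the following lines; all class and method names are assumptions.

```python
class RateAllocationModule:
    """Computes and controls per-tunnel, per-path rate allocations (cf. 110)."""
    def iterate(self, link_states, aggregated_utilities, user_data):
        raise NotImplementedError

class RoutingModule:
    """Propagates link states via the routing protocol (cf. 112)."""
    def advertise(self, link_states):
        raise NotImplementedError

class DataMonitoringModule:
    """Measures local traffic demand and link usage (cf. 114)."""
    def measure(self):
        raise NotImplementedError

class TrafficManagementModule:
    """Applies the computed rates in the data plane (cf. 116)."""
    def enforce(self, rates):
        raise NotImplementedError

class EdgeDevice:
    """Illustrative composition of the modules of the device 106A."""
    def __init__(self):
        self.database = {}                                    # cf. 118: stores instructions and state
        self.rate_allocation = RateAllocationModule()         # cf. 110
        self.routing = RoutingModule()                        # cf. 112
        self.data_monitoring = DataMonitoringModule()         # cf. 114
        self.traffic_management = TrafficManagementModule()   # cf. 116
```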
In accordance with an embodiment, the device 106A is configured to control the data rate associated with one or more tunnels based on information received from the one or more intermediary nodes 104A-N. The rate allocation module 110 is configured to control the data rate associated with one or more tunnels based on information received from the one or more intermediary nodes 104A-N. In an embodiment, the information received via the one or more intermediary nodes 104A-N includes aggregated utility information, link states generated by the intermediary nodes 104A-N and a traffic load associated with one or more tunnels that originate from the intermediary node on the computer network 102. In an embodiment, the link states include load information associated with network links and wherein a load of a particular link is induced by all paths associated with the one or more tunnels that utilize the particular link and background traffic. In an embodiment, the link states further include a number of tunnels of the one or more tunnels that utilize the particular link. In an embodiment, an initial transmission of the link state further includes capacity information associated with the particular link.
In accordance with an embodiment, the device 106A is configured to transmit aggregated utility information to one or more of the plurality of source edge devices (e.g., the plurality of edge devices 106A-N). The rate allocation module 110 transmits the aggregated utility information to one or more of the plurality of source edge devices (e.g., the plurality of edge devices 106A-N). The routing module 112 is configured to periodically propagate the link states including, for example, a link load and/or a number of tunnels using the links.
In accordance with an embodiment, the device 106A is configured to repeatedly determine the rate allocation to achieve an optimal convergence. The rate allocation module 110 iteratively computes the rate allocation based on the one or more Lagrangian multipliers, the user entered data and/or the aggregated utility information to achieve an optimal convergence.
In accordance with an embodiment, the device 106A is configured to receive aggregated utility information from the one or more of the source edge devices (e.g., the plurality of edge devices 106A-N), wherein each rate allocation update is based on the received aggregated utility information. The rate allocation module 110 is also configured to receive aggregated utility information from the one or more of the source edge devices (e.g., the plurality of edge devices 106A-N), wherein each rate allocation update is based on the received aggregated utility information.
In accordance with an embodiment, the device 106A is configured to compute a feasible rate allocation and a Lagrangian bound based on the aggregated utility information. The rate allocation module 110 is also configured to compute a feasible rate allocation and a Lagrangian bound based on the aggregated utility information and/or the user entered data.
In accordance with an embodiment, the information received via the one or more intermediary nodes 104A-N comprises aggregated utility information, link states generated by the intermediary nodes 104A-N and a traffic load associated with one or more tunnels that originate from the intermediary node (e.g., 104A) on the computer network 102.
In accordance with an embodiment, the link states include load information associated with network links and wherein a load of a particular link is induced by all paths associated with the one or more tunnels that utilize the particular link and background traffic.
In accordance with an embodiment, the link states further include a number of tunnels of the one or more tunnels that utilize the particular link. The routing module 112 propagates the link states using a routing protocol such as, for example, open shortest path first (OSPF), intermediate system to intermediate system (IS-IS, also written ISIS), or border gateway protocol (BGP). This is further described in detail, for example, in FIGs. 2 and 3.
In accordance with an embodiment, an initial transmission of the link state further includes capacity information associated with the particular link.
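Purely as an illustrative sketch (the field names and types below are assumptions, not part of any claimed message format), the link-state information described above could be represented as follows:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LinkState:
    """Per-link state advertised by an intermediary node (illustrative only)."""
    link_id: str                      # identifier of the link e
    load: float                       # load induced by tunnels and background traffic (bits/s)
    num_tunnels: int                  # number of tunnels currently using the link (n_e)
    capacity: Optional[float] = None  # link capacity C_e; sent only in the initial transmission

# Example: the first advertisement for a link includes its capacity,
# subsequent advertisements omit it.
initial = LinkState(link_id="e1", load=42e6, num_tunnels=3, capacity=1e9)
update = LinkState(link_id="e1", load=55e6, num_tunnels=4)
```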
In accordance with an embodiment, the data rate is further based on user entered data that includes at least one of a tunnel priority, a path preference, and a desired maximum link utilization (MLU) between 0 and 1. The rate allocation module 110 is configured to receive user entered data that includes at least one of a tunnel priority, a path preference, and a desired maximum link utilization (MLU) between 0 and 1.
In accordance with an embodiment, the device 106A is configured to maximize network utility associated with each of the one or more tunnels. The device 106A is further configured to maximize network utility associated with each of the one or more tunnels based on determining a sum of (tunnel priority multiplied by a path preference that is multiplied by a data rate) for each of the one or more tunnels that use one or more paths.
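For illustration only, a minimal sketch of the utility expression described above, i.e., the sum over all tunnels and paths of tunnel priority multiplied by path preference multiplied by data rate (the data layout is an assumption):

```python
def network_utility(tunnels):
    """Sum of priority * path_preference * rate over all paths of all tunnels.

    `tunnels` maps a tunnel id to its priority and to a list of
    (path_preference, rate) pairs -- a hypothetical structure for illustration.
    """
    total = 0.0
    for priority, paths in tunnels.values():
        for preference, rate in paths:
            total += priority * preference * rate
    return total

# Example: two tunnels, each splitting traffic over two paths.
tunnels = {
    "t0": (1, [(2, 80.0), (1, 20.0)]),   # priority 1
    "t1": (3, [(2, 50.0), (2, 50.0)]),   # priority 3
}
print(network_utility(tunnels))  # 180.0 + 600.0 = 780.0
```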
In an embodiment, the data monitoring module 114 captures traffic demands of the tunnels based on local monitoring in the data plane. In an embodiment, the data monitoring module 114 provides updated throughput information for each outgoing tunnel (e.g., local information at a source node). In an embodiment, the data monitoring module 114 also measures the number of tunnels using each link on the intermediary nodes 104A-N (to accelerate convergence) and provides the measured number of tunnels to the routing module 112. In an embodiment, the measure of the number of tunnels is realized by counting source and destination node pairs (e.g., a NetFlow solution or statistical sketches). This is further described in detail, for example, in FIGs. 2 and 3.
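A minimal sketch, under assumed data structures, of how the number of tunnels per link could be estimated by counting distinct source/destination pairs; a NetFlow-style collector or a statistical sketch could replace the exact set used here:

```python
from collections import defaultdict

def count_tunnels_per_link(observed_packets):
    """Estimate n_e per link by counting distinct (source, destination) pairs.

    `observed_packets` is an iterable of (link_id, src_edge, dst_edge) tuples,
    a hypothetical representation of what the data plane observes.
    """
    pairs_per_link = defaultdict(set)
    for link_id, src, dst in observed_packets:
        pairs_per_link[link_id].add((src, dst))
    return {link_id: len(pairs) for link_id, pairs in pairs_per_link.items()}

packets = [("e1", "A", "D"), ("e1", "B", "D"), ("e1", "A", "D"), ("e2", "C", "D")]
print(count_tunnels_per_link(packets))  # {'e1': 2, 'e2': 1}
```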
In an embodiment, the rate allocation module 110 computes a rate allocation and a Lagrangian bound based on the aggregated utility information and/or the user entered data. In an embodiment, the algorithm for computation of the rate allocation and the Lagrangian bound is explained below.
Consider a network G = (V, E), where V and E represent the sets of nodes and links, respectively. Consider a set of tunnels K such that, for all k ∈ K, d_k denotes the associated traffic demand and Pr_k ∈ ℕ denotes the associated priority. Given an input maximum link utilization mlu < 1.0 (given by the user), let C_e be the capacity of link e and C̄_e = h(mlu, C_e) (e.g., h(mlu, C_e) = mlu × C_e or h(mlu, C_e) = tanh(mlu × 4.0) × C_e), for all links e ∈ E. Suppose P_k is the set of technologies/paths associated with tunnel k such that, for all p ∈ P_k, f(p, k) ∈ ℕ represents the preference of tunnel k for path p. A function U_k(x_p) represents the utility of tunnel k for rate x_p (e.g., U_k(x_p) = d_k x_p f(p, k), or an alpha-fairness function for maximum throughput, max-min fairness, or proportional fairness). The centralized problem can be formulated as follows:
a) Compact linear program (Centralized model):
Suppose x_p ∈ [0, 1] is the rate allocation of traffic on a path p ∈ P_k for a tunnel k; then the rate allocation problem is given by equations (1) and (2):

maximize   Σ_{k ∈ K} Σ_{p ∈ P_k} U_k(x_p)

subject to:

Σ_{p ∈ P_k} x_p = 1,   for all k ∈ K    (1)

Σ_{k ∈ K} Σ_{p ∈ P_k : e ∈ p} d_k x_p ≤ C̄_e,   for all e ∈ E    (2)

x_p ∈ [0, 1],   for all k ∈ K, p ∈ P_k.
The constraints indicated in equation (1) guarantee that all the traffic is split onto the outgoing paths, and the constraints indicated in equation (2) ensure that all the capacity constraints are satisfied. The constraints indicated in equation (2) can be relaxed to obtain the following Lagrangian relaxation of the problem:

b) Lagrangian relaxation method:
The Lagrangian relaxation method penalizes violations of the inequality constraints using Lagrange multipliers u_e ∈ ℝ₊, e ∈ E, which impose a cost on violations.
After relaxing the capacity constraints indicated in equation (2), the following Lagrangian relaxation problem is obtained:

L(u) = maximize   Σ_{k ∈ K} Σ_{p ∈ P_k} U_k(x_p) − Σ_{e ∈ E} u_e ( Σ_{k ∈ K} Σ_{p ∈ P_k : e ∈ p} d_k x_p − C̄_e )

subject to:

Σ_{p ∈ P_k} x_p = 1,   for all k ∈ K

x_p ≥ 0,   for all k ∈ K, p ∈ P_k.
Given Lagrange multipliers u ∈ ℝ₊^{|E|}, the Lagrangian relaxation problem decomposes into |K| sub-problems that can be solved independently of each other, defined as follows:

Lagrangian sub-problem P1 associated with tunnel k ∈ K:

maximize   Σ_{p ∈ P_k} ( U_k(x_p) − d_k x_p Σ_{e ∈ p} u_e )

subject to:

Σ_{p ∈ P_k} x_p = 1,   x_p ≥ 0,   for all p ∈ P_k.

If U_k is linear, P1 can be solved in linear time by setting the variable having the maximum weight in the objective function to 1 and all other variables to 0.
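As an illustrative sketch only, the linear-time rule above can be written as follows, assuming the linear utility example U_k(x_p) = d_k x_p f(p, k) and the sub-problem P1 as reconstructed above; all variable names are hypothetical:

```python
def solve_p1_linear(paths, demand, preference, multipliers):
    """Solve sub-problem P1 for one tunnel when U_k is linear.

    For each path p the objective weight is d_k * (f(p, k) - sum of u_e over e in p);
    the path with the maximum weight gets x_p = 1, all others get x_p = 0.

    `paths` maps a path id to the list of links it traverses,
    `preference` maps a path id to f(p, k), `multipliers` maps a link to u_e.
    """
    weights = {
        p: demand * (preference[p] - sum(multipliers.get(e, 0.0) for e in links))
        for p, links in paths.items()
    }
    best = max(weights, key=weights.get)
    return {p: 1.0 if p == best else 0.0 for p in paths}

paths = {"internet": ["e1", "e3"], "mpls": ["e2", "e3"]}
x = solve_p1_linear(paths, demand=100.0, preference={"internet": 1, "mpls": 2},
                    multipliers={"e2": 0.5})
print(x)  # {'internet': 0.0, 'mpls': 1.0}
```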
Lagrangian sub-problem P2:

Solutions returned by P1 do not guarantee the satisfaction of the capacity constraints (2). Starting from a feasible solution x*, the following program guarantees the feasibility of the solutions:

[program P2; equation image imgf000023_0002 of the original filing, not reproduced here]

where n_e represents the number of tunnels using link e, LU_e the link utilization of e, and n_e can be set to |K| if the number of tunnels using e cannot be provided.
In an embodiment, in order to solve the Lagrangian relaxation while converging to the optimal solution and preserving the feasibility of the solutions, the following distributed algorithm is used by the rate allocation module 110:

Output: feasible solution x̄

1) Initialize u^(0) = 0, i = 0;
2) Solve problem P1 and get solution x;
3) Solve problem P2 and get solution x̄;
4) Enforce the rate allocation using x̄;
5) Receive Σ_{p ∈ P_k} U_k(x̄_p) and θ_k from all tunnels;
6) Update the Lagrangian multipliers:

u_e^(i+1) = max( 0, u_e^(i) + s^(i) ( LU_e − C̄_e ) ),   for all e ∈ E;

7) Set i = i + 1 and go to step 2.

The step size s^(i) can be chosen using the Polyak rule:

s^(i) = α ( UB^(i) − LB^(i) ) / || g^(i) ||²,   where 0 < α < 2,

UB^(i) is the Lagrangian (dual) bound, LB^(i) is the value of the best feasible solution found so far, and g^(i) is the sub-gradient at iteration i.
In an embodiment, the aggregated utility information exchanged between the edge devices (agents) 106A-N is used to compute the step size s^(i) using a Polyak function.

In another embodiment, one or more other known techniques that do not use the aggregated utilities can be used to compute s^(i) (e.g., a step size that diminishes with the iteration index i of the sub-gradient algorithm, periodically reset). Depending on the method used to compute s^(i), the edge devices (agents) 106A-N may exchange both, one, or none of the aggregated utilities.
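The following sketch illustrates, under the reconstruction given above, the multiplier update and the two step-size options (a Polyak step using the exchanged aggregated utilities, and a diminishing step that needs no such exchange); the numeric values are arbitrary examples:

```python
def update_multipliers(u, link_load, capacity, step):
    """Projected sub-gradient update: u_e <- max(0, u_e + step * (load_e - C_e))."""
    return {e: max(0.0, u[e] + step * (link_load[e] - capacity[e])) for e in u}

def polyak_step(upper_bound, lower_bound, subgradient, alpha=1.0):
    """Polyak step size: alpha * (UB - LB) / ||g||^2, with 0 < alpha < 2.

    UB is the Lagrangian (dual) bound and LB the best feasible value, both
    obtained from the aggregated utilities exchanged by the edge devices.
    """
    norm_sq = sum(g * g for g in subgradient.values())
    return alpha * (upper_bound - lower_bound) / norm_sq if norm_sq > 0 else 0.0

def diminishing_step(iteration):
    """Alternative step size that does not use the aggregated utilities."""
    return 1.0 / (iteration + 1)

u = {"e1": 0.0, "e2": 0.0}
load = {"e1": 180.0, "e2": 120.0}
cap = {"e1": 162.0, "e2": 162.0}
g = {e: cap[e] - load[e] for e in u}   # sub-gradient of the dual with respect to u
s = polyak_step(upper_bound=650.0, lower_bound=600.0, subgradient=g)
print(update_multipliers(u, load, cap, s))
```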
In an embodiment, the traffic management module 116 receives the rate allocations from the rate allocation module 110 and enforces the rate allocations in the data plane.
In an implementation, the rate allocation module 110, the routing module 112, the data monitoring module 114, and/or the traffic management module 116 are potentially implemented as a software component or a combination of software and circuitry.
The device 106A of the present disclosure provides an efficient technique of load balancing and rate control in IP networks (such as the computer network 102) that yields a minimal overhead (i.e., no or limited participation of core nodes) and provides autonomy to the edge devices 106A-N. The device 106A of the present disclosure converges to an optimal solution and provides anytime feasibility (e.g., feasible bandwidth allocations at each iteration). Since only the device 106A (source node) is aware of the utility function of the one or more tunnels associated with the device 106A, a centralized controller is not required. The precise knowledge of the number of tunnels sharing a link speeds up convergence. Since the load balancing and rate control decisions are taken locally at the device 106A operating as one of the plurality of edge devices 106A-N, the technique provides a scalable solution. Additionally, the plurality of edge devices 106A-N collaborate through a small amount of information, thereby achieving a low overhead. The device 106A of the present disclosure also facilitates a convergence to the optimal solution with anytime feasibility. Additionally, the device 106A of the present disclosure can add new paths (for one or more existing tunnels and one or more new tunnels), can remove unused paths, can modify the path preferences and priorities of tunnels, and knows the utility function. Owing to the utility function being known only locally at the device 106A (source node), the utility function can be tuned and/or learnt over time. Additionally, since only the device 106A (source node) performs the load balancing and rate control, other intermediary nodes do not need to be updated about changes in traffic demand. Moreover, there is no failure risk of a central node in the present technology. Additionally, the information associated with the device 106A operating as an edge device is not shared on the computer network 102.
FIG. 2 depicts a network architecture illustrating an exemplary scenario of rate control at a device (such as the device 106A) operating as an edge device in a computer network (such as the computer network 102 of FIG. 1A), in accordance with an embodiment of the present disclosure. FIG. 2 is described in conjunction with elements from FIGs. 1A and 1B. With reference to FIG. 2, there is shown a network architecture 200 that includes a device A 106A, a device B 106B, a device C 106C, a device D 106D, and an intermediary node 104A communicatively coupled through a computer network (such as the computer network 102). Each of the device A 106A, the device B 106B, the device C 106C, and the device D 106D is operable as an edge device. In an embodiment, optional candidate paths from the device A 106A to the device D 106D are calculated locally or are installed using a path computation element. The intermediary node 104A monitors the number of tunnels sharing each link and sends it as link states, with a link state protocol, to each of the devices, namely the device A 106A, the device B 106B, the device C 106C, and the device D 106D. A rate allocation module 110 associated with each of the edge devices, namely the device A 106A, the device B 106B, the device C 106C, and the device D 106D, periodically receives the link states and some additional information from the other edge devices 106A-N (i.e., the aggregated utilities for the tunnels managed by each edge device, which constitute two scalars per edge device), executes an iteration of a rate control algorithm (described earlier along with FIG. 1B) based on the utility functions of the respective outgoing tunnels (only), and enforces rate allocations for each outgoing tunnel. For example, the rate allocation module 110 of the device A 106A enforces rate allocations over the optional paths for each outgoing tunnel, thereby preserving feasibility.
Whenever new information is available, the rate allocation module 110 (of, for example, the device A 106A) executes an iteration of the rate control algorithm knowing the utility functions of the outgoing tunnels (only), enforces the rate allocation over multiple paths for each outgoing tunnel (preserving feasibility), and communicates information such as the aggregated utilities for the tunnels managed by the device A 106A to the other three devices (or agents), that is, the device B 106B, the device C 106C, and the device D 106D.
FIG. 3 is a functional architecture of the device of FIG. 2, in accordance with an embodiment of the disclosure. As shown in FIG. 3, the functional architecture 300 includes a control plane 302 and a data plane 304 in which the device 106A of FIGs. 1B and 2 operates. In an embodiment, in the control plane 302 part of the operation of the device 106A, the rate allocation module 110 of the device 106A receives a) link states 306, b) aggregated utilities 308 from other devices (agents) of the computer network 102, c) parameters for outgoing tunnels 310, and d) tunnel traffic 312 from the data monitoring module 114 as inputs, and generates aggregated utilities 314 and rate allocations 316 as outputs. The data monitoring module 114 provides updated throughput information for each outgoing tunnel (local information at the source) and optionally measures a number of tunnels 318 using each link on the intermediary nodes (to accelerate convergence). The measure of the number of tunnels 318 can be realized by counting source and destination pairs (e.g., a NetFlow solution, statistical sketches, and the like).
In an embodiment, the link states 306 include a link load, a capacity, and the number of tunnels on the links; the parameters for the outgoing tunnels 310 include a priority, preferences for the paths/technologies (i.e., the paths are provided) or service level agreement (SLA) requirements (which can be used to automatically tune the preferences based on measurements), and a desired maximum link utilization (MLU). The routing module 112 captures a number of tunnels 318 on the links from the data plane 304 and periodically propagates link states 320 including, for example, a link load and/or a number of tunnels using the links. In an embodiment, the routing module 112 propagates the link states 320 using a routing protocol such as, for example, open shortest path first (OSPF), intermediate system to intermediate system (IS-IS, also written ISIS), or border gateway protocol (BGP). The traffic management module 116 receives the rate allocations 316 from the rate allocation module 110 and enforces the rate allocations 316 in the data plane 304.

In FIGs. 4A-4C, there are shown illustrations of an exemplary scenario of controlling a data rate by a device (such as the device 106A) in a software defined wide area network (SD-WAN), in accordance with an embodiment of the disclosure. The SD-WAN 400 depicted in FIGs. 4A-4C includes three sites, namely a site0 402, a site1 404, and a site2 406, connected to an enterprise network 408 through ports 410, 412, 414, 416, 418, and 420 of edge devices (the edge devices are represented by LB). The ports 410, 414, and 418 of the edge devices connect the site0 402, the site1 404, and the site2 406, respectively, to the enterprise network 408 via the Internet. The ports 412, 416, and 420 of the edge devices connect the site0 402, the site1 404, and the site2 406, respectively, to the enterprise network 408 via multiprotocol label switching (MPLS). The SD-WAN 400 also includes ports 422 and 424 of the edge device (associated with the enterprise network 408) that communicatively couple the ports 410-420 of the edge devices to the enterprise network 408. The SD-WAN 400 includes three tunnels, namely tunnel 0: Site 0 -> Head, tunnel 1: Site 1 -> Head, and tunnel 2: Site 2 -> Head. Each tunnel has a demand of 100 megabits per second. FIG. 4B depicts a first exemplary scenario and FIG. 4C depicts a second exemplary scenario. Suppose a link capacity is 180 megabytes and a target MLU provided by a user is 90%; then, in the first exemplary scenario depicted in FIG. 4B, for tunnel preferences including:
Tunnel 0: internet = 1, MPLS = 2
Tunnel 1 : internet = 2, MPLS = 2
Tunnel 2: internet = 2, MPLS = 1, and priority of tunnels including:
Tunnel 0: 1 (low priority), Tunnel 1: 1 (low priority), and Tunnel 2: 1 (low priority), a value of 598.73 (optimality gap of 0.2%) is obtained by performing 12 iterations via simulation.
In the second exemplary scenario depicted in FIG. 4C, for tunnel preferences including:
Tunnel 0: internet = 2, MPLS = 1
Tunnel 1 : internet = 2, MPLS = 2
Tunnel 2: internet = 2, MPLS = 1 and priority of tunnels including:
Tunnel 0: 3 (high priority), Tunnel 1: 2 (moderate priority), and Tunnel 2: 1 (low priority), a value of 1161.11 (optimality gap of 0.7%) is obtained by performing 203 iterations via simulation, and a value of 1161.11 (optimality gap of 3.1%) is obtained by performing 20 iterations via simulation. The iterations can be executed at the rate at which the link states can be received (e.g., every 200 milliseconds to 1 second).
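Purely for illustration, the inputs of the first exemplary scenario and the capacity scaling function h can be written down as follows (the data layout is an assumption; the values are those given above):

```python
import math

def scaled_capacity(mlu, capacity, variant="linear"):
    """h(mlu, C_e): either mlu * C_e or tanh(mlu * 4.0) * C_e, as given above."""
    if variant == "linear":
        return mlu * capacity
    return math.tanh(mlu * 4.0) * capacity

# Inputs of the first exemplary scenario (FIG. 4B), written as plain data.
tunnels = {
    # tunnel: (priority, {path: preference}, demand in the units used above)
    "tunnel0": (1, {"internet": 1, "mpls": 2}, 100.0),
    "tunnel1": (1, {"internet": 2, "mpls": 2}, 100.0),
    "tunnel2": (1, {"internet": 2, "mpls": 1}, 100.0),
}
print(scaled_capacity(0.9, 180.0))                      # 162.0
print(round(scaled_capacity(0.9, 180.0, "tanh"), 1))    # ~179.7
```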
In FIGs. 5A-5F, there are shown illustrations of exemplary graphical results obtained by implementing the present technology in an internet protocol radio access network (IPRAN), in accordance with an embodiment of the present disclosure. FIGs. 5A-5F are described in conjunction with elements from FIGs. 1A, 1B, 2, and 3. The graphical results illustrated in FIGs. 5A-5F correspond to an IPRAN network with 543 links, 477 nodes, 500 tunnels, and an average demand of 147.6 megabytes. More particularly, FIG. 5A is a graphical representation that illustrates a total traffic volume versus sub-gradient iterations curve 500A obtained by implementing the device of the present technology in an exemplary IPRAN network when the number of tunnels using each link (n_e) is used for the computation of a data rate. FIG. 5A is described in conjunction with elements from FIGs. 1A to 3. In the total traffic volume versus sub-gradient iterations curve 500A, the X-axis 502A represents sub-gradient iterations and the Y-axis 504A represents the total traffic volume of the IPRAN network.
FIG. 5B is a graphical representation that illustrates a total number of paths versus sub-gradient iterations curve 500B obtained by implementing the device of the present technology in an exemplary IPRAN network when the number of tunnels using each link (n_e) is used for the computation of the data rate. In the total number of paths versus sub-gradient iterations curve 500B, the X-axis 502B represents sub-gradient iterations and the Y-axis 504B represents the total number of paths of the IPRAN network, for a number of modified paths greater than 0.1 megabytes.
FIG. 5C is a graphical representation that illustrates a total traffic volume versus sub-gradient iterations curve 500C obtained in an exemplary IPRAN network when the number of tunnels using each link (n_e) is not used for the computation of the data rate. In the total traffic volume versus sub-gradient iterations curve 500C, the X-axis 502C represents sub-gradient iterations and the Y-axis 504C represents the total traffic volume of the IPRAN network.
FIG. 5D is a graphical representation that illustrates a total number of paths versus sub-gradient iterations curve 500D corresponding to an exemplary IPRAN network when the number of tunnels using each link (n_e) is not used for the computation of the data rate. In the total number of paths versus sub-gradient iterations curve 500D, the X-axis 502D represents sub-gradient iterations and the Y-axis 504D represents the total number of paths of the IPRAN network, for a number of modified paths greater than 0.1 megabytes.

FIG. 5E is a graphical representation that illustrates a first objective value versus sub-gradient iterations curve 502E obtained for a Lagrangian bound and a second objective value versus sub-gradient iterations curve 504E obtained for a feasible solution, by implementing the device 106A of the present technology in an exemplary IPRAN network when the number of tunnels using each link (n_e) is used for the computation of a data rate. In both curves, the X-axis represents sub-gradient iterations and the Y-axis represents the objective values, for the Lagrangian bound in the curve 502E and for the feasible solution in the curve 504E.
FIG. 5F is a graphical representation that illustrates a first objective value versus sub-gradient iterations curve 502F obtained for a Lagrangian bound and a second objective value versus sub-gradient iterations curve 504F obtained for a feasible solution, by implementing the device of the present technology in an exemplary IPRAN network when the number of tunnels using each link (n_e) is not used for the computation of a data rate. In both curves, the X-axis represents sub-gradient iterations and the Y-axis represents the objective values, for the Lagrangian bound in the curve 502F and for the feasible solution in the curve 504F. As can be observed from the graphical representations of FIGs. 5A to 5F, using the number of tunnels using each link (n_e) for the computation of the data rate somewhat accelerates convergence, whereas not using it slows down the convergence.
Consider that each scalar (sent as a type length value (TLV)) consumes 32 bits, that the duration of each iteration is 200 milliseconds (ms), that V_s is the set of source nodes, and that the aggregated utilities are sent through the minimum spanning tree of (|V| − 1) links (multicast, worst-case scenario). The data received at the beginning of the optimization is given by:
link capacity C_e: 32 × |E| bits.

The data sent by each source s at each iteration is given by: a) the aggregated utility U(s), i.e., the sum of Σ_{p ∈ P_k} U_k(x̄_p) over the tunnels k managed by s: 32 bits; and b) the aggregated utility Θ(s), i.e., the sum of θ_k over the tunnels k managed by s: 32 bits.

The data received, by each source device (i.e., the device 106A), at each iteration is given by: a) aggregated utilities: 64 × (|V_s| − 1) bits; b) link states LU_e: 32 × |E| bits; and c) number of tunnels using each link n_e: 32 × |E| bits.
Consider, for example, a network of 500 nodes (all of which are sources), 1000 links, and 1000 demands:

the data received at the beginning of the optimization: 0.032 megabits (Mb);

the data sent by each source at each iteration: 0.00032 Mb/second; and

the data received by each source at each iteration, 0.48 Mb/second in total, which includes: a) aggregated utilities: 0.00032 × 499 = 0.159 Mb/second (0.00016 Mb/second/link); b) link states LU_e: 32 × 1000 bits per iteration = 0.16 Mb/second; and c) number of tunnels using each link n_e: 32 × 1000 bits per iteration = 0.16 Mb/second.
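The per-iteration arithmetic above can be reproduced with the following short sketch (32-bit scalars, one iteration every 200 ms, all 500 nodes acting as sources):

```python
BITS_PER_SCALAR = 32
ITERATIONS_PER_SECOND = 1.0 / 0.2      # one iteration every 200 ms

def mbits_per_second(bits_per_iteration):
    """Convert bits per iteration to megabits per second."""
    return bits_per_iteration * ITERATIONS_PER_SECOND / 1e6

nodes, links = 500, 1000
sent_per_source = mbits_per_second(2 * BITS_PER_SCALAR)                # two aggregated utilities
recv_utilities = mbits_per_second(2 * BITS_PER_SCALAR * (nodes - 1))   # from the 499 other sources
recv_link_state = mbits_per_second(BITS_PER_SCALAR * links)            # LU_e for every link
recv_tunnel_counts = mbits_per_second(BITS_PER_SCALAR * links)         # n_e for every link

print(sent_per_source)                                        # 0.00032 Mb/s
print(recv_utilities, recv_link_state, recv_tunnel_counts)    # ~0.159, 0.16, 0.16 Mb/s
print(recv_utilities + recv_link_state + recv_tunnel_counts)  # ~0.48 Mb/s
```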
In an exemplary scenario of the IPRAN network, for aggregated utilities of 0.00027793 Mb/second per link, the link state overhead is 0.01728 Mb/second, and for aggregated utilities of 0.00028052 Mb/second per link, the link state overhead is 0.017376 Mb/second. In an exemplary scenario of the SD-WAN, for aggregated utilities of 0.000122571 Mb/s/link, the link state overhead is 0.000896 Mb/second. It can be observed from the values of the above exemplary scenarios that the overhead for the aggregated utilities and the link states is very low.
FIG. 6 is a flowchart of a method 600 for controlling, by a device, a data rate associated with one or more tunnels that use one or more paths between the device and a destination edge device among the plurality of edge devices, in accordance with an embodiment of the present disclosure. FIG. 6 is described in conjunction with elements from FIGs. 1A, 1B, 2, 3, 4A-4C, and 5A-5F.
In one aspect, the present disclosure provides the method 600 for operating a device 106A as one of a plurality of edge devices 106A-N of a computer network 102, the computer network 102 comprising the plurality of edge devices 106A-N and one or more intermediary nodes 104A-N, the one or more intermediary nodes 104A-N being configured to communicatively couple the edge devices 106A-N to each other, wherein the method 600 comprises controlling, by the device 106A, a data rate associated with one or more tunnels that use one or more paths between the device 106A and a destination edge device among the plurality of edge devices 106A-N. With reference to FIG. 6, the method 600 is executed at the device 106A described in detail, for example, in FIGs. 1A, 2, and 3. The method 600 includes steps 602, 604, 606, and 608.
At a step 602, the method 600 comprises receiving, by the device 106A, a link state, a number of tunnels using each link, and/or aggregated utilities of other edge devices (agents). In an embodiment, the device 106A also receives a traffic demand based on local traffic monitoring by the data monitoring module 114 of the device 106A. In another embodiment, the device 106A also receives user entered data. The user entered data includes at least one of a tunnel priority, a path preference, and/or a desired maximum link utilization (MLU) between 0 and 1. In an embodiment, the user entered data may be global data or may be associated with each individual link. It is advantageous to determine the data rate based on the user entered data to customize the load balancing and rate control based on user preferences and priorities.
In an embodiment, the device 106A receives the aggregated utility information, the link states (e.g., generated by the intermediary nodes) and a traffic load associated with the one or more tunnels on the computer network 102 from the one or more intermediary nodes 104A-N. In an embodiment, the link states include a number of tunnels of the one or more tunnels that utilize a particular link. In an embodiment, an initial transmission of the link states further includes link capacity information associated with the particular link. In an embodiment, the link capacity information can be retrieved from a management system. In another embodiment, the link capacity is shared as part of the link states. In an embodiment, the device 106A smoothes the received and measured values using a moving average. In another embodiment, the link states include load information associated with network links, where a load of a particular link is induced by all tunnels of the one or more tunnels that utilize the particular link and background traffic.
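As a sketch of the smoothing mentioned above, a simple sliding-window moving average could be applied to the received and measured values (the window length is an assumption):

```python
from collections import deque

class MovingAverage:
    """Simple sliding-window average used to smooth received/measured link values."""
    def __init__(self, window=5):
        self.samples = deque(maxlen=window)

    def update(self, value):
        self.samples.append(value)
        return sum(self.samples) / len(self.samples)

smoother = MovingAverage(window=3)
for load in (100.0, 160.0, 130.0, 90.0):
    print(round(smoother.update(load), 1))  # 100.0, 130.0, 130.0, 126.7
```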
In an embodiment, the link states include a link load, a capacity, and the number of tunnels on the links; the parameters for the outgoing tunnels include a priority, preferences for the paths/technologies (i.e., the paths are provided) or service level agreement (SLA) requirements (which can be used to automatically tune the preferences based on measurements), and a desired maximum link utilization (MLU). The device 106A captures a number of tunnels on the links from the data plane and periodically propagates link states including, for example, a link load and/or a number of tunnels using the links. In an embodiment, the device 106A propagates the link states using a routing protocol such as, for example, open shortest path first (OSPF), intermediate system to intermediate system (IS-IS, also written ISIS), or border gateway protocol (BGP).

At a step 604, the method 600 further comprises updating one or more Lagrangian multipliers using the aggregated utilities, based on the link state and the number of tunnels using each link. In an embodiment, the device 106A computes a feasible rate allocation and a Lagrangian bound (or Lagrangian multipliers) based on the aggregated utilities. The Lagrangian bound includes an upper bound and/or a lower bound. The device 106A updates the Lagrangian multipliers using the upper bounds and/or the lower bounds and the link states. The update of the Lagrangian multipliers and the feasible rate allocation has been described in detail, for example, with reference to FIGs. 1B and 2 and is hence omitted here for the sake of brevity.
At a step 606, the method 600 further comprises iteratively computing a rate allocation based on the updated one or more Lagrangian multipliers, the user entered data and/or the aggregated utility information to achieve an optimal convergence. In an embodiment, the Lagrangian sub-problem is solved to compute the rate allocation. In another embodiment, a modified Lagrangian sub-problem is solved to compute the rate allocation. The device 106A maximizes a network utility associated with each of the one or more tunnels by computing the rate allocation iteratively. In an embodiment, a sum of (tunnel priority multiplied by a path preference that is multiplied by a data rate) for each of the one or more tunnels that uses one or more paths is determined by the device 106A so as to maximize a network utility associated with each of the one or more tunnels. It is advantageous to maximize the network utility associated with each of the one or more tunnels based on determining a sum of (tunnel priority multiplied by a path preference that is multiplied by a data rate) for each of the one or more tunnels that use one or more paths, so as to optimize the use of the available resources, maximize throughput, minimize response time, and avoid overloading any single resource in the network.
The aggregated utility information is transmitted by the device 106A to one or more other edge devices from among the plurality of edge devices (source edge devices) 106A-N. In an embodiment, the device 106A uses a multicast tree to transmit the aggregated utilities to the other edge devices 106A-N. The iterative computing of the rate allocation has been described in detail, for example, in FIGs. 1B and 2 and is hence omitted here for the sake of brevity. The iterative computation of the rate allocation maximizes network utility and controls load balancing (e.g., maximum link utilization) while preserving feasibility at each step. The rate allocation is computed using an algorithm based on a sub-gradient algorithm that converges to optimality and provides anytime feasibility (feasible bandwidth allocations at each iteration). In another embodiment, in order to accelerate convergence, a variant of the algorithm that can use extra link-state information, such as the number of tunnels sharing each link, is used to compute the rate allocation, which has been described in detail, for example, in FIGs. 1B and 2 and is hence omitted here for the sake of brevity. The iteration can be executed after a minimum amount of new data is received or after a maximum idle time.
The iterative computation of the rate allocation converges to an optimal solution and provides anytime feasibility (feasible bandwidth allocations at each iteration). Additionally, since only the source nodes (e.g., the plurality of edge devices 106A-N) decide the rate allocations for their tunnels iteratively, only the source nodes (e.g., the plurality of edge devices 106A-N) are aware of the utility functions of their tunnels, and no other external controllers are required. Moreover, the lightweight and optional participation of the intermediary nodes 104A-N, involving only measuring, and sharing as link states, the number of tunnels using the links, accelerates convergence.
At a step 608, the method 600 further comprises controlling, by the device 106A, a data rate associated with one or more tunnels that use one or more paths between the device 106A and a destination edge device from among the plurality of edge devices 106A-N, based on the rate allocation. In an embodiment, the data rate is controlled based on the user entered data that includes at least one of a tunnel priority, a path preference, and a desired maximum link utilization (MLU) between 0 and 1. The device 106A enforces the rate allocation for each outgoing tunnel from the device, thereby preserving feasibility. The device 106A enforces the rate allocation in the data plane. Whenever new information is available, the device 106A executes an iteration of the rate control algorithm knowing the utility functions of the outgoing tunnels (only), enforces the rate allocation over multiple paths for each outgoing tunnel (preserving feasibility), and communicates information such as the aggregated utilities for the tunnels managed by the device 106A to the other devices 106A-N in the computer network 102. The device 106A can use different flavors of fairness in the rate allocation (e.g., max-min, proportional, alpha-fairness, and the like). The steps 602 to 608 are repeated periodically to achieve convergence.
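A minimal sketch of the fairness flavors mentioned above, using the standard alpha-fair utility (alpha = 0 corresponds to maximizing throughput, alpha = 1 to proportional fairness, and large alpha approaches max-min fairness); the exact utility used per tunnel is configured locally at the source and may differ:

```python
import math

def alpha_fair_utility(rate, alpha):
    """Standard alpha-fair utility of a rate x > 0.

    alpha = 0 -> maximize throughput, alpha = 1 -> proportional fairness (log),
    alpha -> infinity -> max-min fairness.
    """
    if alpha == 1.0:
        return math.log(rate)
    return rate ** (1.0 - alpha) / (1.0 - alpha)

for alpha in (0.0, 1.0, 2.0):
    print(alpha, round(alpha_fair_utility(10.0, alpha), 3))
# 0.0 10.0, 1.0 2.303, 2.0 -0.1
```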
The method 600 of the disclosure provides an efficient technique of load balancing and rate control in IP networks (such as the computer network 102) that yields a minimal overhead (i.e., no or limited participation of core nodes). The method 600 of the present disclosure converges to an optimal solution and provides anytime feasibility (e.g., feasible bandwidth allocations at each iteration). The method 600 determines a rate allocation for the one or more tunnels iteratively. Since only the device (source node) 106A is aware of the utility function of the one or more tunnels associated with the device 106A, a centralized controller is not required. The precise knowledge of the number of tunnels sharing a link speeds up convergence. Since the load balancing and rate control decisions are taken locally at the device 106A operating as one of the edge devices 106A-N, the technique provides a scalable solution. Additionally, the method 600 involves the edge devices 106A-N collaborating through a small amount of information, thereby achieving a low overhead. The method 600 of the present disclosure also facilitates a convergence to the optimal solution with anytime feasibility. Additionally, the method 600 of the present disclosure can add new paths (for existing and new tunnels), can remove unused paths, can modify the path preferences and priorities of tunnels, and knows the utility function. Additionally, owing to the utility function being known only locally at the device (source node) 106A, the utility function can be tuned and/or learnt over time. Additionally, since only the device (source node) 106A performs the load balancing and rate control, other intermediary nodes do not need to be updated about changes in traffic demand. Moreover, there is no failure risk of a central node in the present technology. Furthermore, the information associated with the device 106A operating as an edge device is not shared on the computer network 102.
In accordance with an embodiment, the data rate associated with one or more tunnels is controlled based on information received from the one or more intermediary nodes 104A-N.
In accordance with an embodiment, the data rate is periodically revised based on a continuous improvement of solutions associated with inputs for determining a steady state of the data rate.
In accordance with an embodiment, the method 600 comprises transmitting aggregated utility information to the plurality of source edge devices (e.g., the plurality of edge devices 106A-N).
In accordance with an embodiment, the method 600 comprises enforcing the rate allocation over the one or more paths managed by the device 106A.
In accordance with an embodiment, the method 600 comprises receiving user entered data at each of the plurality of source edge devices 106A-N, wherein the data rate is further based on the user entered data.
In accordance with an embodiment, the information received via the one or more intermediary nodes 104A-N comprises aggregated utility information, link states generated by the intermediary nodes 104A-N and a traffic load associated with the one or more tunnels on the computer network 102. In accordance with an embodiment, the link states include load information associated with network links and wherein a load of a particular link is induced by all tunnels of the one or more tunnels that utilize the particular link and background traffic.
In accordance with an embodiment, the link states further include a number of tunnels of the one or more tunnels that utilize the particular link. An initial transmission of the link state further includes capacity information associated with the particular link. In accordance with an embodiment, the data rate is further based on user entered data that includes at least one of tunnel priority, a path preference, and a desired maximum link utilization (MLU) between 0 and 1. In accordance with an embodiment, the method 600 comprises receiving aggregated utility information from one or more source edge devices 106A-N, wherein the data rate is based on a computation of step size using a Polyak function and the step size is based on the received aggregated utility information.
In accordance with an embodiment, the data rate is based on a computation of a step size, where the step size is based on a number of iterations of a computation of the rate allocation. In accordance with an embodiment, the device 106A maximizes a network utility associated with each of the one or more tunnels. In accordance with an embodiment, the network utility associated with each of the one or more tunnels is maximized based on determining a sum of (tunnel priority multiplied by a path preference that is multiplied by a data rate) for each of the one or more tunnels that use one or more paths.
The various embodiments, operations, and variants disclosed for the device 106A of FIGs. 1A, 1B, 2, and 3 apply mutatis mutandis to the method 600. Various embodiments of the present technology can be implemented in various applications requiring distributed traffic control via the edge devices of a computer network in a scalable and efficient manner. For example, in applications with critical service level agreements (SLAs), each vEdge router continuously monitors path performance, adjusts forwarding, and requires a configurable probing interval. Additionally, several application-aware routing policies may require an application path to have a low latency (e.g., less than 150 milliseconds (ms)), a loss of less than 2%, and a jitter of less than 10 ms. The distributed load balancing and rate control at the edge devices 106A-N as disclosed in the method 600 and the device 106A of the present technology is applicable in such scenarios. The method 600 and the device 106A of the present technology are also applicable in a dynamic circuit network (DCN) to implement a distributed solution. Additionally, several internal solutions from Google® (e.g., bandwidth enforcer (BwE)), Microsoft (SWAN), and Facebook® are centralized, but there are use cases where no centralized coordination is available, and the method and device of the present technology can be used in such scenarios for distributed traffic engineering based on load balancing and rate control at one or more edge devices.
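As a purely hypothetical sketch of SLA-driven preference tuning (the thresholds are the example figures given above; the demotion rule itself is an assumption):

```python
def sla_compliant(measured, max_latency_ms=150.0, max_loss=0.02, max_jitter_ms=10.0):
    """Check a path measurement against the example SLA thresholds above."""
    latency, loss, jitter = measured
    return latency < max_latency_ms and loss < max_loss and jitter < max_jitter_ms

def tune_preferences(preferences, measurements):
    """Demote the preference of paths that violate the SLA (illustrative rule only)."""
    return {
        path: pref if sla_compliant(measurements[path]) else max(1, pref - 1)
        for path, pref in preferences.items()
    }

prefs = {"internet": 2, "mpls": 2}
meas = {"internet": (180.0, 0.01, 8.0), "mpls": (40.0, 0.0, 2.0)}  # (ms, loss ratio, ms)
print(tune_preferences(prefs, meas))  # {'internet': 1, 'mpls': 2}
```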
The embodiments described herein can include both hardware and software elements. The embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc. Furthermore, the embodiments herein can take the form of a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. The medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device) or a propagation medium. Examples of a computer-readable medium include a semiconductor or solid-state memory, magnetic tape, a removable computer diskette, a random-access memory (RAM), a read-only memory (ROM), a rigid magnetic disk and an optical disk. Current examples of optical disks include compact disk - read only memory (CD-ROM), compact disk - read/write (CD-R/W) and DVD.
A data processing system suitable for storing and/or executing program code will include at least one processor coupled directly or indirectly to memory elements through a system bus. The memory elements can include local memory employed during actual execution of the program code, bulk storage, a Subscriber Identity Module (SIM) card, and cache memories which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution. Input/output (I/O) devices (including but not limited to keyboards, displays, pointing devices, remote controls, cameras, microphones, temperature sensors, accelerometers, gyroscopes, etc.) can be coupled to the system either directly or through intervening I/O controllers. Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks. Modems, cable modems and Ethernet cards are just a few of the currently available types of network adapters.
The system, method, computer program product, and propagated signal described in this application may, of course, be embodied in hardware; e.g., within or coupled to a Central Processing Unit ("CPU"), microprocessor, microcontroller, System on Chip ("SOC"), or any other programmable device. Additionally, the system, method, computer program product, and propagated signal may be embodied in software (e.g., computer readable code, program code, instructions and/or data disposed in any form, such as source, object or machine language) disposed, for example, in a computer usable (e.g., readable) medium configured to store the software. Such software enables the function, fabrication, modeling, simulation, description and/or testing of the apparatus and processes described herein.
Such software can be disposed in any known computer usable medium including semiconductor, magnetic disk, optical disc (e.g., CD-ROM, DVD-ROM, and the like) and as a computer data signal embodied in a computer usable (e.g., readable) transmission medium (e.g., carrier wave or any other medium including digital, optical, or analog-based medium). As such, the software can be transmitted over communication networks including the Internet and intranets. A system, method, computer program product, and propagated signal embodied in software may be included in a semiconductor intellectual property core (e.g., embodied in HDL) and transformed to hardware in the production of integrated circuits. Additionally, a system, method, computer program product, and propagated signal as described herein may be embodied as a combination of hardware and software.
A "computer-readable medium" for purposes of embodiments of the present invention may be any medium that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, system or device. The computer readable medium can be, by way of example only but not by limitation, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, system, device, propagation medium, or computer memory.
A "processor" or "process" includes any human, hardware and/or software system, mechanism or component that processes data, signals or other information. A processor can include a system with a general-purpose central processing unit, multiple processing units, dedicated circuitry for achieving functionality, or other systems. Processing need not be limited to a geographic location, or have temporal limitations. For example, a processor can perform its functions in "real time," "offline," in a "batch mode," etc. Portions of processing can be performed at different times and at different locations, by different (or the same) processing systems. Modifications to embodiments of the present disclosure described in the foregoing are possible without departing from the scope of the present disclosure as defined by the accompanying claims. Expressions such as "including", "comprising", "incorporating", "have", "is" used to describe and claim the present disclosure are intended to be construed in a non-exclusive manner, namely allowing for items, components or elements not explicitly described also to be present. Reference to the singular is also to be construed to relate to the plural. The word "exemplary" is used herein to mean "serving as an example, instance or illustration". Any embodiment described as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments and/or to exclude the incorporation of features from other embodiments. The word "optionally" is used herein to mean "is provided in some embodiments and not provided in other embodiments". It is appreciated that certain features of the present disclosure, which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of the present disclosure, which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable combination or as suitable in any other described embodiment of the disclosure.

Claims

1. A device (106A) that is operable as one of a plurality of edge devices (106A-N) of a computer network (102), the computer network (102) comprising the plurality of edge devices (106A-N) and one or more intermediary nodes (104A-N), the one or more intermediary nodes (104A-N) being configured to communicatively couple the edge devices (106A-N) to each other, the device (106A) being configured to control one or more tunnels and corresponding one or more data rates that use one or more paths between the device (106A) and a destination edge device among the plurality of edge devices (106A-N).
2. The device (106A) of claim 1, wherein the device (106A) is configured to control the one or more data rates associated with one or more tunnels based on information received from the one or more intermediary nodes (104A-N).
3. The device (106A) of claim 1, wherein the device (106A) is configured to transmit aggregated utility information to one or more of the plurality of source edge devices (106A-N).
4. The device (106A) of claim 1, wherein the device (106A) is configured to repeatedly determine a rate allocation to achieve an optimal convergence.
5. The device (106A) of claim 1, wherein the device (106A) is configured to receive aggregated utility information from the one or more of the source edge devices (106A-N), wherein each rate allocation update is based on the received aggregated utility information.
6. The device (106A) of claim 5, configured to compute a feasible rate allocation and a Lagrangian bound based on the aggregated utility information.
7. The device (106A) of claim 2, wherein the information received via the one or more intermediary nodes (104A-N) comprises aggregated utility information, link states generated
by the intermediary nodes (104A-N) and a traffic load associated with one or more tunnels that originate from the intermediary node on the computer network (102).
8. The device (106A) of claim 7, wherein the link states include load information associated with network links and wherein a load of a particular link is induced by all paths associated with the one or more tunnels that utilize the particular link and background traffic.
9. The device (106A) of claim 8, wherein the link states further include a number of tunnels of the one or more tunnels that utilize the particular link.
10. The device (106A) of claim 8, wherein an initial transmission of the link state further includes capacity information associated with the particular link.
11. The device (106A) of claims 1 or 2, wherein the one or more data rates is further based on user entered data that includes at least one of tunnel priority, a path preference, and a desired maximum link utilization (MLU) between 0 and 1.
12. The device (106A) of claim 1, wherein the device (106A) is configured to maximize network utility associated with each of the one or more tunnels.
13. The device (106A) of claim 1, wherein the device (106A) is configured to maximize network utility associated with each of the one or more tunnels based on determining a sum of (tunnel priority multiplied by a path preference that is multiplied by a data rate) for each of the one or more tunnels that use one or more paths.
14. A method (600) for operating a device as one of a plurality of edge devices of a computer network, the computer network comprising the plurality of edge devices and one or more intermediary nodes, the one or more intermediary nodes being configured to communicatively couple the edge devices to each other,
wherein the method (600) comprises controlling, by the device, one or more tunnels and corresponding one or more data rates that use one or more paths between the device and a destination edge device among the plurality of edge devices.
15. The method (600) of claim 14, wherein the one or more data rates associated with one or more tunnels is controlled based on information received from the one or more intermediary nodes.
16. The method (600) of claim 14, wherein the one or more data rates is periodically revised based on a continuous improvement of solutions associated with inputs for determining a steady state of the one or more data rates.
17. The method (600) of claim 14, wherein the method (600) further comprises transmitting aggregated utility information to the plurality of source edge devices.
18. The method (600) of claim 14, wherein the method (600) further comprises enforcing a rate allocation over the one or more paths managed by the device.
19. The method (600) of claim 14, wherein the method (600) further comprises: receiving user entered data at each of the plurality of source edge devices, wherein the data rate is further based on the user entered data.
20. The method (600) of claim 14, wherein the information received via the one or more intermediary nodes comprises aggregated utility information, link states generated by the intermediary nodes and a traffic load associated with the one or more tunnels on the computer network.
21. The method (600) of claim 20, wherein the link states include load information associated with network links and wherein a load of a particular link is induced by all tunnels of the one or more tunnels that utilize the particular link and background traffic.
22. The method (600) of claim 20, wherein the link states further include a number of tunnels of the one or more tunnels that utilize the particular link.
23. The method (600) of claim 20, wherein an initial transmission of the link state further includes capacity information associated with the particular link.
24. The method (600) of claim 14, wherein the data rate is further based on user entered data that includes at least one of tunnel priority, a path preference, and a desired maximum link utilization (MLU) between 0 and 1.
25. The method (600) of claim 14, wherein the method (600) further comprises: receiving aggregated utility information from one or more source edge devices, wherein the data rate is based on a computation of step size using a Polyak function and the step size is based on the received aggregated utility information.
26. The method (600) of claim 14, wherein the data rate is based on a computation of step size, wherein the step size is based on a number of iterations of a computation of the rate allocation.
27. The method (600) of claim 14, wherein the device maximizes network utility associated with each of the one or more tunnels.
28. The method (600) of claim 14, wherein the device maximizes network utility associated with each of the one or more tunnels based on determining a sum of (tunnel priority multiplied by a path preference that is multiplied by a data rate) for each of the one or more tunnels that use one or more paths.
29. A computer program comprising executable instructions that when executed by a processor cause the processor to perform a method, wherein the method comprises: controlling, by a device, one or more tunnels and corresponding one or more data rates that use one or more paths between the device and a destination edge device among a plurality of edge devices, wherein the plurality of edge devices are associated with a computer network comprising the plurality of edge devices and one or more intermediary nodes, the one or more intermediary nodes being configured to communicatively couple the edge devices to each other.
30. The computer program of claim 29, wherein the device is configured to control the one or more tunnels and corresponding one or more data rates based on information received from the one or more intermediary nodes.
31. The computer program of claim 29, wherein the data rate is periodically revised based on a continuous improvement of solutions associated with inputs for determining a steady state of the one or more data rates.
32. The computer program of claim 29, further comprising: transmitting aggregated utility information to the plurality of edge devices.
33. The computer program of claim 29, further comprising enforcing a rate allocation over the one or more paths managed by the device.
34. The computer program of claim 29, further comprising: receiving user entered data at each of the plurality of source edge devices, wherein the data rate is further based on the user entered data.
35. The computer program of claim 30, wherein the information received via the one or more intermediary nodes comprises aggregated utility information, link states generated by the intermediary nodes and a traffic load associated with the one or more tunnels on the computer network.
36. The computer program of claim 35, wherein the link states include load information associated with network links and wherein a load of a particular link is induced by all tunnels of the one or more tunnels that utilize the particular link and background traffic.
37. The computer program of claim 35, wherein the link states further include a number of tunnels of the one or more tunnels that utilize the particular link.
38. The computer program of claim 35, wherein an initial transmission of the link state further includes capacity information associated with the particular link.
39. The computer program of claim 29, wherein data rate is further based on user entered data that includes at least one of tunnel priority, a path preference, and a desired maximum link utilization (MLU) between 0 and 1.
40. The computer program of claim 29, further comprising: receiving aggregated utility information from one or more source edge devices of the plurality of source edge devices, wherein the data rate is based on a computation of step size using a Polyak function and the step size is based on the received aggregated utility information.
41. The computer program of claim 29, wherein the data rate is based on a computation of step size, wherein the step size is based on a number of iterations of a computation of the rate allocation.
42. The computer program of claim 29, wherein the device maximizes network utility associated with each of the one or more tunnels.
43. The computer program of claim 29, wherein the device maximizes network utility associated with each of the one or more tunnels based on determining a sum, over each of the one or more tunnels that use the one or more paths, of the tunnel priority multiplied by the path preference and by the data rate.
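For illustration only, the following minimal Python sketch gives one possible representation of the link-state information of claims 35 to 38 and of the user-entered parameters of claim 39, together with a check against the desired maximum link utilization (MLU). All names (LinkState, UserConfig, max_link_utilization, within_mlu_target) are hypothetical; this is a sketch under those assumptions, not the claimed implementation.

from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class LinkState:
    link_id: str
    load: float                       # load induced by all tunnels using the link plus background traffic (claim 36)
    tunnel_count: int                 # number of tunnels currently using the link (claim 37)
    capacity: Optional[float] = None  # included only in the initial link-state transmission (claim 38)

@dataclass
class UserConfig:
    tunnel_priority: float
    path_preference: float
    desired_mlu: float                # desired maximum link utilization, between 0 and 1 (claim 39)

def max_link_utilization(states: Dict[str, LinkState],
                         capacities: Dict[str, float]) -> float:
    # MLU over the links whose capacity has already been learned.
    return max((s.load / capacities[l] for l, s in states.items() if l in capacities),
               default=0.0)

def within_mlu_target(states: Dict[str, LinkState],
                      capacities: Dict[str, float],
                      cfg: UserConfig) -> bool:
    # A source edge device could keep raising its tunnel rates only while the
    # observed MLU stays at or below the user-entered target.
    return max_link_utilization(states, capacities) <= cfg.desired_mlu

For example, with states = {"l1": LinkState("l1", load=40.0, tunnel_count=3)} and capacities = {"l1": 100.0}, max_link_utilization returns 0.4, so a desired_mlu of 0.8 would be respected.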
PCT/EP2020/077895 2020-10-06 2020-10-06 Distributed traffic engineering at edge devices in a computer network WO2022073583A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/EP2020/077895 WO2022073583A1 (en) 2020-10-06 2020-10-06 Distributed traffic engineering at edge devices in a computer network
EP20789044.3A EP4211884A1 (en) 2020-10-06 2020-10-06 Distributed traffic engineering at edge devices in a computer network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2020/077895 WO2022073583A1 (en) 2020-10-06 2020-10-06 Distributed traffic engineering at edge devices in a computer network

Publications (1)

Publication Number Publication Date
WO2022073583A1

Family

Family ID: 72811822

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2020/077895 WO2022073583A1 (en) 2020-10-06 2020-10-06 Distributed traffic engineering at edge devices in a computer network

Country Status (2)

Country Link
EP (1) EP4211884A1 (en)
WO (1) WO2022073583A1 (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7689695B2 (en) * 2006-06-28 2010-03-30 International Business Machines Corporation System and method for distributed utility optimization in a messaging infrastructure

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ALIZADEH MOHAMMAD ET AL: "CONGA", COMPUTER COMMUNICATION REVIEW, ACM, NEW YORK, NY, US, vol. 44, no. 4, 17 August 2014 (2014-08-17), pages 503 - 514, XP058493040, ISSN: 0146-4833, DOI: 10.1145/2740070.2626316 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115514649A (en) * 2022-08-24 2022-12-23 中国电信股份有限公司 Method and system for intelligent tunnel scheduling in enterprise SDWAN hub-spoke networking

Also Published As

Publication number Publication date
EP4211884A1 (en) 2023-07-19

Similar Documents

Publication Publication Date Title
Chen et al. RL-routing: An SDN routing algorithm based on deep reinforcement learning
Lin et al. QoS-aware adaptive routing in multi-layer hierarchical software defined networks: A reinforcement learning approach
JP6278492B2 (en) A framework for traffic engineering in software-defined networking
Chua et al. Cloud radio access networks (C-RAN) in mobile cloud computing systems
Porxas et al. QoS-aware virtualization-enabled routing in software-defined networks
US11539673B2 (en) Predictive secure access service edge
Liu et al. Fronthaul-aware software-defined wireless networks: Resource allocation and user scheduling
Bouacida et al. Practical and dynamic buffer sizing using LearnQueue
Bera et al. Mobility-aware flow-table implementation in software-defined IoT
Qadeer et al. Flow-level dynamic bandwidth allocation in SDN-enabled edge cloud using heuristic reinforcement learning
Gao et al. Freshness-aware age optimization for multipath TCP over software defined networks
US20230124947A1 (en) Load balancing application traffic with predictive goodput adaptation
Iosifidis et al. Distributed storage control algorithms for dynamic networks
Tariq et al. Toward experience-driven traffic management and orchestration in digital-twin-enabled 6G networks
Cai et al. Optimal cloud network control with strict latency constraints
Tang et al. Constructing a DRL decision making scheme for multi-path routing in All-IP access network
Suzuki et al. Multi-agent deep reinforcement learning for cooperative computing offloading and route optimization in multi cloud-edge networks
EP4211884A1 (en) Distributed traffic engineering at edge devices in a computer network
Babu et al. A medium-term disruption tolerant SDN for wireless TCP/IP networks
Pinyoanuntapong et al. Distributed multi-hop traffic engineering via stochastic policy gradient reinforcement learning
US20230318977A1 (en) Predictive application-aware load-balancing based on failure uncertainty
Nguyen et al. Accumulative-load aware routing in software-defined networks
Huang et al. The optimality of two prices: Maximizing revenue in a stochastic network
Rahouti et al. Latencysmasher: a software-defined networking-based framework for end-to-end latency optimization
Hertiana et al. Effective Router Assisted Congestion Control for SDN.

Legal Events

Date Code Title Description
ENP Entry into the national phase
Ref document number: 2020789044
Country of ref document: EP
Effective date: 20230413
NENP Non-entry into the national phase
Ref country code: DE