WO2012130500A1 - A method and system for mutual traffic management and accounting - Google Patents

A method and system for mutual traffic management and accounting

Info

Publication number
WO2012130500A1
WO2012130500A1 (PCT/EP2012/051464)
Authority
WO
WIPO (PCT)
Prior art keywords
switch
network traffic
controller
isp
traffic
Application number
PCT/EP2012/051464
Other languages
French (fr)
Inventor
Rolf Winter
Original Assignee
Nec Europe Ltd.
Application filed by Nec Europe Ltd. filed Critical Nec Europe Ltd.
Publication of WO2012130500A1 publication Critical patent/WO2012130500A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/40 Arrangements for maintenance, administration or management of data switching networks using virtualisation of network functions or resources, e.g. SDN or NFV entities
    • H04L 41/50 Network service management, e.g. ensuring proper service fulfilment according to agreements
    • H04L 41/5003 Managing SLA; Interaction between SLA and QoS
    • H04L 41/5019 Ensuring fulfilment of SLA
    • H04L 43/00 Arrangements for monitoring or testing data switching networks
    • H04L 43/08 Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L 43/0876 Network utilisation, e.g. volume of load or congestion level
    • H04L 43/16 Threshold monitoring
    • H04L 43/20 Arrangements for monitoring or testing data switching networks, the monitoring system or the monitored elements being virtualised, abstracted or software-defined entities, e.g. SDN or NFV
    • H04L 45/00 Routing or path finding of packets in data switching networks
    • H04L 45/38 Flow based routing
    • H04L 45/54 Organization of routing tables
    • H04L 45/70 Routing based on monitoring results

Definitions

  • Embodiments of the invention relate to managing network traffic between Internet Service Providers on a switched network.
  • the routers that are used for sending network traffic between ISPs are generally good at finding the lowest cost path according to a pre-set algorithm that may determine the "lowest cost" or "best" path in terms, e.g., of path length or in economic terms.
  • the manner of finding the lowest cost path may, for example, be specified in the "local preferences" (local_pref) settings in a Border Gateway Protocol (BGP) router. Based on these settings, a best path algorithm may be selected to determine a "best path" to install in the IP routing table and to use for traffic forwarding.
  • Current routers are typically oblivious to the underlying cost or charging model, and are not good at doing traffic engineering on small time scales (e.g., below days or hours).
  • the "95th percentile" charging model one common charging model between ISPs is known as the "95th percentile" charging model.
  • the traffic volume is measured in time intervals; the top 5% of the volume measurements are discarded, and costs are determined based on the next highest volume measurement.
  • the top 5 time intervals in terms of traffic volume will be taken out of those 100, and the ISP that generated the traffic will be charged based on the remaining largest volume interval.
  • the top 20 time intervals all average at about 5 Gb/s of traffic, and there are 4 time intervals left.
  • Another common form of traffic exchange arrangement between ISPs is a "peering agreement", based on network volume ratios. For example, two ISPs may agree that they exchange traffic which should not exceed the other ISP's traffic volume by a factor of two, or some other predetermined factor. In today's environment this is difficult to achieve, since the algorithms that route traffic between the ISPs may be unable to precisely match such a predetermined ratio. In some instances, failure to adhere to the ratio can cause a peering agreement to be changed into a customer-provider relationship, which may be a poor outcome for at least one of the ISPs. Further problems with such peering agreements can arise around which side (i.e., which ISP) is actually measuring the rates and volumes on which charges between the ISPs are based.
  • a method of managing network traffic including setting one or more constraints on a flow of network traffic on a controller that determines the flow of network traffic through a switch.
  • the network traffic through the switch is monitored to produce network traffic data.
  • this method is done in real-time or near-real-time.
  • the switch is an OpenFlow switch.
  • network traffic is directed through the switch by one or more routers, each of which is associated with an Internet Service Provider (ISP).
  • ISPs set the constraints on the controller, with each ISP setting the constraints related to network traffic sent through the switch from or to that ISP.
  • a third party such as an Internet Exchange Point (IXP) provider, sets the constraints on behalf of the ISPs.
  • routing protocol messages are monitored and sent to the controller.
  • the technical constraints may be defined in a contract between one or more ISPs.
  • a constraint might specify a ratio of traffic sent from a router associated with a first ISP to traffic sent from a router associated with a second ISP.
  • the monitored network traffic data may be used for other purposes, such as traffic engineering or accounting.
  • monitoring network traffic through the switch includes using a counter associated with a flow table entry on the switch to monitor network traffic sent through the switch to one or more network addresses associated with the flow table entry.
  • Rerouting network traffic through the switch may include adding or rewriting a flow table entry to redirect at least a portion of the traffic.
  • rerouting network traffic through the switch may further include rewriting data link layer headers of network packets that are being redirected.
  • Some embodiments provide a system of managing network traffic, including a switch through which network traffic is directed by one or more routers, each of which is associated with an ISP, and a controller connected to the switch, the controller configured to impose one or more constraints on a flow of network traffic.
  • the switch is configured to monitor network traffic through the switch to produce network traffic data and to provide the network traffic data to the controller.
  • the controller is configured to determine whether a constraint will be violated based on the network traffic data, and to instruct the switch to reroute network traffic through the switch to avoid violating the constraint, if it was determined that the constraint would be violated.
  • the system is configured to reroute network traffic to avoid violating the constraint in real-time or near-real-time.
  • the controller is internal to the switch.
  • the switch in the system is an OpenFlow switch.
  • the system further includes a controller component in each of the one or more routers.
  • the controller component is configured to provide the controller with information on router protocol messages.
  • FIG. 1 shows an overview of the topology of an example switched network in accordance with an embodiment of the invention
  • FIG. 2 shows an example using a portion of the network of FIG. 1, and an example switch flow table configuration
  • FIG. 3 shows a high-level flow chart of a method of network traffic management in accordance with an embodiment of the invention.
  • Embodiments of the present invention provide a system and methods that complement routing protocols, which are good at finding paths, with control of the flow of traffic between routers based on additional technical constraints.
  • This additional traffic management function is provided by a system outside of the routers.
  • such an intermediate node or switch could implement a strategy in which the traffic flow between ISPs depends on mutually agreed technical constraints, as well as on other factors, such as the time of day.
  • attempting to implement such a system within the routers would make complex routers even more complex.
  • Embodiments of the present invention operate in an environment in which a set of routers from different ISPs are interconnected through a switched network (e.g. at an Internet Exchange).
  • FIG. 1 shows an example topology of such a switched network 100, in which ISP A, ISP B, ISP C, and ISP D, using routers 102a, 102b, 102c, and 102d, respectively, are connected to each other via a central switch 110.
  • Each of the routers 102a-d is connected to the switch 110 through a port.
  • the connections and flows of traffic between these ISPs are subject to technical constraints imposed either by the networks and devices being used, or by external considerations, such as the preferences of or contractual agreements between the ISPs that are interconnected. These technical constraints may be, e.g., that a certain traffic ratio (i.e. the ratio of traffic sent to and received from a network) should not be violated, or that one path is preferable to others. Such constraints can vary between any two ISPs that are connected. Thus, for example, the constraints between ISP A and ISP B may be different than the constraints between ISP A and ISP C. It will be understood that the source of these technical constraints - whether from the devices being used, the preferences of the ISPs, or (as is more typical) from contractual agreements between the ISPs - is not relevant to the method and system of the present invention.
  • management of the technical constraints is implemented on a controller 112, which may be either internal to the switch 110 or external to the switch 110.
  • the controller 112 alters the forwarding behavior of the switch 110 in order to manage traffic between the ISPs according to the technical constraints.
  • the main function of the switch 110 and controller 112 is network traffic management. Traffic management generally means changing the way that network traffic flows based primarily on the technical constraints.
  • the switch 110 should be capable of collecting data about the traffic that flows through it (e.g., the amount of traffic in bits per time period), and providing this information to the controller 112.
  • the switch 110 shown in FIG. 1 may include a single switch or multiple switches, which are either highly configurable or programmable.
  • One example of such a configurable or programmable switch is an OpenFlow switch, such as the NEC IP8800, produced by NEC Corporation, of Tokyo, Japan, but it will be understood that other programmable or configurable switches could be used.
  • the controller components 114a-114d inform the controller 112 of routing protocol messages, and permit the controller 112 to provide instructions directly to the routers.
  • the switch 110 may forward routing protocol messages to the controller 112, as well as to the router to which they are directed, permitting the controller 112 to "listen in" on the routing protocol information that is sent between the routers, and to inject commands to the routers into the stream of routing protocol messages that normally pass between the routers. This permits the controller 112 to react to routing events without needing separate components, such as controller components 114a-114d in the routers 102a-102d.
  • This ability of the controller 112 to monitor and control the routers 102a-102d can be used, for example, when ISP A is withdrawing an IP prefix, which would normally be handled by the routers 102a-102d using routing protocol messages. In this situation, the switch 110 should stop sending traffic for that prefix towards ISP A, so it is useful if the controller 112, which controls the switch 110, also has access to information about the withdrawn IP prefix.
  • This router-based controller component 114a-114d can also initialize the IP prefixes, to alleviate the need to manually configure the IP prefixes on the routers. In order to scale, the controller 112 can aggregate prefixes, if possible.
  • protective mechanisms against flapping (i.e., the frequent change/oscillation of switch table entries) could be implemented using controller 112 and router controller components 114a-114d.
  • a variety of mechanisms for detecting and handling flapping are known in the art, and could be used in accordance with the methods and systems of various embodiments of the present invention. Providing such mechanisms to prevent flapping may be particularly useful on small time-scales, to avoid having an impact on the performance of TCP connections through, e.g., reordering.
  • Embodiments of the methods and systems of the present invention use the information that can be collected and the centralized control that can be implemented using a switch 110 and controller 112 between the routers 102a-102d to attempt to optimize the way in which traffic flows between ISPs based on the technical constraints and the measurement of data about the traffic flows. In accordance with various embodiments of the invention, this is done in real-time or near-real-time. For example, from the point of view of ISP B in the network shown in FIG. 1, ISP D is a for-cost transit provider and ISP A is a peering partner (i.e., ISP B and ISP A exchange traffic on a cost-free basis).
  • the peering agreement between ISP A and ISP B specifies that ISP B is not allowed to send more than twice the amount of data traffic to ISP A that ISP A sends to ISP B (i.e., a fairly typical 2:1 network volume ratio in a peering agreement).
  • the "optimal" traffic network strategy for ISP B is to send exactly twice the volume that is sent by ISP A through ISP A (i.e., fully utilizing the ratio allowed in the peering agreement - ISP A will not tolerate anything above this ratio).
  • Excess traffic can be forwarded to ISP D, since ISP D is offering, e.g., a default route to the global internet (i.e., it will forward traffic to all destinations on the Internet), but at a cost to ISP B. Routing protocols that find "optimal" routes without taking the technical constraints into consideration, and use them without regard to the amount of traffic, may have difficulties in this situation. They risk either violating the constraints (e.g., the 2:1 ratio of the peering agreement), or underutilizing the preferred link. Embodiments of the methods and systems in accordance with the present invention complement these routing protocols, to optimize the traffic flow in a way that accounts for the constraints.
  • the ISPs involved configure the controller 112 according to the constraints discussed above.
  • the exact configuration method can vary, depending on the circumstances.
  • the constraints could be formalized by the ISPs involved and sent to a third party, such as an Internet Exchange Point (IXP) provider, that configures the switch 110 and controller 112.
  • Another possibility is that the ISPs involved have login accounts for an interface to the controller 112, and the ISPs configure these constraints individually. In such cases, proper access control and rights management should be implemented so that, e.g., a single ISP is not able to influence arbitrary switch entries, which could harm other ISPs.
  • One way to achieve this is to restrict the switch port numbers that each ISP is able to affect in the switch flow table entries to only those switch port numbers that are associated with that ISP.
  • the controller 112 could be instructed to configure the IP ranges that the ISPs are able to handle (i.e. forward traffic to) on the switch, and the traffic ratio.
  • Each of ISP A and ISP B may configure the controller to limit the traffic it sends to the other to no more than twice the traffic it receives from the other, and to add the IP prefixes that belong to its customers.
  • the controller 112 installs these prefixes into the switch 110 and, in operation, will either regularly poll the switch 110 for updated traffic data or the switch 110 will push this data regularly to the controller 112. Based on this data, the controller 112 will evaluate whether more or less data should be exchanged between ISP A and ISP B.
  • the controller 112 adds a new entry into the switch 110 that will send a fraction of the data from ISP B towards ISP D.
  • the exact fraction will depend on the traffic data that is received from the switch 110.
  • the controller 112 could start sending all User Datagram Protocol (UDP) traffic from ISP B towards ISP D, in order to decrease the volume of traffic being sent from ISP B to ISP A.
  • traffic could also be split based on other criteria, such as traffic directed towards a certain IP range (i.e., an IP prefix) or particular port numbers.
  • the switch 110 could rewrite layer-2 (data link layer) headers in network packets that pass through the switch 110. This would be done in order to change the destination Media Access Control (MAC) address, so that the new destination router would not simply drop the packets that are being redirected.
  • a switch 110 in accordance with some embodiments of the invention may have multiple counters for each flow table entry in the switch, to allow statistics on each flow table entry to be split. For example, for flow table entries using a /16 IP prefix (i.e., a fixed value for the first 16 of the 32 bits that make up an IPv4 address - permitting 16 free bits, or 65536 addresses), the switch 110 could provide counters for each /20 prefix (i.e., 20 fixed bits - or 16 counters per flow table entry, each of which collects statistics on traffic to 4096 addresses), to help the switch 110 and controller 112 to make a better decision on how to split traffic.
  • statistics could be collected on transport protocols, such as Transmission Control Protocol (TCP) and User Datagram Protocol (UDP), or on upper layer protocols such as Hypertext Transfer Protocol (HTTP).
  • the switch 110 would communicate the collected statistics at regular intervals to the controller 112. Based on such counters or other statistics collected by the switch 110, the controller 112 could make better traffic management decisions.
  • the controller 112 could preemptively add new flow table entries to monitor traffic in finer-grained quantities. For example, the controller 112 could disaggregate one flow table entry into multiple entries, all of which have the same forwarding behavior, and once traffic needs to be redirected, it will pick one or more of the more specific entries (e.g., with longer, more specific prefixes) to move traffic towards another ISP.
  • the ability of the controller 112 to optimally or near-optimally fit the traffic flows through the switch 110 to the constraints depends, to some degree, on the granularity of the traffic information that is available to the controller 112.
  • the controller 112 might need to resort to some trial and error in redirecting traffic in order to manage the traffic in a manner that fits optimally or near-optimally with the technical constraints. This trial and error may cause some delay.
  • the controller 112 could start to split traffic earlier compared to the situation in which better or finer-grained information is available. The trade-off between these options is time to converge to a desirable outcome in terms of how traffic flows versus the space requirement on the switch (i.e. switching table size).
  • the controller 112 could be used to provide data about the way traffic flows to the respective ISPs for various other purposes, such as accounting. For example, if the 95th percentile model, as described above, is used for accounting purposes, the controller could provide data on usage during time intervals of a charging period to the ISPs involved. Additional constraints, such as the time of day, could also be used to split traffic (e.g., at night time there might not be any ratio constraints).
  • FIG. 2 shows an example of how the controller 112 could react in operation. The configuration of the network 100 in FIG. 2 is the same as in FIG. 1.
  • the flow table 202 depicts the situation where all traffic towards the IP address 192.0.2.0/24 (i.e., any address having the same first 24 bits as this address - a total of 256 addresses) and 203.0.113.0/24 (again, 256 addresses having the same first 24 bits) is sent from ISP B to ISP A (i.e., from port 1 to port 3 of the switch 110). This is represented by entries 204 and 206 in the flow table 202.
  • In order to redirect some of this traffic towards ISP D (i.e., port 2 of the switch 110; see entry 208 in the flow table 202), the controller 112 could add a more specific prefix into the switch so that all traffic towards that more specific prefix would flow to ISP D.
  • This is shown as entry 210 in the flow table 202, which specifies that traffic to the destination address 203.0.113.0/26 (i.e., all 64 addresses that share the first 26 bits of this address) should be sent from ISP B (on port 1) to ISP D (on port 2).
  • the entry 210 specifies the same address as the entry 206, but is more specific (i.e., finer grained), in that only traffic to the 64 addresses which share the first 26-bit prefix with the specified address will be forwarded to ISP D. Traffic to the other 192 addresses specified in the table entry 206 will continue to be sent to ISP A.
  • FIG. 3 shows a high level flowchart of a method in accordance with embodiments of the present invention.
  • the method 300 includes a step 302 of setting one or more constraints, a step 304 of monitoring network traffic through the switch, a step 306 of determining whether a constraint will be violated, and a step 308 of rerouting network traffic through the switch to avoid violating the constraint, if it was determined that the constraint would be violated.
  • the switch monitors the network traffic through the switch, to produce network traffic data. These network traffic data may then be provided to the controller. As noted above, the network traffic data may be monitored, for example, by using counters associated with flow table entries in the switch. In some embodiments, each flow table entry may be associated with multiple counters, permitting traffic to be monitored at a finer granularity. When this capability is not available, additional flow table entries may be used to adjust the granularity at which network traffic is monitored.
  • the controller uses the network traffic data provided by the switch to determine whether one of the constraints will be violated. This determination can be made, for example, by checking aggregated network data obtained from the switch against the set of constraints that have been set in the controller. For example, if there is a constraint that ISP B may not send more than twice the data volume to ISP A that it has received from ISP A in a given time period, the controller can examine the aggregate network data on traffic flowing from ISP B to ISP A, and the data on traffic flowing from ISP A to ISP B during the time period, and determine if the 2:1 ratio is close to being reached.
  • step 308 network traffic is rerouted through the switch to avoid violating a constraint, if possible.
  • This step is typically taken by the controller if the controller has determined that a constraint is close to being reached, or has already been reached. In this case, the controller will determine how traffic should be rerouted, and send commands to the switch to cause the switch to reroute traffic in the manner determined. For example, these commands may cause the switch to rewrite or add entries to a switch flow table to redirect traffic. It also may be necessary, as described above, for the switch to take further actions to reroute traffic, such as rewriting destination information in the data link layer headers of network packets.
  • Determining how to reroute traffic can be done in a variety of ways. For example, a rule-based approach could be used, in which when a constraint is close to being reached or has been reached, a rule is triggered to specify the way in which traffic should be rerouted. This has the advantage of being simple and fast, but may require several iterations to ensure that all constraints are met, since triggering a rule to prevent one constraint from being violated could lead to the violation of another constraint. Alternatively, well-known constraint solving techniques could be applied to attempt to find a solution that would satisfy all constraints, if such a solution exists.
  • the entire process detailed above occurs in real-time or in near-real-time, so that decisions to reroute traffic can be made as early as possible. This may help to avoid violating constraints, by detecting such violations either before they occur, or as they are occurring, and immediately taking steps on the switch to avoid violating the constraints over any prolonged period of time.
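The longest-prefix-match behavior underlying the FIG. 2 example can be sketched as follows. This is a minimal Python illustration, not an OpenFlow API; the FlowTable class and its method names are hypothetical, and the prefixes and port numbers follow entries 204, 206, and 210 of the flow table 202:

```python
import ipaddress

class FlowTable:
    """Toy destination-prefix flow table with longest-prefix match,
    mimicking the redirect of FIG. 2 (illustrative, not an OpenFlow API)."""

    def __init__(self):
        self.entries = []  # list of (network, out_port)

    def add(self, prefix, out_port):
        self.entries.append((ipaddress.ip_network(prefix), out_port))

    def lookup(self, dst_ip):
        addr = ipaddress.ip_address(dst_ip)
        matches = [(net, port) for net, port in self.entries if addr in net]
        # The most specific (longest) matching prefix wins, as in the patent's
        # example where the /26 entry overrides the /24 entry.
        return max(matches, key=lambda e: e[0].prefixlen)[1]

table = FlowTable()
table.add("192.0.2.0/24", 3)    # entry 204: ISP B (port 1) -> ISP A (port 3)
table.add("203.0.113.0/24", 3)  # entry 206: ISP B -> ISP A
table.add("203.0.113.0/26", 2)  # entry 210: redirect 64 addresses to ISP D

print(table.lookup("203.0.113.10"))   # -> 2 (covered by the more specific /26)
print(table.lookup("203.0.113.200"))  # -> 3 (only the /24 matches)
```

Adding the single /26 entry redirects 64 of the 256 addresses to ISP D while the remaining 192 continue to flow to ISP A, exactly the split described for entries 206 and 210.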

Abstract

A method of managing network traffic is provided, including setting one or more constraints on a flow of network traffic on a controller that determines the flow of network traffic through a switch. The network traffic through the switch is monitored to produce network traffic data. The network traffic data are used to determine whether a constraint will be violated. If it was determined that a constraint would be violated, then network traffic is rerouted to avoid violating the constraint. A system of managing network traffic is also described. The system includes a switch through which network traffic is directed by one or more routers, each of which is associated with an ISP, and a controller connected to the switch, the controller configured to impose one or more constraints on a flow of network traffic. The switch is configured to monitor network traffic through the switch to produce network traffic data and to provide the network traffic data to the controller. The controller is configured to determine whether a constraint will be violated based on the network traffic data, and to instruct the switch to reroute network traffic through the switch to avoid violating the constraint, if it was determined that the constraint would be violated. The method and system may operate in real-time or near-real-time.

Description

A METHOD AND SYSTEM FOR MUTUAL TRAFFIC MANAGEMENT
AND ACCOUNTING
FIELD OF THE INVENTION
[0001] Embodiments of the invention relate to managing network traffic between Internet Service Providers on a switched network.
BACKGROUND
[0002] At present, network traffic management between Internet Service Providers (ISPs) can be difficult and somewhat imprecise. The routers that are used for sending network traffic between ISPs are generally good at finding the lowest cost path according to a pre-set algorithm that may determine the "lowest cost" or "best" path in terms, e.g., of path length or in economic terms. The manner of finding the lowest cost path may, for example, be specified in the "local preferences" (local_pref) settings in a Border Gateway Protocol (BGP) router. Based on these settings, a best path algorithm may be selected to determine a "best path" to install in the IP routing table and to use for traffic forwarding. Unfortunately, current routers are typically oblivious to the underlying cost or charging model, and are not good at doing traffic engineering on small time scales (e.g., below days or hours).
[0003] For example, one common charging model between ISPs is known as the "95th percentile" charging model. Under this model, the traffic volume is measured in time intervals, and the top 5% of the volume measurements will be discarded, and costs will be determined based on the next highest volume measurement. Thus, if the traffic volume is measured for 100 time intervals, the top 5 time intervals in terms of traffic volume will be taken out of those 100, and the ISP that generated the traffic will be charged based on the remaining largest volume interval. Now consider an example where the top 20 time intervals all average at about 5 Gb/s of traffic, and there are 4 time intervals left. Under the 95th percentile model, it appears that for that charging period, charges will be based on the 5 Gb/s traffic volume, no matter what the volume is in the remaining time intervals. The sending ISP could now send arbitrary amounts of data and it will not be charged for this increased volume. For example, even if in the remaining time intervals the ISP is using the link at an average rate of 10 Gb/s, it will only be charged for 5 Gb/s. Of course, in practice, doing this quickly and safely today is not easy.
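The 95th-percentile computation just described can be illustrated with a short sketch: sort the per-interval measurements, discard the top 5%, and bill at the next highest measurement (the function name and sample data are hypothetical):

```python
def percentile_95_billable(volumes_gbps):
    """Billable rate under the 95th-percentile model: the top 5% of the
    per-interval volume measurements are discarded, and the charge is based
    on the next highest remaining measurement."""
    ordered = sorted(volumes_gbps, reverse=True)
    discard = int(len(ordered) * 0.05)  # top 5% of intervals are free
    return ordered[discard]

# 100 intervals: the five 10 Gb/s bursts are discarded, so billing falls on
# the highest remaining measurement (5 Gb/s), whatever the bursts carried.
samples = [10.0] * 5 + [5.0] * 20 + [1.0] * 75
print(percentile_95_billable(samples))  # -> 5.0
```

This also shows the gaming opportunity the paragraph describes: once the billable rate is fixed by the 95th-percentile interval, raising the volume in the discarded top intervals does not change the charge.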
[0004] Another common form of traffic exchange arrangement between ISPs is a "peering agreement", based on network volume ratios. For example, two ISPs may agree that they exchange traffic which should not exceed the other ISP's traffic volume by a factor of two, or some other predetermined factor. In today's environment this is difficult to achieve, since the algorithms that route traffic between the ISPs may be unable to precisely match such a predetermined ratio. In some instances, failure to adhere to the ratio can cause a peering agreement to be changed into a customer-provider relationship, which may be a poor outcome for at least one of the ISPs. Further problems with such peering agreements can arise around which side (i.e., which ISP) is actually measuring the rates and volumes on which charges between the ISPs are based.
SUMMARY
[0005] Based on the above, it is an object of embodiments of the invention to provide a method and system for traffic management that can address these problems.
[0006] This is achieved in accordance with embodiments of the invention by providing a method of managing network traffic, including setting one or more constraints on a flow of network traffic on a controller that determines the flow of network traffic through a switch. The network traffic through the switch is monitored to produce network traffic data. Next, it is determined (typically on the controller) whether a constraint will be violated, based on the network traffic data. If it was determined that a constraint would be violated, then network traffic is rerouted to avoid violating the constraint. In some embodiments, this method is performed in real-time or near-real-time. In some embodiments, the switch is an OpenFlow switch. [0007] In various embodiments, network traffic is directed through the switch by one or more routers, each of which is associated with an Internet Service Provider (ISP). In some embodiments, the ISPs set the constraints on the controller, with each ISP setting the constraints related to network traffic sent through the switch from or to that ISP. In some embodiments, a third party, such as an Internet Exchange Point (IXP) provider, sets the constraints on behalf of the ISPs. In some embodiments, routing protocol messages are monitored and sent to the controller.
[0008] In some embodiments, the technical constraints may be defined in a contract between one or more ISPs. For example, a constraint might specify a ratio of traffic sent from a router associated with a first ISP to traffic sent from a router associated with a second ISP. In some embodiments, the monitored network traffic data may be used for other purposes, such as traffic engineering or accounting.
[0009] In some embodiments, monitoring network traffic through the switch includes using a counter associated with a flow table entry on the switch to monitor network traffic sent through the switch to one or more network addresses associated with the flow table entry. Rerouting network traffic through the switch may include adding or rewriting a flow table entry to redirect at least a portion of the traffic. In some embodiments, rerouting network traffic through the switch may further include rewriting data link layer headers of network packets that are being redirected.
[0010] Some embodiments provide a system of managing network traffic, including a switch through which network traffic is directed by one or more routers, each of which is associated with an ISP, and a controller connected to the switch, the controller configured to impose one or more constraints on a flow of network traffic. The switch is configured to monitor network traffic through the switch to produce network traffic data and to provide the network traffic data to the controller. The controller is configured to determine whether a constraint will be violated based on the network traffic data, and to instruct the switch to reroute network traffic through the switch to avoid violating the constraint, if it was determined that the constraint would be violated. In some embodiments, the system is configured to reroute network traffic to avoid violating the constraint in real-time or near-real-time. [0011] In some embodiments, the controller is internal to the switch. In some embodiments, the switch in the system is an OpenFlow switch.
[0012] In some embodiments, the system further includes a controller component in each of the one or more routers. The controller component is configured to provide the controller with information on router protocol messages.
BRIEF DESCRIPTION OF THE DRAWINGS
[0013] In the drawings, like reference characters generally refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead generally being placed upon illustrating the principles of the invention. In the following description, various embodiments of the invention are described with reference to the following drawings, in which:
[0014] FIG. 1 shows an overview of the topology of an example switched network in accordance with an embodiment of the invention;
[0015] FIG. 2 shows an example using a portion of the network of FIG. 1, and an example switch flow table configuration; and
[0016] FIG. 3 shows a high-level flow chart of a method of network traffic management in accordance with an embodiment of the invention.
DESCRIPTION
[0017] Embodiments of the present invention provide a system and methods that complement routing protocols, which are good at finding paths, with control of the flow of traffic between routers based on additional technical constraints. This additional traffic management function is provided by a system outside of the routers. This has several advantages. First, since the routing system itself remains substantially unmodified, "standard" commercial routers can be used. Additionally, using an intermediate node, such as a switch and controller, that handles traffic management according to technical constraints permits the intermediate node to be mutually configured on the basis of agreed technical constraints. Further, such an intermediate switch and controller can monitor traffic between ISPs, and can use this information - which is greater than the information that would be collected by any individual router in the system - to implement complex traffic management strategies. For example, such an intermediate node or switch could implement a strategy in which the traffic flow between ISPs depends on mutually agreed technical constraints, as well as on other factors, such as the time of day. Finally, as a practical matter, attempting to implement such a system within the routers would make complex routers even more complex.
[0018] Embodiments of the present invention operate in an environment in which a set of routers from different ISPs are interconnected through a switched network (e.g. at an Internet Exchange). This is not an uncommon scenario, as illustrated in Pascal Merindol, Benoit Donnet, Jean-Jacques Pansiot, Olivier Bonaventure, "On the Impact of Layer-2 on Node Degree Distribution", IMC ' 10 Proceedings of the 10th Annual Conference on Internet Measurement, November 1-3, 2010, Melbourne, Australia.
[0019] FIG. 1 shows an example topology of such a switched network 100, in which ISP A, ISP B, ISP C, and ISP D, using routers 102a, 102b, 102c, and 102d, respectively, are connected to each other via a central switch 110. Each of the routers 102a-d is connected to the switch 110 through a port. In the examples discussed below, router 102a (for ISP A) is connected to port 3 on the switch 110, router 102b (for ISP B) is connected to port 1 on the switch 110, router 102c (for ISP C) is connected to port 4 of the switch 110, and router 102d (for ISP D) is connected to port 2 of the switch 110.
[0020] The connections and flows of traffic between these ISPs are subject to technical constraints imposed either by the networks and devices being used, or by external considerations, such as the preferences of or contractual agreements between the ISPs that are interconnected. These technical constraints may be, e.g., that a certain traffic ratio (i.e. the ratio of traffic sent to and received from a network) should not be violated, or that one path is preferable to others. Such constraints can vary between any two ISPs that are connected. Thus, for example, the constraints between ISP A and ISP B may be different than the constraints between ISP A and ISP C. It will be understood that the source of these technical constraints - whether from the devices being used, the preferences of the ISPs, or (as is more typical) from contractual agreements between the ISPs - is not relevant to the method and system of the present invention.
[0021] In accordance with various embodiments of the invention, management of the technical constraints is implemented on a controller 112, which may be either internal to the switch 110 or external to the switch 110. The controller 112 alters the forwarding behavior of the switch 110 in order to manage traffic between the ISPs according to the technical constraints. The main function of the switch 110 and controller 112 is network traffic management. Traffic management generally means changing the way that network traffic flows based primarily on the technical constraints. To achieve this, the switch 110 should be capable of collecting data about the traffic that flows through it (e.g., the amount of traffic in bits per time period), and providing this information to the controller 112. The switch 110 shown in FIG. 1 may include a single switch or multiple switches, which are either highly configurable or programmable. One example of such a configurable or programmable switch is an OpenFlow switch, such as the NEC IP8800, produced by NEC Corporation, of Tokyo, Japan, but it will be understood that other programmable or configurable switches could be used.
[0022] Additionally, in order to be able to react to routing events, small controller components 114a, 114b, 114c, and 114d inside of the routers 102a, 102b, 102c, and 102d, respectively, interface with the controller 112. The controller components 114a-114d inform the controller 112 of routing protocol messages, and permit the controller 112 to provide instructions directly to the routers. Alternatively, in some embodiments (not shown), the switch 110 may forward routing protocol messages to the controller 112, as well as to the router to which they are directed, permitting the controller 112 to "listen in" on the routing protocol information that is sent between the routers, and to inject commands to the routers into the stream of routing protocol messages that normally pass between the routers. This permits the controller 112 to react to routing events without needing separate components, such as controller components 114a-114d in the routers 102a-102d.
[0023] This ability of the controller 112 to monitor and control the routers 102a-102d can be used, for example, when ISP A is withdrawing an IP prefix, which would normally be handled by the routers 102a-102d using routing protocol messages. In this situation, the switch 110 should stop sending traffic for that prefix towards ISP A, so it is useful if the controller 112, which controls the switch 110, also has access to information about the withdrawn IP prefix. These router-based controller components 114a-114d can also initialize the IP prefixes, to alleviate the need to manually configure the IP prefixes on the routers. In order to scale, the controller 112 can aggregate prefixes, if possible. Also, using the controller 112 and router controller components 114a-114d, protective mechanisms against flapping, i.e. the frequent change/oscillation of switch table entries, could be implemented. A variety of mechanisms for detecting and handling flapping are known in the art, and could be used in accordance with the methods and systems of various embodiments of the present invention. Providing such mechanisms to prevent flapping may be particularly useful on small time-scales, to avoid having an impact on the performance of TCP connections through, e.g., reordering.
[0024] Embodiments of the methods and systems of the present invention use the information that can be collected and the centralized control that can be implemented using a switch 110 and controller 112 between the routers 102a-102d to attempt to optimize the way in which traffic flows between ISPs, based on the technical constraints and the measurement of data about the traffic flows. In accordance with various embodiments of the invention, this is done in real-time or near-real-time. For example, from the point of view of ISP B in the network shown in FIG. 1, ISP D is a for-cost transit provider and ISP A is a peering partner (i.e., ISP B and ISP A exchange traffic on a cost-free basis). Suppose that the peering agreement between ISP A and ISP B specifies that ISP B is not allowed to send more than twice the amount of data traffic to ISP A that ISP A sends to ISP B (i.e., a fairly typical 2:1 network volume ratio in a peering agreement). Under these technical constraints, the "optimal" traffic strategy for ISP B is to send through ISP A exactly twice the volume that ISP A sends (i.e., fully utilizing the ratio allowed in the peering agreement - ISP A will not tolerate anything above this ratio). Excess traffic can be forwarded to ISP D, since ISP D offers, e.g., a default route to the global Internet (i.e., it will forward traffic to all destinations on the Internet), but at a cost to ISP B. Routing protocols that find "optimal" routes without taking the technical constraints into consideration, and use them without regard to the amount of traffic, may have difficulties in this situation: they risk either violating the constraints (e.g., the 2:1 ratio of the peering agreement) or underutilizing the preferred link. Embodiments of the methods and systems in accordance with the present invention complement these routing protocols to optimize the traffic flow in a way that accounts for the constraints.
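The "optimal" split under such a ratio constraint can be illustrated with a short sketch (the function name and traffic figures are hypothetical, not part of the described system): traffic up to twice the peer's volume goes over the cost-free peering link, and any excess goes to the for-cost transit provider.

```python
def split_traffic(demand_gb, received_from_peer_gb, ratio=2.0):
    """Split ISP B's outbound demand between the free peering link
    (capped at ratio * the volume received from the peer) and
    paid transit, which carries the remainder."""
    to_peer = min(demand_gb, ratio * received_from_peer_gb)
    to_transit = demand_gb - to_peer
    return to_peer, to_transit

# ISP A sent 3 GB, so ISP B may send up to 6 GB back over peering;
# the remaining 4 GB of a 10 GB demand must go via transit (ISP D).
print(split_traffic(10.0, 3.0))  # (6.0, 4.0)
```

A plain routing protocol that always prefers the peering link would instead push all 10 GB towards ISP A and violate the ratio.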
[0025] The ISPs involved configure the controller 112 according to the constraints discussed above. The exact configuration method can vary, depending on the circumstances. For example, the constraints could be formalized by the ISPs involved and sent to a third party, such as an Internet Exchange Point (IXP) provider, that configures the switch 110 and controller 112. Another possibility is that the ISPs involved have login accounts for an interface to the controller 112, and configure these constraints individually. In such cases, proper access control and rights management should be implemented so that, e.g., a single ISP is not able to influence arbitrary switch entries, which could harm other ISPs. One way to achieve this is to restrict the switch port numbers that each ISP is able to affect in the switch flow table entries to only those switch port numbers that are associated with that ISP.
[0026] As an example of configuration, the controller 112 could be instructed to configure the IP ranges that the ISPs are able to handle (i.e. forward traffic to) on the switch, and the traffic ratio. Each of ISP A and ISP B may configure the controller to limit the amount of traffic sent between ISP A and ISP B to be not greater than a factor of two times the traffic that each receives from the other, and to add the IP prefixes that belong to their customers. The controller 112 installs these prefixes into the switch 110 and, in operation, will either regularly poll the switch 110 for updated traffic data or the switch 110 will push this data regularly to the controller 112. Based on this data, the controller 112 will evaluate whether more or less data should be exchanged between ISP A and ISP B.
[0027] Continuing the example, if less data should be transmitted from ISP B to ISP A, e.g., because the technical constraint imposed by the 2:1 ratio has been met, the controller 112 adds a new entry into the switch 110 that will send a fraction of the data from ISP B towards ISP D. The exact fraction will depend on the traffic data that is received from the switch 110. For example, the controller 112 could start sending all User Datagram Protocol (UDP) traffic from ISP B towards ISP D, in order to decrease the volume of traffic being sent from ISP B to ISP A. Alternatively, traffic directed towards a certain IP range (i.e., IP prefix), or traffic selected by port numbers (e.g., all HTTP traffic), could be redirected by the controller 112 and switch 110 away from ISP A and towards ISP D, in order to decrease the traffic volume from ISP B to ISP A. Which of these options is selected by the controller 112 is implementation and configuration specific (i.e., it may be determined by the preferences of the ISPs), and is based on what the switch 110 is able to support (e.g., an OpenFlow switch would support all of these).
[0028] In some embodiments, to redirect traffic, the switch 110 could rewrite layer-2 (data link layer) headers in network packets that pass through the switch 110. This would be done in order to change the destination Media Access Control (MAC) address, so that the new destination router would not simply drop the packets that are being redirected.
[0029] For monitoring and redirecting network traffic, a switch 110 in accordance with some embodiments of the invention may have multiple counters for each flow table entry in the switch, to allow statistics on each flow table entry to be split. For example, for flow table entries using a /16 IP prefix (i.e., a fixed value for the first 16 of the 32 bits that make up an IPv4 address - permitting 16 free bits, or 65536 addresses), the switch 110 could provide counters for each /20 prefix (i.e., 20 fixed bits - or 16 counters per flow table entry, each of which collects statistics on traffic to 4096 addresses), to help the switch 110 and controller 112 to make a better decision on how to split traffic. Alternatively (or, in some embodiments, additionally), statistics could be collected on transport protocols, such as the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP), or on upper layer protocols, such as the Hypertext Transfer Protocol (HTTP). The switch 110 would communicate the collected statistics at regular intervals to the controller 112. Based on such counters or other statistics collected by the switch 110, the controller 112 could make better traffic management decisions.
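The per-sub-prefix counter layout can be sketched with Python's standard ipaddress module (a hypothetical illustration; the prefix value is a documentation example, not taken from this document):

```python
import ipaddress

def subprefix_counters(prefix, sub_len):
    """Create one byte counter per sub-prefix of a flow table entry,
    e.g. sixteen /20 counters under a single /16 entry."""
    net = ipaddress.ip_network(prefix)
    return {str(sub): 0 for sub in net.subnets(new_prefix=sub_len)}

counters = subprefix_counters("198.51.0.0/16", 20)
print(len(counters))  # 16 sub-prefixes, 4096 addresses each
```

In operation, the switch would increment the counter of the sub-prefix matching each forwarded packet, giving the controller a finer-grained view of where traffic inside the /16 is actually going.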
[0030] Alternatively, if the switch 110 itself does not have multiple counters per flow table entry, the controller 112 could preemptively add new flow table entries to monitor traffic in finer-grained quantities. For example, the controller 112 could disaggregate one flow table entry into multiple entries, all of which have the same forwarding behavior, and once traffic needs to be redirected, the controller 112 will pick one or more of the more specific entries (e.g., with longer, more specific prefixes) to move traffic towards another ISP.
[0031] The ability of the controller 112 to optimally or near-optimally fit the traffic flows through the switch 110 to the constraints depends, to some degree, on the granularity of the traffic information that is available to the controller 112. In the absence of fine-grained information on flows, the controller 112 might need to resort to some trial and error in redirecting traffic in order to manage the traffic in a manner that fits optimally or near-optimally with the technical constraints. This trial and error may cause some delay. To compensate, the controller 112 could start to split traffic earlier than it would if better or finer-grained information were available. The trade-off between these options is the time to converge to a desirable traffic distribution versus the space required on the switch (i.e., switching table size).
[0032] In addition to managing traffic, the controller 112 could be used to provide data about the way traffic flows to the respective ISPs for various other purposes, such as accounting. For example, if the 95th percentile model, as described above, is used for accounting purposes, the controller could provide data on usage during time intervals of a charging period to the ISPs involved. Additional constraints, such as the time of day, could also be used to split traffic (e.g., at night time there might not be any ratio constraints). [0033] FIG. 2 shows an example of how the controller 112 could react in operation. The configuration of the network 100 in FIG. 2 is the same as in FIG. 1, but router 102c is not shown, for the sake of simplicity, and a portion of the flow table 202 of the switch 110 is shown for illustrative purposes. The flow table 202 depicts the situation where all traffic towards 192.0.2.0/24 (i.e., any address having the same first 24 bits as this address - a total of 256 addresses) and 203.0.113.0/24 (again, 256 addresses having the same first 24 bits) is sent from ISP B to ISP A (i.e., from port 1 to port 3 of the switch 110). This is represented by entries 204 and 206 in the flow table 202. All other traffic from ISP B is sent towards ISP D (i.e., port 2 of the switch 110), which is represented by entry 208 in the flow table 202. In a situation where part of the traffic that would normally go to ISP A needs to be sent towards ISP D, the controller 112 could add a more specific prefix into the switch so that all traffic towards that more specific prefix would flow to ISP D. An example of this is shown as entry 210 in the flow table 202, which specifies that traffic to the destination address 203.0.113.0/26 - i.e., all 64 addresses that share the first 26 bits of this address - should be sent from ISP B (on port 1) to ISP D (on port 2).
The entry 210 specifies the same address as the entry 206, but is more specific (i.e., finer grained): only traffic to the 64 addresses that share the first 26-bit prefix with the specified address will be forwarded to ISP D. Traffic to the other 192 addresses covered by the table entry 206 will continue to be sent to ISP A.
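The longest-prefix-match behavior that lets entry 210 override entry 206 can be sketched as follows (a simplified, hypothetical model of the flow table 202; a real programmable switch performs this matching in its forwarding hardware):

```python
import ipaddress

def lookup(flow_table, dst):
    """Longest-prefix match over (prefix -> output port) entries,
    mirroring entries 204-210 of the example flow table."""
    addr = ipaddress.ip_address(dst)
    matches = [(net, port) for net, port in flow_table.items()
               if addr in net]
    # the most specific (longest) matching prefix wins
    return max(matches, key=lambda m: m[0].prefixlen)[1]

table = {
    ipaddress.ip_network("192.0.2.0/24"): 3,    # entry 204: to ISP A
    ipaddress.ip_network("203.0.113.0/24"): 3,  # entry 206: to ISP A
    ipaddress.ip_network("203.0.113.0/26"): 2,  # entry 210: to ISP D
    ipaddress.ip_network("0.0.0.0/0"): 2,       # entry 208: default, ISP D
}
print(lookup(table, "203.0.113.10"))   # 2 - caught by the /26 entry
print(lookup(table, "203.0.113.100"))  # 3 - still goes to ISP A
```

Only the 64 addresses inside the /26 are redirected; the remaining 192 addresses of the /24 continue to match entry 206.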
[0034] FIG. 3 shows a high level flowchart of a method in accordance with embodiments of the present invention. The method 300 includes a step 302 of setting one or more constraints, a step 304 of monitoring network traffic through the switch, a step 306 of determining whether a constraint will be violated, and a step 308 of rerouting network traffic through the switch to avoid violating the constraint, if it was determined that the constraint would be violated. Each of these steps will be described in greater detail below.
[0035] In step 304, the switch monitors the network traffic through the switch, to produce network traffic data. These network traffic data may then be provided to the controller. As noted above, the network traffic data may be monitored, for example, by using counters associated with flow table entries in the switch. In some embodiments, each flow table entry may be associated with multiple counters, permitting traffic to be monitored at a finer granularity. When this capability is not available, additional flow table entries may be used to adjust the granularity at which network traffic is monitored.
[0036] In step 306, the controller uses the network traffic data provided by the switch to determine whether one of the constraints will be violated. This determination can be made, for example, by checking aggregated network data obtained from the switch against the set of constraints that have been set in the controller. For example, if there is a constraint that ISP B may not send more than twice the data volume to ISP A that it has received from ISP A in a given time period, the controller can examine the aggregate network data on traffic flowing from ISP B to ISP A, and the data on traffic flowing from ISP A to ISP B during the time period, and determine if the ratio of 2:1 is close to being reached.
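The check in step 306 can be sketched as a simple headroom test (a hypothetical function; the 0.9 safety margin is an assumed tuning parameter, not specified here):

```python
def ratio_close_to_violation(sent_gb, received_gb, ratio=2.0, margin=0.9):
    """Return True when the volume sent is within the given margin of
    the cap (ratio * volume received), i.e. the constraint is close
    to being reached and rerouting should be considered."""
    cap = ratio * received_gb
    return sent_gb >= margin * cap

print(ratio_close_to_violation(5.5, 3.0))  # True: 5.5 >= 0.9 * 6.0
print(ratio_close_to_violation(4.0, 3.0))  # False
```

Acting when the margin is crossed, rather than only after the cap is exceeded, gives the controller time to install new flow table entries before the constraint is actually violated.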
[0037] In step 308, network traffic is rerouted through the switch to avoid violating a constraint, if possible. This step is typically taken by the controller if the controller has determined that a constraint is close to being reached, or has already been reached. In this case, the controller will determine how traffic should be rerouted, and send commands to the switch to cause the switch to reroute traffic in the manner determined. For example, these commands may cause the switch to rewrite or add entries to a switch flow table to redirect traffic. It also may be necessary, as described above, for the switch to take further actions to reroute traffic, such as rewriting destination information in the data link layer headers of network packets.
[0038] Determining how to reroute traffic can be done in a variety of ways. For example, a rule-based approach could be used, in which, when a constraint is close to being reached or has been reached, a rule is triggered that specifies the way in which traffic should be rerouted. This has the advantage of being simple and fast, but may require several iterations to ensure that all constraints are met, since triggering a rule to prevent one constraint from being violated could lead to the violation of another constraint. Alternatively, well-known constraint-solving techniques could be applied to attempt to find a solution that satisfies all constraints, if such a solution exists.
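The rule-based approach can be sketched as follows (a hypothetical illustration; the rule and action names are invented for the example). The loop re-evaluates all rules until none fires, reflecting the possibility that an action taken to satisfy one constraint trips another:

```python
def apply_rules(volumes, rules, max_iters=10):
    """Fire (predicate, action) rules over the traffic-volume dict
    until no rule fires, or give up after max_iters iterations."""
    for _ in range(max_iters):
        fired = False
        for predicate, action in rules:
            if predicate(volumes):
                action(volumes)
                fired = True
        if not fired:
            return True  # all constraints satisfied
    return False  # did not converge

def over_ratio(v):
    # example constraint: B may send at most twice what A sends
    return v["B->A"] > 2 * v["A->B"]

def shift_to_transit(v):
    # example action: move the excess from the peering link to transit
    excess = v["B->A"] - 2 * v["A->B"]
    v["B->A"] -= excess
    v["B->D"] += excess

vols = {"A->B": 3.0, "B->A": 10.0, "B->D": 0.0}
apply_rules(vols, [(over_ratio, shift_to_transit)])
print(vols["B->A"], vols["B->D"])  # 6.0 4.0
```

With a single rule the loop converges in one pass; with interacting rules, several iterations (or a proper constraint solver, as noted above) may be needed.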
[0039] In some embodiments, the entire process detailed above occurs in real-time or near-real-time, so that decisions to reroute traffic can be made as early as possible. This may help to avoid violating constraints, by detecting such violations either before they occur, or as they are occurring, and immediately taking steps on the switch to avoid violating the constraints over any prolonged period of time.
[0040] While the invention has been particularly shown and described with reference to specific embodiments, it should be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. The scope of the invention is thus indicated by the appended claims and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced.

Claims

What is claimed is:
1. A method of managing network traffic, characterized in that the method includes: setting one or more constraints on a flow of network traffic on a controller that determines the flow of network traffic through a switch;
monitoring network traffic through the switch to produce network traffic data; determining whether a constraint will be violated based on the network traffic data; and
rerouting network traffic through the switch to avoid violating the constraint, if it was determined that the constraint would be violated.
2. The method according to claim 1, wherein rerouting network traffic through the switch is done in real-time or near-real-time.
3. The method according to claim 1 or claim 2, wherein the switch is an OpenFlow switch.
4. The method according to any of claims 1 to 3, wherein network traffic is directed through the switch by one or more routers, each of said routers associated with an Internet Service Provider (ISP).
5. The method according to claim 4, wherein setting one or more constraints comprises an ISP setting one or more constraints related to network traffic sent through the switch from or to the ISP.
6. The method according to claim 4, wherein setting one or more constraints comprises a third party setting one or more constraints on behalf of one or more ISPs.
7. The method according to claim 4, wherein setting one or more constraints comprises setting one or more technical constraints defined in a contract between one or more ISPs.
8. The method according to claim 4, further comprising monitoring routing protocol messages through the switch and sending such messages to the controller.
9. The method according to any of claims 5 to 7, wherein setting one or more constraints comprises setting a ratio of traffic sent from a router associated with a first ISP to traffic sent from a router associated with a second ISP.
10. The method according to any of the preceding claims, wherein monitoring network traffic through the switch comprises using a counter associated with a flow table entry on the switch to monitor network traffic sent through the switch to one or more network addresses associated with the flow table entry.
11. The method according to claim 10, wherein rerouting network traffic through the switch comprises adding or rewriting a flow table entry to redirect at least a portion of the traffic.
12. The method of claim 11, wherein rerouting network traffic through the switch further comprises rewriting data link layer headers of network packets that are being redirected.
13. The method of any of the preceding claims, further comprising using the network traffic data for purposes of accounting.
14. A system of managing network traffic, the system comprising:
a switch through which network traffic is directed by one or more routers, each of said routers associated with an Internet Service Provider (ISP); and
a controller connected to the switch, the controller configured to impose one or more constraints on a flow of network traffic;
wherein the switch is configured to monitor network traffic through the switch to produce network traffic data and to provide said network traffic data to the controller; and wherein the controller is configured to determine whether a constraint will be violated based on the network traffic data, and to instruct the switch to reroute network traffic through the switch to avoid violating the constraint, if it was determined that the constraint would be violated.
15. The system according to claim 14, wherein the system is configured to reroute network traffic to avoid violating the constraint in real-time or near-real-time.
16. The system according to claim 14 or claim 15, wherein the controller is internal to the switch.
17. The system according to any of claims 14-16, wherein the system further comprises a controller component in each of the one or more routers, the controller component configured to provide the controller with information on router protocol messages.
18. The system according to any of claims 14-17, wherein the switch comprises an OpenFlow switch.
PCT/EP2012/051464 2011-03-31 2012-01-30 A method and system for mutual traffic management and accounting WO2012130500A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP11002662.2 2011-03-31
EP11002662 2011-03-31

Publications (1)

Publication Number Publication Date
WO2012130500A1 true WO2012130500A1 (en) 2012-10-04

Family

ID=45774157

Country Status (1)

Country Link
WO (1) WO2012130500A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6175868B1 (en) * 1998-05-15 2001-01-16 Nortel Networks Limited Method and apparatus for automatically configuring a network switch
US20030133443A1 (en) * 2001-11-02 2003-07-17 Netvmg, Inc. Passive route control of data networks

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
CASADO M ET AL: "Ethane: taking control of the enterprise", APPLICATIONS, TECHNOLOGIES, ARCHITECTURES, AND PROTOCOLS FOR COMPUTER COMMUNICATION: PROCEEDINGS OF THE 2007 CONFERENCE ON APPLICATIONS, TECHNOLOGIES, ARCHITECTURES, AND PROTOCOLS FOR COMPUTER COMMUNICATIONS, 27-31 AUG. 2007,, vol. 37, no. 4, 27 August 2007 (2007-08-27), pages 1 - 12, XP002531272, ISBN: 978-1-59593-713-1 *
NICK MCKEOWN ET AL: "OpenFlow: Enabling Innovation in Campus Networks", 14 March 2008 (2008-03-14), pages 1 - 6, XP055002028, Retrieved from the Internet <URL:http://www.openflow.org/documents/openflow-wp-latest.pdf> [retrieved on 20110705] *
PASCAL MÉRINDOL; BENOIT DONNET; JEAN-JACQUES PANSIOT; OLIVIER BONAVENTURE: "On the Impact of Layer-2 on Node Degree Distribution", IMC ' 10 PROCEEDINGS OF THE 10TH ANNUAL CONFERENCE ON INTERNET MEASUREMENT, 1 November 2010 (2010-11-01)

Similar Documents

Publication Publication Date Title
US11509582B2 (en) System and method for managing bandwidth usage rates in a packet-switched network
Awduche et al. Overview and principles of Internet traffic engineering
US8730806B2 (en) Congestion control and resource allocation in split architecture networks
EP1511220B1 (en) Non-intrusive method for routing policy discovery
US20060165009A1 (en) Systems and methods for traffic management between autonomous systems in the Internet
CN103329490B (en) Improve method and the communication network of data transmission quality based on packet communication network
Nucci et al. IGP link weight assignment for operational tier-1 backbones
Awduche et al. RFC3272: Overview and principles of Internet traffic engineering
Misseri et al. Internet routing diversity for stub networks with a Map-and-Encap scheme
Leonardi et al. Optimal scheduling and routing for maximum network throughput
Leonardi et al. Joint optimal scheduling and routing for maximum network throughput
WO2012130500A1 (en) A method and system for mutual traffic management and accounting
Mohi A comprehensive solution to cloud traffic tribulations
EP3785405A1 (en) Resource reservation and maintenance for preferred path routes in a network
Tang et al. Parallel LSPs for constraint-based routing and load balancing in MPLS networks
Alharbi SDN-based mechanisms for provisioning quality of service to selected network flows
US8441926B2 (en) Method and system for a novel flow admission control framework
WO2015056776A1 (en) Controller, communication node, communication system, communication method, and program
Mortier Multi-timescale internet traffic engineering
Farrel RFC 9522: Overview and Principles of Internet Traffic Engineering
Pham et al. Hybrid routing for scalable IP/MPLS traffic engineering
Gous et al. A Comparison of Approaches for Traffic Engineering in IP and MPLS Networks
Lucente–pmacct Best Practices in Network Planning and Traffic Engineering
Martin Resilience, Provisioning, and Control for the Network of the Future
Devetak Minimizing maximum path delay in multipath connections

Legal Events

Date Code Title Description

121  EP: the EPO has been informed by WIPO that EP was designated in this application
     Ref document number: 12706492; Country of ref document: EP; Kind code of ref document: A1

NENP Non-entry into the national phase
     Ref country code: DE

122  EP: PCT application non-entry in European phase
     Ref document number: 12706492; Country of ref document: EP; Kind code of ref document: A1