US8503307B2 - Distributing decision making in a centralized flow routing system - Google Patents

Distributing decision making in a centralized flow routing system

Info

Publication number
US8503307B2
Authority
US
United States
Prior art keywords
flow
switch
particular flow
central controller
metric
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US12/662,885
Other versions
US20110273988A1 (en)
Inventor
Jean Tourrilhes
Praveen Yalagandula
Puneet Sharma
Jeffrey Clifford Mogul
Sujata Banerjee
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hewlett Packard Enterprise Development LP
Original Assignee
Hewlett Packard Development Co LP
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hewlett Packard Development Co LP
Priority to US12/662,885
Assigned to HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: YALAGANDULA, PRAVEEN, BANERJEE, SUJATA, MOGUL, JEFFREY CLIFFORD, SHARMA, PUNEET, TOURRILHES, JEAN
Publication of US20110273988A1
Application granted
Publication of US8503307B2
Assigned to HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 43/00 - Arrangements for monitoring or testing data switching networks
    • H04L 43/06 - Generation of reports
    • H04L 43/062 - Generation of reports related to network traffic
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/08 - Configuration management of networks or network elements
    • H04L 41/0803 - Configuration setting
    • H04L 41/0813 - Configuration setting characterised by the conditions triggering a change of settings
    • H04L 41/0816 - Configuration setting characterised by the conditions triggering a change of settings the condition being an adaptation, e.g. in response to network events
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/34 - Signalling channels for network management communication
    • H04L 41/342 - Signalling channels for network management communication between virtual entities, e.g. orchestrators, SDN or NFV entities
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/40 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using virtualisation of network functions or resources, e.g. SDN or NFV entities
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 43/00 - Arrangements for monitoring or testing data switching networks
    • H04L 43/20 - Arrangements for monitoring or testing data switching networks the monitoring system or the monitored elements being virtualised, abstracted or software-defined entities, e.g. SDN or NFV
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 - Routing or path finding of packets in data switching networks
    • H04L 45/38 - Flow based routing
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 - Routing or path finding of packets in data switching networks
    • H04L 45/42 - Centralised routing
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 - Traffic control in data switching networks
    • H04L 47/10 - Flow control; Congestion control
    • H04L 47/32 - Flow control; Congestion control by discarding or delaying data units, e.g. packets or frames

Definitions

  • the switch 101 determines whether the packet requires a synchronous check with the central controller 120.
  • the switch 101 checks with the central controller 120 in response to a determination at step 305 that the rule requires a synchronous check.
  • the switch 101 may receive an instruction from the central controller 120.
  • the central controller 120 sets up the corresponding flow table entries for this new flow in all relevant switches, and sends the packet to the switch 101.
  • the central controller 120 may also rate limit the flow at the switch 101.
  • the central controller 120 may update the metric threshold to provide a greater or lesser limit on the metric at the switch 101.
  • the switch 101 manages the flow using the normal flow forwarding rules devolved from the central controller 120.
  • the switch 101 also manages the flow using the normal flow forwarding rules devolved from the central controller 120 in response to a determination at step 305 that the rule does not require a synchronous check.
  • the normal flow forwarding rules may comprise multi-path rules in which the central controller 120 provides a flow setup rule with a wildcard flow-ID, to match (for example) all flows between two end hosts, and a plurality of next-hop destinations for the flows matching this rule. The switch 101 then chooses a specific next-hop destination upon a flow arrival.
  • the normal flow forwarding rules may specify that the choice is made round-robin, randomly, based on switch-local knowledge of traffic, etc.
  • the normal flow forwarding rules may also specify weights so that some paths are chosen more often than others, as shown in the sketch following this list.
  • the switch 101 may optionally receive an instruction to manage the flow from the central controller 120.
  • the method 350 may be performed at the central controller 120.
  • the central controller 120 may be a single device or a distributed system.
  • the central controller 120 determines global rules for the network 130. For instance, the central controller may receive global rules based on global policies entered by a manager or an administrator of the system 100.
  • the central controller 120 determines local rules for the switch 101 and other similar switches in the system 100.
  • the local rules are determined using the global rules.
  • the central controller 120 devolves the local rules to the switch 101.
  • the central controller 120 may devolve the local rules to a plurality of switches such as the switch 101. Devolving may include determining and sending local rules to a switch.
  • the central controller 120 receives a metric report 118 from the switch 101.
  • the metric report 118 may be received as an asynchronous update in which the switch 101 forwards packets of a flow until the central controller 120 provides an instruction regarding the flow.
  • the central controller 120 receives a flow-setup request 112 from the switch 101 as a synchronous request, and in addition to responding to this request with an instruction regarding the flow, the controller may also use the information in this flow-setup request to refine the local rules of step 352.
  • the central controller 120 may therefore use multiple sources of information, for instance flow setup requests and metric reports, to determine actions regarding the current flow and subsequent flows.
  • the central controller 120 provides an instruction for the switch 101 based on the metric report 118. For instance, the central controller 120 may set up the requested flow. Additionally, the central controller 120 may adjust the thresholds for the metrics to meet global policies. The central controller 120 thereby updates the dynamic conditions on the network 130. Thereafter, the updated thresholds and local rules may be devolved to the switch 101 asynchronously.
  • Some or all of the operations set forth in the methods 300 and 350 and other functions and operations described herein may be embodied in computer programs stored on a storage device.
  • the computer programs may exist as software program(s) comprised of program instructions in source code, object code, executable code or other formats.
  • Exemplary storage devices include conventional RAM, ROM, EPROM, EEPROM, and disks. It is therefore to be understood that any electronic device capable of executing the above-described functions may perform those functions enumerated above.
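For the multi-path forwarding rules noted in the list above, one hypothetical way a switch could pick a next hop for each new flow, whether round-robin, random, or weighted, is sketched below; the class and policy names are illustrative assumptions rather than the patent's mechanism.

```python
# Hypothetical next-hop selection for a wildcard multi-path rule: the rule
# lists several next hops, and the switch picks one per new flow, optionally
# biased by weights so that some paths are chosen more often than others.
import itertools
import random

class MultiPathRule:
    def __init__(self, next_hops, weights=None, policy="weighted"):
        self.next_hops = list(next_hops)
        self.weights = list(weights) if weights else [1] * len(self.next_hops)
        self.policy = policy
        self._rr = itertools.cycle(self.next_hops)

    def choose(self) -> str:
        if self.policy == "round-robin":
            return next(self._rr)
        if self.policy == "random":
            return random.choice(self.next_hops)
        # Weighted choice: paths with larger weights are picked more often.
        return random.choices(self.next_hops, weights=self.weights, k=1)[0]

rule = MultiPathRule(["next-hop-A", "next-hop-B"], weights=[3, 1])
picks = [rule.choose() for _ in range(1000)]
print(picks.count("next-hop-A") > picks.count("next-hop-B"))   # usually True (~3:1 split)
```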

Abstract

Local rules for managing flows devolved from a central controller are received at a switch. The central controller determines a global set of rules for managing flows. The switch receives a packet of a flow from a network and determines whether a metric for the flow satisfies a dynamic condition to trigger a metric report to the central controller. In response to a determination that the metric for the flow at the switch satisfies the dynamic condition to trigger a metric report to the central controller, the switch sends a metric report to the central controller, and the switch then receives an instruction to manage the flow from the central controller. In response to a determination that the metric for the flow at the switch does not satisfy the dynamic condition to trigger the metric report to the central controller, the switch manages the flow using the local rules for managing flows.

Description

BACKGROUND
A centralized flow routing network consists of a set of switches and a logically centralized controller. A flow comprises an aggregation of packets between a source and a destination in the centralized flow routing system. For instance, all hypertext transfer protocol (HTTP) packets between two hosts may be defined as a flow. A flow may be a subset of another flow. For example, a specific HTTP connection from the source to the destination can be a subset of all HTTP packets from the source to the destination. A flow may be bidirectional or unidirectional. Centralized flow routing systems provide a framework to enable finer-grained, flow-level control of Ethernet (or other kinds of) switches from a global controller.
OpenFlow is one current centralized flow routing system. Upon receiving a packet, a switch in an OpenFlow system extracts a flow identification (flow-ID), defined in one version of the OpenFlow specification by 10 packet header fields across various layers. The switch searches for the flow-ID in its local flow table. The switch performs this search for every packet in the flow. If the flow-ID is found in the flow table, the matching flow-table entry provides an action such as “forward on the next-hop link I” or “drop packet”. If, however, the flow is unknown, the switch forwards the packet to the global controller. The global controller then makes a decision about whether to admit the flow, and how to route the flow through the switches. The global controller sets up the corresponding flow table entries for this new flow in all relevant switches, and sends the packet back to the switch.
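As a rough illustration of this per-packet lookup, the following minimal sketch models a switch that builds a flow-ID from a few header fields and consults its local flow table, punting to the global controller on a miss. The field names, the dictionary-based table, and the send_to_controller callback are hypothetical simplifications; the actual OpenFlow flow-ID covers roughly 10 header fields and is matched in switch hardware.

```python
# Hypothetical sketch of per-packet flow-ID lookup in an OpenFlow-style switch.
# Field names and helper functions are illustrative, not the OpenFlow API.

def extract_flow_id(pkt: dict) -> tuple:
    """Build a flow-ID from selected header fields (a subset of the ~10
    fields used in one version of the OpenFlow specification)."""
    return (
        pkt.get("in_port"),
        pkt.get("eth_src"), pkt.get("eth_dst"), pkt.get("eth_type"),
        pkt.get("ip_src"), pkt.get("ip_dst"), pkt.get("ip_proto"),
        pkt.get("tp_src"), pkt.get("tp_dst"),
    )

def handle_packet(pkt: dict, flow_table: dict, send_to_controller) -> str:
    """Look up the packet's flow-ID; act on a hit, punt to the controller on a miss."""
    flow_id = extract_flow_id(pkt)
    entry = flow_table.get(flow_id)
    if entry is None:
        # Unknown flow: forward the packet to the global controller, which
        # decides whether to admit the flow and installs entries on the path.
        send_to_controller(flow_id, pkt)
        return "sent-to-controller"
    return entry["action"]          # e.g. "forward:next-hop-I" or "drop"

if __name__ == "__main__":
    table = {("p1", "aa", "bb", 0x0800, "10.0.0.1", "10.0.0.2", 6, 1234, 80):
             {"action": "forward:next-hop-I"}}
    pkt = {"in_port": "p1", "eth_src": "aa", "eth_dst": "bb", "eth_type": 0x0800,
           "ip_src": "10.0.0.1", "ip_dst": "10.0.0.2", "ip_proto": 6,
           "tp_src": 1234, "tp_dst": 80}
    print(handle_packet(pkt, table, lambda fid, p: None))   # forward:next-hop-I
```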
Global control offers several benefits. One benefit is the consistent implementation of global policies. For example, instead of having to ensure that firewall rules at each individual router are consistent across the network, in an OpenFlow network the global controller requires only one description of a global access control policy. Another benefit is that the global controller, by participating in all flow-setup decisions, has better visibility of network conditions, and can make globally sound admission-control and quality of service (QoS) decisions.
Unfortunately, the twin benefits of central control and flow-by-flow forwarding decisions may increase costs, such as increased network overhead from flow-setup communications. When a packet does not match an existing flow-table entry in a switch, the packet is sent to the global controller. The global controller then evaluates its policy rules, picks a path for the flow, installs a flow entry in each switch on the path, and finally forwards the packet to the switch. In addition, any subsequent packet received by a switch before the corresponding flow entry is installed must also be forwarded to the global controller. These round trips to the global controller from each switch delay the delivery of the first packet, or first set of packets. They also consume bandwidth on the control channel, limiting the scalability of flow setup. There is an additional cost of connection setup overhead. Because the first packet of each new flow goes to the controller, the connection setup time for the flow increases.
BRIEF DESCRIPTION OF THE DRAWINGS
Features of the present invention will become apparent to those skilled in the art from the following description with reference to the figures, in which:
FIG. 1 shows a simplified block diagram of a switch in a centralized flow routing system, according to an embodiment of the invention;
FIG. 2 shows an implementation of local rules for managing flows at a switch, according to an embodiment of the invention;
FIG. 3 illustrates a flowchart of a method for distributing decision making in a centralized flow routing system, according to an embodiment of the invention;
FIG. 4 illustrates a flowchart of a method for distributing decision making in a centralized flow routing system, according to an embodiment of the invention; and
FIG. 5 illustrates a block diagram of a computing apparatus configured to implement or execute the methods depicted in FIGS. 3 and 4, according to an embodiment of the invention.
DETAILED DESCRIPTION
For simplicity and illustrative purposes, the present invention is described by referring mainly to exemplary embodiments. In the following description, numerous specific details are set forth to provide a thorough understanding of the embodiments. However, it will be apparent to one of ordinary skill in the art that the embodiments may be practiced without limitation to these specific details. In other instances, well known methods and structures have not been described in detail to avoid unnecessarily obscuring the description of the embodiments. Also, different embodiments described herein may be used in combination with each other.
Disclosed herein are methods and systems for distributing decision making in a centralized flow routing system, according to embodiments. Local rules for managing flows are devolved from a central controller to switches in the system. Each switch in turn manages flows received from a network. The switch determines whether a metric for a packet in the flow satisfies a dynamic condition to trigger a metric report to the central controller. The central controller may thereafter send an instruction to the switch to manage the flow. Additionally or alternatively, the central controller may send an instruction to the switch for managing future flows which match rules detailed in the instruction. Through implementation of the embodiments, the central controller is operable to devolve per-flow controls to the switch and this allows the overall system to support higher flow-arrival rates and to reduce flow setup latency for the majority of flows.
FIG. 1 illustrates a switch 101 in a centralized flow routing system 100, according to an embodiment. It should be clearly understood that the system 100 and the switch 101 may include additional components and that some of the components described herein may be removed and/or modified without departing from a scope of the system 100 and/or the switch 101. The system 100 includes a network 130 and a central controller 120. Although not shown, the central controller 120 may be replicated or its function split among multiple central controllers throughout the network 130. Additionally, the system 100 may include any number of switches, end hosts, and other types of network devices, which may include any device that can connect to the network 130. Devices in the network may be referred to as nodes. Also, the end hosts may include source devices and destination devices.
The switch 101 includes a set of ports 107 a-n. The ports 107 a-n are configured to receive and send flows in the network 130. The switch 101 also includes a chassis 102 and a measurement circuit 108. The chassis 102 includes switch fabric 103, a processor 104, data storage 105, and line cards 106 a-f. As is known in the art, the switch fabric 103 may include a high-speed transmission medium for routing packets between the ports 107 a-n internally in the switch 101. The line cards 106 a-f may store information for the routing and other tables and information described herein. The line cards 106 a-f may also control the internal routing and perform other functions described herein. The switch 101 may be configured to maximize the portion of packet processing performed on the line cards 106 a-f. The packets then travel between line-cards via the switch fabric 103. The processor 104 and data storage 105 that are not on the line cards are used as little as possible, since the available bandwidth between the processor 104 and the line cards may be too low. The processor 104 and the storage 105 may be used in cases where the switch 101 exceeds the capacity for processing or storing data on the line cards 106 a-f.
Each of the line cards 106 a-f may include multiple ports and port capacities. For instance, in an HP ProCurve 5406zl switch, a line-card may have 24 ports, each port supporting 1 Gigabit per second (Gbps) in the full-duplex mode, and/or a line-card may have 4 ports, each port supporting 10 Gbps. Each of the line cards 106 a-f is connected to the chassis 102. The line cards 106 a-f may be pluggable line cards that can be plugged into the chassis 102. The chassis 102 may include a plurality of slots (not shown), wherein line-cards 106 a-f may be inserted as required. For instance, the switch 101 may have between 4 and 9 slots for inserting line cards, as is typical for switches deployed in data centers or at network edges. In other instances, the line cards 106 a-f are non-pluggable and integrated in the switch 101.
The measurement circuit 108 may be used to measure the bit rate and the number of packets for each flow received from the network 130. The measurement circuit 108 may be built into the line cards 106 a-f. Note that the measurement circuit 108 may sample the packets, count the packets, or perform a combination of sampling and counting the packets. The measurement circuit 108 may also sample or count: the number of bytes for each flow; the number of bytes for a flow during a given interval; the number of packets for a flow during a given interval; or the number of one or more kinds of events, including occurrences of packets with specific transmission control protocol (TCP) flags such as synchronize (SYN) or reset (RST), or occurrences of packets of specific types such as specific video frame types, or other such identifiable packet characteristics. The measurement circuit 108 may also report flows whose duration exceeds a threshold.
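The per-flow counters described above might be modeled in software roughly as follows; the FlowCounters class and its fields are hypothetical stand-ins for what the measurement circuit 108 would track in hardware (packet and byte counts, flagged-packet events, and flow duration).

```python
# Hypothetical software model of the kinds of counters the measurement
# circuit might keep per flow; a real switch would do this in hardware.
import time

class FlowCounters:
    def __init__(self, now=None):
        self.start = now if now is not None else time.monotonic()
        self.packets = 0
        self.bytes = 0
        self.events = {"SYN": 0, "RST": 0}   # counts of flagged packets

    def observe(self, length: int, tcp_flags=()):
        """Count one packet (or one sampled packet, if sampling is in use)."""
        self.packets += 1
        self.bytes += length
        for flag in tcp_flags:
            if flag in self.events:
                self.events[flag] += 1

    def duration(self, now=None) -> float:
        now = now if now is not None else time.monotonic()
        return now - self.start

counters = FlowCounters(now=0.0)
counters.observe(1500, tcp_flags=("SYN",))
counters.observe(400)
print(counters.packets, counters.bytes, counters.events)   # 2 1900 {'SYN': 1, 'RST': 0}
print(counters.duration(now=2.5) > 2.0)                    # True: could trigger a duration report
```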
The switch 101 is configured to control the flows using local rules devolved from the central controller 120. For instance, the central controller 120 determines the local rules for the switch 101 based on loads derived from measurements received from switches and based on global policies and network topology. The central controller 120 sends the local rules to the switch 101 over the network 130.
The local rules include normal flow forwarding rules, significant flow rules, and security flow rules. A normal flow is a flow that is managed by the switch 101 using the normal flow forwarding rules without invoking the central controller 120. For instance, the switch 101 may receive a normal flow and thereafter manage the normal flow using the normal flow forwarding rules engine 117 as shown in FIG. 2. A significant flow is a flow that the switch 101 determines exceeds a threshold, triggering a metric report 118. The threshold is based upon a dynamic condition of the network 130, for instance bit rate or packet count at the switch 101. The switch 101 is configured to thereafter invoke the central controller 120. The switch 101 may continue to forward packets for that flow according to the normal rules, in addition to sending a metric report 118. Optionally, the rule provided by the central controller 120 may instruct the switch 101 to include a new local rule to stop forwarding packets for that flow if the metric report 118 is triggered. Thereafter, the switch 101 stops forwarding packets received for that flow.
The central controller 120 may use the metric report 118 to determine whether the dynamic condition at the switch 101 affects the network 130 such that the dynamic condition requires adjustment in order to comply with global policies. For instance, the global policies may be related to congestion and QoS. The central controller 120 may then send an instruction to the switch 101 to manage the flow that was the subject of the report. The instruction may include a security flow entry 114 or a significant flow entry 115. A security flow is a flow for which the switch 101 is required to send a flow-setup request to the central controller 120. The switch 101 may be configured to delay the flow until an instruction is received from the central controller 120.
Flows received at the switch 101 are looked up in the switch's flow table 113 to determine whether the flow is a normal flow, a significant flow, or a security flow, and the measurements are sent to the central controller 120, for example, in metric measurement reports. The measurement may be probabilistic, such as setting the sampling rate for a flow covered by a particular flow-ID pattern. Alternately, the central controller 120 may request a measurement at the end of each flow, or periodically during longer flows. Multiple metric reports between the switch 101 and the central controller 120 may be batched to improve communication efficiency. The local rules are thereafter applied by the switch 101 according to a type of flow, as described with respect to FIG. 2 herein below.
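The report batching mentioned above could work along these lines; the ReportBatcher class, its size and delay limits, and the send callback are assumptions made for illustration, not a protocol defined by the system.

```python
# Hypothetical batching of metric reports: reports are queued at the switch
# and flushed to the controller once enough accumulate or a deadline passes.

class ReportBatcher:
    def __init__(self, send, max_batch=16, max_delay=1.0):
        self.send = send                  # callable taking a list of reports
        self.max_batch = max_batch
        self.max_delay = max_delay
        self.pending = []
        self.first_queued = None

    def add(self, report: dict, now: float):
        if not self.pending:
            self.first_queued = now
        self.pending.append(report)
        self.maybe_flush(now)

    def maybe_flush(self, now: float):
        if not self.pending:
            return
        if len(self.pending) >= self.max_batch or now - self.first_queued >= self.max_delay:
            self.send(list(self.pending))
            self.pending.clear()

batcher = ReportBatcher(send=lambda batch: print("flush", len(batch)), max_batch=3)
for i in range(7):
    batcher.add({"flow": i, "packets": 10 * i}, now=float(i) * 0.01)
# prints "flush 3" twice; one report remains queued until the deadline passes
```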
The central controller 120 provides a global set of rules for the network 130. For instance, a manager or administrator may enter the global set of rules into the central controller 120. The central controller 120 thereafter maintains global policies using the global set of rules for the network 130. The global rules may be based on quality of service (QoS) and performance goals. The central controller 120 determines a current load on the switches in the network 130, for example, based on metric reports from nodes in the network 130. The central controller 120 also maintains a current topology of the network 130 through communication with the switch 101 and other nodes in the network 130. For instance, whenever the switch 101 learns about a media access control (MAC) address of a new node, the switch 101 reports the MAC address to the central controller 120. The central controller 120 may use the topology of the network 130 and the load on the network 130 in a feedback control system to direct switches, including the switch 101, to adjust their behavior to maintain the global policies specified in the global rules. For instance, certain flows through the switch 101, as specified by rules provided by the controller, may be rate limited, or a flow may be routed through other switches in the network 130.
In one embodiment, the central controller 120 maintains the global policies of the network 130 by dynamically updating thresholds for network metrics upon which the local rules for controlling flows at the switch 101 are based. The metrics may include, for instance, a bit rate or packet count at the switch 101. The thresholds are dynamic because the central controller 120 adjusts the thresholds based on load on the network 130 and the topology of the network 130. By satisfying the dynamic condition, for instance exceeding the threshold for a bit rate or packet count, the switch 101 triggers a metric report 118 and sends the metric report 118 to the central controller 120. The central controller 120 determines the local rules for the switch 101 and sends the local rules to the switch 101 to implement flow control. This also enables the switch 101 to manage flows using the local rules without contacting the central controller 120 for each flow unless the flow satisfies the dynamic condition. Thus, by devolving control of the flows to the switch 101 and other switches, the central controller 120 may reduce latency in the network 130 caused by unnecessary controller communication overhead. Based on local rules received from the central controller 120 and stored at the switch 101, the switch 101 may thereafter reliably forward each of the flows using a single path or multiple paths as defined in the local rules.
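One plausible, purely illustrative way the central controller 120 could turn reported load into updated per-switch thresholds is sketched below; the proportional-share policy, the budget figure, and the rule format are assumptions, not the patent's algorithm.

```python
# Hypothetical controller-side feedback loop: divide a shared bit-rate budget
# among switches in proportion to their recently reported load, then push the
# updated thresholds back down as new local rules.

def recompute_thresholds(reported_load_bps: dict, total_budget_bps: float,
                         floor_bps: float = 1e6) -> dict:
    """Return a per-switch bit-rate threshold derived from reported load."""
    total_load = sum(reported_load_bps.values()) or 1.0
    thresholds = {}
    for switch, load in reported_load_bps.items():
        share = load / total_load
        thresholds[switch] = max(floor_bps, share * total_budget_bps)
    return thresholds

def push_local_rules(thresholds: dict, send):
    for switch, limit in thresholds.items():
        # An asynchronous update: the switch replaces its dynamic condition
        # ("bit rate > limit" triggers a metric report) without a flow-setup request.
        send(switch, {"metric": "bit_rate", "threshold_bps": limit})

loads = {"s1": 400e6, "s2": 100e6, "s3": 0.0}          # derived from metric reports
push_local_rules(recompute_thresholds(loads, total_budget_bps=1e9),
                 send=lambda sw, rule: print(sw, rule))
```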
The central controller 120 may asynchronously (i.e., independent of a flow setup request) send an update to the switch 101 to change the local rules at the switch 101. New local rules may be received in an instruction from the central controller 120 based on the metric report 118. For instance, the bit rate in the threshold at the switch 101 may be changed, depending on bit rate through other switches in the network 130. Alternately, the central controller 120 may place a timeout or expiration (in terms of seconds) or a limit (in terms of a number of flows) on a rule, after which the switch would have to contact the central controller 120 on a first packet of each flow until it gets a new local rule from the central controller 120.
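A local rule carrying such a timeout or flow-count limit might look like the following sketch; the ExpiringRule fields and the fallback behavior are illustrative assumptions.

```python
# Hypothetical expiring local rule: the controller may attach a timeout
# (seconds) and/or a limit on the number of new flows a rule may set up.
from dataclasses import dataclass

@dataclass
class ExpiringRule:
    action: str
    expires_at: float          # absolute time, in seconds
    flows_remaining: int       # how many more new flows this rule may admit

    def usable_for_new_flow(self, now: float) -> bool:
        if now >= self.expires_at or self.flows_remaining <= 0:
            return False       # stale: contact the controller instead
        self.flows_remaining -= 1
        return True

rule = ExpiringRule(action="forward:next-hop-I", expires_at=30.0, flows_remaining=2)
print(rule.usable_for_new_flow(now=1.0))    # True
print(rule.usable_for_new_flow(now=2.0))    # True
print(rule.usable_for_new_flow(now=3.0))    # False -> ask the controller for a new rule
```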
FIG. 2 illustrates an implementation of the local rules at the switch 101, according to an embodiment. The switch 101 implements the local rules using a flow table 113 and a normal flow forwarding rules engine 117. The flow table 113 includes security flow entries 114, significant flow entries 115, and normal flow entries 116. Each of the flow entries in the flow table 113, including the security flow entries 114, the significant flow entries 115, and the normal flow entries 116, provides a protocol with which the switch 101 manages the flow or contacts the central controller 120. For each entry in the flow table 113, the switch 101 may store a flow pattern (FP) identifier, an action (A), a sampling frequency (SF), a rate limit (RL), and other sampling or counting instructions. The flow table 113 and the normal flow forwarding rules engine 117 are determined by the local rules devolved from the central controller 120. If the controller 120 specifies multiple measurement or sampling conditions for a flow, the switch 101 may implement this either by attaching multiple conditions to a rule, or by maintaining multiple rules for the flow and allowing a single packet to match more than one rule.
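A hypothetical software rendering of such a flow-table entry, with the flow pattern (FP), action (A), sampling frequency (SF), rate limit (RL), and a rule category, is sketched below; the field names and the simplified wildcard matching are assumptions, not the switch's actual table layout.

```python
# Hypothetical flow-table entry holding the per-rule state described above:
# a flow pattern (FP), an action (A), a sampling frequency (SF), a rate
# limit (RL), and the rule category (normal / significant / security).
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class FlowEntry:
    pattern: dict                      # FP: header fields; missing field = wildcard
    action: str                        # A: e.g. "forward:2", "drop", "send-to-controller"
    category: str = "normal"           # "normal" | "significant" | "security"
    sampling_freq: Optional[float] = None   # SF: fraction of packets to sample
    rate_limit_bps: Optional[float] = None  # RL: ceiling enforced by the switch
    thresholds: dict = field(default_factory=dict)  # e.g. {"packet_count": 40}

    def matches(self, pkt: dict) -> bool:
        return all(pkt.get(k) == v for k, v in self.pattern.items())

table = [
    FlowEntry({"ip_dst": "10.0.0.2", "tp_dst": 80}, "forward:2",
              category="significant", sampling_freq=0.01,
              thresholds={"packet_count": 40}),
    FlowEntry({"tp_dst": 22}, "send-to-controller", category="security"),
    FlowEntry({}, "forward:1"),        # wildcard "normal" entry
]
pkt = {"ip_dst": "10.0.0.2", "tp_dst": 80}
print(next(e for e in table if e.matches(pkt)).category)   # significant
```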
The switch 101 receives the packets at the ports 107 a-n (as shown in FIG. 1). The switch 101 generates a flow-specification from the packet by extracting certain header fields, and other meta-information, and then looks up the flow in its flow table. If the central controller has defined a flow as a security-sensitive flow, the action associated with the flow will be to require the switch to send a flow-setup request to the controller, and to refrain from forwarding the packet. Otherwise, the action may allow the switch to forward the packet, and may also ask the switch to send a flow-report to the controller. The action may also instruct the switch to create a new flow-specific flow-table entry based on the original flow-table entry, inheriting the indications and thresholds stored with the original flow-table entry. The central controller 120 is therefore able to retain control over security-sensitive flows in the network 130. The central controller 120 may thereafter set up the flow or direct the switch 101 to drop the flow.
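The per-packet dispatch just described might be approximated as follows; the entry fields, the flow-ID tuple, and the callbacks are hypothetical simplifications, and a real switch would perform this work in the forwarding hardware.

```python
# Hypothetical per-packet dispatch: hold security-sensitive flows pending a
# controller decision, otherwise forward, optionally report, and optionally
# create a flow-specific entry that inherits the original entry's thresholds.
import copy

def process_packet(pkt, matched_entry, flow_table, send_flow_setup, send_report):
    flow_id = (pkt["ip_src"], pkt["ip_dst"], pkt["tp_src"], pkt["tp_dst"])
    if matched_entry["category"] == "security":
        # Security-sensitive flow: hold the packet and ask the controller.
        send_flow_setup(flow_id, pkt)
        return "held-pending-controller"
    if matched_entry.get("report_on_match"):
        send_report({"flow": flow_id, "event": "new-flow"})
    if matched_entry.get("create_child_entry") and flow_id not in flow_table:
        # New flow-specific entry inherits the original entry's thresholds.
        child = copy.deepcopy(matched_entry)
        child["pattern"] = flow_id
        flow_table[flow_id] = child
    return matched_entry["action"]

entry = {"category": "significant", "action": "forward:2",
         "report_on_match": True, "create_child_entry": True,
         "thresholds": {"packet_count": 40}}
table = {}
pkt = {"ip_src": "10.0.0.1", "ip_dst": "10.0.0.2", "tp_src": 1234, "tp_dst": 80}
print(process_packet(pkt, entry, table, lambda f, p: None, lambda r: None))  # forward:2
print(table[("10.0.0.1", "10.0.0.2", 1234, 80)]["thresholds"])  # {'packet_count': 40}, inherited
```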
The metric may also indicate whether the flow was forwarded from one virtual local area network (VLAN) to a different VLAN. Packets that are sent between different VLANs (i.e., because the destination IP address is on a separate VLAN from the source IP address) require a flow-setup request to the central controller 120, while packets that are entirely within the same VLAN do not.
If the flow is allowed in terms of security, the switch 101 determines whether a metric for the flow exceeds a threshold to trigger a metric report 118 to the central controller 120. The metrics and corresponding thresholds are specified in the local rules by the central controller 120. The central controller 120 may dynamically update the per-rule thresholds at the switch 101 to maintain global properties of the network 130. The threshold for the metrics provided in the local rules forms a dynamic condition on the switch 101. For instance, the dynamic condition may be based on a shared resource usage among multiple switches in the network 130, such as bandwidth. The metrics may include a bit rate, a packet count, or a combination of bit rate and packet count. For instance, the threshold may specify “packet count>X” or “bit rate>Y” or “bit rate<Z”. The bit rate may be measured over intervals, defined either implicitly or explicitly, over which the rates are computed. Additionally, the switch 101 may use exponentially weighted moving averages as a way to smooth measurements. Note that the threshold may be dynamically updated by the central controller 120 based on changes in the load on the network 130 and changes in the topology of the network 130. For example, there may be a shared overall bit rate for a subset of switches including the switch 101. The switch 101 may be allowed additional bit rate in instances where remaining switches of the subset are underutilizing their allocation of the bit rate.
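As one way to picture the smoothed-rate comparison, the sketch below applies an exponentially weighted moving average to per-interval byte counts and tests the result against a bit-rate threshold; the interval length, smoothing weight, and threshold value are arbitrary assumptions.

```python
# Hypothetical EWMA smoothing of a per-flow bit rate, compared against the
# dynamic threshold devolved by the controller (e.g. "bit rate > Y").

class EwmaRate:
    def __init__(self, interval_s: float = 1.0, alpha: float = 0.25):
        self.interval_s = interval_s   # measurement interval (implicit or explicit)
        self.alpha = alpha             # smoothing weight for the newest sample
        self.rate_bps = 0.0

    def update(self, bytes_in_interval: int) -> float:
        sample = 8.0 * bytes_in_interval / self.interval_s
        self.rate_bps = self.alpha * sample + (1.0 - self.alpha) * self.rate_bps
        return self.rate_bps

def exceeds(rate_bps: float, threshold_bps: float) -> bool:
    """The dynamic condition that triggers a metric report to the controller."""
    return rate_bps > threshold_bps

ewma = EwmaRate()
for interval_bytes in (10_000, 200_000, 900_000, 900_000):
    rate = ewma.update(interval_bytes)
print(exceeds(rate, threshold_bps=1_000_000))   # True once the smoothed rate passes 1 Mbit/s
```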
The switch 101 may process each packet using the normal flow forwarding rules engine 117, regardless of whether a packet causes a threshold to be exceeded. However, if a rule includes a flag indicating that the switch 101 should not forward packets from a flow that has exceeded a threshold, then instead the switch 101 may send a threshold-exceeded message to the central controller 120, and await further instructions about how to process this flow. Alternatively, a rule may include a flag telling the switch 101 to drop all packets for a flow that exceeds a threshold, as a means of defending against certain kinds of denial-of-service attacks.
When the metric first exceeds the threshold, the switch 101 sends a metric report 118, as shown in FIG. 2, to the central controller 120. The central controller 120 thereby retains oversight of the switch 101. The central controller 120 may override the switch 101 as needed. For instance, the central controller 120 may re-route, rate limit, or reprioritize the flow based on the metric report 118. Alternately, the significant flow entries may provide multi-flow setup. Upon being invoked for a flow setup request, the central controller 120 may provide the switch with flow-setup information for multiple flows. For subsequent flows of the multiple flows, the multi-flow setup becomes a part of the normal flow forwarding rules.
Methods in which the system 100 may be employed for distributing decision making will now be described with respect to the following flow diagram of the methods 300 and 350 depicted in FIGS. 3 and 4. It should be apparent to those of ordinary skill in the art that the methods 300 and 350 represent generalized illustrations and that other steps may be added or existing steps may be removed, modified or rearranged without departing from the scopes of the methods 300 and 350.
The descriptions of the methods 300 and 350 are made with reference to the system 100 illustrated in FIG. 1, and thus make reference to the elements cited therein. It should, however, be understood that the methods 300 and 350 are not limited to the elements set forth in the system 100. Instead, it should be understood that the methods 300 and 350 may be practiced by a system having a different configuration than that set forth in the system 100.
With reference first to FIG. 3, there is shown a flowchart of a method 300 for distributing decision making in a centralized flow routing system, according to an embodiment. The method 300 may be performed at the switch 101. Using the method 300, the system 100 devolves decision making from the central controller 120 to the switch 101.
The processor 104 in the switch 101 may implement or execute the system 100 to perform one or more of the steps described in the method 300 in distributing decision making in the network 130. In another embodiment, the central controller 120 devolves some controls to a subset of co-operating switches rather than each switch acting alone in conjunction with the central controller 120. The cooperation between switches may be done via an inter-switch control/management protocol in addition to the commands issued by the central controller 120.
At step 301, the switch 101 receives the local rules for managing flows devolved from the central controller 120. The local rules may include normal flow forwarding rules, significant flow rules, and security flow rules. The local rules devolved from the central controller 120 may be applied based on a type of flow received at the switch 101. Additionally, the local rules include thresholds for metrics measured at the switch 101. For instance, the metrics include bit rate, packet count, or a combination of bit rate and packet count.
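For illustration, the devolved local rules might be represented at the switch along the following lines; the rule categories follow the description above, but the field names and values are hypothetical and are not drawn from the patent.

    local_rules = {
        "normal_forwarding": [
            {"match": {"dst_subnet": "10.0.0.0/8"}, "next_hops": ["port1", "port2"]},
        ],
        "significant_flow": [
            {"match": {"proto": "tcp"},
             "thresholds": {"max_packets": 40, "max_bit_rate": 5_000_000}},
        ],
        "security": [
            {"match": {"dst_port": 23}, "action": "check_with_controller"},
        ],
    }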
At step 302, the switch 101 receives a packet in a flow. The packet may comprise any packet within the flow.
At step 303, the switch 101 determines whether a metric for the flow satisfies a dynamic condition to trigger a metric report 118. For instance, the switch 101 may sample the flow using the measurement circuit 108. Using the global rules, the central controller 120 determines the local rules, including the dynamic condition, and sends the local rules to the switch 101. The metric is measured, and the switch 101 then compares the measured metric to the metric threshold provided in the local rules. For instance, if the metric threshold is a packet count threshold, the switch 101 may compare the packet count in the flow to the packet count threshold.
At step 304, in response to a determination that the metric satisfies the dynamic condition, the switch 101 sends the metric report 118 to the central controller 120. For example, flows having a long duration and high bandwidth may be significant because the manager of the system 100 may want to provide improved QoS. In this instance, the metric threshold may be determined based on a combination of bandwidth and the duration of the flow. Alternatively, an especially active host's flows might be significant because the manager may want to rate limit high-volume users. The central controller 120 then determines the metric threshold in the local rules based on the volume of use by each end host. Similarly, other metric thresholds may be determined based on global priorities. The switch 101 may delay invocation of the central controller 120 until a condition specified by the central controller 120 is met.
In another example, the local rules may specify that a threshold condition is met after N packets on the flow. For instance, with N=40, the central controller 120 learns of any flow that comprises at least 40 packets. Alternatively, the local rules may specify that the switch 101 is to invoke the central controller 120 after N bytes on the flow, if the average rate of the flow goes above B bits/sec, or for specific source and/or destination TCP ports. Additionally, the central controller 120 may be invoked for specific source and/or destination IP addresses, for wildcards that match some subpart of the IP address, or for higher-layer protocol features (e.g., using deep packet inspection). The central controller 120 may also be invoked for a hypertext transfer protocol (HTTP) flow with a Request-URI matching (or not matching) a string pattern specified by the central controller 120, e.g., a specific MPEG frame type.
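A hedged sketch of such invocation conditions is shown below: a single predicate combines a packet-count trigger, a byte-count trigger, an average-rate trigger, port matching, and a Request-URI pattern. The flow-state fields and rule keys are assumptions made only for this illustration.

    import re
    from dataclasses import dataclass

    @dataclass
    class FlowState:
        packets: int = 0
        bytes: int = 0
        avg_bps: float = 0.0
        dst_port: int = 0
        request_uri: str = ""

    def should_invoke_controller(rule, fs):
        if fs.packets >= rule.get("invoke_after_packets", float("inf")):
            return True                                   # N packets seen on the flow
        if fs.bytes >= rule.get("invoke_after_bytes", float("inf")):
            return True                                   # N bytes seen on the flow
        if fs.avg_bps > rule.get("invoke_above_bps", float("inf")):
            return True                                   # average rate above B bits/sec
        if fs.dst_port in rule.get("invoke_ports", ()):
            return True                                   # specific TCP port of interest
        pattern = rule.get("invoke_uri_pattern")          # e.g., extracted by deep packet inspection
        if pattern and re.search(pattern, fs.request_uri):
            return True
        return False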
At step 305, the switch 101 determines whether the packet requires a synchronous check with the central controller 120. At step 306, the switch 101 checks with the central controller 120 in response to a determination at step 305 that the rule requires a synchronous check.
Thereafter, at step 307, the switch 101 may receive an instruction from the central controller 120. For instance, the central controller 120 sets up the corresponding flow table entries for this new flow in all relevant switches, and sends the packet to the switch 101. The central controller 120 may also rate limit the flow at the switch 101. Additionally, the central controller 120 may update the metric threshold to provide a greater or lesser limit on the metric at the switch 101.
At step 308, in response to a determination at step 303 that the metric does not satisfy the dynamic condition, the switch 101 manages the flow using the normal flow forwarding rules devolved from the central controller 120. The switch 101 also manages the flow using the normal flow forwarding rules devolved from the central controller 120 in response to a determination at step 305 that the rule does not require a synchronous check with the central controller 120. For instance, the normal flow forwarding rules may comprise multi-path rules in which the central controller 120 provides a flow setup rule with a wildcard flow-ID, to match (for example) all flows between two end hosts, and a plurality of next-hop destinations for the flows matching this rule. The switch 101 then chooses a specific next-hop destination upon a flow arrival. The normal flow forwarding rules may specify that the choice is made round-robin, randomly, based on switch-local knowledge of traffic, etc. The normal flow forwarding rules may also specify weights so that some paths are chosen more often than others.
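The weighted choice among next hops can be sketched as follows; random.choices is standard-library Python, while the rule layout and the weights shown are assumptions for illustration.

    import random

    def choose_next_hop(rule):
        hops = rule["next_hops"]                        # e.g., ["portA", "portB", "portC"]
        weights = rule.get("weights", [1] * len(hops))  # equal weights unless specified
        return random.choices(hops, weights=weights, k=1)[0]

    # Example: portA is chosen roughly twice as often as portB or portC.
    rule = {"next_hops": ["portA", "portB", "portC"], "weights": [2, 1, 1]}
    next_hop = choose_next_hop(rule)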
At step 309, the switch 101 may optionally receive an instruction to manage the flow from the central controller 120.
With reference next to FIG. 4, there is shown a flowchart of a method 350 for distributing decision making in a centralized flow routing system, according to an example. The method 350 may be performed at the central controller 120. The central controller 120 may be a single device or a distributed system.
At step 351, the central controller 120 determines global rules for the network 130. For instance, the central controller 120 may receive global rules based on global policies entered by a manager or an administrator of the system 100.
At step 352, the central controller 120 determines local rules for the switch 101 and other similar switches in the system 100. The local rules are determined using the global rules. For instance, the local rules determined by the central controller 120 may provide probabilistic admission control in which the central controller 120 directs the switch 101 to drop new flows matching a wildcard flow-ID (or even a singleton flow-ID). For example, such flows could be dropped with probability P, or if the rate of flows matching the flow-ID exceeds a threshold T, or if there are more than N current flows matching the flow-ID.
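A minimal sketch of such probabilistic admission control, assuming hypothetical rule keys for P, T, and N, might look like the following.

    import random

    def admit_new_flow(rule, matching_flow_rate, current_matching_flows):
        if random.random() < rule.get("drop_probability", 0.0):
            return False                                  # dropped with probability P
        if matching_flow_rate > rule.get("max_flow_rate", float("inf")):
            return False                                  # arrival rate of matching flows exceeds T
        if current_matching_flows >= rule.get("max_concurrent_flows", float("inf")):
            return False                                  # more than N current matching flows
        return True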
At step 353, the central controller 120 devolves the local rules to the switch 101. The central controller 120 may devolve the local rules to a plurality of switches such as the switch 101. Devolving may include determining and sending local rules to a switch.
At step 354, the central controller 120 receives a metric report 118 from the switch 101. The metric report 118 may be received as an asynchronous update in which the switch 101 forwards packets of a flow until the central controller 120 provides an instruction regarding the flow. Alternatively, at step 364, the central controller 120 receives a flow-setup request 112 from the switch 101 as a synchronous request, and in addition to responding to this request with an instruction regarding the flow, the central controller 120 may also use the information in this flow-setup request to refine the local rules of step 352. The central controller 120 may therefore use multiple sources of information, for instance flow-setup requests and metric reports, to determine actions regarding the current flow and subsequent flows.
At step 355, the central controller 120 provides an instruction for the switch 101 based on the metric report 118. For instance, the central controller 120 may set up the requested flow for a new flow. Additionally, the central controller 120 may adjust the thresholds for the metrics to meet global policies. The central controller 120 thereby updates the dynamic conditions on the network 130. Thereafter, the updated thresholds and local rules may be devolved to the switch 101 asynchronously.
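The controller-side behavior in steps 354 and 355 can be sketched roughly as follows; the class, the message formats, and the re-tuning policy are assumptions made only to illustrate the asynchronous report path, the synchronous flow-setup path, and the adjustment of thresholds.

    class CentralController:
        def __init__(self):
            # Local rules to be devolved; the bit-rate threshold here is a placeholder value.
            self.local_rules = {"significant_flow": {"max_bit_rate": 5_000_000}}

        def on_metric_report(self, report):
            # Asynchronous path: the switch kept forwarding while this report was in flight.
            self._retune_thresholds(report)
            return {"action": "rate-limit", "flow_id": report["flow_id"]}

        def on_flow_setup_request(self, request):
            # Synchronous path: the switch waits for this answer before forwarding the flow.
            self._retune_thresholds(request)
            return {"action": "install-route", "flow_id": request["flow_id"],
                    "path": ["switch-101", "switch-102"]}

        def _retune_thresholds(self, evidence):
            # Example policy: tighten the bit-rate threshold when a report shows heavy use,
            # so that global properties of the network are maintained.
            limit = self.local_rules["significant_flow"]["max_bit_rate"]
            if evidence.get("bit_rate", 0) > limit:
                self.local_rules["significant_flow"]["max_bit_rate"] = int(limit * 0.9)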
Some or all of the operations set forth in the methods 300 and 350 and other functions and operations described herein may be embodied in computer programs stored on a storage device. For example, the computer programs may exist as software program(s) comprised of program instructions in source code, object code, executable code or other formats.
Exemplary storage devices include conventional RAM, ROM, EPROM, EEPROM, and disks. It is therefore to be understood that any electronic device capable of executing the above-described functions may perform those functions enumerated above.
What have been described and illustrated herein are embodiments of the invention along with some of their variations. The terms, descriptions and figures used herein are set forth by way of illustration only and are not meant as limitations. Those skilled in the art will recognize that many variations are possible within the spirit and scope of the invention, wherein the invention is intended to be defined by the following claims and their equivalents, in which all terms are meant in their broadest reasonable sense unless otherwise indicated.

Claims (19)

What is claimed is:
1. A method for distributing decision making in a centralized flow routing system, said method comprising by a network switch:
receiving local rules devolved from a central controller that determines a global set of rules for managing flows, wherein the local rules comprise a definition of a dynamic condition, multiple per-flow forwarding rules each of which applies to a different respective individual flow, and a multi-flow forwarding rules engine that applies to each of multiple different flows;
from a network, receiving a packet of a particular flow;
determining whether the particular flow requires the switch to communicate with the central controller, wherein
the determining comprises ascertaining whether the particular flow satisfies the dynamic condition, wherein the ascertaining comprises ascertaining whether a metric for the particular flow exceeds a threshold, and
the determining comprises determining at least one of a security status of the particular flow and whether the particular flow is being forwarded from one virtual local area network (VLAN) to another VLAN;
based on a determination that the particular flow satisfies the dynamic condition,
transmitting to the central controller a metric report comprising one or more flow metrics characterizing the particular flow,
receiving from the central controller an instruction for a particular per-flow forwarding rule for managing the particular flow, and
forwarding the particular flow according to the per-flow forwarding rule; and
based on a determination that the particular flow does not satisfy the dynamic condition, forwarding the particular flow according to the multi-flow forwarding rules engine.
2. The method according to claim 1, wherein the switch ceases to forward packets for the particular flow until receiving the instruction.
3. The method according to claim 1, further comprising:
modifying the dynamic condition in response to an instruction from the central controller, wherein the dynamic condition comprises a threshold on a bit rate of the flow or a packet rate of the particular flow.
4. The method according to claim 3, wherein the dynamic condition comprises a shared resource usage among multiple switches in the network.
5. The method according to claim 1, wherein the instruction to manage the particular flow comprises one of re-routing, rate limiting, and reprioritizing the particular flow.
6. The method according to claim 1, wherein the determining of the security status of the particular flow comprises determining a particular metric for the particular flow that indicates the security status of the particular flow.
7. The method according to claim 1, wherein the central controller is a distributed system.
8. The method according to claim 1, wherein the determining of whether the particular flow is being forwarded from one VLAN to another VLAN comprises determining a particular metric for the particular flow that indicates whether the particular flow is being forwarded from one virtual local area network (VLAN) to another VLAN.
9. The method according to claim 1, wherein the metric for the flow at the switch indicates a number of packets in the particular flow.
10. The method according to claim 1, wherein the metric for the flow at the switch indicates a duration of the particular flow.
11. The method according to claim 1, wherein the metric for the flow at the switch indicates a number of bytes in the particular flow.
12. The method according to claim 1, wherein the metric for the flow at the switch indicates a number of occurrences of a specific type of packet or packet header field value in the particular flow.
13. A switch in a centralized flow routing system, the switch comprising:
data storage configured to store local rules devolved from a central controller that determines a global set of rules for managing flows, wherein the local rules comprise a definition of a dynamic condition, multiple per-flow forwarding rules each of which applies to a different respective individual flow, and a multi-flow forwarding rules engine that applies to each of multiple different flows;
a port configured to receive a packet of a particular flow from a network;
a processor configured to perform operations comprising determining whether the particular flow requires the switch to communicate with the central controller, wherein
the determining comprises ascertaining whether the particular flow satisfies the dynamic condition, wherein the ascertaining comprises ascertaining whether a metric for the particular flow exceeds a threshold, and
the determining comprises determining at least one of a security status of the particular flow and whether the particular flow is being forwarded from one virtual local area network (VLAN) to another VLAN;
based on a determination that the particular flow satisfies the dynamic condition, the processor is operable to perform operations comprising
transmitting to the central controller a metric report comprising one or more flow metrics characterizing the particular flow, and
receiving from the central controller an instruction for a particular per-flow forwarding rule for managing the particular flow, and
forwarding the particular flow according to the per-flow forwarding rule; and
based on a determination that the particular flow does not satisfy the dynamic condition, the processor is configured to forward the particular flow according to the multi-flow forwarding rules engine.
14. The switch according to claim 13, further configured to:
modify the dynamic condition in response to an instruction from the central controller, wherein the dynamic condition comprises a threshold on a bit rate of a flow or a packet rate of a flow.
15. The switch according to claim 14, wherein the dynamic condition comprises a shared resource usage among multiple switches in the network.
16. The switch according to claim 14, wherein the instruction to manage the particular flow comprises one of re-routing, rate limiting, and reprioritizing the particular flow.
17. The switch according to claim 13, wherein the determining of the security status of the particular flow comprises determining a particular metric for the particular flow that indicates the security status of the particular flow.
18. The switch according to claim 13, further comprising:
a measurement circuit configured to sample packets received at the port to determine whether a metric for a flow at the switch exceeds a threshold to trigger a report to the central controller.
19. A non-transitory computer readable storage medium on which is embedded one or more computer programs that, when executed by a processor, implement a method for distributing decision making in a centralized flow routing system comprising:
receiving local rules devolved from a central controller that determines a global set of rules for managing flows, wherein the local rules comprise a definition of a dynamic condition, multiple per-flow forwarding rules each of which applies to a different respective individual flow, and a multi-flow forwarding rules engine that applies to each of multiple different flows;
from a network, receiving a packet of a particular flow;
determining whether the particular flow requires the processor to communicate with the central controller, wherein
the determining comprises ascertaining whether the particular flow satisfies the dynamic condition, wherein the ascertaining comprises ascertaining whether a metric for the particular flow exceeds a threshold, and
the determining comprises determining at least one of a security status of the particular flow and whether the particular flow is being forwarded from one virtual local area network (VLAN) to another VLAN;
based on a determination that the particular flow satisfies the dynamic condition,
transmitting to the central controller a metric report comprising one or more flow metrics characterizing the particular flow,
receiving from the central controller an instruction for a particular per-flow forwarding rule for managing the particular flow, and
forwarding the particular flow according to the per-flow forwarding rule; and
based on a determination that the particular flow does not satisfy the dynamic condition, forwarding the particular flow according to the multi-flow forwarding rules engine.
US12/662,885 2010-05-10 2010-05-10 Distributing decision making in a centralized flow routing system Active 2031-03-01 US8503307B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/662,885 US8503307B2 (en) 2010-05-10 2010-05-10 Distributing decision making in a centralized flow routing system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US12/662,885 US8503307B2 (en) 2010-05-10 2010-05-10 Distributing decision making in a centralized flow routing system

Publications (2)

Publication Number Publication Date
US20110273988A1 US20110273988A1 (en) 2011-11-10
US8503307B2 true US8503307B2 (en) 2013-08-06

Family

ID=44901854

Family Applications (1)

Application Number Title Priority Date Filing Date
US12/662,885 Active 2031-03-01 US8503307B2 (en) 2010-05-10 2010-05-10 Distributing decision making in a centralized flow routing system

Country Status (1)

Country Link
US (1) US8503307B2 (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130028085A1 (en) * 2011-07-28 2013-01-31 Guy Bilodeau Flow control in packet processing systems
US20130163602A1 (en) * 2011-12-26 2013-06-27 Electronics And Telecommunications Research Institute Flow-based packet transport device and packet management method thereof
US20130283374A1 (en) * 2012-04-18 2013-10-24 Radware, Ltd. Techniques for separating the processing of clients' traffic to different zones in software defined networks
US20130322440A1 (en) * 2011-01-28 2013-12-05 Nec Corporation Communication system, forwarding node, control device, communication control method, and program
US20140036673A1 (en) * 2011-01-20 2014-02-06 Koji EHARA Network system, controller and qos control method
US20150043586A1 (en) * 2012-03-23 2015-02-12 Nec Corporation Control Apparatus, Communication Apparatus, Communication System, Communication Method, and Program
CN104767634A (en) * 2014-01-06 2015-07-08 韩国电子通信研究院 Method and apparatus for managing flow table
US20160261491A1 (en) * 2013-12-12 2016-09-08 Alcatel Lucent A method for providing control in a communication network
US20160269289A1 (en) * 2010-11-22 2016-09-15 Nec Corporation Communication system, communication device, controller, and method and program for controlling forwarding path of packet flow
CN105993149A (en) * 2013-11-28 2016-10-05 Kt株式会社 Method and apparatus for dynamic traffic control in SDN environment
US9979637B2 (en) 2016-06-07 2018-05-22 Dell Products L.P. Network flow management system
US10819828B2 (en) * 2016-09-29 2020-10-27 Nokia Solutions And Networks Oy Enhancement of traffic detection and routing in virtualized environment
US20230300045A1 (en) * 2022-03-15 2023-09-21 Keysight Technologies, Inc. Methods, systems, and computer readable media for selectively processing a packet flow using a flow inspection engine
US11949570B2 (en) 2021-07-30 2024-04-02 Keysight Technologies, Inc. Methods, systems, and computer readable media for utilizing machine learning to automatically configure filters at a network packet broker

Families Citing this family (120)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9716672B2 (en) 2010-05-28 2017-07-25 Brocade Communications Systems, Inc. Distributed configuration management for virtual cluster switching
US8989186B2 (en) 2010-06-08 2015-03-24 Brocade Communication Systems, Inc. Virtual port grouping for virtual cluster switching
US9001824B2 (en) 2010-05-18 2015-04-07 Brocade Communication Systems, Inc. Fabric formation for virtual cluster switching
US9769016B2 (en) 2010-06-07 2017-09-19 Brocade Communications Systems, Inc. Advanced link tracking for virtual cluster switching
US9270486B2 (en) 2010-06-07 2016-02-23 Brocade Communications Systems, Inc. Name services for virtual cluster switching
US8867552B2 (en) 2010-05-03 2014-10-21 Brocade Communications Systems, Inc. Virtual cluster switching
US9608833B2 (en) 2010-06-08 2017-03-28 Brocade Communications Systems, Inc. Supporting multiple multicast trees in trill networks
US8446914B2 (en) 2010-06-08 2013-05-21 Brocade Communications Systems, Inc. Method and system for link aggregation across multiple switches
US9628293B2 (en) 2010-06-08 2017-04-18 Brocade Communications Systems, Inc. Network layer multicasting in trill networks
US9246703B2 (en) 2010-06-08 2016-01-26 Brocade Communications Systems, Inc. Remote port mirroring
US9806906B2 (en) 2010-06-08 2017-10-31 Brocade Communications Systems, Inc. Flooding packets on a per-virtual-network basis
US9807031B2 (en) 2010-07-16 2017-10-31 Brocade Communications Systems, Inc. System and method for network configuration
CN103026662B (en) * 2010-07-23 2016-11-09 日本电气株式会社 Communication system, node, statistics gatherer means, statistical information collection method and program
WO2012077259A1 (en) * 2010-12-10 2012-06-14 Nec Corporation Communication system, control device, node controlling method and program
ES2609521T3 (en) * 2010-12-13 2017-04-20 Nec Corporation Communication route control system, route control device, communication route control method, and route control program
US20130250797A1 (en) * 2010-12-14 2013-09-26 Nobuhiko Itoh Communication control system, control device, communication control method, and communication control program
EP3461077A1 (en) 2011-01-13 2019-03-27 NEC Corporation Network system and routing method
CN103314557B (en) * 2011-01-17 2017-01-18 日本电气株式会社 Network system, controller, switch, and traffic monitoring method
WO2012119614A1 (en) * 2011-03-07 2012-09-13 Nec Europe Ltd. A method for operating an openflow switch within a network, an openflow switch and a network
US9270572B2 (en) 2011-05-02 2016-02-23 Brocade Communications Systems Inc. Layer-3 support in TRILL networks
US20120287930A1 (en) * 2011-05-13 2012-11-15 Cisco Technology, Inc. Local switching at a fabric extender
US8948056B2 (en) 2011-06-28 2015-02-03 Brocade Communication Systems, Inc. Spanning-tree based loop detection for an ethernet fabric switch
US9401861B2 (en) 2011-06-28 2016-07-26 Brocade Communications Systems, Inc. Scalable MAC address distribution in an Ethernet fabric switch
US9407533B2 (en) 2011-06-28 2016-08-02 Brocade Communications Systems, Inc. Multicast in a trill network
US8885641B2 (en) 2011-06-30 2014-11-11 Brocade Communication Systems, Inc. Efficient trill forwarding
US8964563B2 (en) 2011-07-08 2015-02-24 Telefonaktiebolaget L M Ericsson (Publ) Controller driven OAM for OpenFlow
US9736085B2 (en) 2011-08-29 2017-08-15 Brocade Communications Systems, Inc. End-to end lossless Ethernet in Ethernet fabric
JP5943410B2 (en) * 2011-09-21 2016-07-05 日本電気株式会社 COMMUNICATION DEVICE, CONTROL DEVICE, COMMUNICATION SYSTEM, COMMUNICATION CONTROL METHOD, AND PROGRAM
EP2759102A4 (en) * 2011-09-21 2016-01-06 Nec Corp Communication apparatus, communication system, communication control method, and computer program
JP6007978B2 (en) * 2011-09-21 2016-10-19 日本電気株式会社 COMMUNICATION DEVICE, CONTROL DEVICE, COMMUNICATION SYSTEM, COMMUNICATION CONTROL METHOD, AND PROGRAM
US20140233392A1 (en) * 2011-09-21 2014-08-21 Nec Corporation Communication apparatus, communication system, communication control method, and program
JP6036816B2 (en) * 2011-09-22 2016-11-30 日本電気株式会社 Communication terminal, communication method, and program
US8693344B1 (en) * 2011-09-27 2014-04-08 Big Switch Network, Inc. Systems and methods for generating packet forwarding rules based on network policy
US9154433B2 (en) 2011-10-25 2015-10-06 Nicira, Inc. Physical controller
US9699117B2 (en) 2011-11-08 2017-07-04 Brocade Communications Systems, Inc. Integrated fibre channel support in an ethernet fabric switch
US9450870B2 (en) 2011-11-10 2016-09-20 Brocade Communications Systems, Inc. System and method for flow management in software-defined networks
US8644149B2 (en) * 2011-11-22 2014-02-04 Telefonaktiebolaget L M Ericsson (Publ) Mechanism for packet forwarding using switch pools in flow-based, split-architecture networks
CN103166866B (en) * 2011-12-12 2016-08-03 华为技术有限公司 Generate the method for list item, the method receiving message and related device and system
CN103534999B (en) 2012-01-21 2016-07-13 华为技术有限公司 The method of message forwarding and device
US8995272B2 (en) 2012-01-26 2015-03-31 Brocade Communication Systems, Inc. Link aggregation in software-defined networks
EP2819356A4 (en) * 2012-02-20 2015-09-30 Nec Corp Network system, and method for improving resource usage
US9742693B2 (en) 2012-02-27 2017-08-22 Brocade Communications Systems, Inc. Dynamic service insertion in a fabric switch
US20130223226A1 (en) * 2012-02-29 2013-08-29 Dell Products, Lp System and Method for Providing a Split Data Plane in a Flow-Based Switching Device
US9559948B2 (en) * 2012-02-29 2017-01-31 Dell Products, Lp System and method for managing unknown flows in a flow-based switching device
CN104160665B (en) * 2012-03-08 2017-03-08 日本电气株式会社 Network system, controller and load-distribution method
US9154416B2 (en) 2012-03-22 2015-10-06 Brocade Communications Systems, Inc. Overlay tunnel in a fabric switch
AU2013249154B2 (en) 2012-04-18 2015-12-10 Nicira, Inc. Exchange of network state information between forwarding elements
US9374301B2 (en) 2012-05-18 2016-06-21 Brocade Communications Systems, Inc. Network feedback in software-defined networks
US9444842B2 (en) 2012-05-22 2016-09-13 Sri International Security mediation for dynamically programmable network
US10277464B2 (en) 2012-05-22 2019-04-30 Arris Enterprises Llc Client auto-configuration in a multi-switch link aggregation
US9571523B2 (en) 2012-05-22 2017-02-14 Sri International Security actuator for a dynamically programmable computer network
EP2853066B1 (en) 2012-05-23 2017-02-22 Brocade Communications Systems, Inc. Layer-3 overlay gateways
JP2015525982A (en) * 2012-06-26 2015-09-07 日本電気株式会社 COMMUNICATION METHOD, COMMUNICATION SYSTEM, INFORMATION PROCESSING DEVICE, COMMUNICATION TERMINAL, AND PROGRAM
US10075520B2 (en) * 2012-07-27 2018-09-11 Microsoft Technology Licensing, Llc Distributed aggregation of real-time metrics for large scale distributed systems
US9749260B2 (en) 2012-07-31 2017-08-29 Hewlett Packard Enterprise Development Lp Implementing a transition protocol in which a first rule set for routing packets received by a group of switches during a first time period is updated to a second rule set
US9602430B2 (en) 2012-08-21 2017-03-21 Brocade Communications Systems, Inc. Global VLANs for fabric switches
US20150063349A1 (en) * 2012-08-30 2015-03-05 Shahab Ardalan Programmable switching engine with storage, analytic and processing capabilities
JP5917771B2 (en) * 2012-10-03 2016-05-18 エヌイーシー ラボラトリーズ アメリカ インクNEC Laboratories America, Inc. Generic centralized architecture for software-defined networking with low-latency one-way bypass communication
US9071529B2 (en) * 2012-10-08 2015-06-30 Telefonaktiebolaget L M Ericsson (Publ) Method and apparatus for accelerating forwarding in software-defined networks
US9401872B2 (en) 2012-11-16 2016-07-26 Brocade Communications Systems, Inc. Virtual link aggregations across multiple fabric switches
US9544222B2 (en) * 2013-01-09 2017-01-10 Ventus Networks, Llc Router
US9548926B2 (en) 2013-01-11 2017-01-17 Brocade Communications Systems, Inc. Multicast traffic load balancing over virtual link aggregation
US9350680B2 (en) 2013-01-11 2016-05-24 Brocade Communications Systems, Inc. Protection switching over a virtual link aggregation
US9413691B2 (en) 2013-01-11 2016-08-09 Brocade Communications Systems, Inc. MAC address synchronization in a fabric switch
US9565113B2 (en) 2013-01-15 2017-02-07 Brocade Communications Systems, Inc. Adaptive link aggregation and virtual link aggregation
EP2947826A4 (en) * 2013-01-21 2016-09-21 Nec Corp Control apparatus, communication apparatus, communication system, switch control method and program
US8964530B2 (en) * 2013-01-31 2015-02-24 Cisco Technology, Inc. Increasing multi-destination scale in a network environment
US9565099B2 (en) 2013-03-01 2017-02-07 Brocade Communications Systems, Inc. Spanning tree in fabric switches
US9596192B2 (en) 2013-03-15 2017-03-14 International Business Machines Corporation Reliable link layer for control links between network controllers and switches
US9104643B2 (en) 2013-03-15 2015-08-11 International Business Machines Corporation OpenFlow controller master-slave initialization protocol
US9444748B2 (en) 2013-03-15 2016-09-13 International Business Machines Corporation Scalable flow and congestion control with OpenFlow
US9407560B2 (en) 2013-03-15 2016-08-02 International Business Machines Corporation Software defined network-based load balancing for physical and virtual networks
US9954781B2 (en) 2013-03-15 2018-04-24 International Business Machines Corporation Adaptive setting of the quantized congestion notification equilibrium setpoint in converged enhanced Ethernet networks
US9253096B2 (en) 2013-03-15 2016-02-02 International Business Machines Corporation Bypassing congestion points in a converged enhanced ethernet fabric
US9401818B2 (en) 2013-03-15 2016-07-26 Brocade Communications Systems, Inc. Scalable gateways for a fabric switch
US9118984B2 (en) 2013-03-15 2015-08-25 International Business Machines Corporation Control plane for integrated switch wavelength division multiplexing
US9219689B2 (en) 2013-03-15 2015-12-22 International Business Machines Corporation Source-driven switch probing with feedback request
US9769074B2 (en) 2013-03-15 2017-09-19 International Business Machines Corporation Network per-flow rate limiting
US9401857B2 (en) 2013-03-15 2016-07-26 International Business Machines Corporation Coherent load monitoring of physical and virtual networks with synchronous status acquisition
US9609086B2 (en) 2013-03-15 2017-03-28 International Business Machines Corporation Virtual machine mobility using OpenFlow
US9692775B2 (en) * 2013-04-29 2017-06-27 Telefonaktiebolaget Lm Ericsson (Publ) Method and system to dynamically detect traffic anomalies in a network
US9565028B2 (en) 2013-06-10 2017-02-07 Brocade Communications Systems, Inc. Ingress switch multicast distribution in a fabric switch
US9699001B2 (en) 2013-06-10 2017-07-04 Brocade Communications Systems, Inc. Scalable and segregated network virtualization
US9736041B2 (en) * 2013-08-13 2017-08-15 Nec Corporation Transparent software-defined network management
US9806949B2 (en) 2013-09-06 2017-10-31 Brocade Communications Systems, Inc. Transparent interconnection of Ethernet fabric switches
US20150089566A1 (en) * 2013-09-24 2015-03-26 Radware, Ltd. Escalation security method for use in software defined networks
US9912612B2 (en) 2013-10-28 2018-03-06 Brocade Communications Systems LLC Extended ethernet fabric switches
KR101818082B1 (en) * 2014-01-06 2018-02-21 한국전자통신연구원 A method and apparatus for managing flow table
CN105247831B (en) * 2014-01-23 2018-10-30 华为技术有限公司 Flow table amending method, flow table modification device and open flows network system
US9548873B2 (en) 2014-02-10 2017-01-17 Brocade Communications Systems, Inc. Virtual extensible LAN tunnel keepalives
US10581758B2 (en) 2014-03-19 2020-03-03 Avago Technologies International Sales Pte. Limited Distributed hot standby links for vLAG
US10476698B2 (en) 2014-03-20 2019-11-12 Avago Technologies International Sales Pte. Limited Redundent virtual link aggregation group
US10063473B2 (en) 2014-04-30 2018-08-28 Brocade Communications Systems LLC Method and system for facilitating switch virtualization in a network of interconnected switches
US10003474B2 (en) * 2014-05-01 2018-06-19 Metaswitch Networks Ltd Flow synchronization
WO2015168888A1 (en) * 2014-05-07 2015-11-12 华为技术有限公司 Network congestion control method and controller
US9800471B2 (en) 2014-05-13 2017-10-24 Brocade Communications Systems, Inc. Network extension groups of global VLANs in a fabric switch
US10616108B2 (en) 2014-07-29 2020-04-07 Avago Technologies International Sales Pte. Limited Scalable MAC address virtualization
US9544219B2 (en) 2014-07-31 2017-01-10 Brocade Communications Systems, Inc. Global VLAN services
US9807007B2 (en) 2014-08-11 2017-10-31 Brocade Communications Systems, Inc. Progressive MAC address learning
US9524173B2 (en) 2014-10-09 2016-12-20 Brocade Communications Systems, Inc. Fast reboot for a switch
US9699029B2 (en) 2014-10-10 2017-07-04 Brocade Communications Systems, Inc. Distributed configuration management in a switch group
US9626255B2 (en) 2014-12-31 2017-04-18 Brocade Communications Systems, Inc. Online restoration of a switch snapshot
US9628407B2 (en) 2014-12-31 2017-04-18 Brocade Communications Systems, Inc. Multiple software versions in a switch group
US9942097B2 (en) 2015-01-05 2018-04-10 Brocade Communications Systems LLC Power management in a network of interconnected switches
US10003552B2 (en) 2015-01-05 2018-06-19 Brocade Communications Systems, Llc. Distributed bidirectional forwarding detection protocol (D-BFD) for cluster of interconnected switches
US9838333B2 (en) * 2015-01-20 2017-12-05 Futurewei Technologies, Inc. Software-defined information centric network (ICN)
US10038592B2 (en) 2015-03-17 2018-07-31 Brocade Communications Systems LLC Identifier assignment to a new switch in a switch group
US9807005B2 (en) 2015-03-17 2017-10-31 Brocade Communications Systems, Inc. Multi-fabric manager
US9967134B2 (en) 2015-04-06 2018-05-08 Nicira, Inc. Reduction of network churn based on differences in input state
US10579406B2 (en) 2015-04-08 2020-03-03 Avago Technologies International Sales Pte. Limited Dynamic orchestration of overlay tunnels
US10009229B2 (en) * 2015-06-11 2018-06-26 Cisco Technology, Inc. Policy verification in a network
US9954746B2 (en) * 2015-07-09 2018-04-24 Microsoft Technology Licensing, Llc Automatically generating service documentation based on actual usage
US10439929B2 (en) 2015-07-31 2019-10-08 Avago Technologies International Sales Pte. Limited Graceful recovery of a multicast-enabled switch
US10171303B2 (en) 2015-09-16 2019-01-01 Avago Technologies International Sales Pte. Limited IP-based interconnection of switches with a logical chassis
US10204122B2 (en) 2015-09-30 2019-02-12 Nicira, Inc. Implementing an interface between tuple and message-driven control entities
US9912614B2 (en) 2015-12-07 2018-03-06 Brocade Communications Systems LLC Interconnection of switches based on hierarchical overlay tunneling
KR102492234B1 (en) * 2016-02-16 2023-01-27 주식회사 쏠리드 Distributed antenna system, method for processing frame of the same, and method for avoiding congestion of the same
US11019167B2 (en) 2016-04-29 2021-05-25 Nicira, Inc. Management of update queues for network controller
US10237090B2 (en) 2016-10-28 2019-03-19 Avago Technologies International Sales Pte. Limited Rule-based network identifier mapping
US10225161B2 (en) * 2016-10-31 2019-03-05 Accedian Networks Inc. Precise statistics computation for communication networks

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6151322A (en) * 1997-02-14 2000-11-21 Advanced Micro Devices, Inc. Multiport data switch having data frame VLAN tagging and VLAN stripping
US6154776A (en) * 1998-03-20 2000-11-28 Sun Microsystems, Inc. Quality of service allocation on a network
US20060075093A1 (en) * 2004-10-05 2006-04-06 Enterasys Networks, Inc. Using flow metric events to control network operation

Non-Patent Citations (9)

* Cited by examiner, † Cited by third party
Title
Eddie Kohler et al., The Click Modular Router, Laboratory for Computer Science, MIT, Jul. 2000.
Global Environment for Network Innovations, http://geni.net, downloaded May 10, 2010.
J. Turner et al., Supercharging PlanetLab-A High Performance, Multi-Application, Overlay Network Platform, ACM SIGCOMM '07, Aug. 27-31, 2007, Kyoto, Japan.
Mark Handley et al., XORP: An Open Platform for Network Research, ACM SIGCOMM Hot Topics in Networking, 2002.
Martin Casado et al., Ethane: Taking Control of the Enterprise, ACM SIGCOMM '07, Aug. 27-31, 2007, Kyoto, Japan.
Natasha Gude et al., NOX: Towards an Operating System for Networks, In submission, downloaded May 10, 2010.
NetFPGA: Programmable Networking Hardware, http://netfpga.org, downloaded May 10, 2010.
Nick McKeown et al., OpenFlow: Enabling Innovation in Campus Networks, Mar. 14, 2008.
OpenFlow Switch Specification, http://netfpga.org, downloaded May 10, 2010.

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11134012B2 (en) 2010-11-22 2021-09-28 Nec Corporation Communication system, communication device, controller, and method and program for controlling forwarding path of packet flow
US10541920B2 (en) * 2010-11-22 2020-01-21 Nec Corporation Communication system, communication device, controller, and method and program for controlling forwarding path of packet flow
US20160269289A1 (en) * 2010-11-22 2016-09-15 Nec Corporation Communication system, communication device, controller, and method and program for controlling forwarding path of packet flow
US20140036673A1 (en) * 2011-01-20 2014-02-06 Koji EHARA Network system, controller and qos control method
US9203776B2 (en) * 2011-01-20 2015-12-01 Nec Corporation Network system, controller and QOS control method
US20130322440A1 (en) * 2011-01-28 2013-12-05 Nec Corporation Communication system, forwarding node, control device, communication control method, and program
US9479323B2 (en) * 2011-01-28 2016-10-25 Nec Corporation Communication system, forwarding node, control device, communication control method, and program
US9270556B2 (en) * 2011-07-28 2016-02-23 Hewlett Packard Development Company, L.P. Flow control in packet processing systems
US20130028085A1 (en) * 2011-07-28 2013-01-31 Guy Bilodeau Flow control in packet processing systems
US20130163602A1 (en) * 2011-12-26 2013-06-27 Electronics And Telecommunications Research Institute Flow-based packet transport device and packet management method thereof
US9319241B2 (en) * 2011-12-26 2016-04-19 Electronics And Telecommunications Research Institute Flow-based packet transport device and packet management method thereof
US20150043586A1 (en) * 2012-03-23 2015-02-12 Nec Corporation Control Apparatus, Communication Apparatus, Communication System, Communication Method, and Program
US9590905B2 (en) * 2012-03-23 2017-03-07 Nec Corporation Control apparatus and a communication method, apparatus, and system to perform path control of a network
US9591011B2 (en) 2012-04-18 2017-03-07 Radware, Ltd. Techniques for separating the processing of clients' traffic to different zones in software defined networks
US20130283374A1 (en) * 2012-04-18 2013-10-24 Radware, Ltd. Techniques for separating the processing of clients' traffic to different zones in software defined networks
US9130977B2 (en) * 2012-04-18 2015-09-08 Radware, Ltd. Techniques for separating the processing of clients' traffic to different zones
US9210180B2 (en) * 2012-04-18 2015-12-08 Radware Ltd. Techniques for separating the processing of clients' traffic to different zones in software defined networks
US20130283373A1 (en) * 2012-04-18 2013-10-24 Radware, Ltd. Techniques for separating the processing of clients' traffic to different zones
CN105993149A (en) * 2013-11-28 2016-10-05 Kt株式会社 Method and apparatus for dynamic traffic control in SDN environment
CN105993149B (en) * 2013-11-28 2019-10-08 Kt株式会社 The method and apparatus that dynamic flow controls in SDN environment
US10033630B2 (en) * 2013-12-12 2018-07-24 Alcatel Lucent Method for configuring network elements to delegate control of packet flows in a communication network
US20160261491A1 (en) * 2013-12-12 2016-09-08 Alcatel Lucent A method for providing control in a communication network
CN104767634A (en) * 2014-01-06 2015-07-08 韩国电子通信研究院 Method and apparatus for managing flow table
US9979637B2 (en) 2016-06-07 2018-05-22 Dell Products L.P. Network flow management system
US10819828B2 (en) * 2016-09-29 2020-10-27 Nokia Solutions And Networks Oy Enhancement of traffic detection and routing in virtualized environment
US11949570B2 (en) 2021-07-30 2024-04-02 Keysight Technologies, Inc. Methods, systems, and computer readable media for utilizing machine learning to automatically configure filters at a network packet broker
US20230300045A1 (en) * 2022-03-15 2023-09-21 Keysight Technologies, Inc. Methods, systems, and computer readable media for selectively processing a packet flow using a flow inspection engine

Also Published As

Publication number Publication date
US20110273988A1 (en) 2011-11-10

Similar Documents

Publication Publication Date Title
US8503307B2 (en) Distributing decision making in a centralized flow routing system
US20220210092A1 (en) System and method for facilitating global fairness in a network
US8593970B2 (en) Methods and apparatus for defining a flow control signal related to a transmit queue
US9276852B2 (en) Communication system, forwarding node, received packet process method, and program
US8427958B2 (en) Dynamic latency-based rerouting
KR102104047B1 (en) Congestion control in packet data networking
US9071529B2 (en) Method and apparatus for accelerating forwarding in software-defined networks
US8537669B2 (en) Priority queue level optimization for a network flow
US7916718B2 (en) Flow and congestion control in switch architectures for multi-hop, memory efficient fabrics
EP3024186B1 (en) Methods and apparatus for defining a flow control signal
US8537846B2 (en) Dynamic priority queue level assignment for a network flow
EP1436951B1 (en) Trunking inter-switch links
US7944834B2 (en) Policing virtual connections
US6647412B1 (en) Method and network for propagating status information
US7500014B1 (en) Network link state mirroring
US20220045972A1 (en) Flow-based management of shared buffer resources
WO2012081145A1 (en) Communication path control system, path control device, communication path control method, and path control program
CN111245740B (en) Service quality strategy method and device for configuration service and computing equipment
Liu et al. RGBCC: A new congestion control mechanism for InfiniBand
Alharbi SDN-based mechanisms for provisioning quality of service to selected network flows
US20230022037A1 (en) Flow-based management of shared buffer resources
US20240080266A1 (en) Flexible per-flow multipath managed by sender-side network adapter
EP2164210B1 (en) Methods and apparatus for defining a flow control signal
Tam et al. Leveraging performance of multiroot data center networks by reactive reroute

Legal Events

Date Code Title Description
AS Assignment

Owner name: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P., TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:TOURRILHES, JEAN;YALAGANDULA, PRAVEEN;SHARMA, PUNEET;AND OTHERS;SIGNING DATES FROM 20100212 TO 20100216;REEL/FRAME:024407/0638

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP, TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.;REEL/FRAME:037079/0001

Effective date: 20151027

FPAY Fee payment

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8