US9270556B2 - Flow control in packet processing systems - Google Patents

Flow control in packet processing systems

Info

Publication number
US9270556B2
Authority
US
United States
Prior art keywords
flow control
load parameter
control load
value
intervals
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US13/192,618
Other versions
US20130028085A1 (en)
Inventor
Guy Bilodeau
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hewlett Packard Enterprise Development LP
Original Assignee
Hewlett Packard Development Co LP
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hewlett Packard Development Co LP filed Critical Hewlett Packard Development Co LP
Priority to US13/192,618
Assigned to HEWLETT PACKARD DEVELOPMENT COMPANY, L.P. Assignors: BILODEAU, GUY
Publication of US20130028085A1
Assigned to HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP. Assignors: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.
Application granted
Publication of US9270556B2
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 43/00: Arrangements for monitoring or testing data switching networks
    • H04L 43/08: Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L 43/0876: Network utilisation, e.g. volume of load or congestion level
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 43/00: Arrangements for monitoring or testing data switching networks
    • H04L 43/16: Threshold monitoring
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/10: Flow control; Congestion control
    • H04L 47/25: Flow control; Congestion control with rate being modified by the source upon detecting a change of network conditions

Definitions

  • Computer networks include various devices that facilitate communication between computers using packetized formats and protocols, such as the ubiquitous Transmission Control Protocol/Internet Protocol (TCP/IP).
  • Computer networks can include various packet processing systems for performing various types of packet processing, such as forwarding, switching, routing, analyzing, and like type packet operations.
  • a packet processing system can have multiple network interfaces to different network devices for receiving packets. The multiple network interfaces are controlled by a common set of resources in the system (e.g., processor, memory, and like type resources).
  • a network interface can receive packets at too high a rate (e.g., higher than a designated maximum rate for the network interface).
  • a packet overflow condition can be intentional, such as an attacker sending many packets to a network interface in a Denial-of-Service (DoS) attack.
  • Such a packet overflow condition can also be unintentional, such as too many devices trying to communicate through the same network interface, or incorrectly configured network device(s) sending too many packets to the network interface.
  • a network interface receiving an overflow of packets can monopolize the resources of the packet processing system or otherwise cause the resources to become overloaded.
  • Other network interfaces and/or other processes not associated with the overflowing network interface can become starved of resources in the packet processing system, causing such network interfaces and processes to stop working.
  • FIG. 1 is a block diagram of a packet processing system according to an example implementation.
  • FIG. 2 is a flow diagram depicting a method of flow control in a packet processing system according to an example implementation.
  • FIG. 3 is a flow diagram depicting a method of adjusting a flow control load parameter according to an example implementation.
  • FIG. 4 is a flow diagram depicting a method of adjusting a packet flow budget according to an example implementation.
  • FIG. 5 is a block diagram of a computer according to an example implementation.
  • flow control in a packet processing system is implemented by first obtaining metric data measuring performance of at least one resource in the packet processing system over intervals of a time period. A value of a flow control load parameter is adjusted during each of the intervals based on comparing the metric data with at least one condition that indicates depletion of the resource(s). A value of a packet flow budget for the packet processing system is established in each of the intervals based on the respective value of the flow control load parameter in each of the intervals. Thus, after some time intervals have elapsed, where the usage of resource(s) is deemed to be too high, the packet flow budget can restrict the rate of packet processing in the packet processing system to conserve the resource(s). After other time intervals have elapsed, where usage of resource(s) is deemed to be normal, the packet flow budget can provide a standard rate of packet processing. The packet flow budget can be adjusted continuously over the time period based on feedback from measurements of resource performance.
  • the flow control process can be used to monitor the fraction of resource usage devoted to packet processing and continuously maintain packet flows through the system at a maximum rate that can be sustained by the packet processing system. In this manner, packet processing performance in the system is maximized, without starving other processes of resources.
  • the flow control process does not rely on instructing the packet sources to stop sending packets in case of packet saturation, such as by multicasting an Ethernet PAUSE frame. Such an instruction will cause all packet sources to stop transmitting, even those that are not excessively transmitting packets and causing the problem. Further, some packet sources may ignore such an instruction, particularly if the packet sources are maliciously transmitting excessive packets.
  • the flow control process described herein instead uses metrics internal to the packet processing system to decide whether and by how much the packet flow should be restricted. Various embodiments are described below by referring to several examples.
  • FIG. 1 is a block diagram of a packet processing system 100 according to an example implementation.
  • the packet processing system 100 includes physical hardware 102 that implements an operating environment (OE) 104 .
  • the physical hardware 102 includes resources 106 managed by the OE 104 .
  • the packet processing system 100 can be implemented as any type of computer, device, appliance or the like.
  • the resources 106 can include processor(s), memory (e.g., volatile memories, non-volatile memories, magnetic and/or optical storage, etc.), interface circuits to external devices, and the like.
  • the packet processing system 100 can use the resource(s) to send and receive packetized data (“packets”).
  • the packets can be formatted using multiple layers of protocol, e.g., the Transmission Control Protocol/Internet Protocol (TCP/IP) model, the Open Systems Interconnection (OSI) model, or the like.
  • a packet generally includes a header and a payload.
  • the header implements a layer of protocol
  • the payload includes data, which may be related to packet(s) at another layer of protocol.
  • the resources 106 can operate on a flow of the packet (“packet flow”).
  • a “packet flow” is a sequence of packets passing an observation point, such as any of the resources 106 .
  • a “packet rate” for a packet flow is the number of packets in the sequence passing the observation point over a time interval. The more packets in the sequence, the higher the packet rate. Conversely, the fewer packets in the sequence, the lower the packet rate.
  • the packet flow can originate from at least one source.
  • the physical hardware 102 can execute machine-readable instructions to implement elements of functionality in the OE 104 (e.g., using at least one of the resources 106 , such as a processor).
  • elements of functionality in the OE 104 can be implemented as a physical circuit in the physical hardware 102 (e.g., an integrated circuit (IC), such as an application specific integrated circuit (ASIC) or field programmable gate array (FPGA)).
  • elements of functionality in the OE 104 are implemented using a combination of machine-readable instructions and physical circuits.
  • Elements of functionality in the OE 104 include a kernel 108 , at least one device driver (“device driver(s) 110 ”), a packet flow controller 112 , and at least one application (“application(s) 114 ”).
  • the kernel 108 controls the execution of the application(s) 114 and access to the resources 106 by the application(s) 114 .
  • the kernel 108 provides an application interface to the resources 106 .
  • the device driver(s) 110 provide an interface between the kernel 108 and at least a portion of the resources 106 (e.g., a network interface resource).
  • the device driver(s) 110 provide a kernel interface to the resources 106 .
  • the application(s) 114 can include at least one distinct process implemented by the physical hardware 102 under direction of the kernel 108 (e.g., using at least one of the resources 106 , such as a processor).
  • the application(s) 114 can include process(es) that generate and consume packets to be sent or received by the packet processing system 100 .
  • the packet flow controller 112 cooperates with the kernel 108 to monitor and control the packet flow received by the packet processing system 100 .
  • the packet flow controller 112 monitors the impact the packet flow has on the resources 106 of the packet processing system 100 in terms of resource utilization. When resource utilization exceeds a designated threshold, the packet flow controller 112 can implement flow control to restrict the packet rate of the packet flow.
  • the packet flow controller 112 obtains metric data from the kernel 108 over intervals of a time period.
  • the metric data measures utilization of at least a portion of the resources 106 with respect to processing the packet flow.
  • the packet flow controller 112 can obtain metric data from the kernel every 30 seconds, every minute, every five minutes, or any other time interval.
  • the resources monitored by the packet flow controller 112 can include processor(s), memory, and/or network interfaces.
  • the metric data includes, for each of the monitored resources and each of the time intervals, a measure of utilization attributed to processing the packet flow.
  • the utilization measure can be expressed differently, depending on the type of resource being monitored and the type of information provided by the kernel 108 .
  • processor utilization can be measured in terms of the fraction of time during a respective interval that the processor processes the packet flow.
  • Memory utilization can be measured in terms of the amount of free memory or the amount of used memory.
  • Network interface utilization can be measured in terms of the number of packets dropped internally during the time interval. It is to be understood that other measures of utilization can be used which, in general, include a range of possible values.
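The per-interval measurements described above can be pictured as a single snapshot per interval; a minimal sketch in Python, where the field names, units, and values are purely illustrative (the patent does not prescribe any particular format):

```python
# Hypothetical per-interval metric snapshot combining the three utilization
# measures described above; names and units are illustrative only.
metric_data = {
    "cpu_packet_fraction": 0.92,            # fraction of CPU time spent on the packet flow
    "free_memory_bytes": 48 * 1024 * 1024,  # free memory observed during the interval
    "dropped_packets": 1250,                # packets dropped internally by the interface
}

# Each measure spans a range of possible values, as the text above notes.
assert 0.0 <= metric_data["cpu_packet_fraction"] <= 1.0
```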
  • the packet flow controller 112 establishes a flow control load parameter.
  • the packet flow controller 112 adjusts the value of the flow control load parameter after each of the time intervals based on the metric data.
  • the packet flow controller 112 compares the metric data to at least one condition that indicates depletion of the monitored resources (“depletion condition”).
  • a depletion condition for processor utilization can be some threshold percentage of processing time devoted to processing the packet flow (e.g., if the processor is spending 98% of its time processing packets, processor utilization is deemed depleted).
  • a depletion condition for memory utilization can be some threshold amount of free memory (e.g., if free memory drops below the threshold, then the memory is deemed depleted).
  • a depletion condition for network interface utilization can be some threshold number of packets being dropped by the interface (e.g., if the interface drops more than the threshold number of packets, then the network interface is deemed depleted).
  • These depletion conditions are merely examples, as other types of conditions can be formed based on the particular types of utilization measures in the metric data.
  • the flow control load parameter is an integer between minimum and maximum values (e.g., an integer between 0 and 255).
  • the flow control load parameter indicates how much relative flow control must be applied to the packet processing system 100 , where a minimum value indicates no flow control and a maximum value indicates maximum flow control.
  • the packet flow controller 112 can increment or decrement the flow control load parameter. Whether the flow control load parameter is incremented or decremented depends on the relation between the metric data and the depletion condition(s). The metric data can be compared against any single depletion condition or any logical combination of multiple depletion conditions.
  • the flow control load parameter can be incremented if the processor utilization exceeds a threshold percentage or if the amount of free memory drops below a threshold or if the amount of dropped packets by a network interface exceeds a threshold. Conversely, the flow control load parameter can be decremented if the processor utilization is below the threshold and the amount of free memory is above the threshold and if the amount of dropped packets is below the threshold.
  • the above logical combination of depletion conditions is an example and other combinations can be used to determine if more or less flow control is required by incrementing or decrementing the flow control load parameter.
  • the size of the increment/decrement can be any number relative to the range between minimum and maximum values (e.g., ±1 with a range between 0 and 255).
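A minimal sketch of this per-interval adjustment, assuming the example thresholds given above (a 98% CPU fraction, a free-memory floor, a dropped-packet ceiling); all names and threshold values are illustrative, not taken from the patent:

```python
# Clamp range and depletion thresholds; values are illustrative.
LOAD_MIN, LOAD_MAX = 0, 255
CPU_THRESHOLD = 0.98                   # fraction of CPU time on packet processing
FREE_MEM_THRESHOLD = 32 * 1024 * 1024  # bytes of free memory
DROP_THRESHOLD = 1000                  # packets dropped per interval

def adjust_load(load, cpu_fraction, free_memory, dropped):
    """Increment the flow control load parameter when any depletion
    condition holds, decrement when none does, clamped to the range."""
    depleted = (cpu_fraction > CPU_THRESHOLD
                or free_memory < FREE_MEM_THRESHOLD
                or dropped > DROP_THRESHOLD)
    if depleted:
        return min(load + 1, LOAD_MAX)
    return max(load - 1, LOAD_MIN)
```

With a range of 0 to 255 and a step of ±1, sustained depletion over many intervals walks the parameter toward maximum flow control, and sustained normal operation walks it back toward zero.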
  • the packet flow controller 112 can use the flow control load parameter to implement selective flow control.
  • the packet flow controller 112 can determine a packet flow budget based on the flow control load parameter.
  • the kernel 108 instructs the device driver(s) 110 for network interface(s) in the resources 106 to allow a certain number of packets in the packet flow (the “packet flow budget”).
  • the packet flow budget is set to a standard value, which allows the device driver(s) 110 to accept as many packets as designed.
  • the packet flow controller 112 does not provide flow control and does not adjust the packet flow budget from its standard value.
  • the packet flow controller 112 adjusts the packet flow budget to implement flow control.
  • the packet flow budget can be adjusted based on a function of the value of the flow control load parameter. For example, the packet flow controller 112 can adjust the packet flow budget to a calculated value inversely proportional to the value of the flow control load parameter.
  • the device driver(s) 110 can drop packets from the packet flow so that the packet rate complies with the packet flow budget. The packets are dropped by the action of the packet flow controller 112 without requiring any explicit handling by any processor in the resources 106 .
  • the packet flow budget is decreased. Stated differently, as the resources 106 in the packet processing system 100 become more depleted over time, fewer packets are allowed into the system 100 for processing. As the depletion condition is mitigated or removed over time, more packets are allowed into the system 100 for processing. The net effect is that the packet rate of the packet flow is continuously adjusted (potentially after every time interval) to mitigate or avoid depletion of the resources 106 .
  • the packet flow controller 112 guards against any deliberate or accidental surge in packet flow, such as in a Denial-of-Service type attack.
  • the packet flow controller 112 also keeps the packet processing system 100 operating at an optimal point, where the maximum number of packets is processed while leaving some amount of the resources 106 available for other use.
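The budget selection described above can be sketched as follows; the standard budget value and the specific formula are assumptions for illustration (the patent requires only that the calculated value be inversely proportional to the flow control load parameter):

```python
STANDARD_BUDGET = 1024  # illustrative: packets the driver may accept per cycle

def packet_flow_budget(load):
    """Standard budget when no flow control is needed (load == 0);
    otherwise a value inversely proportional to the load parameter."""
    if load == 0:
        return STANDARD_BUDGET
    return max(1, STANDARD_BUDGET // load)
```

As the load parameter climbs from 1 toward 255, the budget shrinks from the standard value toward a handful of packets per cycle, and the device driver drops whatever exceeds it without involving the processor.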
  • FIG. 2 is a flow diagram depicting a method 200 of flow control in a packet processing system according to an example implementation.
  • the method 200 can be performed by the packet processing system 100 described above.
  • the method 200 begins at step 202 , where metric data is obtained that measures utilization of resource(s) in the packet processing system over intervals of a time period.
  • a value of a flow control load parameter is adjusted during each of the intervals based on comparing the metric data with at least one condition that indicates depletion of the resource(s).
  • a value of a packet flow budget for the packet processing system is established in each of the intervals based on the respective value of the flow control load parameter in each of the intervals.
  • the metric data can include data for central processing unit (CPU) use, memory use, and/or network interface use.
  • the data for the CPU use can include a fraction of time during a respective interval that at least one CPU in the packet processing system processes packets.
  • the flow control load parameter is an integer between minimum and maximum values. The value of the flow control load parameter can be adjusted by incrementing or decrementing the flow control load parameter in each of the intervals.
  • FIG. 3 is a flow diagram depicting a method 300 of adjusting a flow control load parameter according to an example implementation.
  • the method 300 can be performed during step 204 of the method 200 shown in FIG. 2 .
  • the method 300 begins at step 302 , where metric data is selected to be processed for a time interval.
  • the metric data is compared with depletion condition(s) for resource(s) in the packet processing system. As described above, the depletion conditions can be formed into various logical combinations.
  • a determination is made whether the metric data satisfies the depletion condition(s). If not, the method 300 proceeds to step 308 .
  • the flow control load parameter is decremented if the flow control load parameter is greater than the minimum value.
  • the method 300 proceeds from step 308 to step 302 for another time interval. If the metric data satisfies the depletion condition(s) at step 306 , the method 300 proceeds to step 310 .
  • the flow control load parameter is incremented if the flow control load parameter is less than the maximum value. The method 300 proceeds from step 310 to step 302 for another time interval.
  • the flow control load parameter is an integer having a minimum value (e.g., zero).
  • the packet flow budget is set to a standard value if the flow control load parameter is the minimum value (e.g., zero). If the flow control load parameter is greater than the minimum value, the packet flow budget is set to a calculated value that is a function of the flow control load parameter. In an example, the packet flow budget is adjusted inversely proportional to the respective value of the flow control load parameter.
  • FIG. 4 is a flow diagram depicting a method 400 of adjusting a packet flow budget according to an example implementation.
  • the method 400 can be performed as part of the step 206 of the method 200 shown in FIG. 2 .
  • the method 400 begins at step 402 , where a value of the flow control load parameter is obtained.
  • the value of the flow control load parameter can range from minimum to maximum values.
  • a determination is made whether the flow control load parameter is a minimum value. If so, the method 400 proceeds to step 406 .
  • the packet flow budget for the packet processing system is not adjusted. That is, flow control is not applied to the packet flow.
  • the method 400 returns to step 402 .
  • If at step 404 the flow control load parameter is not the minimum value, the method 400 proceeds to step 408.
  • the packet flow budget for the packet processing system is adjusted based on a function of the flow control load value. The method 400 returns to step 402 .
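Methods 200, 300, and 400 compose into one feedback loop; a hedged end-to-end sketch, where the metric fields, thresholds, and budget formula are stand-ins rather than the patent's literal values:

```python
LOAD_MIN, LOAD_MAX = 0, 255  # flow control load parameter range
STANDARD_BUDGET = 1024       # illustrative standard packet flow budget

def depleted(m):
    # Steps 304-306: compare metric data with the depletion condition(s);
    # thresholds are illustrative.
    return m["cpu"] > 0.98 or m["free_mem_mb"] < 32 or m["drops"] > 1000

def control_loop(metric_samples):
    """One budget decision per interval (steps 202-206 of method 200)."""
    load = LOAD_MIN
    budgets = []
    for m in metric_samples:         # step 302: metric data for an interval
        if depleted(m):              # step 310: increment toward maximum
            load = min(load + 1, LOAD_MAX)
        else:                        # step 308: decrement toward minimum
            load = max(load - 1, LOAD_MIN)
        # Steps 404-408: standard budget at minimum load, otherwise a value
        # inversely proportional to the load parameter.
        budget = STANDARD_BUDGET if load == 0 else max(1, STANDARD_BUDGET // load)
        budgets.append(budget)
    return budgets

# Two overloaded intervals followed by two calm ones: the budget tightens,
# then relaxes back to the standard value.
samples = ([{"cpu": 0.99, "free_mem_mb": 64, "drops": 0}] * 2
           + [{"cpu": 0.10, "free_mem_mb": 64, "drops": 0}] * 2)
assert control_loop(samples) == [1024, 512, 1024, 1024]
```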
  • FIG. 5 is a block diagram of a computer 500 according to an example implementation.
  • the computer 500 includes a processor 502 , support circuits 504 , an IO interface 506 , a memory 508 , and hardware peripheral(s) 510 .
  • the processor 502 includes any type of microprocessor, microcontroller, microcomputer, or like type computing device known in the art.
  • the processor 502 can include one or more of such processing devices, and each of the processing devices can include one or more processing “cores”.
  • the support circuits 504 for the processor 502 can include cache, power supplies, clock circuits, data registers, IO circuits, and the like.
  • the IO interface 506 can be directly coupled to the memory 508 , or coupled to the memory 508 through the processor 502 .
  • the IO interface 506 can include at least one network interface (“network interface(s) 507 ”).
  • the memory 508 can include random access memory, read only memory, cache memory, magnetic read/write memory, or the like or any combination of such memory devices.
  • the hardware peripheral(s) 510 can include various hardware circuits that perform functions on behalf of the processor 502 and the computer 500 .
  • the memory 508 can store machine readable code 540 that is executed or interpreted by the processor 502 to implement an operating environment 516 .
  • the operating environment 516 includes a packet flow controller 518 .
  • the packet flow controller 518 can be implemented as a dedicated circuit on the hardware peripheral(s) 510.
  • the hardware peripheral(s) 510 can include a programmable logic device (PLD), such as a field programmable gate array (FPGA), which can be programmed to implement the function of the packet flow controller 518 .
  • the network interface(s) 507 can receive packets from packet source(s), which can be external to the computer 500 .
  • the packets received by the network interface(s) 507 form a packet flow for the computer 500 that is processed in the operating environment 516 .
  • the packet flow controller 518 selectively implements flow control on the packet flow.
  • the packet flow controller 518 obtains metric data measuring utilization of at least one of the network interface(s) 507 , the memory 508 , or the processor 502 in each of a plurality of time intervals.
  • the packet flow controller 518 compares the metric data to at least one condition in each of the plurality of time intervals to maintain a flow control load parameter.
  • the packet flow controller 518 establishes a packet flow budget for the network interface(s) 507 in each of the plurality of time intervals based on respective values of the flow control load parameter in each of the plurality of time intervals.
  • the at least one condition against which the metric data is compared indicates depletion of at least one of the network interface(s) 507 , the memory 508 , and the processor 502 .
  • the flow control load parameter is an integer between minimum and maximum values, and the packet flow controller 518 increments or decrements the flow control load parameter in each of the plurality of time intervals.
  • the flow control load parameter is an integer, and the packet flow controller 518 reduces the packet flow budget based on a function of the flow control load parameter if the respective value of the flow control load parameter is not a minimum value. Otherwise, the packet flow controller 518 maintains the packet flow budget at a standard value if the respective value of the flow control load parameter is the minimum value.
  • the minimum value is zero and, if the respective value of the flow control load parameter is not zero, the packet flow controller 518 sets the packet flow budget to a calculated value inversely proportional to the respective value of the flow control load parameter.
  • the techniques described above may be embodied in a computer-readable medium for configuring a computing system to execute the method.
  • the computer readable media may include, for example and without limitation, any number of the following: magnetic storage media including disk and tape storage media; optical storage media such as compact disk media (e.g., CD-ROM, CD-R, etc.) and digital video disk storage media; holographic memory; nonvolatile memory storage media including semiconductor-based memory units such as FLASH memory, EEPROM, EPROM, ROM; ferromagnetic digital memories; volatile storage media including registers, buffers or caches, main memory, RAM, etc., just to name a few. Other new and various types of computer-readable media may be used to store machine readable code discussed herein.


Abstract

Flow control in a packet processing system includes obtaining metric data measuring utilization of at least one resource in the packet processing system over intervals of a time period. A value of a flow control load parameter is adjusted during each of the intervals based on comparing the metric data with at least one condition that indicates depletion of the at least one resource. A value of a packet flow budget is established for the packet processing system in each of the intervals based on the respective value of the flow control load parameter in each of the intervals.

Description

BACKGROUND
Computer networks include various devices that facilitate communication between computers using packetized formats and protocols, such as the ubiquitous Transmission Control Protocol/Internet Protocol (TCP/IP). Computer networks can include various packet processing systems for performing various types of packet processing, such as forwarding, switching, routing, analyzing, and like type packet operations. A packet processing system can have multiple network interfaces to different network devices for receiving packets. The multiple network interfaces are controlled by a common set of resources in the system (e.g., processor, memory, and like type resources).
Sometimes, a network interface can receive packets at too high a rate (e.g., higher than a designated maximum rate for the network interface). Such a packet overflow condition can be intentional, such as an attacker sending many packets to a network interface in a Denial-of-Service (DoS) attack. Such a packet overflow condition can also be unintentional, such as too many devices trying to communicate through the same network interface, or incorrectly configured network device(s) sending too many packets to the network interface. In any case, a network interface receiving an overflow of packets can monopolize the resources of the packet processing system or otherwise cause the resources to become overloaded. Other network interfaces and/or other processes not associated with the overflowing network interface can become starved of resources in the packet processing system, causing such network interfaces and processes to stop working.
BRIEF DESCRIPTION OF THE DRAWINGS
Some embodiments of the invention are described with respect to the following figures:
FIG. 1 is a block diagram of a packet processing system according to an example implementation;
FIG. 2 is a flow diagram depicting a method of flow control in a packet processing system according to an example implementation;
FIG. 3 is a flow diagram depicting a method of adjusting a flow control load parameter according to an example implementation;
FIG. 4 is a flow diagram depicting a method of adjusting a packet flow budget according to an example implementation; and
FIG. 5 is a block diagram of a computer according to an example implementation.
DETAILED DESCRIPTION
Flow control in packet processing systems is described. In an embodiment, flow control in a packet processing system is implemented by first obtaining metric data measuring performance of at least one resource in the packet processing system over intervals of a time period. A value of a flow control load parameter is adjusted during each of the intervals based on comparing the metric data with at least one condition that indicates depletion of the resource(s). A value of a packet flow budget for the packet processing system is established in each of the intervals based on the respective value of the flow control load parameter in each of the intervals. Thus, after some time intervals have elapsed, where the usage of resource(s) is deemed to be too high, the packet flow budget can restrict the rate of packet processing in the packet processing system to conserve the resource(s). After other time intervals have elapsed, where usage of resource(s) is deemed to be normal, the packet flow budget can provide a standard rate of packet processing. The packet flow budget can be adjusted continuously over the time period based on feedback from measurements of resource performance.
The flow control process can be used to monitor the fraction of resource usage devoted to packet processing and continuously maintain packet flows through the system at the maximum rate that the packet processing system can sustain. In this manner, packet processing performance in the system is maximized without starving other processes of resources. The flow control process does not rely on instructing the packet sources to stop sending packets in case of packet saturation, such as by multicasting an Ethernet PAUSE frame. Such an instruction will cause all packet sources to stop transmitting, even those that are not excessively transmitting packets and causing the problem. Further, some packet sources may ignore such an instruction, particularly if the packet sources are maliciously transmitting excessive packets. Finally, the exact duration of such a pause instruction is difficult to calculate, which can cause a larger than necessary delay before the instruction is sent, received, and acted upon, leading to too much flow restriction and wasted bandwidth. Rather than focusing on the packet sources, the flow control process described herein uses metrics internal to the packet processing system to decide whether and by how much the packet flow should be restricted. Various embodiments are described below by referring to several examples.
FIG. 1 is a block diagram of a packet processing system 100 according to an example implementation. The packet processing system 100 includes physical hardware 102 that implements an operating environment (OE) 104. The physical hardware 102 includes resources 106 managed by the OE 104. The packet processing system 100 can be implemented as any type of computer, device, appliance or the like. The resources 106 can include processor(s), memory (e.g., volatile memories, non-volatile memories, magnetic and/or optical storage, etc.), interface circuits to external devices, and the like. In particular, the packet processing system 100 can use the resource(s) to send and receive packetized data (“packets”). The packets can be formatted using multiple layers of protocol, e.g., the Transmission Control Protocol (TCP) Internet Protocol (IP) (“TCP/IP”) model, Open Systems Interconnection (OSI) model, or the like. A packet generally includes a header and a payload. The header implements a layer of protocol, and the payload includes data, which may be related to packet(s) at another layer of protocol.
The resources 106 can operate on a flow of packets (“packet flow”). As used herein, a “packet flow” is a sequence of packets passing an observation point, such as any of the resources 106. A “packet rate” for a packet flow is the number of packets in the sequence passing the observation point over a time interval. The more packets in the sequence, the higher the packet rate. Conversely, the fewer packets in the sequence, the lower the packet rate. The packet flow can originate from at least one source.
In an example, the physical hardware 102 can execute machine-readable instructions to implement elements of functionality in the OE 104 (e.g., using at least one of the resources 106, such as a processor). In another example, elements of functionality in the OE 104 can be implemented as a physical circuit in the physical hardware 102 (e.g., an integrated circuit (IC), such as an application specific integrated circuit (ASIC) or field programmable gate array (FPGA)). In yet another example, elements of functionality in the OE 104 are implemented using a combination of machine-readable instructions and physical circuits.
Elements of functionality in the OE 104 include a kernel 108, at least one device driver (“device driver(s) 110”), a packet flow controller 112, and at least one application (“application(s) 114”). The kernel 108 controls the execution of the application(s) 114 and access to the resources 106 by the application(s) 114. The kernel 108 provides an application interface to the resources 106. The device driver(s) 110 provide an interface between the kernel 108 and at least a portion of the resources 106 (e.g., a network interface resource). The device driver(s) 110 provide a kernel interface to the resources 106. The application(s) 114 can include at least one distinct process implemented by the physical hardware 102 under direction of the kernel 108 (e.g., using at least one of the resources 106, such as a processor). The application(s) 114 can include process(es) that generate and consume packets to be sent or received by the packet processing system 100.
The packet flow controller 112 cooperates with the kernel 108 to monitor and control the packet flow received by the packet processing system 100. The packet flow controller 112 monitors the impact the packet flow has on the resources 106 of the packet processing system 100 in terms of resource utilization. When resource utilization exceeds a designated threshold, the packet flow controller 112 can implement flow control to restrict the packet rate of the packet flow.
In an example, the packet flow controller 112 obtains metric data from the kernel 108 over intervals of a time period. The metric data measures utilization of at least a portion of the resources 106 with respect to processing the packet flow. For example, the packet flow controller 112 can obtain metric data from the kernel every 30 seconds, every minute, every five minutes, or any other time interval. In an example, the resources monitored by the packet flow controller 112 can include processor(s), memory, and/or network interfaces. The metric data includes, for each monitored resource and each time interval, a measure of the utilization attributable to processing the packet flow.
The utilization measure can be expressed differently, depending on the type of resource being monitored and the type of information provided by the kernel 108. For example, processor utilization can be measured in terms of the fraction of time during a respective interval that the processor processes the packet flow. Memory utilization can be measured in terms of the amount of free memory or the amount of used memory. Network interface utilization can be measured in terms of the number of packets dropped internally during the time interval. It is to be understood that other measures of utilization can be used which, in general, include a range of possible values.
The packet flow controller 112 establishes a flow control load parameter. The packet flow controller 112 adjusts the value of the flow control load parameter after each of the time intervals based on the metric data. In an example, for each time interval, the packet flow controller 112 compares the metric data to at least one condition that indicates depletion of the monitored resources (“depletion condition”). For example, a depletion condition for processor utilization can be some threshold percentage of processing time devoted to processing the packet flow (e.g., if the processor is spending 98% of its time processing packets, processor utilization is deemed depleted). A depletion condition for memory utilization can be some threshold amount of free memory (e.g., if free memory drops below the threshold, then the memory is deemed depleted). A depletion condition for network interface utilization can be some threshold number of packets being dropped by the interface (e.g., if the interface drops more than the threshold number of packets, then the network interface is deemed depleted). These depletion conditions are merely examples, as other types of conditions can be formed based on the particular types of utilization measures in the metric data.
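The combined depletion check described above can be sketched as follows. The threshold values and metric names are illustrative assumptions chosen for the sketch (only the 98% processor figure comes from the example in the text), not values specified by this disclosure:

```python
# Illustrative depletion thresholds; only CPU_THRESHOLD echoes the 98%
# example in the text, the others are assumed for the sketch.
CPU_THRESHOLD = 0.98       # fraction of CPU time spent processing packets
FREE_MEM_THRESHOLD = 64    # megabytes of free memory
DROP_THRESHOLD = 100       # packets dropped by the interface per interval

def is_depleted(cpu_packet_fraction, free_memory_mb, dropped_packets):
    """Return True if any monitored resource meets its depletion condition.

    This implements the example logical combination from the text: the
    system is deemed depleted when the processor, the memory, OR the
    network interface crosses its respective threshold.
    """
    return (cpu_packet_fraction > CPU_THRESHOLD
            or free_memory_mb < FREE_MEM_THRESHOLD
            or dropped_packets > DROP_THRESHOLD)
```

Other logical combinations of the conditions (e.g., requiring two of the three) could be substituted without changing the rest of the scheme.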
In an example, the flow control load parameter is an integer between minimum and maximum values (e.g., an integer between 0 and 255). The flow control load parameter indicates how much relative flow control must be applied to the packet processing system 100, where a minimum value indicates no flow control and a maximum value indicates maximum flow control. During each time interval, the packet flow controller 112 can increment or decrement the flow control load parameter. Whether the flow control load parameter is incremented or decremented depends on the relation between the metric data and the depletion condition(s). The metric data can be compared against any single depletion condition or any logical combination of multiple depletion conditions. For example, the flow control load parameter can be incremented if the processor utilization exceeds a threshold percentage or if the amount of free memory drops below a threshold or if the amount of dropped packets by a network interface exceeds a threshold. Conversely, the flow control load parameter can be decremented if the processor utilization is below the threshold and the amount of free memory is above the threshold and if the amount of dropped packets is below the threshold. The above logical combination of depletion conditions is an example and other combinations can be used to determine if more or less flow control is required by incrementing or decrementing the flow control load parameter. The size of the increment/decrement can be any number relative to the range between minimum and maximum values (e.g., ±1 with a range between 0 and 255).
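The increment/decrement behavior of the flow control load parameter can be sketched as below, using the example range (0 to 255) and step size (±1) given in the text; the function name is an assumption for the sketch:

```python
LOAD_MIN, LOAD_MAX = 0, 255  # example range from the text
STEP = 1                     # example increment/decrement size

def adjust_load(load, depleted):
    """Advance the flow control load parameter by one step per interval.

    The parameter is incremented while the depletion condition holds and
    decremented otherwise, clamped to [LOAD_MIN, LOAD_MAX].
    """
    if depleted:
        return min(load + STEP, LOAD_MAX)
    return max(load - STEP, LOAD_MIN)
```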
The packet flow controller 112 can use the flow control load parameter to implement selective flow control. The packet flow controller 112 can determine a packet flow budget based on the flow control load parameter. In an example, when the kernel 108 is ready to handle more packets, the kernel 108 instructs the device driver(s) 110 for network interface(s) in the resources 106 to allow a certain amount of packets in the packet flow (the “packet flow budget”). Initially, the packet flow budget is set to a standard value, which allows the device driver(s) 110 to accept as many packets as designed. When the flow control load parameter is at the minimum value, the packet flow controller 112 does not provide flow control and does not adjust the packet flow budget from its standard value. When the flow control load parameter rises above the minimum value, the packet flow controller 112 adjusts the packet flow budget to implement flow control. The packet flow budget can be adjusted based on a function of the value of the flow control load parameter. For example, the packet flow controller 112 can adjust the packet flow budget to a calculated value inversely proportional to the value of the flow control load parameter. When the packet flow budget is reduced from the standard value, the device driver(s) 110 can drop packets from the packet flow so that the packet rate complies with the packet flow budget. The packets are dropped by the action of the packet flow controller 112 without requiring any explicit handling by any processor in the resources 106.
As the flow control load parameter increases over time, the packet flow budget is decreased. Stated differently, as the resources 106 in the packet processing system 100 become more depleted over time, fewer packets are allowed into the system 100 for processing. As the depletion condition is mitigated or removed over time, more packets are allowed into the system 100 for processing. The net effect is that the packet rate of the packet flow is continuously adjusted (potentially after every time interval) to mitigate or avoid depletion of the resources 106. By continuously estimating resource utilization required by the packet flow and applying a variable amount of negative feedback to the packet flow budget, the packet flow controller 112 guards against any voluntary or accidental surge in packet flow, such as in a Denial of Service type attack. The packet flow controller 112 also keeps the packet processing system 100 operating at an optimal point, where a maximum amount of packets are processed while leaving some amount of the resources 106 available for other use.
FIG. 2 is a flow diagram depicting a method 200 of flow control in a packet processing system according to an example implementation. The method 200 can be performed by the packet processing system 100 described above. The method 200 begins at step 202, where metric data is obtained that measures utilization of resource(s) in the packet processing system over intervals of a time period. At step 204, a value of a flow control load parameter is adjusted during each of the intervals based on comparing the metric data with at least one condition that indicates depletion of the resource(s). At step 206, a value of a packet flow budget for the packet processing system is established in each of the intervals based on the respective value of the flow control load parameter in each of the intervals.
In an example, the metric data can include data for central processing unit (CPU) use, memory use, and/or network interface use. In an example, the data for the CPU use can include a fraction of time during a respective interval that at least one CPU in the packet processing system processes packets. In an example, the flow control load parameter is an integer between minimum and maximum values. The value of the flow control load parameter can be adjusted by incrementing or decrementing the flow control load parameter in each of the intervals.
FIG. 3 is a flow diagram depicting a method 300 of adjusting a flow control load parameter according to an example implementation. The method 300 can be performed during step 204 of the method 200 shown in FIG. 2. The method 300 begins at step 302, where metric data is selected to be processed for a time interval. At step 304, the metric data is compared with depletion condition(s) for resource(s) in the packet processing system. As described above, the depletion conditions can be formed into various logical combinations. At step 306, a determination is made whether the metric data satisfies the depletion condition(s). If not, the method 300 proceeds to step 308. At step 308, the flow control load parameter is decremented if the flow control load parameter is greater than the minimum value. The method 300 proceeds from step 308 to step 302 for another time interval. If the metric data satisfies the depletion condition(s) at step 306, the method 300 proceeds to step 310. At step 310, the flow control load parameter is incremented if the flow control load parameter is less than the maximum value. The method 300 proceeds from step 310 to step 302 for another time interval.
Returning to FIG. 2, in an example, the flow control load parameter is an integer having a minimum value (e.g., zero). The packet flow budget is set to a standard value if the flow control load parameter is the minimum value (e.g., zero). If the flow control load parameter is greater than the minimum value, the packet flow budget is set to a calculated value that is a function of the flow control load parameter. In an example, the packet flow budget is adjusted inversely proportional to the respective value of the flow control load parameter.
FIG. 4 is a flow diagram depicting a method 400 of adjusting a packet flow budget according to an example implementation. The method 400 can be performed as part of the step 206 of the method 200 shown in FIG. 2. The method 400 begins at step 402, where a value of the flow control load parameter is obtained. The value of the flow control load parameter can range from minimum to maximum values. At step 404, a determination is made whether the flow control load parameter is a minimum value. If so, the method 400 proceeds to step 406. At step 406, the packet flow budget for the packet processing system is not adjusted. That is, flow control is not applied to the packet flow. The method 400 returns to step 402. If at step 404 the flow control load parameter is not the minimum value, the method 400 proceeds to step 408. At step 408, the packet flow budget for the packet processing system is adjusted based on a function of the flow control load value. The method 400 returns to step 402.
FIG. 5 is a block diagram of a computer 500 according to an example implementation. The computer 500 includes a processor 502, support circuits 504, an IO interface 506, a memory 508, and hardware peripheral(s) 510. The processor 502 includes any type of microprocessor, microcontroller, microcomputer, or like type computing device known in the art. The processor 502 can include one or more of such processing devices, and each of the processing devices can include one or more processing “cores”. The support circuits 504 for the processor 502 can include cache, power supplies, clock circuits, data registers, IO circuits, and the like. The IO interface 506 can be directly coupled to the memory 508, or coupled to the memory 508 through the processor 502. The IO interface 506 can include at least one network interface (“network interface(s) 507”).
The memory 508 can include random access memory, read only memory, cache memory, magnetic read/write memory, or the like or any combination of such memory devices. The hardware peripheral(s) 510 can include various hardware circuits that perform functions on behalf of the processor 502 and the computer 500. The memory 508 can store machine readable code 540 that is executed or interpreted by the processor 502 to implement an operating environment 516. The operating environment 516 includes a packet flow controller 518. In another example, the packet flow controller can be implemented as a dedicated circuit on the hardware peripheral(s) 510. For example, the hardware peripheral(s) 510 can include a programmable logic device (PLD), such as a field programmable gate array (FPGA), which can be programmed to implement the function of the packet flow controller 518.
In an example, the network interface(s) 507 can receive packets from packet source(s), which can be external to the computer 500. The packets received by the network interface(s) 507 form a packet flow for the computer 500 that is processed in the operating environment 516. The packet flow controller 518 selectively implements flow control on the packet flow. In an example, the packet flow controller 518 obtains metric data measuring utilization of at least one of the network interface(s) 507, the memory 508, or the processor 502 in each of a plurality of time intervals. The packet flow controller 518 compares the metric data to at least one condition in each of the plurality of time intervals to maintain a flow control load parameter. The packet flow controller 518 establishes a packet flow budget for the network interface(s) 507 in each of the plurality of time intervals based on respective values of the flow control load parameter in each of the plurality of time intervals.
In an example, the at least one condition against which the metric data is compared indicates depletion of at least one of the network interface(s) 507, the memory 508, and the processor 502. In an example, the flow control load parameter is an integer between minimum and maximum values, and the packet flow controller 518 increments or decrements the flow control load parameter in each of the plurality of time intervals. In an example, the flow control load parameter is an integer, and the packet flow controller 518 reduces the packet flow budget based on a function of the flow control load parameter if the respective value of the flow control load parameter is not a minimum value. Otherwise, the packet flow controller 518 maintains the packet flow budget at a standard value if the respective value of the flow control load parameter is the minimum value. In an example, the minimum value is zero and, if the respective value of the flow control load parameter is not zero, the packet flow controller 518 sets the packet flow budget to a calculated value inversely proportional to the respective value of the flow control load parameter.
The techniques described above may be embodied in a computer-readable medium for configuring a computing system to execute the method. The computer readable media may include, for example and without limitation, any number of the following: magnetic storage media including disk and tape storage media; optical storage media such as compact disk media (e.g., CD-ROM, CD-R, etc.) and digital video disk storage media; holographic memory; nonvolatile memory storage media including semiconductor-based memory units such as FLASH memory, EEPROM, EPROM, ROM; ferromagnetic digital memories; volatile storage media including registers, buffers or caches, main memory, RAM, etc., just to name a few. Other new and various types of computer-readable media may be used to store machine readable code discussed herein.
In the foregoing description, numerous details are set forth to provide an understanding of the present invention. However, it will be understood by those skilled in the art that the present invention may be practiced without these details. While the invention has been disclosed with respect to a limited number of embodiments, those skilled in the art will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover such modifications and variations as fall within the true spirit and scope of the invention.

Claims (17)

What is claimed is:
1. A method of flow control in a packet processing system, comprising:
obtaining metric data measuring utilization of at least one resource in the packet processing system over intervals of a time period, the at least one resource selected from among a processor and a memory;
adjusting a value of a flow control load parameter during each of the intervals based on comparing the metric data with at least one condition that indicates depletion of the at least one resource, wherein the adjusting comprises successively advancing by a predetermined amount the flow control load parameter in successive intervals of the intervals while the metric data satisfies the at least one condition in the successive intervals, the metric data satisfying the at least one condition comprising at least one of a utilization of the processor exceeding a utilization threshold, and an amount of free space in the memory dropping below a memory threshold;
establishing a value of a packet flow budget for the packet processing system in each respective interval of the intervals based on the respective value of the flow control load parameter in the respective interval; and
performing, by the packet processing system, flow control of packets using the packet flow budget.
2. The method of claim 1, wherein the metric data includes a measure of a fraction of time during a respective interval of the intervals in which the processor processes packets.
3. The method of claim 1, wherein the flow control load parameter is an integer between minimum and maximum values, and wherein the successive advancing of the flow control load parameter in the successive intervals comprises incrementing or decrementing between the minimum and maximum values.
4. The method of claim 3, wherein the successive advancing of the flow control load parameter by the predetermined amount comprises successively incrementing the flow control load parameter in the successive time intervals, and wherein the adjusting further comprises decrementing the flow control load parameter if during a respective interval of the intervals the metric data does not satisfy the at least one condition.
5. The method of claim 1, wherein the flow control load parameter is an integer, and wherein the establishing includes reducing the packet flow budget based on a function of the flow control load parameter responsive to the respective value of the flow control load parameter not being a minimum value.
6. The method of claim 5, wherein the minimum value is zero, and responsive to the respective value of the flow control load parameter not being zero, the establishing comprises setting the packet flow budget to a calculated value inversely proportional to the respective value of the flow control load parameter.
7. The method of claim 1, wherein successively advancing the flow control load parameter in the successive intervals comprises successively incrementing the flow control load parameter in the successive intervals, and wherein the adjusting further comprises successively decrementing the flow control load parameter in further successive intervals while the metric data does not satisfy the at least one condition.
8. The method of claim 1, wherein successively advancing the flow control load parameter in the successive intervals comprises:
incrementing, in a first of the successive intervals, the flow control load parameter by the predetermined amount from a first value to a second value; and
incrementing, in a second of the successive intervals, the flow control load parameter by the predetermined amount from the second value to a third value.
9. An apparatus to provide flow control in a packet processing system, comprising:
a network interface to receive packets;
a memory; and
a processor, communicatively coupled to the network interface and the memory, to:
obtain metric data measuring utilization of at least one of the memory or the processor in each of a plurality of time intervals,
adjust a flow control load parameter based on comparing the metric data to at least one condition in each of the plurality of time intervals, wherein the adjusting comprises successively incrementing the flow control load parameter in successive time intervals of the plurality of time intervals while the metric data satisfies the at least one condition in the successive time intervals, the metric data satisfying the at least one condition comprising at least one of the utilization of the processor exceeding a utilization threshold, and an amount of free space in the memory dropping below a memory threshold; and
establish a packet flow budget for the network interface in each respective time interval of the plurality of time intervals based on the respective value of the flow control load parameter in the respective time interval.
10. The apparatus of claim 9, wherein the flow control load parameter is an integer between minimum and maximum values, and wherein the processor is to successively increment the flow control load parameter between the minimum and maximum values.
11. The apparatus of claim 9, wherein the flow control load parameter is an integer, and wherein the processor is to reduce the packet flow budget based on a function of the flow control load parameter responsive to the respective value of the flow control load parameter not being a minimum value.
12. The apparatus of claim 11, wherein the minimum value is zero, and responsive to the respective value of the flow control load parameter not being zero, the processor is to set the packet flow budget to a calculated value inversely proportional to the respective value of the flow control load parameter.
13. The apparatus of claim 9, wherein the successively incrementing the flow control load parameter in the successive time intervals comprises:
incrementing, in a first of the successive time intervals, the flow control load parameter by a predetermined amount from a first value to a second value; and
incrementing, in a second of the successive time intervals, the flow control load parameter by the predetermined amount from the second value to a third value.
14. A packet processing system, comprising:
physical hardware including resources to process a packet flow, the resources selected from among a processor and a memory; and
an operating environment including:
a kernel to provide an application interface to the resources; and
a packet flow controller to:
obtain metric data from the kernel that measures utilization of the resources over intervals of a time period,
adjust a value of a flow control load parameter during each of the intervals based on the metric data, wherein the adjusting comprises successively advancing by a predetermined amount the flow control load parameter in successive intervals of the intervals while the metric data satisfies at least one condition in the successive intervals, the metric data satisfying the at least one condition comprising at least one of a utilization of the processor exceeding a utilization threshold, and an amount of free space in the memory dropping below a memory threshold, and
establish a value of a packet flow budget for the packet flow in each of the intervals based on the respective value of the flow control load parameter in each of the intervals.
15. The packet processing system of claim 14, wherein the flow control load parameter is an integer between minimum and maximum values, and wherein the packet flow controller is to successively advance the flow control load parameter in each of the time intervals between the minimum and maximum values.
16. The packet processing system of claim 14, wherein the flow control load parameter is an integer, and wherein the packet flow controller is to reduce the packet flow budget based on a function of the flow control load parameter responsive to the respective value of the flow control load parameter not being a minimum value.
17. The packet processing system of claim 14, wherein the successively advancing the flow control load parameter in the successive intervals comprises:
incrementing, in a first of the successive intervals, the flow control load parameter by the predetermined amount from a first value to a second value; and
incrementing, in a second of the successive intervals, the flow control load parameter by the predetermined amount from the second value to a third value.
US13/192,618 2011-07-28 2011-07-28 Flow control in packet processing systems Active 2034-07-25 US9270556B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US13/192,618 US9270556B2 (en) 2011-07-28 2011-07-28 Flow control in packet processing systems

Publications (2)

Publication Number Publication Date
US20130028085A1 US20130028085A1 (en) 2013-01-31
US9270556B2 true US9270556B2 (en) 2016-02-23

Family

ID=47597133

Family Applications (1)

Application Number Title Priority Date Filing Date
US13/192,618 Active 2034-07-25 US9270556B2 (en) 2011-07-28 2011-07-28 Flow control in packet processing systems

Country Status (1)

Country Link
US (1) US9270556B2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150006820A1 (en) * 2013-06-28 2015-01-01 Texas Instruments Incorporated Dynamic management of write-miss buffer to reduce write-miss traffic

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013042373A1 (en) * 2011-09-21 2013-03-28 Nec Corporation Communication apparatus, control apparatus, communication system, communication control method, communication terminal and program
US20140025823A1 (en) * 2012-02-20 2014-01-23 F5 Networks, Inc. Methods for managing contended resource utilization in a multiprocessor architecture and devices thereof
US9736041B2 (en) * 2013-08-13 2017-08-15 Nec Corporation Transparent software-defined network management
US10834065B1 (en) 2015-03-31 2020-11-10 F5 Networks, Inc. Methods for SSL protected NTLM re-authentication and devices thereof
US10565667B2 (en) * 2015-08-19 2020-02-18 Lee P. Brintle Methods and systems for optimized and accelerated registration and registration management
US10404698B1 (en) 2016-01-15 2019-09-03 F5 Networks, Inc. Methods for adaptive organization of web application access points in webtops and devices thereof
KR101831186B1 (en) * 2016-06-30 2018-02-22 엘지디스플레이 주식회사 Coplanar type oxide tft, method of manufacturing the same, and display panel and display apparatus using the same
US11463535B1 (en) * 2021-09-29 2022-10-04 Amazon Technologies, Inc. Using forensic trails to mitigate effects of a poisoned cache

Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5519689A (en) * 1993-06-12 1996-05-21 Samsung Electronics Co., Ltd. Traffic control apparatus and method of user-network interface of asynchronous transfer mode
US6189035B1 (en) 1998-05-08 2001-02-13 Motorola Method for protecting a network from data packet overload
US20020056007A1 (en) * 1998-06-26 2002-05-09 Verizon Laboratories Inc. Method and system for burst congestion control in an internet protocol network
US6427114B1 (en) * 1998-08-07 2002-07-30 Dinbis Ab Method and means for traffic route control
US6442135B1 (en) * 1998-06-11 2002-08-27 Synchrodyne Networks, Inc. Monitoring, policing and billing for packet switching with a common time reference
US20030096597A1 (en) * 2001-11-16 2003-05-22 Kelvin Kar-Kin Au Scheduler with fairness control and quality of service support
US20040054857A1 (en) * 2002-07-08 2004-03-18 Farshid Nowshadi Method and system for allocating bandwidth
US20060120282A1 (en) * 2000-05-19 2006-06-08 Carlson William S Apparatus and methods for incorporating bandwidth forecasting and dynamic bandwidth allocation into a broadband communication system
US20070014276A1 (en) * 2005-07-12 2007-01-18 Cisco Technology, Inc., A California Corporation Route processor adjusting of line card admission control parameters for packets destined for the route processor
US20070097864A1 (en) * 2005-11-01 2007-05-03 Cisco Technology, Inc. Data communication flow control
US7274665B2 (en) 2002-09-30 2007-09-25 Intel Corporation Packet storm control
US20090010165A1 (en) * 2007-07-06 2009-01-08 Samsung Electronics Co., Ltd. Apparatus and method for limiting packet transmission rate in communication system
US20090080331A1 (en) * 2007-09-20 2009-03-26 Tellabs Operations, Inc. Modeling packet traffic using an inverse leaky bucket
US20090097407A1 (en) * 2001-05-04 2009-04-16 Buskirk Glenn A System and method for policing multiple data flows and multi-protocol data flows
US7660252B1 (en) 2005-03-17 2010-02-09 Cisco Technology, Inc. System and method for regulating data traffic in a network device
US20100034090A1 (en) * 2006-11-10 2010-02-11 Attila Bader Edge Node for a network domain
US7715438B1 (en) 2004-07-06 2010-05-11 Juniper Networks, Inc. Systems and methods for automatic provisioning of data flows
US7814224B2 (en) 2007-02-09 2010-10-12 Hitachi Industrial Equipment Systems Co. Information processor deactivates communication processing function without passing interrupt request for processing when detecting traffic inbound is in over-traffic state
US8503307B2 (en) * 2010-05-10 2013-08-06 Hewlett-Packard Development Company, L.P. Distributing decision making in a centralized flow routing system

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150006820A1 (en) * 2013-06-28 2015-01-01 Texas Instruments Incorporated Dynamic management of write-miss buffer to reduce write-miss traffic

Also Published As

Publication number Publication date
US20130028085A1 (en) 2013-01-31

Similar Documents

Publication Publication Date Title
US9270556B2 (en) Flow control in packet processing systems
US8081569B2 (en) Dynamic adjustment of connection setup request parameters
US10218620B2 (en) Methods and nodes for congestion control
US9948561B2 (en) Setting delay precedence on queues before a bottleneck link based on flow characteristics
US20180331965A1 (en) Control channel usage monitoring in a software-defined network
US8443444B2 (en) Mitigating low-rate denial-of-service attacks in packet-switched networks
US8509074B1 (en) System, method, and computer program product for controlling the rate of a network flow and groups of network flows
US20130142038A1 (en) Adaptive scheduling of data transfer in p2p applications over asymmetric networks
US9350669B2 (en) Network apparatus, performance control method, and network system
US20220200858A1 (en) Method and apparatus for configuring a network parameter
US9432296B2 (en) Systems and methods for initializing packet transfers
Tahiliani et al. A principled look at the utility of feedback in congestion control
US9577727B2 (en) Enforcing station fairness with MU-MIMO deployments
CN109525446B (en) Processing method and electronic equipment
Bagnulo et al. When less is more: BBR versus LEDBAT++
CN109787922B (en) Method and device for acquiring queue length and computer readable storage medium
Vargas et al. Are mobiles ready for BBR?
US20120236715A1 (en) Measurement Based Admission Control Using Explicit Congestion Notification In A Partitioned Network
Fridovich-Keil et al. A model predictive control approach to flow pacing for TCP
KR101806510B1 (en) Method and apparatus for congestion entrance control
US20110182176A1 (en) Method and apparatus to provide minimum resource sharing without buffering requests
US20220330098A1 (en) Method for adjusting a total bandwidth for a network device
WO2017131674A1 (en) Managing network traffic using experiential capacity
US10516619B2 (en) TCP window sizing
CN107465631B (en) Active queue management method and system

Legal Events

Date Code Title Description
AS Assignment

Owner name: HEWLETT PACKARD DEVELOPMENT COMPANY, L.P., TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BILODEAU, GUY;REEL/FRAME:026663/0981

Effective date: 20110727

AS Assignment

Owner name: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP, TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.;REEL/FRAME:037079/0001

Effective date: 20151027

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8