CN109792409B - Methods, systems, and computer readable media for dropping messages during congestion events - Google Patents
Methods, systems, and computer readable media for dropping messages during congestion events
- Publication number
- CN109792409B CN201780058408.6A
- Authority
- CN
- China
- Prior art keywords
- message
- congestion
- messages
- traffic
- policy
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/32—Flow control; Congestion control by discarding or delaying data units, e.g. packets or frames
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/08—Configuration management of networks or network elements
- H04L41/0894—Policy-based network configuration management
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/24—Traffic characterised by specific attributes, e.g. priority or QoS
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/24—Traffic characterised by specific attributes, e.g. priority or QoS
- H04L47/2425—Traffic characterised by specific attributes, e.g. priority or QoS for supporting services specification, e.g. SLA
- H04L47/2433—Allocation of priorities to traffic types
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/24—Traffic characterised by specific attributes, e.g. priority or QoS
- H04L47/2475—Traffic characterised by specific attributes, e.g. priority or QoS for supporting traffic characterised by the type of applications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/08—Configuration management of networks or network elements
- H04L41/0893—Assignment of logical groups to network elements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L43/00—Arrangements for monitoring or testing data switching networks
- H04L43/08—Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
- H04L43/0876—Network utilisation, e.g. volume of load or congestion level
- H04L43/0882—Utilisation of link capacity
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L43/00—Arrangements for monitoring or testing data switching networks
- H04L43/08—Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
- H04L43/0876—Network utilisation, e.g. volume of load or congestion level
- H04L43/0894—Packet rate
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/11—Identifying congestion
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/22—Traffic shaping
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/29—Flow control; Congestion control using a combination of thresholds
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/30—Flow control; Congestion control in combination with information about buffer occupancy at either end or at transit nodes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W28/00—Network traffic management; Network resource management
- H04W28/02—Traffic management, e.g. flow control or congestion control
- H04W28/0289—Congestion control
Abstract
The subject matter described herein relates to methods, systems, and computer-readable media for dropping messages during congestion events. A method includes registering a traffic congestion policy for handling traffic associated with an application during congestion. The method also includes determining a first congestion level associated with a congestion event. The method also includes determining a message rate for messages associated with similar message priority values, where the message priority values are determined using the traffic congestion policy. The method also includes dropping a first message associated with the application using the message rate, the first congestion level, and a message drop algorithm, where the message drop algorithm is determined using the traffic congestion policy.
Description
Priority declaration
This application claims the benefit of U.S. patent application serial No. 15/273,069, filed September 22, 2016, the disclosure of which is incorporated herein by reference in its entirety.
Technical Field
The subject matter described herein relates to computer network traffic management. More particularly, the present subject matter relates to methods, systems, and computer readable media for dropping messages during congestion events.
Background
Traffic-related congestion in a computer network can prevent or hinder messages from reaching the appropriate destination. For example, an authentication message may be used to authenticate a subscriber for service access. If the subscriber is not authenticated because the network or nodes therein are too congested to timely route or process authentication messages, the subscriber may be denied service access. To reduce the problems associated with traffic-related congestion, many networks attempt to drop less important messages during a congestion event (e.g., an event or time period when congestion is detected, such as when a network node is overloaded), while still allowing more important messages. However, various factors may need to be considered when determining which messages to drop and which messages to allow when congestion is detected.
Disclosure of Invention
The subject matter described herein relates to methods, systems, and computer-readable media for dropping messages during congestion events. A method includes registering a traffic congestion policy for handling traffic associated with an application during congestion. The method also includes determining a first congestion level associated with a congestion event. The method also includes determining a message rate for messages associated with similar message priority values, where the message priority values are determined using the traffic congestion policy. The method also includes dropping a first message associated with the application using the message rate, the first congestion level, and a message drop algorithm, where the message drop algorithm is determined using the traffic congestion policy.
A system for dropping messages during a congestion event includes at least one processor and a traffic manager. The traffic manager is implemented using the at least one processor. The traffic manager is configured to: register a traffic congestion policy for handling traffic associated with an application during congestion; determine a first congestion level associated with a congestion event; determine a message rate for messages associated with similar message priority values, wherein the message priority values are determined using the traffic congestion policy; and drop a first message associated with the application using one or more of the message rate, the congestion level, and a message drop algorithm, wherein the message drop algorithm is determined using the traffic congestion policy.
The subject matter described herein may be implemented in software in conjunction with hardware and/or firmware. For example, the subject matter described herein may be implemented in software executed by a processor. In some implementations, the subject matter described herein may be implemented using a non-transitory computer-readable medium having stored thereon computer-executable instructions that, when executed by a processor of a computer, control the computer to perform steps. Exemplary computer readable media suitable for implementing the subject matter described herein include non-transitory devices such as disk memory devices, chip memory devices, programmable logic devices, and application specific integrated circuits. Furthermore, a non-transitory computer-readable medium that implements the subject matter described herein may be located on a single device or computing platform or may be distributed across multiple devices or computing platforms.
As used herein, the term "node" refers to at least one physical computing platform that includes one or more processors and memory. For example, a node may comprise a virtual machine and/or software executing on a physical computing platform.
As used herein, the term "function" or "module" refers to software in combination with hardware and/or firmware for implementing the features described herein.
Drawings
The subject matter described herein will now be explained with reference to the drawings, in which:
FIG. 1 is a block diagram illustrating an example computing environment;
FIG. 2 is a block diagram illustrating an example traffic manager;
fig. 3 depicts an example of traffic congestion policy information;
fig. 4 depicts another example of traffic congestion policy information; and
fig. 5 is a flow chart illustrating a process of dropping messages during a congestion event.
Detailed Description
The subject matter described herein relates to methods, systems, and computer-readable media for dropping messages during congestion events. Traffic-related congestion can occur when a network node receives more messages than it can process or handle (e.g., route, respond, etc.). When a network node experiences congestion, various problems may arise, including dropped calls and/or terminated connections. To reduce traffic-related congestion, some networks may detect congestion events at various congestion points (e.g., one or more network nodes or modules therein), and perform various actions to alleviate congestion and congestion-related problems when a congestion event is detected. For example, to alleviate congestion, a message drop policy may be used that defines which types of messages should be allowed and/or dropped.
In accordance with some aspects of the subject matter described herein, techniques, methods, systems, or mechanisms for pluggable traffic congestion policies are disclosed. For example, a network node or module may represent a point of congestion in a network. In this example, a node or module may be configured to receive and register dynamic and/or pluggable traffic congestion policies for one or more applications. Continuing this example, a node or module may use a particular traffic congestion policy to determine whether to allow or drop messages associated with a particular application during a congestion event, and may use another traffic congestion policy to handle other traffic.
According to some aspects of the subject matter described herein, techniques, methods, systems, or mechanisms are disclosed for utilizing message rates, congestion levels, and/or policy-defined priority values in message drop algorithms. For example, a traffic congestion policy may define or indicate a message drop algorithm that determines a congestion level of a congestion event (e.g., a value indicating congestion at a congestion point) and determines a message rate for messages associated with similar message priority values (e.g., calculated or determined based on factors defined by the policy). Continuing with this example, for a given congestion level, the message drop algorithm may limit the message rate of messages associated with lower message priority values more than the message rate of messages associated with higher message priority values, e.g., by dropping messages. In some examples, as the congestion level of a given congestion event increases, the message drop algorithm may gradually limit the message rate of messages associated with various message priority values.
Advantageously, according to some aspects of the subject matter described herein, computer capacity associated with congestion management is improved by using pluggable traffic congestion policies, for example, by reducing the length of congestion events and/or by mitigating the effects of congestion related issues. In addition, congestion management may be improved by utilizing application-specific features and/or factors when determining whether to allow or drop messages during a congestion event.
Reference will now be made in detail to various examples of the subject matter described herein, some of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts.
FIG. 1 is a block diagram illustrating an example computing environment 100. Referring to fig. 1, computing environment 100 may include node(s) 102, routing node (RN) 104, and/or node(s) 112. Each of node(s) 102 and node(s) 112 may represent one or more suitable entities (e.g., software executing on at least one processor, one or more computing platforms, etc.) capable of communicating using at least one communication protocol, such as using one or more network layer protocols (e.g., Internet Protocol (IP)), one or more transport layer protocols (e.g., Transmission Control Protocol (TCP), User Datagram Protocol (UDP), Stream Control Transmission Protocol (SCTP), and/or Reliable Data Protocol (RDP)), and/or one or more session layer protocols (e.g., Diameter protocol, hypertext transfer protocol (HTTP), and/or real-time transport protocol (RTP)). For example, each of node(s) 102 and 112 may be a client, a server, a Diameter node, a network node, a Mobility Management Entity (MME), a Home Subscriber Server (HSS), an authentication, authorization, and/or accounting (AAA) server, a Diameter application server, a Subscriber Profile Repository (SPR), or other node. Each of node(s) 102 and node(s) 112 may include functionality to send, receive, and/or process various messages. For example, node(s) 102 may include a client that requests subscriber-related information, and node(s) 112 may include a server that provides subscriber-related information.
RN104 may represent any suitable entity or entities (e.g., software executing on at least one processor, one or more computing platforms, etc.) for receiving, processing, routing, and/or discarding messages, such as IP messages, TCP messages, Diameter messages, HTTP messages, and other messages. For example, RN104 may include or represent an IP router, an IP switch, a Long Term Evolution (LTE) signaling router, a Diameter proxy server, a Diameter proxy, a Diameter routing agent, a Diameter relay agent, a Diameter translation agent, or a Diameter redirect agent. The RN104 may include functionality for processing and/or routing various messages. In some embodiments, such functionality may be included in one or more modules (e.g., session routing modules).
The RN104 may include functionality to receive, process, and/or exchange or route various messages, and may include various communication interfaces for communicating with various nodes, such as a third generation partnership project (3GPP) LTE communication interface and other (e.g., non-LTE) communication interfaces. Some example communication interfaces for communicating with various nodes may include an IP interface, a TCP interface, a UDP interface, an HTTP interface, an RDP interface, an SCTP interface, an RTP interface, a Diameter interface, an LTE interface, and/or an IMS interface.
RN104 may facilitate communication between node(s) 102 and node(s) 112. For example, node(s) 102 may represent a Diameter client and may send a Diameter request message (e.g., a Diameter session establishment request message) to RN 104. The Diameter request message may require information or one or more services from node(s) 112. RN104 may route, relay, and/or translate requests or responses between node(s) 102 and node(s) 112. After receiving and processing the Diameter request message, node(s) 112 may send a Diameter response message (e.g., a Diameter session establishment response message) to RN 104. The Diameter response message may be sent in response to a Diameter request message initiated by node(s) 102. RN104 may provide a Diameter response message to node(s) 102.
In some embodiments, RN104 may include processor(s) 106, memory 108, and/or Traffic Manager (TM) 110. The processor(s) 106 may represent or include at least one of a physical processor, a general purpose microprocessor, a single core processor, a multi-core processor, a Field Programmable Gate Array (FPGA), and/or an Application Specific Integrated Circuit (ASIC). In some embodiments, the processor(s) 106 may be configured to execute software stored in one or more non-transitory computer-readable media, such as the memory 108. For example, software may be loaded into a memory structure for execution by the processor(s) 106. In some embodiments, for example, where RN104 includes multiple processors, some processor(s) 106 may be configured to operate independently of other processor(s) 106.
TM110 may be any suitable entity or entities (e.g., software executing on processor(s) 106, an ASIC, an FPGA, or a combination of software, ASICs, or FPGAs) for performing one or more aspects associated with traffic management and/or traffic-related congestion management. For example, TM110 may include or represent any programmable element to drop or allow various messages (e.g., IP messages, Diameter messages, HTTP messages, etc.) based on a message priority value and a congestion level of RN104 and/or another node (e.g., nodes 102 and 112). In some embodiments, TM110 may be implemented using processor(s) 106 and/or one or more memories, such as memory 108. For example, TM110 may utilize processor(s) 106 (e.g., using software stored in local memory) and Random Access Memory (RAM).
In some embodiments, TM110 may include functionality to receive, register, and/or use traffic congestion policies. For example, the traffic congestion policy may include or indicate a message drop algorithm for determining whether to drop or allow messages associated with the application using one or more policy determinable factors. Some examples of policy determinable factors may include policy-defined message priority values, message parameter values, message types, message events, message attributes (e.g., priority Attribute Value Pairs (AVPs)), detected congestion levels, path-related congestion indicators (e.g., color codes), and/or one or more message rates for a given group of messages (e.g., messages of a certain priority). In this example, TM110 may be configured to register and use traffic congestion policies without requiring RN104 and/or TM110 to reboot or restart.
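As a rough illustration of the pluggable behavior described above, the sketch below models a per-application policy registry that can be updated at runtime without a restart. It is a minimal sketch under stated assumptions, not the implementation; the class and method names (PolicyRegistry, register, policy_for) and the choice of Python are assumptions made here for illustration.

```python
class PolicyRegistry:
    """Hypothetical registry of traffic congestion policies, keyed by application."""

    def __init__(self):
        self._policies = {}

    def register(self, application, policy):
        """Register or hot-swap the traffic congestion policy for an application."""
        self._policies[application] = policy

    def policy_for(self, application):
        """Return the currently registered policy, or None if none is registered."""
        return self._policies.get(application)
```

Because registration simply replaces the entry for an application, a new policy can take effect for subsequent messages without restarting the node that uses the registry.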
In some embodiments, TM110 may include functionality for allowing an application or other entity (e.g., a network operator or device) to register traffic congestion policies for processing messages based on policy-defined message priorities during congestion. For example, the traffic congestion policy may be capable of supporting traffic policing of messages associated with a number of message priority values (e.g., values between 1 and 15) that is different from the number of congestion levels detected at RN104 (e.g., values between 1 and 4). In this example, a traffic congestion policy or related message drop algorithm may map at least some congestion management actions associated with congestion levels to groups of messages associated with different message priority values.
In some embodiments, TM110 may include functionality to track existing or current traffic patterns at RN104 and/or determine message rates for like messages. For example, TM110 may be configured to identify and group messages (e.g., traffic received at RN 104) using policy determinable factors (e.g., event priorities, event types, message types, path-related congestion indicators, or application-specific parameter values), and by identifying and grouping messages, may track, measure, and/or determine the message rates at which like messages are received at RN104 and/or transmitted from RN 104.
In some embodiments, TM110 may include functionality for shaping traffic (e.g., traffic leaving RN 104), for example, by dropping at least some messages using traffic pattern information, a congestion level associated with a node (e.g., RN 104), and/or one or more policy-defined message priority values. For example, for a given congestion level, the message drop algorithm may limit the message rate of messages associated with lower priority values more than the message rate of messages associated with higher priority values, e.g., by dropping such messages. In some examples, the message drop algorithm may gradually decrease the traffic rate limit for messages associated with various message priority values as the congestion level for a given congestion event increases.
The memory 108 may be any suitable entity or entities (e.g., one or more memory devices) for storing information associated with traffic management (e.g., traffic tracking, traffic shaping, etc.) and/or traffic-related congestion management. For example, memory 108 may store one or more traffic congestion policies, one or more message drop algorithms, message statistics, message priority values, message rates, and/or other traffic-related information.
It will be appreciated that fig. 1 is for illustrative purposes and that various nodes, locations of nodes, and/or functions of nodes (e.g., modules) described above with respect to fig. 1 may be changed, altered, added, or removed. For example, some nodes and/or functions may be combined into a single entity. In another example, some nodes and/or functions may be distributed across multiple nodes and/or platforms.
Fig. 2 is a block diagram illustrating an example TM 110. Referring to FIG. 2, TM110 may interact and/or communicate with source task 204 and/or destination task 206. The source task 204 may be any entity (e.g., node, module, etc.) that provides messages to TM110, and destination task 206 may be any entity that receives messages from TM 110. For example, source task 204 may include node(s) 102 and/or a module within RN 104. In another example, destination task 206 may comprise node(s) 112 and/or a module within RN 104. In some embodiments, the source task 204 and/or the destination task 206 may include or utilize one or more buffers or memories to store messages. For example, source task 204 may store incoming messages waiting to be processed by RN104 and/or TM110, and destination task 206 may store outgoing messages from RN104 and/or TM110, e.g., messages to be sent or routed forward by RN 104.
TM110 may include or interact with tracker 200 and shaper 202. The tracker 200 may be any suitable entity or entities (e.g., software executing on the processor(s) 106, an ASIC, an FPGA, or a combination of software, ASIC, or FPGA) for tracking message rates of incoming and/or outgoing messages. In some embodiments, tracker 200 may utilize traffic congestion policies and/or related information to classify or group messages using policy determinable message priority values. Tracker 200 may also track message rates of like messages (e.g., messages associated with the same message priority value). For example, tracker 200 may determine the message rate of messages associated with a message priority value of "1", the message rate of messages associated with a message priority value of "2", and the message rate of messages associated with a message priority value of "3".
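A minimal sketch of such per-priority rate tracking is shown below, assuming a one-second sliding window; the window length, class name, and method names are assumptions made here for illustration and are not taken from the description.

```python
import time
from collections import defaultdict, deque


class RateTracker:
    """Track per-priority message rates over an assumed sliding window."""

    def __init__(self, window_seconds=1.0):
        self.window = window_seconds
        self._arrivals = defaultdict(deque)  # message priority value -> arrival times

    def record(self, priority_value, now=None):
        """Record the arrival of one message with the given priority value."""
        now = time.monotonic() if now is None else now
        self._arrivals[priority_value].append(now)
        self._expire(priority_value, now)

    def rate(self, priority_value, now=None):
        """Messages per second observed for this priority value within the window."""
        now = time.monotonic() if now is None else now
        self._expire(priority_value, now)
        return len(self._arrivals[priority_value]) / self.window

    def _expire(self, priority_value, now):
        # Drop arrival timestamps that have aged out of the sliding window.
        q = self._arrivals[priority_value]
        while q and now - q[0] > self.window:
            q.popleft()
```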
In some embodiments, TM110 and/or tracker 200 may utilize traffic congestion policies to determine message priority values for various messages using one or more message attributes, connection attributes, and/or path-related attributes. For example, a traffic congestion policy may define or indicate that a priority AVP and/or color code (e.g., a path or connection-related congestion indicator assigned by a customer rule and/or a policy-defined rule) is to be used to determine a message priority for a Diameter message.
In some embodiments, the traffic congestion policy may define or indicate how to generate or calculate message priority values for various messages. For example, a traffic congestion policy may indicate that bit or byte operations (e.g., bit concatenation and/or bit arithmetic) may be used to generate a message priority value using multiple factors or values. In this example, the traffic congestion policy may indicate that the message priority value may be calculated by concatenating one or more bits from the first attribute and one or more bits from another attribute.
In some embodiments, TM110, or an entity associated therewith, may generate a message priority value by concatenating a plurality of most significant (left) bits from a first attribute (e.g., a value from a priority AVP) and a plurality of least significant (right) bits from a second attribute (e.g., a color code). For example, assume that the priority value from the priority AVP may be 0-15 (where 15 is the highest priority) and the color code may be 0-3 (where 3 is the highest priority color). Continuing with this example, concatenating a priority value of "10" (0xa) and color code "0" (0x0) may result in a message priority value of "160" (0xa0), and concatenating a priority value of "10" (0xa) and color code "3" (0x3) may result in a message priority value of "163" (0xa3).
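The bit concatenation in this example can be illustrated with the short sketch below. It assumes the priority AVP value occupies the most significant four bits (one hex digit) and the color code the least significant bits, consistent with the 0xa0 and 0xa3 values above; the function name and exact bit widths are assumptions made here for illustration.

```python
def message_priority(priority_avp, color_code):
    """Concatenate a 4-bit priority AVP value (0-15) with a color code (0-3)."""
    if not (0 <= priority_avp <= 15 and 0 <= color_code <= 3):
        raise ValueError("priority AVP must be 0-15 and color code must be 0-3")
    # The AVP value supplies the most significant hex digit and the color code
    # the least significant digit, matching the 0xa + 0x0 -> 0xa0 example above.
    return (priority_avp << 4) | color_code


assert message_priority(10, 0) == 0xA0  # 160
assert message_priority(10, 3) == 0xA3  # 163
```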
In some embodiments, the traffic congestion policy and/or associated drop algorithm may use the message priority value when shaping traffic (e.g., dropping some traffic) during a congestion event. For example, where the traffic congestion policy and/or related dropping algorithm uses message rate limiting, the traffic congestion policy and/or related dropping algorithm may enforce an overall message rate limit (e.g., based on all message rates of messages received at RN 104) by first limiting the message rate of messages associated with lower priority message values before affecting (e.g., limiting) the message rate of messages associated with higher priority message values. In this example, because a policy-determined message priority value is used in deciding which messages to drop, the traffic congestion policy and/or an associated drop algorithm may be configured for use in any environment and/or with any application.
In some embodiments, tracker 200 may include functionality for determining a message rate before and/or after enforcing message rate limits. For example, tracker 200 may track the message rate of incoming messages before dropping the messages, and may also track and/or verify the message rate (e.g., outgoing messages to destination task 206) after shaper 202 drops the messages to enforce and/or verify certain message rate limits.
It will be appreciated that fig. 2 is for illustrative purposes, and that various nodes, locations of nodes, and/or functions of nodes (e.g., modules) described above with respect to fig. 2 may be changed, altered, added, or removed. For example, some nodes and/or functions may be combined into a single entity. In another example, some nodes and/or functions may be distributed across multiple nodes and/or platforms.
Fig. 3 depicts an example of traffic congestion policy information. In some embodiments, TM110 may utilize a traffic congestion policy that adjusts or decreases the message rate by fixed amounts at various congestion levels. For example, a congestion policy or related dropping algorithm may enforce a total message rate limit of 50 thousand messages per second (K/sec) during a congestion level "1" event, a total message rate limit of 35K/sec during a congestion level "2" event, and a total message rate limit of 20K/sec during a congestion level "3" event. In another example, a congestion policy or associated drop algorithm may enforce a total message rate based on a message priority value and a congestion level, where lower priority messages have a lower message rate limit than higher priority messages, and where the message rate is more limited as the congestion level increases.
In some embodiments, each message priority value may be determined by using one or more policy determinable factors, such as event priority, event type, message type, path-related congestion indicator, or application-specific parameter value. In some embodiments, each congestion level may represent a particular amount of congestion being experienced by RN104, TM110, and/or another entity. Various mechanisms and/or methods (e.g., queue-based techniques, message rate-based techniques, and/or other techniques) may be utilized to determine the level of congestion.
Referring to fig. 3, a table 300 may represent a traffic congestion policy or related message drop algorithm that relates message rates associated with a plurality of message priority values (e.g., P-0 through P-5) and a plurality of congestion levels (e.g., CL-0 through CL-4). As depicted in table 300, each row may represent a traffic congestion policy and/or an associated message drop algorithm at a particular congestion level. For example, each row may represent or indicate whether a message rate associated with a particular message priority value is limited (e.g., capped) or unchanged (e.g., unaffected) during a congestion level. In this example, as the congestion level increases, more message rate limiting may be enforced. In some embodiments, as the congestion level increases, the traffic congestion policy may enforce message rate limits on the overall message rate (e.g., at RN 104) and/or for specific message priority values.
As depicted in row "CL-0" of table 300, the traffic congestion policy may not enforce message rate restrictions or restrictions during the congestion level of "CL-0" (e.g., no congestion is detected). For example, a message rate of "P-0" - "P-5" is depicted without any message rate restrictions or limitations being enforced, and the sum of the message rates of "P-0" - "P-5" is depicted as a total message rate of "100K/sec" (e.g., as received by RN 104).
As depicted in row "CL-1" of table 300, the traffic congestion policy may enforce a total message rate limit of "50K/sec" during a congestion level of "CL-1" (e.g., slight congestion detected). For example, to enforce a total message rate limit of "50K/sec," the traffic congestion policy represented by table 300 may drop all messages associated with message priority values "P-0" - "P-2" and may allow (e.g., process and/or route) all messages associated with message priority values "P-3" - "P-5".
As depicted in row "CL-2" of table 300, the traffic congestion policy may enforce a total message rate limit of "35K/sec" during the congestion level of "CL-2" (e.g., moderate congestion detected). For example, to enforce a total message rate limit of "35K/sec," the traffic congestion policy represented by table 300 may drop all messages associated with message priority values "P-0" - "P-2" and may drop some messages associated with "P-3," and may allow (e.g., process and/or route) all messages associated with message priority values "P-4" - "P-5.
As depicted in row "CL-3" of table 300, the traffic congestion policy may enforce a total message rate limit of "20K/sec" during a congestion level of "CL-3" (e.g., severe congestion is detected). For example, to enforce a total message rate limit of "20K/sec," the traffic congestion policy represented by table 300 may drop all messages associated with message priority values "P-0" - "P-3" and may allow (e.g., process and/or route) all messages associated with message priority values "P-4" - "P-5".
It will be appreciated that table 300 is for illustrative purposes and that RN104 or TM110 may use different and/or additional information, logic, message rates, and/or data than described above with respect to fig. 3. It will also be appreciated that the comments depicted in fig. 3 are for illustrative purposes and should not be construed as limitations of RN104 or the functionality therein.
Fig. 4 depicts another example of traffic congestion policy information. In some embodiments, TM110 may utilize a traffic congestion policy that adjusts or decreases the message rate by percentage amounts at various congestion levels. For example, a congestion policy or related dropping algorithm may enforce a total message rate limit that is 50% less than the normal (e.g., unrestricted) total message rate. In this example, the total message rate limit may be gradually decreased as the congestion level increases, e.g., by a percentage of the normal total message rate or the previous total message rate limit. In another example, a congestion policy or related drop algorithm may enforce an overall message rate based on a message priority value and a congestion level, where lower priority messages have a lower message rate limit than higher priority messages, and where the message rate is more limited as the congestion level increases.
In some embodiments, each message priority value may be determined by using one or more policy determinable factors, such as event priority, event type, message type, path-related congestion indicator, or application-specific parameter value. In some embodiments, each congestion level may represent a particular amount of congestion being experienced by RN104, TM110, and/or another entity. Various mechanisms and/or methods (e.g., message queue-based techniques, message rate-based techniques, and/or other techniques) may be utilized to determine the congestion level.
Referring to fig. 4, a table 400 may represent a traffic congestion policy or related message drop algorithm that relates message rates associated with a plurality of message priority values (e.g., P-0 through P-5) and a plurality of congestion levels (e.g., CL-0 through CL-4). As depicted in table 400, each row may represent a traffic congestion policy and/or an associated message drop algorithm at a particular congestion level. For example, each row may represent or indicate whether a message rate associated with a particular message priority value is limited (e.g., capped) or unchanged (e.g., unaffected) during a congestion level. In this example, as the congestion level increases, more message rate limiting may be enforced. In some embodiments, as the congestion level increases, the traffic congestion policy may enforce message rate limits on the overall message rate (e.g., at RN 104) and/or for specific message priority values.
As depicted in row "CL-0" of table 400, the traffic congestion policy may not enforce message rate restrictions or restrictions during the congestion level of "CL-0" (e.g., no congestion is detected). For example, a message rate of "P-0" - "P-5" is depicted without any message rate restrictions or limitations being enforced, and the sum of the message rates of "P-0" - "P-5" is depicted as a total message rate of "100K/sec" (e.g., as received by RN 104).
As depicted in row "CL-1" of table 400, the traffic congestion policy may enforce a total message rate limit of "50K/sec" (e.g., calculated by multiplying the total message rate of "CL-0" by 50%) during a congestion level of "CL-1" (e.g., slight congestion detected). For example, to enforce a total message rate limit of "50K/sec," the traffic congestion policy represented by table 400 may drop all messages associated with message priority values "P-0" - "P-2" and may allow (e.g., process and/or route) all messages associated with message priority values "P-3" - "P-5".
As depicted in row "CL-2" of table 400, the traffic congestion policy may enforce a total message rate limit of "35K/sec" (e.g., calculated by multiplying the total message rate of "CL-1" by 70%) during a congestion level of "CL-2" (e.g., moderate congestion is detected). For example, to enforce a total message rate limit of "35K/sec," the traffic congestion policy represented by table 400 may drop all messages associated with message priority values "P-0" - "P-2" and may drop some messages associated with "P-3," and may allow (e.g., process and/or route) all messages associated with message priority values "P-4" - "P-5.
As depicted in row "CL-3" of table 400, the traffic congestion policy may enforce a total message rate limit of "28K/sec" (e.g., calculated by multiplying the total message rate of "CL-2" by 80%) during a congestion level of "CL-3" (e.g., severe congestion is detected). In this example, to enforce a total message rate limit of "28K/sec," the traffic congestion policy represented by table 400 may drop all messages associated with message priority values "P-0" - "P-2" and may drop some messages associated with "P-3," and may allow (e.g., process and/or route) all messages associated with message priority values "P-4" - "P-5.
It will be appreciated that table 400 is for illustrative purposes and that RN104 or TM110 may use different and/or additional information, logic, message rates, and/or data than described above with respect to fig. 4. It will also be appreciated that the comments depicted in fig. 4 are for illustrative purposes and should not be construed as limitations on RN104 or the functionality within RN 104.
Fig. 5 is a flow chart illustrating a process of dropping messages during a congestion event. In some embodiments, process 500 or portions thereof (e.g., steps 502, 504, 506, and/or 508) may be performed by or at RN104, TM110, and/or another node or module.
Referring to process 500, in step 502, a traffic congestion policy for handling traffic associated with an application during congestion may be registered. For example, a Diameter application or related node may register traffic congestion policies with RN104 and/or TM110 for processing Diameter messages during a congestion event at RN 104. In another example, a web application or web server may register traffic congestion policies with RN104 and/or TM110 for handling HTTP and/or IP messages during a congestion event at RN 104.
In step 504, a first congestion level associated with the congestion event may be determined. For example, RN104 or TM110 may use a message queue based congestion detection mechanism in which the congestion level (e.g., a value between 0 and 3, where 0 is no congestion and 3 is the highest level of congestion) increases when one or more message queues reach increasing threshold amounts. In another example, RN104 or TM110 may use a message rate based congestion detection mechanism, where the congestion level increases when the total incoming message rate reaches increasing threshold rates.
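A minimal sketch of the message queue based detection mechanism might look like the following, where queue occupancy is mapped to a congestion level between 0 and 3; the specific 50%, 75%, and 90% thresholds and the function name are assumptions made here for illustration.

```python
def congestion_level(queue_depth, queue_capacity):
    """Map message-queue occupancy to an assumed congestion level between 0 and 3."""
    occupancy = queue_depth / queue_capacity
    if occupancy >= 0.90:
        return 3
    if occupancy >= 0.75:
        return 2
    if occupancy >= 0.50:
        return 1
    return 0


assert congestion_level(40, 100) == 0   # below the first threshold: no congestion
assert congestion_level(95, 100) == 3   # nearly full queue: highest congestion level
```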
In step 506, the message rate of messages associated with similar message priority values may be determined. The message priority value may be determined using a traffic congestion policy. For example, a traffic congestion policy may define or indicate how and/or which factors to use in the calculation of message priority values, and these message priority values may be used in determining message rates for similar traffic.
In step 508, the first message associated with the application may be dropped using one or more of a message rate, a first congestion level, and a message drop algorithm. The message dropping algorithm may be determined using a traffic congestion policy. For example, a traffic congestion policy may define or indicate that certain traffic rate limits are to be enforced for messages associated with various message priority values, and the TM110 may drop messages when enforcing these message rate limits.
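Taken together, steps 502 through 508 could be orchestrated roughly as in the sketch below; every name here (including the policy methods congestion_level, priority_of, and should_drop) is a hypothetical stand-in for the behavior attributed to TM110, not an interface defined by the description.

```python
def handle_message(registry, tracker, application, message, queue_depth, queue_capacity):
    """Illustrative flow combining steps 502-508 for a single incoming message."""
    policy = registry.policy_for(application)                      # policy registered in step 502
    level = policy.congestion_level(queue_depth, queue_capacity)   # step 504
    priority = policy.priority_of(message)                         # policy-determined priority value
    tracker.record(priority)
    rate = tracker.rate(priority)                                  # step 506
    if policy.should_drop(level, priority, rate):                  # step 508
        return "dropped"
    return "forwarded"
```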
In some embodiments, a message attribute of the first message and a path-related congestion indicator associated with the first message may be used to determine a first message priority value associated with the first message. For example, TM110 and/or tracker 200 may use a value from a priority AVP in a Diameter message and a color code associated with the Diameter message in determining a message priority value for the Diameter message.
In some embodiments, a first message priority value associated with a first message may be calculated by concatenating one or more bits from a message attribute and one or more bits from a path-related congestion indicator.
In some embodiments, the traffic congestion policy may be hot-swappable. For example, RN104 and/or TM110 may be configured to select a particular traffic congestion policy based on the application being handled during a congestion event (e.g., when RN104 is overloaded). In this example, the RN104 and/or TM110 may change between traffic congestion policies without requiring a reboot or restart of RN 104.
In some embodiments, determining the first congestion level may include determining an amount of messages queued for processing. For example, TM110 may determine that RN104 or another entity is experiencing congestion by monitoring one or more message queues for some amount of load (e.g., greater than 50% full, greater than 75% full, etc.). In this example, as the amount of load increases, the congestion level may increase.
In some embodiments, the message drop algorithm may limit the message rate of the first set of messages associated with the first priority value by dropping a first amount of messages during the first congestion level. For example, RN104 and/or TM110 may enforce a message rate of "5K/sec" for messages associated with a priority of "P-1" (e.g., a low message priority value). In this example, RN104 and/or TM110 may discard 80% of these messages, assuming these messages are typically received at a message rate of "25K/sec". In another example, RN104 and/or TM110 may enforce a message rate of "50K/sec" for messages associated with a priority of "P-5" (e.g., a high message priority value). In this example, RN104 and/or TM110 may not drop any of these messages, assuming these messages are typically received at a message rate of "20K/sec".
In some embodiments, the message drop algorithm may limit the message rate of the first set of messages by dropping a second amount of messages during the second congestion level, where the second amount may be greater than the first amount dropped during the first congestion level. For example, RN104 and/or TM110 may enforce a message rate of "5K/sec" for messages associated with priority "P-1" at a congestion level of "CL-1", and may enforce a message rate of "0K/sec" for these messages at a congestion level of "CL-2". In this example, assuming that these messages are typically received at a message rate of "25K/sec," RN104 and/or TM110 may drop 80% of these messages during the congestion level of "CL-1" and may drop 100% of these messages during the congestion level of "CL-2".
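The arithmetic behind these examples can be expressed as a drop fraction derived from the arrival rate and the allowed rate, as in the sketch below; the probabilistic drop decision is an assumption made here for illustration, since the description does not prescribe a particular dropping discipline.

```python
import random


def drop_fraction(arrival_rate, allowed_rate):
    """Fraction of messages that must be dropped to stay within the allowed rate."""
    if arrival_rate <= allowed_rate:
        return 0.0
    return 1.0 - allowed_rate / arrival_rate


def should_drop(arrival_rate, allowed_rate):
    """Probabilistic drop decision; one of many possible dropping disciplines."""
    return random.random() < drop_fraction(arrival_rate, allowed_rate)


assert abs(drop_fraction(25_000, 5_000) - 0.8) < 1e-9   # 80% dropped, as in the CL-1 example
assert drop_fraction(25_000, 0) == 1.0                  # 100% dropped, as in the CL-2 example
```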
In some embodiments, the first amount of messages dropped by the message drop algorithm may be based on a first percentage associated with the first set of messages or the first congestion level, where the first percentage may be different from a second percentage associated with the second set of messages or the second congestion level. For example, for messages associated with priority "P-1," RN104 and/or TM110 may enforce a message rate 25% lower than the normal message rate for these messages at the congestion level of "CL-1," and may enforce a message rate 50% lower than the normal message rate for these messages at the congestion level of "CL-2.
In some embodiments, the first amount of messages dropped by the message drop algorithm may be based on a first total message rate allowed for the first set of messages or the first congestion level, where the first total message rate allowed may be different from a second message rate allowed for the second set of messages or the second congestion level. For example, for messages associated with priority level "P-1," RN104 and/or TM110 may enforce a message rate of "10K/sec" at a congestion level of "CL-1," and may enforce a message rate of "5K/sec" at a congestion level of "CL-2.
It will be appreciated that process 500 is for illustrative purposes, and that different and/or additional acts may be used. It will also be appreciated that various actions described herein may occur in a different order or sequence.
It should be noted that RN104, TM110, tracker 200, shaper 202, and/or the functions described herein may constitute a dedicated computing device, such as a Diameter signaling router or switch. Additionally, RN104, TM110, tracker 200, shaper 202, and/or functions described herein may improve the technical field of traffic-related congestion management and related computer functions by using techniques, methods, and/or mechanisms that utilize pluggable traffic congestion policies and/or by using message dropping algorithms that use traffic pattern information (e.g., message rate), congestion level, and/or policy-defined message priority values (e.g., message priority values based on one or more policy determinable factors) to determine whether to drop or allow a message.
Various combinations and subcombinations of the structures and features described herein are contemplated and will be apparent to those skilled in the art upon learning the present disclosure. Unless indicated to the contrary herein, any of the various features and elements disclosed herein may be combined with one or more other disclosed features and elements. Accordingly, it is intended that the subject matter as claimed hereinafter be broadly construed and interpreted, as including all such variations, modifications and alternative embodiments, as fall within the scope of the claims, and including equivalents thereof. It will be understood that various details of the presently disclosed subject matter may be changed without departing from the scope of the presently disclosed subject matter. Furthermore, the foregoing description is for the purpose of illustration only, and not for the purpose of limitation.
Claims (19)
1. A method for dropping messages during a congestion event, the method comprising:
registering a traffic congestion policy for handling traffic associated with an application during congestion;
determining a first congestion level associated with a congestion event;
determining a message rate for messages associated with similar message priority values, wherein the message priority values are determined using a traffic congestion policy, wherein a first message priority value associated with a first message is determined using message attributes of the first message and a path-related congestion indicator associated with the first message, wherein the path-related congestion indicator comprises a color value assigned by a network operator based on a path to be followed by the first message; and
dropping a first message associated with the application using the first message priority value, the message rate of the messages having the first message priority value, the first congestion level, and a message drop algorithm, wherein the message drop algorithm is determined using a traffic congestion policy.
2. The method of claim 1, wherein the first message priority value associated with the first message is calculated by concatenating one or more bits from a message attribute and one or more bits from the path-related congestion indicator.
3. The method of claim 1, wherein the traffic congestion policy is hot-pluggable.
4. The method of claim 1, wherein determining the first congestion level comprises determining an amount of messages queued for processing.
5. The method of claim 1, wherein the message drop algorithm limits a message rate of the first set of messages associated with the first priority value by dropping a first amount of messages during the first congestion level.
6. The method of claim 5, wherein the message drop algorithm limits the message rate of the first set of messages by dropping a second amount of messages during the second congestion level, wherein the second amount is greater than the first amount.
7. The method of claim 5, wherein the first amount is based on a first percentage associated with the first set of messages or the first congestion level, wherein the first percentage is different from a second percentage associated with the second set of messages or the second congestion level.
8. The method of claim 5, wherein the first amount is based on a first total message rate allowed for the first set of messages or the first congestion level, wherein the first total message rate allowed is different from a second message rate allowed for the second set of messages or the second congestion level.
9. A system for dropping messages during a congestion event, the system comprising:
at least one processor; and
a traffic manager, wherein the traffic manager is implemented using the at least one processor, wherein the traffic manager is configured to: register a traffic congestion policy for handling traffic associated with an application during congestion; determine a first congestion level associated with a congestion event; determine a message rate for messages associated with similar message priority values, wherein the message priority values are determined using a traffic congestion policy, wherein a first message priority value associated with a first message is determined using message attributes of the first message and a path-related congestion indicator associated with the first message, wherein the path-related congestion indicator comprises a color value assigned by a network operator based on a path to be followed by the first message; and drop the first message associated with the application using the first message priority value, the message rate of the messages having the first message priority value, the first congestion level, and a message drop algorithm, wherein the message drop algorithm is determined using a traffic congestion policy.
10. The system of claim 9, wherein the first message priority value associated with the first message is calculated by concatenating one or more bits from a message attribute and one or more bits from the path-related congestion indicator.
11. The system of claim 9, wherein the traffic congestion policy is hot-pluggable.
12. The system of claim 9, wherein determining the first congestion level comprises determining an amount of messages queued for processing.
13. The system of claim 9, wherein the message drop algorithm limits a message rate of the first set of messages associated with the first priority value by dropping a first amount of messages during the first congestion level.
14. The system of claim 13, wherein the message drop algorithm limits the message rate of the first set of messages by dropping a second amount of messages during the second congestion level, wherein the second amount is greater than the first amount.
15. The system of claim 13, wherein the first amount is based on a first percentage associated with the first set of messages or the first congestion level, wherein the first percentage is different from a second percentage associated with the second set of messages or the second congestion level.
16. The system of claim 13, wherein the first amount is based on a first total message rate allowed for the first set of messages or the first congestion level, wherein the first total message rate allowed is different from a second message rate allowed for the second set of messages or the second congestion level.
17. A non-transitory computer-readable medium comprising computer-executable instructions that, when executed by at least one processor of a computer, cause the computer to perform steps comprising:
registering a traffic congestion policy for handling traffic associated with an application during congestion;
determining a first congestion level associated with a congestion event;
determining a message rate for messages associated with similar message priority values, wherein the message priority values are determined using a traffic congestion policy, wherein a first message priority value associated with a first message is determined using message attributes of the first message and a path-related congestion indicator associated with the first message, wherein the path-related congestion indicator comprises a color value assigned by a network operator based on a path to be followed by the first message; and
dropping a first message associated with the application using the first message priority value, the message rate for messages having the first message priority value, the first congestion level, and a message drop algorithm, wherein the message drop algorithm is determined using a traffic congestion policy.
18. A non-transitory computer-readable medium comprising computer-executable instructions that, when executed by at least one processor of a computer, cause the computer to perform the steps of the method of any one of claims 2-8.
19. An apparatus for dropping messages during a congestion event, comprising means for performing the steps of the method of any of claims 1-8.
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/273,069 US10291539B2 (en) | 2016-09-22 | 2016-09-22 | Methods, systems, and computer readable media for discarding messages during a congestion event |
US15/273,069 | 2016-09-22 | ||
PCT/US2017/051351 WO2018057368A1 (en) | 2016-09-22 | 2017-09-13 | Methods, systems, and computer readable media for discarding messages during a congestion event |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109792409A CN109792409A (en) | 2019-05-21 |
CN109792409B true CN109792409B (en) | 2022-05-24 |
Family
ID=59966866
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201780058408.6A Active CN109792409B (en) | 2016-09-22 | 2017-09-13 | Methods, systems, and computer readable media for dropping messages during congestion events |
Country Status (5)
Country | Link |
---|---|
US (1) | US10291539B2 (en) |
EP (1) | EP3516833B1 (en) |
JP (1) | JP7030815B2 (en) |
CN (1) | CN109792409B (en) |
WO (1) | WO2018057368A1 (en) |
Families Citing this family (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10291539B2 (en) | 2016-09-22 | 2019-05-14 | Oracle International Corporation | Methods, systems, and computer readable media for discarding messages during a congestion event |
US11044159B2 (en) * | 2018-03-09 | 2021-06-22 | International Business Machines Corporation | Application-level, cooperative minimization of offlining incidents in an internet of things (IoT) environment |
US11271846B2 (en) | 2018-10-22 | 2022-03-08 | Oracle International Corporation | Methods, systems, and computer readable media for locality-based selection and routing of traffic to producer network functions (NFs) |
US10778527B2 (en) | 2018-10-31 | 2020-09-15 | Oracle International Corporation | Methods, systems, and computer readable media for providing a service proxy function in a telecommunications network core using a service-based architecture |
US11252093B2 (en) | 2019-06-26 | 2022-02-15 | Oracle International Corporation | Methods, systems, and computer readable media for policing access point name-aggregate maximum bit rate (APN-AMBR) across packet data network gateway data plane (P-GW DP) worker instances |
US11159359B2 (en) * | 2019-06-26 | 2021-10-26 | Oracle International Corporation | Methods, systems, and computer readable media for diameter-peer-wide egress rate limiting at diameter relay agent (DRA) |
US11323413B2 (en) | 2019-08-29 | 2022-05-03 | Oracle International Corporation | Methods, systems, and computer readable media for actively discovering and tracking addresses associated with 4G service endpoints |
US11082393B2 (en) | 2019-08-29 | 2021-08-03 | Oracle International Corporation | Methods, systems, and computer readable media for actively discovering and tracking addresses associated with 5G and non-5G service endpoints |
US11102138B2 (en) | 2019-10-14 | 2021-08-24 | Oracle International Corporation | Methods, systems, and computer readable media for providing guaranteed traffic bandwidth for services at intermediate proxy nodes |
US11425598B2 (en) | 2019-10-14 | 2022-08-23 | Oracle International Corporation | Methods, systems, and computer readable media for rules-based overload control for 5G servicing |
CN112804156A (en) * | 2019-11-13 | 2021-05-14 | 深圳市中兴微电子技术有限公司 | Congestion avoidance method and device and computer readable storage medium |
US11224009B2 (en) | 2019-12-30 | 2022-01-11 | Oracle International Corporation | Methods, systems, and computer readable media for enabling transport quality of service (QoS) in 5G networks |
CN111464358B (en) * | 2020-04-02 | 2021-08-20 | 深圳创维-Rgb电子有限公司 | Message reporting method and device |
US11528334B2 (en) | 2020-07-31 | 2022-12-13 | Oracle International Corporation | Methods, systems, and computer readable media for preferred network function (NF) location routing using service communications proxy (SCP) |
US11290549B2 (en) | 2020-08-24 | 2022-03-29 | Oracle International Corporation | Methods, systems, and computer readable media for optimized network function (NF) discovery and routing using service communications proxy (SCP) and NF repository function (NRF) |
US11483694B2 (en) | 2020-09-01 | 2022-10-25 | Oracle International Corporation | Methods, systems, and computer readable media for service communications proxy (SCP)-specific prioritized network function (NF) discovery and routing |
US11570262B2 (en) | 2020-10-28 | 2023-01-31 | Oracle International Corporation | Methods, systems, and computer readable media for rank processing for network function selection |
US11470544B2 (en) | 2021-01-22 | 2022-10-11 | Oracle International Corporation | Methods, systems, and computer readable media for optimized routing of messages relating to existing network function (NF) subscriptions using an intermediate forwarding NF repository function (NRF) |
US11496954B2 (en) | 2021-03-13 | 2022-11-08 | Oracle International Corporation | Methods, systems, and computer readable media for supporting multiple preferred localities for network function (NF) discovery and selection procedures |
US11895080B2 (en) | 2021-06-23 | 2024-02-06 | Oracle International Corporation | Methods, systems, and computer readable media for resolution of inter-network domain names |
US11849506B2 (en) | 2021-10-08 | 2023-12-19 | Oracle International Corporation | Methods, systems, and computer readable media for routing inter-public land mobile network (inter-PLMN) messages related to existing subscriptions with network function (NF) repository function (NRF) using security edge protection proxy (SEPP) |
WO2023210957A1 (en) * | 2022-04-28 | 2023-11-02 | Lg Electronics Inc. | Method and apparatus for performing data transmissions based on congestion indicator in wireless communication system |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105474683A (en) * | 2013-09-17 | 2016-04-06 | 英特尔Ip公司 | Congestion measurement and reporting for real-time delay-sensitive applications |
Family Cites Families (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5084816A (en) | 1987-11-25 | 1992-01-28 | Bell Communications Research, Inc. | Real time fault tolerant transaction processing system |
US5253248A (en) | 1990-07-03 | 1993-10-12 | At&T Bell Laboratories | Congestion control for connectionless traffic in data networks via alternate routing |
GB9521831D0 (en) | 1995-10-25 | 1996-01-03 | Newbridge Networks Corp | Crankback and loop detection in ATM SVC routing |
US6212164B1 (en) | 1996-06-19 | 2001-04-03 | Hitachi, Ltd. | ATM switch congestion control method of connection setup requests and priority control method for receiving connection requests |
US5933474A (en) | 1996-12-24 | 1999-08-03 | Lucent Technologies Inc. | Telecommunications call preservation in the presence of control failure and high processing load |
US5915013A (en) | 1997-02-04 | 1999-06-22 | At&T Corp | Method and system for achieving routing of signalling information |
US6018515A (en) | 1997-08-19 | 2000-01-25 | Ericsson Messaging Systems Inc. | Message buffering for prioritized message transmission and congestion management |
US6115383A (en) | 1997-09-12 | 2000-09-05 | Alcatel Usa Sourcing, L.P. | System and method of message distribution in a telecommunications network |
US6747955B1 (en) | 1998-11-13 | 2004-06-08 | Alcatel Usa Sourcing, L.P. | Method and system for congestion control in a telecommunications network |
US6704287B1 (en) | 1999-02-26 | 2004-03-09 | Nortel Networks Limited | Enabling smart logging for webtone networks and services |
US7318091B2 (en) | 2000-06-01 | 2008-01-08 | Tekelec | Methods and systems for providing converged network management functionality in a gateway routing node to communicate operating status information associated with a signaling system 7 (SS7) node to a data network node |
US6606379B2 (en) | 2001-06-01 | 2003-08-12 | Tekelec | Methods and systems for collapsing signal transfer point (STP) infrastructure in a signaling network |
US6996225B1 (en) | 2002-01-31 | 2006-02-07 | Cisco Technology, Inc. | Arrangement for controlling congestion in an SS7 signaling node based on packet classification |
US7324451B2 (en) | 2003-10-30 | 2008-01-29 | Alcatel | Aggregated early message discard for segmented message traffic in a communications network |
US8903074B2 (en) | 2005-03-04 | 2014-12-02 | Tekelec Global, Inc. | Methods, systems, and computer program products for congestion-based routing of telecommunications signaling messages |
US20070237074A1 (en) | 2006-04-06 | 2007-10-11 | Curry David S | Configuration of congestion thresholds for a network traffic management system |
JP2008205721A (en) * | 2007-02-19 | 2008-09-04 | Hitachi Communication Technologies Ltd | Data transfer device, base station and data transfer method |
US8547846B1 (en) | 2008-08-28 | 2013-10-01 | Raytheon Bbn Technologies Corp. | Method and apparatus providing precedence drop quality of service (PDQoS) with class-based latency differentiation |
US20110239226A1 (en) | 2010-03-23 | 2011-09-29 | Cesare Placanica | Controlling congestion in message-oriented middleware |
US20110261695A1 (en) * | 2010-04-23 | 2011-10-27 | Xiaoming Zhao | System and method for network congestion control |
CN101984608A (en) * | 2010-11-18 | 2011-03-09 | 中兴通讯股份有限公司 | Method and system for preventing message congestion |
US9790107B2 (en) | 2011-10-31 | 2017-10-17 | Innovation Services, Inc. | Apparatus and method for generating metal ions in a fluid stream |
US9106545B2 (en) | 2011-12-19 | 2015-08-11 | International Business Machines Corporation | Hierarchical occupancy-based congestion management |
WO2014153130A1 (en) * | 2013-03-14 | 2014-09-25 | Sirius Xm Radio Inc. | High resolution encoding and transmission of traffic information |
US9571402B2 (en) | 2013-05-03 | 2017-02-14 | Netspeed Systems | Congestion control and QoS in NoC by regulating the injection traffic |
EP3120605B1 (en) | 2014-03-17 | 2020-01-08 | Telefonaktiebolaget LM Ericsson (publ) | Congestion level configuration for radio access network congestion handling |
US10291539B2 (en) | 2016-09-22 | 2019-05-14 | Oracle International Corporation | Methods, systems, and computer readable media for discarding messages during a congestion event |
- 2016-09-22 US US15/273,069 patent/US10291539B2/en active Active
- 2017-09-13 EP EP17772559.5A patent/EP3516833B1/en active Active
- 2017-09-13 CN CN201780058408.6A patent/CN109792409B/en active Active
- 2017-09-13 WO PCT/US2017/051351 patent/WO2018057368A1/en unknown
- 2017-09-13 JP JP2019536462A patent/JP7030815B2/en active Active
Also Published As
Publication number | Publication date |
---|---|
US20180083882A1 (en) | 2018-03-22 |
CN109792409A (en) | 2019-05-21 |
JP7030815B2 (en) | 2022-03-07 |
EP3516833B1 (en) | 2021-11-10 |
WO2018057368A1 (en) | 2018-03-29 |
JP2019532600A (en) | 2019-11-07 |
EP3516833A1 (en) | 2019-07-31 |
US10291539B2 (en) | 2019-05-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109792409B (en) | 2022-05-24 | Methods, systems, and computer readable media for dropping messages during congestion events |
US9699045B2 (en) | Methods, systems, and computer readable media for performing diameter overload control | |
EP3146682B1 (en) | Method and system for managing flows in a network | |
CN106062726B (en) | Flow aware buffer management for data center switches | |
US9819590B2 (en) | Method and apparatus for notifying network abnormality | |
US10616802B2 (en) | Methods, systems and computer readable media for overload and flow control at a service capability exposure function (SCEF) | |
CN111788803B (en) | Flow management in a network | |
JP5673805B2 (en) | Network device, communication system, abnormal traffic detection method and program | |
US11888745B2 (en) | Load balancer metadata forwarding on secure connections | |
JP6923809B2 (en) | Communication control system, network controller and computer program | |
CN112889029A (en) | Methods, systems, and computer readable media for lock-free communication processing at a network node | |
WO2017035717A1 (en) | Distributed denial of service attack detection method and associated device | |
CN107241280A (en) | The dynamic prioritization of network traffics based on prestige | |
Fredj et al. | Measurement-based admission control for elastic traffic | |
JP2008048131A (en) | P2p traffic monitoring and control system, and method therefor | |
CN105704057B (en) | The method and apparatus for determining the type of service of burst port congestion packet loss | |
RU2568784C1 (en) | Method of controlling data streams in distributed information systems | |
EP3471351B1 (en) | Method and device for acquiring path information about data packet | |
JP4586183B2 (en) | BAND MANAGEMENT METHOD, BAND MANAGEMENT DEVICE, AND PROGRAM FOR THE SAME IN COMMUNICATION NETWORK | |
JP2017098605A (en) | Communication control device, communication control method, and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||