EP2074760A1 - Method and apparatus for use in a communications network - Google Patents

Method and apparatus for use in a communications network

Info

Publication number
EP2074760A1
Authority
EP
European Patent Office
Prior art keywords
node
leak rate
determined
rate
response delay
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP06819403A
Other languages
German (de)
English (en)
Inventor
Gergely PONGRÁCZ
Dániel KRUPP
Péter VADERNA
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Telefonaktiebolaget LM Ericsson AB
Original Assignee
Telefonaktiebolaget LM Ericsson AB
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from PCT/EP2006/067204 external-priority patent/WO2008043390A1/fr
Application filed by Telefonaktiebolaget LM Ericsson AB filed Critical Telefonaktiebolaget LM Ericsson AB
Priority to EP06819403A priority Critical patent/EP2074760A1/fr
Publication of EP2074760A1 publication Critical patent/EP2074760A1/fr
Withdrawn legal-status Critical Current

Links

Definitions

  • the present invention relates to a method and apparatus for use in a communications network.
  • a Next Generation Network is a packet-based network able to provide services including Telecommunication Services and able to make use of multiple broadband, QoS-enabled transport technologies and in which service-related functions are independent from underlying transport-related technologies. It offers unrestricted access by users to different service providers. It supports generalized mobility which will allow consistent and ubiquitous provision of services to users.
  • the IP Multimedia Subsystem is a standardised control plane for the NGN architecture capable of handling Internet-based multimedia services, defined by the European Telecommunications Standards Institute (ETSI) and the 3rd Generation Partnership Project (3GPP). IP Multimedia services provide a dynamic combination of voice, video, messaging, data, etc. within the same session. As the number of basic applications, and of the media it is possible to combine, grows, the number of services offered to the end users will grow, and the inter-personal communication experience will be enriched.
  • the IP Multimedia Subsystem (IMS) is a new subsystem added to the UMTS architecture in Release 5, for supporting traditional telephony as well as new multimedia services.
  • Controller nodes like Media Gateway Controllers (also known as call servers or call agents) in NGN, or Mobile Switching Centers (MSCs) and Radio Network Controllers (RNCs) in Universal Mobile Telecommunications System (UMTS) networks, have significantly higher processing capacity than access nodes or media gateways. Because of that, there are scenarios where signalling overload in a specified access node caused by the controller node is likely.
  • Media Gateway Controllers also known as call servers or call agents
  • MSCs Mobile Switching Centers
  • RNCs Radio Network Controllers
  • UMTS Universal Mobile Telecommunications System
  • Signalling overload causes the affected access node to respond with an increased delay. If overload continues, loss of messages or rejection will occur, and the access node's performance will degrade, or in the worst case the node will crash entirely.
  • the access node is assumed to have an internal overload protection mechanism that is able to reject a part of the arriving stream of signalling messages in order to avoid a complete crash, but even in this case the access node throughput will drop if its processing capacity is significantly lower than the offered load. This is illustrated in Figure 1, which shows access node behaviour in different load scenarios.
  • the offered load can be controlled by an external load control function. It is desirable to provide such an external load control function that meets as many of the following requirements as possible:
  • a method of regulating a load placed on a first node of a telecommunications network caused by messages sent to the first node by a second node of the network according to a signalling protocol between the first node and the second node comprising: using a leaky bucket restrictor associated with the second node to regulate the load, the leaky bucket having an adjustable leak rate; determining a roundtrip response delay relating to at least some of the messages during a measurement period; and adjusting the leak rate in dependence upon the determined response delay.
  • the method may comprise determining the number of or rate at which messages are delayed by more than a predetermined threshold during the measurement period.
  • Adjusting the leak rate may comprise determining whether the first node is in an overloaded condition.
  • Determining whether the first node is in an overloaded condition may be performed in dependence upon the number of or rate at which messages are delayed by more than the predetermined threshold during the measurement period.
  • Determining whether the first node is in an overloaded condition may comprise comparing the number of or rate at which messages are delayed by more than the predetermined threshold during the measurement period with a predetermined target number or rate.
  • Adjusting the leak rate may comprise, if it is determined that the first node is in an overloaded condition, determining whether the first node has entered the overloaded condition since it was previously determined whether the first node is in an overloaded condition, and if it is determined that the first node is not in an overloaded condition, determining whether the first node has left the overloaded condition since it was previously determined whether the first node is in an overloaded condition.
  • the method may comprise, if it is determined that the first node has entered the overloaded condition since it was previously determined whether the first node is in an overloaded condition, decreasing the leak rate.
  • the method may comprise, if it is determined that the first node has left the overloaded condition since it was previously determined whether the first node is in an overloaded condition, increasing the leak rate.
  • the method may comprise counting the number of consecutive adjustment steps performed in which it is not determined that the first node has entered or has left the overloaded condition, as the case may be, since it was previously determined whether the first node is in an overloaded condition.
  • the method may comprise adjusting the leak rate in dependence upon the count.
  • the method may comprise increasing or decreasing the leak rate by an amount proportional to 2^count.
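  • As an illustration only (the notation is ours, not taken from the claims), this adjustment rule can be written as a formula, where count is the number of consecutive adjustment steps without a change of overload state and unit is a configurable step size:

```latex
\[
  \mathrm{LeakRate}_{\mathrm{new}}
    \;=\;
  \mathrm{LeakRate}_{\mathrm{old}} \;\pm\; 2^{\,\mathrm{count}} \cdot \mathrm{unit}
\]
```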
  • the method may comprise adjusting the leak rate in dependence upon a response delay determined for a previous measurement period.
  • the method may comprise increasing the leak rate if it is determined that the response delay determined for the previous measurement period is greater than the present response delay by more than a predetermined amount.
  • the method may comprise decreasing the leak rate otherwise.
  • the method may comprise decreasing the leak rate by an amount less than a previous adjustment made to the leak rate.
  • the method may comprise decreasing the leak rate if it is determined that the present response delay is greater than the response delay determined for the previous measurement period by more than a predetermined amount.
  • the method may comprise increasing the leak rate otherwise.
  • the method may comprise increasing the leak rate by an amount less than a previous adjustment made to the leak rate.
  • the method may comprise adjusting the leak rate in dependence upon the response delay determined for the previous measurement period in such a manner only if it is not determined that the first node has entered or has left the overloaded condition, as the case may be, since it was previously determined whether the first node is in an overloaded condition.
  • the method may comprise adjusting the leak rate in dependence upon the fill of the leaky bucket.
  • the method may comprise not performing at least part of an adjustment step if it is determined that the fill of the leaky bucket is above a predetermined level.
  • the determined response delay may be an average response delay for the at least some messages during the measurement period.
  • the method may comprise adjusting the leak rate within predetermined bounds.
  • the method may comprise performing the adjusting step periodically.
  • messages rejected by the leaky bucket restrictor may be dropped or queued in dependence upon the type of message.
  • the second node may be a controller node and the first node may be a controlled node.
  • the second node may be a master node and the first node may be a slave node.
  • the second node may be a gateway controller node and the first node may be a gateway node.
  • the signalling protocol may be a request-reply based signalling protocol.
  • the signalling protocol may be the H.248 protocol.
  • the signalling protocol may be the Q.2630 protocol.
  • the signalling protocol may be the Session Initiation Protocol, SIP.
  • the signalling protocol may be the Media Gateway Control Protocol.
  • the signalling protocol may be the Simple Gateway Control Protocol.
  • the signalling protocol may be the Internet Protocol Device Control.
  • the network may be a Next Generation Network.
  • the network may be a 3G Network.
  • LeakRate is adaptively changed by the overload control.
  • an apparatus for use as or in a second node of a telecommunications network comprising means for: using a leaky bucket restrictor to regulate a load placed on a first node of the network caused by messages sent to the first node by the second node according to a signalling protocol between the first node and the second node, the leaky bucket having an adjustable leak rate; determining a roundtrip response delay relating to at least some of the messages during a measurement period; and adjusting the leak rate in dependence upon the determined response delay.
  • the program may be carried on a carrier medium.
  • the carrier medium may be a storage medium.
  • the carrier medium may be a transmission medium.
  • an apparatus programmed by a program according to the third aspect of the present invention.
  • a storage medium containing a program according to the third aspect of the present invention.
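  • To make the claimed method more concrete, the following is a minimal sketch in Python (all class, method and parameter names are our own assumptions, not taken from the patent) of the control loop at the second (originator) node: offered messages pass through a leaky bucket restrictor, roundtrip response delays are sampled during a measurement period, and the leak rate is then adjusted in dependence upon the determined delays. A leaky bucket restrictor and a rate-setting routine are sketched in more detail further below.

```python
import time


class DelayBasedOverloadControl:
    """Sketch of the control loop at the originator (second) node."""

    def __init__(self, restrictor, rate_setter, measurement_period=1.0):
        self.restrictor = restrictor        # leaky bucket with an adjustable leak rate
        self.rate_setter = rate_setter      # any routine: (old_rate, delays) -> new_rate
        self.measurement_period = measurement_period
        self.delays = []                    # roundtrip response delays seen this period

    def send(self, message, transport):
        """Offer a message to the restrictor; if admitted, run the
        request/reply transaction and record its roundtrip response delay."""
        if not self.restrictor.admit(message):
            return False                    # rejected: dropped or queued by message type
        start = time.monotonic()
        transport.request(message)          # blocking request/reply, for simplicity
        self.delays.append(time.monotonic() - start)
        return True

    def end_of_measurement_period(self):
        """Adjust the leak rate in dependence upon the determined delays."""
        self.restrictor.leak_rate = self.rate_setter(
            self.restrictor.leak_rate, self.delays)
        self.delays = []
```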
  • Figure 2 illustrates one example of an operational environment to which an embodiment of the present invention can be applied: Next Generation Networks;
  • Figure 3 illustrates another example of an operational environment to which an embodiment of the present invention can be applied: 3G networks;
  • Figure 4 is a block diagram for explaining the context of a delay based overload control method according to an embodiment of the present invention;
  • Figure 5 illustrates an overall state machine of a method embodying the present invention;
  • Figure 6 is a flowchart illustrating a rate setting part of a method embodying the present invention.
  • Figure 7 illustrates simulated results of using a rate control method according to an embodiment of the present invention in the case of fully controlled traffic.
  • Figure 8 illustrates simulated results of using a rate control method according to an embodiment of the present invention to show the sharing of available rate in the case of not fully controlled traffic.
  • a disadvantage with the drop and resend approach is that dropping a signalling message results in high end-to-end delay from the subscriber's perspective. Moreover, a number of nodes typically need to cooperate to create a voice call; if one node drops a message, the processing on other nodes may cause unnecessary load, or may block resources even when they do not need to be blocked.
  • TCP Transmission Control Protocol
  • SSCOP Service Specific Connection Oriented Protocol
  • a disadvantage with the window-based approach is that of granularity.
  • the minimal window size is one message per resource user. In the case of low capacity and/or a large number of resource users, even a window of size one can cause high delay in the processing node.
  • the nodes have to maintain sequence numbers to send or acknowledge. Transmitting sequence numbers or credits requires the same protocol to be implemented on the receiver side.
  • Q.2630 (also known as Q.AAL2; see ITU-T recommendation Q.2630.3) can rely on the SSCOP window mechanism, but this protection is only valid on the SAAL layer (Signalling ATM Adaptation Layer), and the AAL2 layer (ATM Adaptation Layer) may still be overloaded.
  • a matching between AAL2 processes and the SSCOP window is required on the peer node to use it for AAL2 signalling.
  • an overloaded-entity-controlled congestion handling approach, e.g. H.248.10 (Media Gateway Resource Congestion Handling Package: ITU-T H.248 Annex M.2), works as follows:
  • the overloaded entity calculates its real processing capacity and signals it to the connected external nodes.
  • the explicit rate signalled by the overloaded entity is then applied in the external nodes thus decreasing the load.
  • Regulation may use leaky bucket or percentage-based rate control.
  • an H.248.10-like notification approach has the disadvantage that the overloaded entity must take care of measuring its load, calculating its available capacity and signalling it to the non-overloaded entities. This causes even more load, and the notification has to be very fast and precise.
  • H.248.10 uses a percentage-based restrictor, where a fraction of the incoming requests passes the restrictor, which can also cause bursts inside the system.
  • a disadvantage with the congestion-signal-based approach is that the node also needs to monitor its load characteristics and signal an overload indication in case of overload.
  • the control is split between two nodes, and the far-end node can only rely on the number of overload indication messages it receives, and nothing more.
  • An embodiment of the present invention provides a Delay-based Overload Control (DOC) mechanism that uses round trip delay measurements as overload indication; it can be applied to control the load of request-reply based signalling protocols, for example the Session Initiation Protocol (SIP; see IETF RFC 3261), H.248 (see the H.248 v2 protocol specification: draft-ietf-megaco-h248v2-04.txt), and Q.2630 (also known as Q.AAL2 where AAL means ATM Adaptation Layer; see ITU-T recommendation Q.2630.3).
  • SIP Session Initiation Protocol
  • H.248 see the H.248 v2 protocol specification: draft-ietf-megaco-h248v2-04.txt
  • Q.2630 also known as Q.AAL2 where AAL means ATM Adaptation Layer; see ITU-T recommendation Q.2630.3
  • Figures 2 and 3 illustrate example operational environments.
  • the node-node signalling can be controlled with a delay based overload control method embodying the present invention.
  • the H.248 messages might be subject to control
  • the Q.2630 messages might be subject to control.
  • the signalling link is between nodes with different capacity. In this case the originator node should control its signalling rate. Examples are:
    o Call server to access gateway (AGW) links (NGN)
    o RNC to Node-B links (3G)
  • the algorithm measures the response delay of reply messages on the originator nodes. It is not necessary to monitor the delay of every reply packet, but some minimal sampling frequency is required to reduce the variance of the measurement.
  • the originator nodes use leaky bucket restrictors to control the amount of signalling load they send to the processing node. Traffic rejected by the leaky bucket can be dropped or queued, depending on the type of the message. For example, call setups could be rejected, while releases could be queued.
  • the load control entity can adapt the leak rate periodically, thus finding the processing capacity of the resource server. This capacity may change in time, as there are other tasks consuming the common resource, so the dynamic behavior of the adaptation algorithm enables successful control.
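  • The description gives no code for the restrictor itself; the following is a minimal sketch of a leaky bucket restrictor with an adjustable leak rate, assuming a conventional continuously draining bucket (the names, the splash size and the drop-versus-queue policy shown are illustrative assumptions, not the patented implementation):

```python
import time
from collections import deque


class LeakyBucketRestrictor:
    """Leaky bucket: each admitted message adds one 'splash' to the bucket,
    which drains at leak_rate splashes per second. A message is admitted only
    if the bucket has room; otherwise it is dropped or queued by type."""

    def __init__(self, leak_rate, bucket_size, splash=1.0):
        self.leak_rate = leak_rate          # adjusted by the overload control
        self.bucket_size = bucket_size
        self.splash = splash
        self.fill = 0.0
        self.last_drain = time.monotonic()
        self.queue = deque()                # e.g. release messages awaiting admission

    def _drain(self):
        now = time.monotonic()
        self.fill = max(0.0, self.fill - self.leak_rate * (now - self.last_drain))
        self.last_drain = now

    @property
    def fill_ratio(self):
        """Current fill level, used later by the bucket fill check."""
        self._drain()
        return self.fill / self.bucket_size

    def admit(self, message):
        """Return True if the message may be sent now."""
        self._drain()
        if self.fill + self.splash <= self.bucket_size:
            self.fill += self.splash
            return True
        # Rejected traffic: drop call setups, queue releases (example policy).
        if getattr(message, "queueable", False):
            self.queue.append(message)
        return False
```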
  • a rate setting routine is performed when the algorithm finishes a measurement cycle and re-enters the Wait state. It includes stability criteria and an overshoot protection mechanism, and it changes the leak rate with an exponential function of the number of consecutive measurement cycles in which the node was in the same state (overload or non-overload).
  • the algorithm controls the leak rate of a leaky bucket based on the roundtrip response delay of controllable requests, where the roundtrip response delay is the time it takes from sending a message to receiving a response, or some measure associated with or relating thereto.
  • Controllable requests are the ones that can be dropped or queued because of load regulation, e.g. the ADD request in H.248 or ERQ in Q.2630.
  • An example for non-controllable requests is the H.248 Modify request during a call setup, which has to be processed regardless of the load on the node, as the context on the gateway already exists.
  • the response delay includes not only the service time and the queuing delay, but also the link delay that occurs on the links between the nodes.
  • the algorithm counts those packets whose response delay was above a certain threshold (measurement threshold) and calculates the rate of these packets. At the end of the measurement period it computes the difference between the predefined target and the rate of delayed packets to decide whether the controlled node is in overload or not.
  • the operator specifies a target (delayed messages per second) for each protected node. This parameter plays an important role in the operation of the algorithm. At each controlling entity, the difference between the rate of the messages having a larger delay than the defined threshold and the overload target is calculated.
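  • A short sketch of this decision follows (the parameters mirror the measurement threshold and per-node target described above; the function names are illustrative assumptions): count the replies whose roundtrip delay exceeded the measurement threshold during the period, convert the count to a rate, and compare it with the operator-defined target.

```python
def overdelayed_rate(delays, d_mth, period_length):
    """Rate (messages per second) of replies whose roundtrip response delay
    exceeded the measurement threshold D_mth during the measurement period."""
    delayed = sum(1 for d in delays if d > d_mth)
    return delayed / period_length


def is_overloaded(delays, d_mth, period_length, target_rate):
    """The controlled node is judged to be in overload when the over-delayed
    rate exceeds the operator-defined target (delayed messages per second)."""
    return overdelayed_rate(delays, d_mth, period_length) > target_rate
```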
  • Max-min fairness is defined in the following way: a rate control is max-min fair if the leak rate (and accordingly the admitted call rate) of an originator node, l_i, cannot be increased if this causes another node's leak rate l_j to drop, where l_j ≤ l_i.
  • Delay based overload control can be sensitive to the link delays occurring on the links between the controllers and the controlled node.
  • link delay is usually much smaller than the typical measurement threshold, and the variance of it is minimal. Even if narrowband links are used, the delay is nearly constant. Knowing these constant delays, the operators can easily configure the algorithm's thresholds properly.
  • the algorithm measures the response delay D of each transaction (or of a number of transactions). If D > D_eth, i.e. if the delay D of the packet was above an entry threshold D_eth, then a state transition occurs to the Wait state W.
  • the entry threshold should normally be higher than the measurement threshold, and should normally be high enough to prevent the algorithm from accidentally leaving the Normal state N.
  • a timer T1 and the initial leak rate l_1 are set.
  • the algorithm then enters the Measurement state M.
  • the response delay D of reply messages is checked, and if D > D_mth, i.e. if the delay D of a packet was above the measurement threshold D_mth, then a delayed-message counter is increased.
  • the rate of the packets with a delay higher than D_mth is calculated (the "overdelayed" rate).
  • the algorithm sets a new leak rate using the following parameters:
  • the algorithm decides that the overload period is finished, and it jumps to the Pending state P.
  • the algorithm measures the delay as in Normal state N, and if it is larger than the measurement threshold, it re-initializes the leaky bucket with the last leak rate where there was no overload and enters the Wait state W. After a period T in the Pending state without this occurring, the algorithm jumps back to the Normal state N. This state is only needed to avoid false turnoff of the algorithm when there is a short break in the overload period. However, it is reasonable to set the length of timer T to zero, thus effectively removing this state. A suggested value for T is 30-60 seconds.
  • a further high threshold D_hth for the response delay is defined.
  • if the measured response delay exceeds D_hth, the algorithm restarts in the Wait state W and applies the initial leak rate. This can happen in any state, and these asynchronous transitions are represented in Figure 5 by dotted-line arrows.
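  • The states and transitions described above can be summarised as a small state machine. The sketch below is an interpretation under stated assumptions, not the patented implementation: the Wait-to-Measurement transition is assumed to occur when T1 expires, and the high-threshold restart is modelled as a check on every delay sample.

```python
NORMAL, WAIT, MEASUREMENT, PENDING = "N", "W", "M", "P"


class DocStateMachine:
    """Sketch of the Normal (N), Wait (W), Measurement (M), Pending (P) cycle
    with thresholds D_eth, D_mth, D_hth and timers T1 and T."""

    def __init__(self, d_eth, d_mth, d_hth, initial_leak_rate, t1, t_pending):
        self.d_eth, self.d_mth, self.d_hth = d_eth, d_mth, d_hth
        self.initial_leak_rate = initial_leak_rate
        self.t1, self.t_pending = t1, t_pending      # T suggested 30-60 s (or 0)
        self.state = NORMAL
        self.leak_rate = None                        # bucket disconnected in N and P
        self.last_good_rate = initial_leak_rate      # last leak rate without overload,
                                                     # updated by the rate-setting routine

    def on_reply_delay(self, d):
        if d > self.d_hth:                           # asynchronous restart from any state
            self.state, self.leak_rate = WAIT, self.initial_leak_rate
        elif self.state == NORMAL and d > self.d_eth:
            self.state, self.leak_rate = WAIT, self.initial_leak_rate
        elif self.state == PENDING and d > self.d_mth:
            self.state, self.leak_rate = WAIT, self.last_good_rate

    def on_timer(self, name):
        if self.state == WAIT and name == "T1":
            self.state = MEASUREMENT                 # start counting over-delayed replies
        elif self.state == PENDING and name == "T":
            self.state = NORMAL                      # no overload seen for T seconds

    def on_measurement_end(self, enter_pending):
        """Called when a measurement cycle finishes; the rate-setting routine
        (sketched further below) sets the new leak rate and reports whether the
        no-overload counter has exceeded its limit (enter_pending)."""
        self.state = PENDING if enter_pending else WAIT
```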
  • at the end of a measurement cycle the algorithm first decides whether the system is in overload or not. It then checks whether this is the first cycle in that state; if so, different actions are needed. The next step is a stability check: the average delay measured in the previous measurement cycle is compared with the present one, and the rate is adjusted if the difference is too large. Finally, the bucket fill check is executed: if the bucket is not nearly full (i.e. the restrictor is not actually restricting traffic), the rate of the restrictor should not be increased further.
  • the rate setting algorithm will now be described in detail with reference to Figure 6. The following actions are taken at the end of the Measurement state M.
  • Step S1 decides whether or not the node is in overload. This is done by comparing the delayed-packet rate to the target rate set for the given originator node.
  • If the node is determined in step S1 to be in overload, the following actions are taken:
  • First-time check: the algorithm checks (step S2) whether it has just entered overload. If so, it decreases the leak rate by half of the last change, as it has probably overshot the capacity (step S3). It also resets the counter in order to continue the rate setting with finer granularity, and jumps to the bucket fill check (step S14).
  • Delay change check: the average response delay D_avg measured during the Measurement period M is compared to the last measured value D_last (step S4). If the delay has dropped by more than a specified amount ε (suggested: 10-20% of the measurement threshold) since the last measurement period, the leak rate is probably already close to or below the capacity of the overloaded entity, so instead of decreasing it further, it is increased by half of the last change (step S5). The counter is also reset, allowing finer granularity in the following cycles. The algorithm continues at the bucket fill check (step S14).
  • Leak rate change: if neither of the above checks applies, the algorithm changes the leak rate (step S6) according to the following formula:
  • NewLeakRate = OldLeakRate − 2^counter × unit
  • the unit is configurable. It is suggested to use a small value (5-10% of the splash amount), as it allows finer granularity. The exact value depends on the number of originator nodes, as the rate changes on the different nodes accumulate. On the other hand, when the processing node is determined in step S1 not to be in overload, the following actions are taken:
  • Delay change check: for stability reasons the algorithm checks the delay change in the same way as in the overloaded state, and decreases the leak rate and resets the counter if the delay has increased (steps S9 and S10).
  • Leak rate change: outside overload the algorithm increases the leak rate according to the following formula (step S11):
  • NewLeakRate = OldLeakRate + 2^counter × unit
  • Overshoot protection (bucket fill check): if the leaky bucket was previously not overly restrictive, the leak rate should not be increased. This can be checked, for example, by determining whether the leaky bucket is filled up to at least a configurable percentage (suggested: 80%), or whether the bucket has rejected any transactions during the previous measurement period. If the bucket proves not to have been overly restrictive, the leak rate and the counter need not be changed. This check can be performed at any suitable time; in Figure 6 it is shown as being performed in step S14, with any changes to the leak rate and counter made in the present rate setting procedure being reversed depending on the result of the check.
  • this check could also be performed elsewhere, such as between steps S7 and S9; in that case, depending on the result of the check, the procedure would pass either to step S11 or straight to a modified step S14 (min/max check).
  • This condition is evaluated to avoid leak rate increments when the restrictor is not really restricting traffic. It can happen that after a short peak, which causes the restrictor to switch on, there comes a period with moderate load. In this case, the leak rate might be increased further and further, even though there is no need to do so, because the actual rate is already large enough. After such a period, if another burst comes, the rate might be too high, and the algorithm might not be able to decrease it fast enough to prevent a significant queue from building up on the protected node.
  • Min./Max. check: configurable minimum and maximum values can be provided to limit the leak rate. These values are not mandatory, because the algorithm can work well with the minimum set to zero and the maximum set to the processing capacity of the processing node. However, in some cases they can be useful. A check of the leak rate against the minimum and maximum values is performed in step S14.
  • If the controlled node is not in overload, the algorithm checks the number of consecutive cycles in which the status has been the same (step S15). If this counter exceeds a configurable parameter (suggested value: 20-30), the leaky bucket is disconnected and the algorithm enters the Pending state.
  • a configurable parameter (suggested value: 20-30)
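  • A sketch of this rate-setting routine is given below (Python, illustrative only). The field names, the counter handling, the size of the stability-check adjustment and the treatment of the "just left overload" case (steps S7-S8 are not spelled out in the text above) are assumptions; as a simplification, the bucket fill check is applied only to the regular increase.

```python
from dataclasses import dataclass


@dataclass
class RateControlState:
    leak_rate: float
    unit: float                        # suggested 5-10% of the splash amount
    epsilon: float                     # suggested 10-20% of the measurement threshold
    min_rate: float = 0.0
    max_rate: float = float("inf")
    pending_limit: int = 25            # suggested 20-30 consecutive non-overloaded cycles
    counter: int = 0                   # consecutive cycles without a state change
    last_change: float = 0.0
    d_last: float = 0.0                # average delay of the previous measurement period
    was_overloaded: bool = False
    same_status_cycles: int = 0        # may well be the same counter as 'counter'
    bucket_fill_ratio: float = 0.0     # measured on the leaky bucket
    rejected_last_period: bool = False


def set_new_leak_rate(s: RateControlState, overloaded: bool, d_avg: float) -> bool:
    """Rate-setting routine run at the end of each Measurement period
    (steps S1-S15). Returns True when the Pending state should be entered."""
    if overloaded:                                     # S1: delayed-packet rate above target
        if not s.was_overloaded:                       # S2: just entered overload
            half = abs(s.last_change) / 2.0            # S3: undo half of the last change
            s.leak_rate -= half
            s.last_change, s.counter = -half, 0
        elif s.d_last - d_avg > s.epsilon:             # S4: delay dropped a lot
            half = abs(s.last_change) / 2.0            # S5: probably near capacity already
            s.leak_rate += half
            s.last_change, s.counter = half, 0
        else:                                          # S6: exponential decrease
            change = (2 ** s.counter) * s.unit
            s.leak_rate -= change
            s.last_change, s.counter = -change, s.counter + 1
    else:
        if s.was_overloaded:                           # assumed S7/S8: just left overload
            half = abs(s.last_change) / 2.0
            s.leak_rate += half
            s.last_change, s.counter = half, 0
        elif d_avg - s.d_last > s.epsilon:             # S9/S10: delay grew, back off
            half = abs(s.last_change) / 2.0
            s.leak_rate -= half
            s.last_change, s.counter = -half, 0
        elif s.bucket_fill_ratio >= 0.8 or s.rejected_last_period:
            change = (2 ** s.counter) * s.unit         # S11: exponential increase, allowed
            s.leak_rate += change                      # by the bucket fill check (S14)
            s.last_change, s.counter = change, s.counter + 1
        # else: the restrictor was not really restricting; leave rate and counter alone

    # S14: clamp to the configurable minimum/maximum leak rate
    s.leak_rate = min(max(s.leak_rate, s.min_rate), s.max_rate)

    # S15: after enough consecutive non-overloaded cycles, disconnect the bucket
    if overloaded:
        s.same_status_cycles = 0
    else:
        s.same_status_cycles = 1 if s.was_overloaded else s.same_status_cycles + 1
    enter_pending = (not overloaded) and s.same_status_cycles > s.pending_limit

    s.was_overloaded, s.d_last = overloaded, d_avg
    return enter_pending
```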
  • Simulation results were produced with the NS-2 simulator (The Network Simulator; see http://www.isi.edu/nsnam/ns/) and were used to illustrate the algorithm's behavior.
  • Figure 7 illustrates the behavior of the algorithm in a simple case in which:
  • the intensity of the generated requests was:
    o 50% of the processing node's capacity for 20 seconds
    o 1000% of the processing node's capacity from 20 to 420 seconds
    o 50% of the processing node's capacity from 420 to 520 seconds
  • Figure 7 illustrates that the algorithm finds a stable state and after that runs with almost 100% utilization. As the external load is 1000% of the engineered capacity, 90% of the incoming requests are rejected, while the peak and the average delay are limited effectively. The first and second requirements mentioned above in the introductory description have been met.
  • Figure 8 illustrates a more complicated test case in which:
  • Uncontrolled messages are typically release messages that arrive after a given holding time and cannot be rejected by the controllers.
  • Figure 8 illustrates that in a normal situation, where there is some uncontrolled traffic, the algorithm runs effectively, limiting the peak and average delay, maximizing the utilization and sharing the resource fairly between the users. All three requirements mentioned above in the introductory description (limit delay, maximize utilization, fairness) have been met.
  • a method embodying the present invention can be used with a wide range of protocols that rely on a transaction-reply type of model. No internal knowledge of the controlled protocol is required; the overload protection just needs to measure the response time for a controlled transaction type. This kind of openness allows an algorithm embodying the present invention to co-operate with various other load control protocols, allowing it to be an effective emergency handling protection that starts working when there is a problem with the other protocols.
  • a key advantage of an embodiment of the invention is its generality. There is no need for internal knowledge of the controlled protocol, and no need for node performance measurements or external signalling; the algorithm just needs to measure the response time for controlled transactions.
  • the control and the measurement are on the same (non-overloaded) side. This means that the controlling entity has the capacity to calculate the suitable rate, while the overloaded entity does not need to play an active role in the process. Because of this, the system's performance is optimal: the overloaded entity processes only the real-life signalling traffic, while the controller entity tries to find the capacity of the overloaded node by setting the leak rate.
  • operation of one or more of the above-described components can be controlled by a program operating on the device or apparatus.
  • Such an operating program can be stored on a computer-readable medium, or could, for example, be embodied in a signal such as a downloadable data signal provided from an Internet website.
  • the appended claims are to be interpreted as covering an operating program by itself, or as a record on a carrier, or as a signal, or in any other form.

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The described method regulates a load placed on a first node of a telecommunications network caused by messages sent to the first node by a second node of the network according to a signalling protocol between the first node and the second node, and comprises the following steps: using a leaky bucket restrictor associated with the second node to regulate the load, the leaky bucket having an adjustable leak rate; determining a roundtrip response delay relating to at least some of the messages during a measurement period; and adjusting the leak rate in dependence upon the determined response delay.
EP06819403A 2006-10-09 2006-11-10 Method and apparatus for use in a communications network Withdrawn EP2074760A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP06819403A EP2074760A1 (fr) 2006-10-09 2006-11-10 Method and apparatus for use in a communications network

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
PCT/EP2006/067204 WO2008043390A1 (fr) 2006-10-09 2006-10-09 Method and apparatus for use in a communication network
EP06819403A EP2074760A1 (fr) 2006-10-09 2006-11-10 Method and apparatus for use in a communications network
PCT/EP2006/068357 WO2008043398A1 (fr) 2006-10-09 2006-11-10 Method and apparatus for use in a communications network

Publications (1)

Publication Number Publication Date
EP2074760A1 true EP2074760A1 (fr) 2009-07-01

Family

ID=40673835

Family Applications (1)

Application Number Title Priority Date Filing Date
EP06819403A Withdrawn EP2074760A1 (fr) 2006-10-09 2006-11-10 Procédé et appareil destinés à être utilisés dans un réseau de communications

Country Status (1)

Country Link
EP (1) EP2074760A1 (fr)

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO2008043398A1 *

Similar Documents

Publication Publication Date Title
WO2008043398A1 (fr) Method and apparatus for use in a communications network
Al-Saadi et al. A survey of delay-based and hybrid TCP congestion control algorithms
EP1876779B1 (fr) Manipulation de la congestion et des retards dans un réseau de données en paquets
EP2425592B1 (fr) Contrôle de débit adaptative basé sur des signaux de surcharge
RU2316127C2 (ru) Спектрально-ограниченная контролирующая пакетная передача для управления перегрузкой и установления вызова в сетях, основанных на пакетах
US7688731B2 (en) Traffic congestion
US20140372623A1 (en) Rate control
US20100118704A1 (en) Method and Apparatus for use in a communications network
Gao et al. A state feedback control approach to stabilizing queues for ECN-enabled TCP connections
KR101333856B1 (ko) 트래픽 부하를 관리하는 방법
JP2004532566A (ja) キューバッファ制御方法
Azhari et al. Overload control in SIP networks using no explicit feedback: A window based approach
JP2008507204A (ja) 二方向メッセージングネットワークでゾーン間帯域を管理する方法
US9054988B2 (en) Method and apparatus for providing queue delay overload control
Albisser et al. DUALPI2-Low Latency, Low Loss and Scalable Throughput (L4S) AQM
Kweon et al. Soft real-time communication over Ethernet with adaptive traffic smoothing
Muhammad et al. Study on performance of AQM schemes over TCP variants in different network environments
Lee et al. Enhanced TFRC for high quality video streaming over high bandwidth delay product networks
EP2553971A1 (fr) Procédé de détection de congestion dans un système radiocellulaire
Irawan et al. Performance evaluation of queue algorithms for video-on-demand application
EP2074760A1 (fr) Method and apparatus for use in a communications network
Belenki An enforced inter-admission delay performance-driven connection admission control algorithm
Aweya et al. DRED: a random early detection algorithm for TCP/IP networks
Guduru et al. Overload control in SIP signalling networks with redirect servers
CN118524065A (zh) 拥塞控制方法及装置、存储介质及电子设备

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20090420

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20100601