EP1066703A2 - Congestion control in a telecommunications network - Google Patents

Congestion control in a telecommunications network

Info

Publication number
EP1066703A2
EP1066703A2
Authority
EP
European Patent Office
Prior art keywords
packets
network
buffer
backward
path
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP99915776A
Other languages
German (de)
French (fr)
Inventor
Jian Ma
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nokia Oyj
Original Assignee
Nokia Networks Oy
Nokia Oyj
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nokia Networks Oy, Nokia Oyj filed Critical Nokia Networks Oy
Publication of EP1066703A2

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04QSELECTING
    • H04Q11/00Selecting arrangements for multiplex systems
    • H04Q11/04Selecting arrangements for multiplex systems for time-division multiplexing
    • H04Q11/0428Integrated services digital network, i.e. systems for transmission of different types of digitised signals, e.g. speech, data, telecentral, television signals
    • H04Q11/0478Provisions for broadband connections
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/54Store-and-forward switching systems 
    • H04L12/56Packet switching systems
    • H04L12/5601Transfer mode dependent, e.g. ATM
    • H04L2012/5629Admission control
    • H04L2012/5631Resource management and allocation
    • H04L2012/5632Bandwidth allocation
    • H04L2012/5635Backpressure, e.g. for ABR
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/54Store-and-forward switching systems 
    • H04L12/56Packet switching systems
    • H04L12/5601Transfer mode dependent, e.g. ATM
    • H04L2012/5638Services, e.g. multimedia, GOS, QOS
    • H04L2012/5646Cell characteristics, e.g. loss, delay, jitter, sequence integrity
    • H04L2012/5647Cell loss
    • H04L2012/5648Packet discarding, e.g. EPD, PTD
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/54Store-and-forward switching systems 
    • H04L12/56Packet switching systems
    • H04L12/5601Transfer mode dependent, e.g. ATM
    • H04L2012/5678Traffic aspects, e.g. arbitration, load balancing, smoothing, buffer management
    • H04L2012/5684Characteristics of traffic flows

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention relates to overload control in a packet switched network, especially in a network where Transmission Control Protocol (TCP) is used as the transport layer protocol. In order to increase the throughput of asymmetric connections, the level of traffic load is measured on the backward path of a connection and acknowledgement packets traveling along the backward path are discarded when the measured load level exceeds a predetermined level. If the asymmetry is dynamic, the current level of asymmetry can be estimated to determine whether a packet discard mechanism is used on the forward path or on the backward path.

Description

Congestion control in a telecommunications network
Field of the invention
This invention relates generally to flow control in a telecommunications network. More particularly, the invention relates to congestion control in a packet switched telecommunications network, especially in networks where Transmission Control Protocol (TCP) is used as a transport layer protocol and where asymmetries can occur, i.e. where the opposite transmission directions can have unequal transmission capacities.
Background of the invention
As is commonly known, TCP is the most popular transport layer protocol for data transfer. It provides a connection-oriented, reliable transfer of data between two communicating hosts. (Host refers to a network-connected computer, or to any system that can be connected to a network for offering services to another host connected to the same network.) TCP uses several techniques to maximize the performance of the connection by monitoring different variables related to the connection. For example, TCP includes an internal algorithm for avoiding congestion. ATM (Asynchronous Transfer Mode) is a newer connection-oriented packet-switching technique which the international telecommunication standardization organization ITU-T has chosen as the target solution for a broadband integrated services digital network (B-ISDN). The problems of conventional packet networks have been eliminated in the ATM network by using short packets of a standard length (53 bytes), known as cells. ATM networks are quickly being adopted as backbones for the various parts of TCP/IP networks (such as the Internet).
Although ATM has been designed to provide an end-to-end transport level service, it is very likely that future networks, too, will be implemented in such a way that (a) TCP/IP remains the de facto standard of the networks and (b) only part of the end-to-end path of a connection is implemented using ATM. Thus, even though ATM will continue to be utilized, TCP will still be needed to provide the end-to-end transport functions.
The introduction of ATM also means that implementations must be able to accommodate the huge legacy of existing data applications, in which TCP is widely used as the transport layer protocol. To migrate the existing upper layer protocols to ATM networks, several approaches to congestion control in ATM networks have been considered in the past.
Congestion control relates to the general problem of traffic management for packet switched networks. Congestion means a situation in which the number of transmission requests at a specific time exceeds the transmission capacity at a certain network point (called a bottleneck resource). Congestion usually results in overload conditions, as a result of which buffers overflow and packets are retransmitted either by the network or by the subscriber. In general, congestion arises when the incoming traffic to a specific link exceeds the capacity of the outgoing link. The primary function of congestion control is to ensure good throughput and delay performance while maintaining a fair allocation of network resources to users. For TCP traffic, whose traffic patterns are often highly bursty, congestion control poses a challenging problem. It is known that packet losses result in significant degradation in TCP throughput. Thus, for the best possible throughput, a minimum number of packet losses should occur.
The present invention relates to congestion control in packet switched networks. For the above-mentioned reasons, most such networks are, and will be in the foreseeable future, TCP networks or TCP over ATM networks (i.e. networks in which TCP provides the end-to-end transport functions and the ATM network provides the underlying "bit pipes"). In the following, the congestion control mechanisms of these networks are described briefly.
The ATM Forum has specified five different service categories which relate traffic characteristics and the quality of service (QoS) requirements to network behavior. These service classes are: constant bit rate (CBR), real-time variable bit rate (rt-VBR), non-real time variable bit rate (nrt-VBR), available bit rate (ABR), and unspecified bit rate (UBR). These service classes divide the traffic between guaranteed traffic and so-called "best effort traffic", the latter being the traffic which utilizes the remaining bandwidth after the guaranteed traffic has been served.
One possible solution for the best effort traffic is to use ABR (Available Bit Rate) flow control. The basic idea behind ABR flow control is to use special cells, so-called RM (Resource Management) cells, to adjust source rates. ABR sources periodically probe the network state (factors such as bandwidth availability, the state of congestion, and impending congestion) by sending RM cells intermixed with data cells. The RM cells are turned around at the destination and sent back to the source. Along the way, ATM switches can write congestion information on these RM cells. Upon receiving returned RM cells, the source can then increase, decrease, or maintain its rate according to the information carried by the cells.
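To make the RM-cell feedback loop concrete, the following minimal sketch (an illustration loosely modeled on ATM Forum ABR source behavior, not text from this application; the class names and the increase/decrease factors are assumed values) shows how a source might adjust its allowed cell rate from the congestion indication and explicit rate carried by returned RM cells:

```python
from dataclasses import dataclass

@dataclass
class RMCell:
    """Fields of interest in a returned resource management cell (illustrative subset)."""
    ci: bool               # congestion indication bit set by switches along the path
    explicit_rate: float   # most restrictive rate granted by the switches (cells/s)

class ABRSource:
    """Toy ABR source: adjusts its allowed cell rate on every returned RM cell."""
    def __init__(self, pcr: float, mcr: float, initial_rate: float,
                 rif: float = 1 / 16, rdf: float = 1 / 16):
        self.pcr = pcr             # peak cell rate (upper bound)
        self.mcr = mcr             # minimum cell rate (lower bound)
        self.acr = initial_rate    # allowed cell rate (current sending rate)
        self.rif = rif             # rate increase factor (assumed value)
        self.rdf = rdf             # rate decrease factor (assumed value)

    def on_returned_rm_cell(self, rm: RMCell) -> float:
        if rm.ci:
            # Congestion somewhere on the path: decrease multiplicatively.
            self.acr -= self.acr * self.rdf
        else:
            # No congestion indicated: increase additively towards the peak rate.
            self.acr += self.rif * self.pcr
        # Never exceed the explicit rate granted by the switches, nor the PCR/MCR bounds.
        self.acr = max(self.mcr, min(self.acr, rm.explicit_rate, self.pcr))
        return self.acr

# Example: a source running at 80 000 cells/s reacts to two returned RM cells.
src = ABRSource(pcr=100_000, mcr=1_000, initial_rate=80_000)
print(src.on_returned_rm_cell(RMCell(ci=True, explicit_rate=60_000)))    # rate drops
print(src.on_returned_rm_cell(RMCell(ci=False, explicit_rate=100_000)))  # rate creeps up again
```

In this sketch the rate decreases multiplicatively when a returned RM cell signals congestion and increases additively otherwise, which is the closed-loop behavior the description refers to.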
In TCP over ATM networks, the source and the destination are interconnected through an IP/ATM/IP sub-network. Figure 1 illustrates a connection between a TCP source A and a TCP destination B in a network, where the connection path goes through an ATM network using ABR flow control. When congestion is detected in the ATM network, ABR rate control becomes effective and forces the edge router R1 to reduce its transmission rate to the ATM network. Thus, the purpose of the ABR control loop is to command the ATM sources of the network to reduce their transmission rate. If congestion persists, the buffer in the router will reach its maximum capacity. As a consequence, the router starts to discard packets, resulting in the reduction of the TCP congestion window (the congestion window concept will be explained in more detail later).
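For readers unfamiliar with the congestion window, a very small sketch (illustrative only; real TCP stacks differ in detail) shows how a packet discard translates into a lower sending rate at the TCP source:

```python
def on_ack(cwnd: float, ssthresh: float, mss: int = 1) -> float:
    """Grow the congestion window: exponentially in slow start, linearly afterwards."""
    if cwnd < ssthresh:
        return cwnd + mss             # slow start
    return cwnd + mss * mss / cwnd    # congestion avoidance

def on_loss(cwnd: float, mss: int = 1) -> tuple[float, float]:
    """Shrink on loss: halve the threshold and restart from one segment (Tahoe-style)."""
    ssthresh = max(cwnd / 2, 2 * mss)
    return mss, ssthresh              # new cwnd, new ssthresh
```

Because the window bounds the amount of unacknowledged data in flight, shrinking it sharply reduces the source's throughput, which is the effect the router's discards ultimately rely on.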
From the point of view of congestion control, the network of Figure 1 comprises two independent control loops: an ABR control loop and a TCP control loop. However, this kind of congestion control, which relies on dual congestion control schemes on different protocol layers, may have an unexpected and undesirable influence on the performance of the network. To put it more accurately, the inner control loop (ABR loop) may cause unexpected delays in the outer control loop (TCP loop).
An alternative approach to support the best effort traffic is to use UBR service with sufficiently large buffers and let the higher layer protocols, such as TCP, handle overload or congestion situations. Figure 2 illustrates this kind of network, i.e. a TCP over UBR network. The nodes of this kind of network comprise packet discard mechanisms which discard packets or cells when congestion occurs. When a packet is discarded somewhere in the network, the corresponding TCP source does not receive an acknowledgment. As a result, the TCP source reduces its transmission rate.
The UBR service employs no flow control and provides no numerical guarantees on the quality of service; it is therefore also the least expensive service to provide. However, because of its simplicity, plain UBR without adequate buffer sizes provides poor performance in a congested network. To eliminate this drawback, more sophisticated congestion control mechanisms have been proposed. One is the so-called early packet discard (EPD) scheme. According to the early packet discard scheme, an ATM switch drops entire packets prior to buffer overflow. In this way the throughput of TCP over ATM can be much improved, as the ATM switches need not transmit cells of a packet with corrupted cells, i.e. cells belonging to packets in which at least one cell is discarded (these packets would be discarded during the reassembly of packets in any case). Another advantage of the EPD scheme is that it is relatively inexpensive to implement in an ATM switch. For those interested in the subject, a detailed description of the EPD method can be found, for example, in an article by A. Romanow and S. Floyd, Dynamics of TCP Traffic over ATM Networks, Proc. ACM SIGCOMM '94, pp. 79-88, August 1994.
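A minimal sketch of the early packet discard idea (an illustration with an assumed static fill threshold rather than any particular switch implementation): once the buffer occupancy crosses the threshold, newly arriving packets are refused as a whole, so the buffer is not filled with fragments of packets that would be useless after reassembly.

```python
class EPDQueue:
    """Cell buffer that discards whole AAL5 packets once a fill threshold is crossed."""
    def __init__(self, capacity_cells: int, epd_threshold: int):
        self.capacity = capacity_cells
        self.threshold = epd_threshold             # assumed static threshold < capacity
        self.cells: list[tuple[int, bool]] = []    # queued cells: (vc_id, end_of_packet)
        self.dropping_vcs: set[int] = set()        # VCs whose current packet is being dropped

    def on_cell_arrival(self, vc_id: int, first_of_packet: bool, end_of_packet: bool) -> bool:
        """Return True if the cell is queued, False if it is discarded."""
        if vc_id in self.dropping_vcs:
            # Remainder of a packet whose first cell was already refused.
            if end_of_packet:
                self.dropping_vcs.discard(vc_id)
            return False
        if first_of_packet and len(self.cells) >= self.threshold:
            # Early packet discard: refuse the whole packet before the buffer overflows.
            if not end_of_packet:
                self.dropping_vcs.add(vc_id)
            return False
        if len(self.cells) >= self.capacity:
            # Buffer truly full: forced partial discard, the case EPD tries to avoid.
            return False
        self.cells.append((vc_id, end_of_packet))
        return True
```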
However, the EPD method still deals unfairly with the users. This is due to the fact that the EPD scheme discards complete packets from all connections, without taking into account their current rates or their relative shares in the buffer, i.e. without taking into account their relative contribution to an overload situation. To remedy this drawback, several variations of selective drop policies have been proposed. One of these is described in an article by Rohit Goyal, Performance of TCP/IP over UBR+, ATM Forum/96-1269. This method uses a FIFO buffer at the switch and performs some per-VC accounting to keep track of the buffer occupancy of each virtual circuit. In this way only cells from overloading connections can be dropped, whereas the underloading connections can increase their throughput.
On the road towards a broadband telecommunication infrastructure, and with the envisaged growth in Internet services, one recent step has been to examine how to utilize the conventional subscriber line (the metal wire pair) for high-speed data transmission. One result of this work is the ADSL (Asymmetrical Digital Subscriber Line) technology, which offers new possibilities for high-rate data and video transmission along the wire pair of a telephone network to the subscribers' terminals.
The ADSL transmission connection is asymmetrical in that the transmission capacity from network to subscriber is considerably higher than from subscriber to network. This is due to the fact that the ADSL technique is intended mainly for high data rate applications which are asymmetric in nature. For example, video-on-demand, home shopping and Internet access all feature high data rate demands in the downstream direction (from network to subscriber), but relatively low data rate demands in the upstream direction (from subscriber to network).
In this kind of situation where most services require much higher rates in one direction, high effective asymmetries can result because of bidirectional traffic. In other words, aside from the fact that these systems have a certain inherent bandwidth asymmetry, even higher asymmetries can be experienced if the access traffic to the network is bidirectional. Bidirectionality means that the slower upstream link is shared both by data packets sent upstream and by acknowledgment packets which acknowledge data packets received from the downstream connection. Thus, since the rate at which the acknowledgments arrive on the backward channel controls the packet rate on the forward channel, congestion on the backward channel may lead to poor throughput on the forward channel. The above-described prior art congestion control mechanisms cannot tackle this problem as they are based on the assumption that the forward link is the bottleneck and are therefore intended to prevent packet fragmentation on the forward link. Hence, in a network with high asymmetry the performance of a connection can decrease considerably as a result of congestion on the backward link.
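A back-of-the-envelope example (all figures are assumptions, not taken from this application) illustrates why congestion on the backward channel caps forward throughput: an ack-clocked sender cannot transmit faster than the surviving acknowledgment stream allows.

```python
# Illustrative figures for an ADSL-like link; all values are assumptions.
forward_capacity_bps = 6_000_000     # downstream capacity
backward_capacity_bps = 640_000      # upstream capacity
ack_size_bits = 40 * 8               # TCP/IP header-only acknowledgment
segment_size_bits = 1500 * 8         # one MSS-sized data segment
acked_segments_per_ack = 2           # delayed ACK: one ACK per two segments

# If the upstream link were devoted entirely to ACKs, the ACK rate would be at most:
max_ack_rate = backward_capacity_bps / ack_size_bits             # ACKs per second

# Because the sender is ack-clocked, the forward rate cannot exceed:
ack_clocked_forward_bps = max_ack_rate * acked_segments_per_ack * segment_size_bits
print(f"{ack_clocked_forward_bps / 1e6:.1f} Mbit/s")   # ~48 Mbit/s: not the bottleneck here

# But if upstream data traffic leaves only 5 % of the backward link for ACKs:
ack_share = 0.05
print(f"{ack_share * ack_clocked_forward_bps / 1e6:.1f} Mbit/s")  # ~2.4 Mbit/s < 6 Mbit/s
```

Under these assumed numbers the acknowledgment stream, not the 6 Mbit/s downstream link, becomes the limiting factor as soon as the upstream link is shared with data traffic.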
Summary of the invention
The purpose of the invention is to alleviate the above-described drawback and to create a method by means of which it is possible, using a simple implementation, to effectively improve the throughput in an asymmetric environment, both in TCP over ATM networks and in IP networks.
This goal can be attained by using the solution defined in the independent patent claims.
The basic idea of the invention is to exploit a packet discard mechanism on a backward path of a link or a connection. By applying a packet discard mechanism on the backward path, the acknowledgment packets can be discarded in a controlled manner. This leads to reduced packet fragmentation on the backward path, which in turn leads to improved throughput on the forward path. Thus, by means of the invention the performance of asymmetric links can be significantly improved. Moreover, the traffic source can be informed at an early stage that the network is becoming overloaded.
In an environment where the asymmetry is not fixed but the level of asymmetry can vary due, for example, to reallocation of bandwidth resources, it is advantageous to have a discard mechanism on both the forward and backward links, and to activate the packet discard method on the backward link only when asymmetry is sufficiently high.
Brief description of the drawings
In the following, the invention and its preferred embodiments are described in closer detail with reference to examples shown in the appended drawings, wherein
Figure 1 illustrates a TCP connection path through an ABR-based ATM subnetwork,
Figure 2 illustrates a TCP connection path through a UBR-based ATM subnetwork,
Figure 3 illustrates an embodiment of the invention for an environment where user data is transferred in only one direction on a connection with fixed asymmetry,
Figure 4 is a flow diagram illustrating the flow control mechanism of the embodiment of Figure 3,
Figure 5 illustrates an alternative embodiment of the invention for an environment where data traffic is bidirectional and the level of asymmetry can vary in time domain, and
Figure 6 is a flow diagram illustrating the flow control mechanism of the embodiment of Figure 5.
Detailed description of the invention
Figure 3 illustrates the application of the invention for a single connection in a TCP over ATM network. The figure shows schematically a traffic source, a traffic destination, and one intermediate node. In the example of the figure, it is assumed that the data traffic is unidirectional so that host A sends TCP segments to host B through forward link FL and host B acknowledges correctly received segments by sending acknowledgment packets to host A through backward link BL. It is further assumed that the asymmetry of the connection is fixed, the forward link having a much higher transmission capacity than the backward link. The term "segment" refers to the unit of information passed by TCP to IP (Internet Protocol). As shown in Figure 3, the user data is read out from the traffic source through a socket buffer SB. At the transport layer, host A first adds headers to user data units to form TCP segments. Then, at the network layer, host A further adds an IP header to each TCP segment to form IP datagrams. These datagrams are then converted in a known manner into standard ATM cells in an access node AN1 located at the edge of the ATM network. The cells of the datagrams are then routed through the ATM network to the access node AN2 of host B. On their way to the destination the cells pass through a forward buffer FB of an intermediate node N1. The access node of host B reconstructs the original IP datagrams from the arriving cells and sends the reconstructed datagrams to host B. Host B removes the IP header to reveal the TCP segment from each datagram. If an individual segment is received correctly, host B sends an acknowledging TCP segment back to host A through the backward link BL. In this way host B acknowledges each segment received correctly. On their way to host A along the backward link, the cells containing acknowledgments pass through backward buffer BB. Then, the access node AN1 and host A perform the above steps to extract the acknowledging TCP segments. After the source has received the acknowledgments, it can send more data continuously.
According to the invention, traffic load is measured on the backward path of an asymmetric connection, and cells or packets are discarded there when the measured traffic load exceeds a predetermined threshold level. The measurement can be realized, for example, by measuring the fill rate of the backward buffer BB. If the load measurement unit LMU of node N1 detects that a certain predetermined fill rate has been exceeded, it commands the discard unit PDU to start dropping cells (or packets). The discard mechanism can be any known mechanism. However, it is preferable to use a mechanism which discards cells so that entire acknowledgment packets are discarded, i.e. so that the integrity of the packets is protected as efficiently as possible.
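The cooperation of the load measurement unit (LMU) and the packet discard unit (PDU) can be sketched as follows; the fill-rate threshold and the whole-packet discard policy come from the description above, while the concrete numbers and data structures are assumptions:

```python
class BackwardPathController:
    """Simplified LMU + PDU for the fixed-asymmetry case of Figure 3 (illustrative)."""
    def __init__(self, buffer_capacity: int, discard_threshold: float):
        self.capacity = buffer_capacity
        self.threshold = discard_threshold   # e.g. 0.8 -> start discarding at 80 % fill
        self.backward_buffer: list = []

    def fill_rate(self) -> float:
        """Load measurement: current occupancy of the backward (acknowledgment) buffer."""
        return len(self.backward_buffer) / self.capacity

    def on_ack_packet(self, ack_packet) -> bool:
        """Packet discard unit: queue the ACK, or drop it whole when the load is too high."""
        if self.fill_rate() > self.threshold:
            return False                     # discard the entire acknowledgment packet
        if len(self.backward_buffer) >= self.capacity:
            return False                     # buffer full (the case the threshold avoids)
        self.backward_buffer.append(ack_packet)
        return True
```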
If no packet discard mechanism were used on the backward link, the backward buffer would eventually overflow, which would cause serious packet fragmentation problems on the backward link. This, in turn, would degrade the data throughput significantly, as the source uses the acknowledgments to control its output rate in the forward direction. By discarding the acknowledgments on the backward link, the integrity of the acknowledgment packets can be protected, i.e. packet fragmentation can be decreased, and the arrival of the acknowledgments can be stabilized. In this way the invention is able to prevent the degradation of the data throughput of asymmetric connections.
As mentioned above, cells can be discarded according to any packet discard mechanism which can protect the integrity of the acknowledgment packets, for example, according to the above-mentioned EPD method. Cells can be discarded, for example, so that if an acknowledgment packet is to be discarded, all of its cells are discarded, except the last cell. A bit in the cell header indicates which is the last cell formed from an acknowledgment packet. This bit is the third bit in the PTI field of the cell header. It is preferable not to discard the last cell in order to be able to detect the border between two successive packets. If an acknowledgment packet includes only the TCP and IP headers (i.e. no payload), two cells are needed to carry the packet.
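As an illustration of that cell-level policy (a sketch assuming the end-of-packet marker in the PTI field is visible to the discard unit), an acknowledgment packet can be dropped by discarding every cell except the one carrying the end-of-packet indication, so the boundary to the next packet remains detectable:

```python
from dataclasses import dataclass

@dataclass
class AtmCell:
    vc_id: int
    payload: bytes
    end_of_packet: bool   # third bit of the PTI field: last cell of a packet

def discard_ack_packet(cells_of_packet: list[AtmCell]) -> list[AtmCell]:
    """Drop an acknowledgment packet but keep its last cell as a packet delimiter."""
    return [cell for cell in cells_of_packet if cell.end_of_packet]

# A header-only TCP/IP acknowledgment fits into two cells; only the second is kept.
ack_cells = [AtmCell(7, b"...", False), AtmCell(7, b"...", True)]
assert len(discard_ack_packet(ack_cells)) == 1
```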
Figure 4 is a flow diagram showing the steps performed in the embodiment of Figure 3. It is to be noted that if the asymmetry is fixed, as it is in the example of Figure 3, no packet discard mechanism is needed on the forward path of the connection. In other words, instead of monitoring traffic and discarding data packets (i.e. packets carrying user data) on the forward path, traffic is monitored and acknowledgment packets are discarded on the backward path.
Figure 5 illustrates schematically another implementation example of the present invention. This time it is assumed that the data traffic is bidirectional so that there is one TCP connection from host A to host B (connection 1) and another TCP connection from host B to host A (connection 2). Furthermore, it is assumed that the level of asymmetry can vary, for example, due to reallocation of bandwidth. (This is called dynamic asymmetry.) For simplicity of the figure, it is further assumed that the exemplary network is an IP network, i.e. IP datagrams are transferred instead of ATM cells.
In the embodiment of Figure 5, data packets and acknowledgment packets are stored in their dedicated buffers. For this purpose, each input port of an intermediate node is provided with a traffic splitter (TS1 and TS2), which directs data packets to packet buffers and acknowledgment packets to acknowledgment buffers. Traffic splitter TS1 on the forward path of connection 1 directs data packets traveling from host A to host B to data buffer DB1 (the forward buffer of connection 1) and acknowledgment packets traveling from host A to host B to acknowledgment buffer AB2 (the backward buffer of connection 2). Traffic splitter TS2 on the forward path of connection 2 in turn directs data packets traveling from host B to host A to data buffer DB2 (the forward buffer of connection 2) and acknowledgment packets traveling from host B to host A to acknowledgment buffer AB1 (the backward buffer of connection 1).
Let us now define that Sfi represents the service rate (in data units per time unit) from the forward buffer of connection i (i = 1 or 2) and Sbi the service rate from the backward buffer of connection i. Thus, the service rate indicates the current rate at which information is transmitted out from the associated buffer.
Further, we define the asymmetry of the connections by defining k1 as the ratio of the transmission rate from the forward buffer of connection 1 to the transmission rate from the backward buffer of connection 1 and by defining k2 as the ratio of the transmission rate from the forward buffer of connection 2 to the transmission rate from the backward buffer of connection 2, i.e. k1 = Sf1/Sb1 and k2 = Sf2/Sb2. Thus, k1 is a variable representing the current asymmetry of connection 1, and k2 is a variable representing the current asymmetry of connection 2.
In the embodiment of Figure 5, the packet discard mechanism used has two different modes of operation for both connections, a first mode for the forward link and a second mode for the backward link. The load measurement unit of the intermediate node N1 monitors the values of Sf1, Sb1, Sf2, and Sb2 by measuring the transmission rate from each buffer. On the basis of the measured values, the load measurement unit then calculates the values of k1 and k2 for connections 1 and 2, respectively. If k1 is smaller than or equal to a predetermined threshold K1, the packet discard mechanism operates in the first mode for connection 1. Correspondingly, if k2 is smaller than or equal to a predetermined threshold K2 (which typically equals K1), the packet discard mechanism operates in the first mode for connection 2. In the first mode, the load measurement unit LMU measures the fill rates of the forward buffers DB1 (connection 1) and DB2 (connection 2). If the fill rate of buffer DB1 exceeds a predetermined value TH1, the data packets of connection 1 are discarded by the packet discard unit PDU. Correspondingly, if the fill rate of buffer DB2 exceeds a predetermined value TH2, the data packets of connection 2 are discarded. Thus, the first mode is similar to known packet discard mechanisms.
On connection i (i = 1 or 2), whenever ki is greater than a predetermined threshold Ki, the load measurement unit inactivates the first mode of operation and activates the second mode of operation. As a result of this, the packet discard unit discards entire acknowledgment packets on the backward link when the fill rate of the backward buffer ABi exceeds a given threshold value. Thus, acknowledgment packets are dropped only when the asymmetry of the connection is high enough and the fill rate of the acknowledgment buffer exceeds a predetermined value.
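The two-mode behaviour can be condensed into the following sketch (illustrative code; Ki, THi and the backward-buffer threshold appear only as named parameters, and the example values are assumptions):

```python
def discard_decision(s_f: float, s_b: float,
                     forward_fill: float, backward_fill: float,
                     k_threshold: float, th_forward: float, th_backward: float) -> str:
    """Decide, for one connection, which packets (if any) the PDU should drop.

    s_f, s_b      -- measured service rates of the forward and backward buffers (Sfi, Sbi)
    forward_fill  -- fill rate of the data buffer DBi (0..1)
    backward_fill -- fill rate of the acknowledgment buffer ABi (0..1)
    k_threshold   -- asymmetry threshold Ki
    th_forward    -- fill-rate threshold THi for the forward buffer
    th_backward   -- fill-rate threshold for the acknowledgment buffer
    """
    k = s_f / s_b if s_b > 0 else float("inf")   # current asymmetry ki = Sfi / Sbi
    if k <= k_threshold:
        # First mode: behave like a conventional forward-path discard scheme.
        return "discard data packets" if forward_fill > th_forward else "no discard"
    # Second mode: the connection is strongly asymmetric, police the backward path instead.
    return "discard ack packets" if backward_fill > th_backward else "no discard"

# Example: highly asymmetric connection with a nearly full acknowledgment buffer.
print(discard_decision(s_f=10_000, s_b=500, forward_fill=0.3, backward_fill=0.9,
                       k_threshold=4.0, th_forward=0.8, th_backward=0.8))
# -> "discard ack packets"
```

Run once per connection with that connection's measured rates and fill levels, this mirrors the mode selection described above.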
As is obvious from the above, Figure 5 shows a general situation regarding symmetry, i.e. a situation in which the link can be either symmetric or asymmetric and in which the degree of asymmetry can vary. If the underlying network is an ATM network, the packet discard mechanism discards cells so that entire packets are discarded. As mentioned above, the discard mechanism can operate according to the known EPD method, for example.
Figure 6 is a flow diagram illustrating the above principles whereby packets are discarded either on the forward or on the backward link. The degree of asymmetry of an individual connection is continuously monitored (phase 60). If there is no asymmetry or the degree of asymmetry is low, the packet discard mechanism is used only on the forward link, i.e. data packets are discarded on the forward link when the load level on the forward link exceeds a predetermined first threshold (phases 61 and 63). However, if the asymmetry of the connection exceeds a certain threshold, the packet discard mechanism is used only on the backward link (phases 62 and 64).
Although the invention has been described here in connection with the examples shown in the attached figures, it is clear that the invention is not limited to these examples, as it can be varied in several ways within the limits set by the attached patent claims. The following describes briefly some possible variations.
Although TCP is used as an example of the protocol, any other window-based protocol in which the arrival of acknowledgments controls the size of the window (output rate) could also be used in the network. Furthermore, although connection-specific buffers are shown in Figure 5, buffers shared by multiple connections could equally well be used. In that case the load measurement unit could function so that it measures the overall output rates from the common data and acknowledgment buffers and discards the data units (packets or cells) of all connections in a similar manner. However, it is also possible to calculate the relative shares of the different connections in a shared buffer, and to discard only the data units of connections whose relative share exceeds a threshold value, as sketched below. Different kinds of variables can also be used to describe the degree of asymmetry.
Data and acknowledgment packets (of a two-way connection between two stations) can also be stored in a common buffer. In the case of Figure 5, this would mean that buffers DB1 and AB2 form one forward buffer, and buffers DB2 and AB1 form one backward buffer. The ratio of the output rate of the forward buffer to the output rate of the backward buffer would then determine whether the packet discard mechanism is used on the forward or on the backward path. If it is used on the backward path, only acknowledgments would be discarded from the common buffer.
Furthermore, the connections are not necessarily wireline connections; for example, the user terminals can have wireless access to the network. As is also obvious from the above, the packets can be transmitted and buffered as different kinds of data units (segments or cells), depending on the type of transmission links. Thus, packets can be transmitted and buffered as smaller data units (such as cells). These smaller data units are discarded so that the integrity of the packets is protected. There can also be separate load measurement units and/or separate packet discard units for the forward path and the backward path. It is also possible that the load measurement means are located in a different network node than the packet discard means.
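The shared-buffer variation mentioned above — discarding only data units of connections whose relative share of a common buffer exceeds a threshold — could look roughly like this (an illustrative sketch; the fairness threshold and the accounting granularity are assumptions):

```python
from collections import Counter

class SharedBufferDropPolicy:
    """Per-connection accounting in a buffer shared by many connections (illustrative)."""
    def __init__(self, capacity: int, load_threshold: float, share_threshold: float):
        self.capacity = capacity
        self.load_threshold = load_threshold    # overall fill rate at which dropping starts
        self.share_threshold = share_threshold  # e.g. 2.0 -> twice the fair share
        self.per_connection = Counter()         # data units buffered per connection
        self.total = 0

    def on_arrival(self, connection_id: int) -> bool:
        fill = self.total / self.capacity
        if fill > self.load_threshold and self.total > 0:
            fair_share = self.total / max(len(self.per_connection), 1)
            share = self.per_connection[connection_id] / fair_share if fair_share else 0.0
            if share > self.share_threshold:
                return False                    # drop: this connection is overloading
        if self.total >= self.capacity:
            return False
        self.per_connection[connection_id] += 1
        self.total += 1
        return True

    def on_departure(self, connection_id: int) -> None:
        self.per_connection[connection_id] -= 1
        self.total -= 1
```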

Claims

Patent claims
1. A method for controlling overload in a packet switched network comprising traffic sources (A), traffic destinations (B), and network nodes (AN, N1), the method comprising the steps of
- sending data packets along a forward path from a traffic source to a traffic destination,
- sending an acknowledging packet along a backward path from the destination to the source, if a data packet is received correctly at the destination, and
- measuring load level in at least one network node, characterized in that, when the ratio of the transmission capacity on the forward path to the transmission capacity on the backward path is greater than a predetermined threshold, the method comprises:
- measuring the load level on the backward path, and
- discarding acknowledging packets when the measured load level on the backward path exceeds a predetermined level.
2. A method according to claim 1, characterized by
- estimating a value for a variable representing said ratio,
- using a packet discard mechanism on the forward path when the estimated value is lower than said threshold, and
- using a packet discard mechanism on the backward path when the estimated value has reached said threshold.
3. A method according to claim 1 in a network where said ratio is permanently greater than said predetermined threshold, characterized by discarding only acknowledging packets on the backward path.
4. A method according to claim 2, wherein data packets are transmitted in two transmission directions between two sources, characterized in that in each transmission direction the data packets are stored in at least one data buffer and the acknowledging packets in at least one acknowledgment buffer.
5. A method according to claim 4, characterized in that
- at least part of the data buffers are connection-specific buffers and at least part of the acknowledgment buffers are connection-specific buffers, whereby said value is calculated for all the connections having connection specific buffers.
6. A method according to claim 5, characterized by
- measuring the output transmission rate from the data and acknowledgment buffers of an individual connection and
- calculating said value as a ratio of the measured output rate from the data buffer to the measured output rate from the acknowledgment buffer.
7. A method according to claim 1 in a network comprising ATM links, characterized in that packets are discarded by selectively discarding cells of individual packets.
8. A packet switched telecommunications network including
- nodes interconnected by transmission lines (TL1, TL2),
- user terminals (UT) connected to the nodes, said user terminals acting as traffic sources which send data packets and as traffic destinations which (a) receive data packets and (b) send acknowledgment packets in response to correctly received data packets, whereby the data packets travel along a forward path from the source to the destination and the acknowledgment packets travel along a backward path from the destination to the source, and
- measuring means (LMU) for measuring current load level in a node, characterized in that
- the measuring means are arranged to measure the load level on the backward path, and
- the backward path includes packet discard means (PDU), responsive to the measuring means (LMU), for discarding acknowledgment packets when the measured load level on the backward path exceeds a predetermined value.
9. A network according to claim 8, characterized in that the network comprises means (LMU) for estimating a value for a variable which represents the ratio of the transmission capacity on the forward path to the transmission capacity on the backward path.
10. A network according to claim 9, characterized in that the measurement means are also arranged to measure the load level on the forward path, whereby the packet discard means are responsive to the measurement means for discarding data packets traveling along the forward path when the measured load level on the forward path exceeds a predetermined value.
11. A node arrangement in a packet switched telecommunications network, the node arrangement comprising
- input ports and output ports,
- buffering means for buffering data units traveling from an input port to an output port,
- measuring means (LMU) for measuring the current load level in the node, and
- packet discard means (PDU), responsive to the measuring means (LMU), for discarding packets when the measured load level exceeds a predetermined level, characterized in that the buffering means comprise at least one forward buffer for buffering data packets traveling from a traffic source to a traffic destination and at least one backward buffer for buffering acknowledgment packets traveling from said destination to said source,
- the measuring means are arranged to measure the load level in the backward buffer, and
- the packet discard means (PDU) are arranged to discard acknowledgment packets.
12. A node arrangement according to claim 11, characterized in that it further comprises estimation means for estimating a value for a variable which represents the ratio of the transmission capacity on the forward path to the transmission capacity on the backward path.
13. A node arrangement according to claim 11, characterized in that the measuring means are also arranged to measure the load level in the forward buffer, and packet discard means (PDU) are also arranged to discard data packets.
EP99915776A 1998-04-09 1999-04-09 Congestion control in a telecommunications network Withdrawn EP1066703A2 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
FI980826 1998-04-09
FI980826A FI980826A (en) 1998-04-09 1998-04-09 Control of congestion in telecommunications networks
PCT/FI1999/000303 WO1999053655A2 (en) 1998-04-09 1999-04-09 Congestion control in a telecommunications network

Publications (1)

Publication Number Publication Date
EP1066703A2 true EP1066703A2 (en) 2001-01-10

Family

ID=8551508

Family Applications (1)

Application Number Title Priority Date Filing Date
EP99915776A Withdrawn EP1066703A2 (en) 1998-04-09 1999-04-09 Congestion control in a telecommunications network

Country Status (4)

Country Link
EP (1) EP1066703A2 (en)
AU (1) AU3422899A (en)
FI (1) FI980826A (en)
WO (1) WO1999053655A2 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1252795A1 (en) * 2000-01-30 2002-10-30 Celox Networks, Inc. Device and method for packet inspection
US6810031B1 (en) 2000-02-29 2004-10-26 Celox Networks, Inc. Method and device for distributing bandwidth
WO2014100973A1 (en) * 2012-12-25 2014-07-03 华为技术有限公司 Video processing method, device and system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1104686C (en) * 1996-05-10 2003-04-02 富士通网络通信公司 Method and apparatus for enabling flow control over multiple networks having disparate flow control capability
US6078564A (en) * 1996-08-30 2000-06-20 Lucent Technologies, Inc. System for improving data throughput of a TCP/IP network connection with slow return channel

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO9953655A3 *

Also Published As

Publication number Publication date
WO1999053655A2 (en) 1999-10-21
AU3422899A (en) 1999-11-01
FI980826A (en) 1999-10-10
WO1999053655A3 (en) 1999-12-02
FI980826A0 (en) 1998-04-09

Similar Documents

Publication Publication Date Title
US6882624B1 (en) Congestion and overload control in a packet switched network
AU745204B2 (en) Flow control in a telecommunications network
JP4436981B2 (en) ECN-based method for managing congestion in a hybrid IP-ATM network
US6490251B2 (en) Method and apparatus for communicating congestion information among different protocol layers between networks
US5983278A (en) Low-loss, fair bandwidth allocation flow control in a packet switch
US7046631B1 (en) Method and apparatus for provisioning traffic dedicated cores in a connection oriented network
US7046630B2 (en) Packet switching network, packet switching equipment and network management equipment
CA2179618C (en) Data link interface for packet-switched network
Labrador et al. Packet dropping policies for ATM and IP networks
US6587437B1 (en) ER information acceleration in ABR traffic
AU1050601A (en) Method and system for discarding and regenerating acknowledgment packets in ADSL communications
EP0920236A2 (en) Controlling ATM layer transfer characteristics based on physical layer dynamic rate adaptation
KR100411447B1 (en) Method of Controlling TCP Congestion
EP0884923B1 (en) Packet switching network, packet switching equipment, and network management equipment
EP1066703A2 (en) Congestion control in a telecommunications network
EP1068766B1 (en) Congestion control in a telecommunications network
Goyal Traffic management for TCP/IP over Asynchronous Transfer Mode (ATM) networks
AU717162B2 (en) Improved phantom flow control method and apparatus
Fang et al. TCP performance in ATM networks: ABR parameter tuning and ABR/UBR comparisons
FI104602B (en) Flow control in a telecommunications network
Iliadis Performance of TCP traffic and ATM feedback congestion control mechanisms
Kara et al. Towards a framework for performance evaluation of TCP behaviour over ATM networks
JP2000253018A (en) Atm priority control ip gateway device, atm priority control ip router and method for processing them
Vandalore et al. Simulation study of World Wide Web traffic over the ATM ABR service
Vandalore et al. Worst case buffer requirements for TCP over ABR

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20001009

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE

AX Request for extension of the european patent

Free format text: AL PAYMENT 20001009;LT PAYMENT 20001009;LV PAYMENT 20001009;MK PAYMENT 20001009;RO PAYMENT 20001009;SI PAYMENT 20001009

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: NOKIA CORPORATION

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

18W Application withdrawn

Effective date: 20030520