AU751285B2 - Method and system for data communication - Google Patents

Method and system for data communication

Info

Publication number
AU751285B2
AU751285B2
Authority
AU
Australia
Prior art keywords
host
data packets
network entity
receiving
sending
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
AU58929/99A
Other versions
AU5892999A (en)
Inventor
Christofer Kanljung
Jan Kullander
Anders Svensson
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Telefonaktiebolaget LM Ericsson AB
Original Assignee
Telefonaktiebolaget LM Ericsson AB
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Telefonaktiebolaget LM Ericsson AB filed Critical Telefonaktiebolaget LM Ericsson AB
Publication of AU5892999A
Application granted
Publication of AU751285B2
Anticipated expiration
Ceased (current legal status)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/22 Traffic shaping
    • H04L 47/225 Determination of shaping rate, e.g. using a moving window
    • H04L 47/26 Flow control; Congestion control using explicit feedback to the source, e.g. choke packets
    • H04L 47/267 Flow control; Congestion control using explicit feedback to the source, e.g. choke packets sent by the destination endpoint
    • H04L 69/00 Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L 69/16 Implementation or adaptation of Internet protocol [IP], of transmission control protocol [TCP] or of user datagram protocol [UDP]
    • H04L 69/163 In-band adaptation of TCP data exchange; In-band control procedures
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 28/00 Network traffic management; Network resource management
    • H04W 28/02 Traffic management, e.g. flow control or congestion control
    • H04W 28/0273 Traffic management, e.g. flow control or congestion control adapting protocols for flow control or congestion control to wireless environment, e.g. adapting transmission control protocol [TCP]
    • H04W 80/00 Wireless network protocols or protocol adaptations to wireless operation
    • H04W 80/06 Transport layer protocols, e.g. TCP [Transport Control Protocol] over wireless

Description

METHOD AND SYSTEM FOR DATA COMMUNICATION

FIELD OF INVENTION

The present invention relates generally to data communication between computers. More particularly the invention relates to the problem of efficiently communicating data packets over a network including both a sub-network, which is substantially error-immune and has a low latency, and an access link, which is comparatively error-prone and has a substantial latency.
DESCRIPTION OF THE PRIOR ART

TCP (Transmission Control Protocol) is today the most commonly used transport layer protocol for communicating data over an internet. This protocol is optimised for wired connections, which have a high transmission quality and a low latency.
However, TCP is not very efficient for transmitting data over links that are error-prone, have long delays and/or a high latency. Wireless links constitute typical examples of such non-optimal links. Mobile communication typically imposes a wireless link. Thus, two computers, of which at least one is mobile, cannot communicate efficiently via a standard TCP connection, since the transmission algorithms in TCP postulate a much higher link quality than what a wireless connection normally can offer. Therefore, the comparatively poor quality of the wireless connection, in most cases, severely degrades the performance of the connection. This is particularly true if the wireless link is a high-speed link with a considerable latency.
To ensure reliable transmission of data packets from a sending host to a receiving host, the protocol prescribes that information indicating the status of received data packets must be fed back from the receiving host to the sending host.
A simple positive acknowledgement protocol awaits an acknowledgement for each particular data packet before sending another data packet. Naturally, such a protocol wastes a substantial amount of network bandwidth while the sending host waits for acknowledgements. An example of a more efficient protocol is the so-called sliding window protocol.
Figure 1 illustrates a known method of using a sliding window protocol to make it possible for a sending host to transmit multiple data packets DP1 to DP3 before obtaining feedback information Ack1 to Ack3 on the status of the transmitted data packets from a receiving host. In figure 1 the sending host is represented to the left and the receiving host to the right. A time scale is symbolised vertically, with increasing time downwards. In this example a congestion_window has the size W of three data packets. This means that three data packets DP1 to DP3 may leave the sending host before a first status message Ack1 from the receiving host arrives. Once such a message Ack1 has come, the congestion_window slides one data packet and a fourth data packet DP4 may be sent.
In order to assure delivery, each data packet is associated with a retransmission timer. The retransmission timer is started when a data packet leaves the sending host. At the expiry of the retransmission timer the sending host retransmits the data packet. The protocol may also be defined such that the receiving host returns a negative acknowledgement message for a data packet if the data packet has been received, albeit incorrectly. The data packet is, of course, also retransmitted when such a negative acknowledgement message arrives at the sending host. Thus a data packet will be retransmitted either at reception of a negative acknowledgement message or when the retransmission timer expires, whichever happens first.
The procedure is then repeated for all data packets in the message until the sending host has obtained positive acknowledgement messages for each data packet in the message.
The size W of the congestion_window thus corresponds to the number of data packets that may be sent out unacknowledged into the network between the sending and the receiving host.
By gradually increasing the congestion_window, it is possible to eliminate the idle time in the network completely. In the steady state, the sending host can thus transmit data packets as fast as the network can transfer them. Consequently, a well-tuned sliding window protocol keeps the network completely saturated with data packets and obtains substantially higher throughput than a simple positive acknowledgement protocol (which is also known under the name stop-and-wait protocol).
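For readers who prefer pseudocode, the following Python sketch mimics the sliding window behaviour just described: at most W packets are outstanding, the window slides on a cumulative acknowledgement, and a retransmission timer guards the oldest packet. The transport callbacks (send_packet, poll_ack) are assumed placeholders, not part of the patent.

```python
# Minimal sketch of a sliding-window sender with cumulative ACKs, as described
# above. The transport API (send_packet, poll_ack) is a hypothetical
# simplification, not the patent's own interface.
import time

class SlidingWindowSender:
    def __init__(self, packets, window_size=3, rto=1.0):
        self.packets = packets          # list of payloads, 1-indexed below
        self.window = window_size       # W: max unacknowledged packets
        self.rto = rto                  # retransmission timeout in seconds
        self.base = 1                   # oldest unacknowledged sequence number
        self.next_seq = 1               # next sequence number to send
        self.sent_at = {}               # seq -> time the packet was last sent

    def pump(self, send_packet, poll_ack):
        """Send as long as the window allows, slide on ACKs, retransmit on timeout."""
        while self.base <= len(self.packets):
            # Fill the window: at most W packets may be outstanding.
            while (self.next_seq < self.base + self.window
                   and self.next_seq <= len(self.packets)):
                send_packet(self.next_seq, self.packets[self.next_seq - 1])
                self.sent_at[self.next_seq] = time.monotonic()
                self.next_seq += 1

            ack = poll_ack()            # cumulative ACK n: all packets <= n arrived
            if ack is not None and ack >= self.base:
                self.base = ack + 1     # the window slides forward

            # Retransmit the oldest outstanding packet if its timer expired.
            if self.base in self.sent_at:
                if time.monotonic() - self.sent_at[self.base] > self.rto:
                    send_packet(self.base, self.packets[self.base - 1])
                    self.sent_at[self.base] = time.monotonic()
```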
Modern TCP applies four different algorithms for controlling the transmission of data packets over the Internet. According to a first algorithm, termed Slow Start, the congestion_window is gradually increased as described above. Slow Start is applied whenever a new connection is set up or when packet loss has been detected by the retransmission timer, e.g. after a period of congestion. The congestion_window is initially set to one data packet. The congestion_window is then increased to two data packets at reception of the first acknowledgement message. The sending host then sends two more data packets and awaits the corresponding acknowledgement messages. When those arrive they each increase the congestion_window by one, so that four data packets may be sent unacknowledged, and so on.
The term Slow Start may sometimes be a misnomer, because under ideal conditions, the transmission rate is ramped up exponentially.
If capacity limitations of the network do not stop this exponential increase, the receiving host always has a window limit, a so-called advertised window, which ultimately restricts the transmission rate. Once this limit has been reached the congestion_window cannot be increased any further.
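The exponential ramp-up of Slow Start, capped by the advertised window, can be illustrated with a small sketch; the function and its variable names are assumptions made purely for illustration.

```python
# Sketch of Slow Start growth: +1 packet per ACK, which doubles the window
# every round trip, until the advertised window caps it. The loop structure
# is an assumed simplification of the behaviour described in the text.
def slow_start_growth(advertised_window, round_trips):
    congestion_window = 1
    history = [congestion_window]
    for _ in range(round_trips):
        acks_this_round = congestion_window      # one ACK per packet in the window
        for _ in range(acks_this_round):
            if congestion_window < advertised_window:
                congestion_window += 1           # each ACK opens the window by one
        history.append(congestion_window)
    return history

print(slow_start_growth(advertised_window=32, round_trips=6))
# -> [1, 2, 4, 8, 16, 32, 32]: exponential ramp-up until the advertised window limits it
```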
To avoid increasing the congestion_window too quickly and thereby causing congestion, TCP includes one additional restriction. A third window, usually referred to as the allowed window, is applied for this purpose. The size of the allowed window is determined by the following expression: allowed_window = min(advertised_window, congestion_window).
The congestion_window in its turn is set according to the following strategy. In steady state, on a non-congested connection, the congestion_window and the advertised_window are of equal size. The congestion_window is reduced by half (down to a minimum of one data packet) upon loss of data packets. For those data packets that remain in the allowed window, the retransmission time is reduced exponentially.
If the advertised window does not restrain the packet rate to a level that the network is able to sustain, i.e. the advertised window is larger than a window size corresponding to the entire available capacity in the network, the congestion_window will be increased until a packet is lost due to congestion. The congestion_window will then be reduced radically to decrease the total load on the network. After that, the congestion_window will once more be increased until a data packet loss occurs, and so on. In this case, a steady state will never be reached.
Congestion Avoidance is a second algorithm included in TCP, which is applied after Slow Start. Whenever a new connection is set up between two hosts the congestion_window is increased until either (i) a steady state is reached or (ii) a data packet is lost. The communicating hosts may be informed of the loss of a data packet in one of two alternative ways: either because a retransmission timer expires or because a third algorithm called Fast Retransmit is activated. This algorithm will be described after discussing the different methods applied upon data packet loss detection.
If a data packet loss is discovered through expiration of a retransmission timer, the congestion_window is immediately reduced to one data packet. The congestion_window is thereafter increased under the Slow Start algorithm until it reaches one half of the size it had before the retransmission timer expired. Then, the Congestion Avoidance algorithm is activated. During Congestion Avoidance the congestion_window is increased by one data packet only when all the packets in one window have been positively acknowledged.
If the loss of a data packet is discovered through the Fast Retransmit algorithm, the congestion_window is instantaneously decreased to half the size it had before the data packet was lost. The Congestion Avoidance algorithm is then activated and applied as described above.
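The window rules described in the last few paragraphs can be collected into a small state sketch: Slow Start growth up to a threshold, linear Congestion Avoidance growth, collapse to one packet on a retransmission timeout, and halving on a Fast Retransmit loss. The class layout and the initial threshold choice are assumptions, not TCP source code.

```python
# Sketch of the congestion-window reactions described above. The state labels
# (SLOW_START / CONGESTION_AVOIDANCE) and variable names are illustrative
# assumptions; only the numeric rules follow the text.
class CongestionState:
    def __init__(self, advertised_window):
        self.advertised_window = advertised_window
        self.congestion_window = 1.0
        self.ssthresh = advertised_window   # assumed target where Slow Start hands over
        self.phase = "SLOW_START"

    def allowed_window(self):
        return min(self.advertised_window, self.congestion_window)

    def on_window_acknowledged(self):
        if self.phase == "SLOW_START":
            # +1 per ACK amounts to doubling per fully acknowledged window.
            self.congestion_window = min(2 * self.congestion_window, self.ssthresh)
            if self.congestion_window >= self.ssthresh:
                self.phase = "CONGESTION_AVOIDANCE"
        else:
            self.congestion_window += 1     # linear growth: +1 per fully ACKed window

    def on_timeout(self):
        # Retransmission timer expired: back to one packet, Slow Start up to
        # half of the window size in use before the loss.
        self.ssthresh = max(self.congestion_window / 2, 1)
        self.congestion_window = 1.0
        self.phase = "SLOW_START"

    def on_fast_retransmit(self):
        # Loss detected via Fast Retransmit: halve the window, then Congestion Avoidance.
        self.congestion_window = max(self.congestion_window / 2, 1)
        self.phase = "CONGESTION_AVOIDANCE"
```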
The Fast Retransmit algorithm will be illustrated by means of an example. Suppose a sending host transmits 10 data packets to a receiving host. All these data packets arrive correctly.
As a consequence of this, the receiving host returns a positive acknowledgement message for data packet number 10 to the sending host. This message indicates to the sending host that all 10 data packets have been received correctly. The sending host then transmits data packet number 11. This data packet is however lost somewhere in the network. Later, the sending host transmits data packet number 12, which reaches the receiving host correctly. Since the positive acknowledgement messages are cumulative, the receiving host cannot now return a positive acknowledgement message for data packet number 12. Instead, another positive acknowledgement message for data packet number 10 is sent. The sending host then transmits data packet number 13. This data packet also reaches the receiving host correctly. As a response, the receiving host feeds back yet another positive acknowledgement message for data packet number 10. When the sending host thus has received a third positive acknowledgement message for data packet number 10, this is interpreted as a loss of data packet number 11. This data packet is therefore retransmitted and, if it reaches the receiving host correctly, a positive acknowledgement message for data packet number 13 will be returned. A general and more detailed description of the Fast Retransmit algorithm can be found in W. Stevens, "TCP Slow Start, Congestion Avoidance, Fast Retransmit, and Fast Recovery Algorithms", Internet RFC 2001, Network Working Group, NOAO, January 1997.
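A minimal sketch of the duplicate-acknowledgement counting implied by this example follows; the trigger on a third acknowledgement for the same packet number matches the text above, while the callback interface is assumed.

```python
# Sketch of duplicate-ACK counting at the sending host, matching the example
# above: a third ACK for the same cumulative number is taken as a loss of the
# following packet. The callback interface is an assumption for illustration.
def fast_retransmit_monitor(acks, retransmit, ack_repeat_threshold=3):
    """acks: iterable of cumulative ACK numbers arriving at the sender."""
    last_ack = None
    duplicates = 0
    for ack in acks:
        if ack == last_ack:
            duplicates += 1
            if duplicates == ack_repeat_threshold - 1:  # i.e. the third identical ACK
                retransmit(ack + 1)                     # resend the packet assumed lost
        else:
            last_ack = ack
            duplicates = 0

# With the ACK sequence from the text (packet 11 lost, 12 and 13 delivered):
fast_retransmit_monitor([10, 10, 10], retransmit=lambda n: print("retransmit", n))
# -> retransmit 11
```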
To sum up the three algorithms described above: Slow Start opens up the congestion_window exponentially. Congestion Avoidance, on the other hand, opens up the congestion_window linearly. Fast Retransmit is an algorithm for detecting the loss of data packets.
Forward Acknowledgement (FACK) is a fourth control mechanism included in TCP, which is in essence a further development of the Congestion Avoidance and Fast Retransmit algorithms.
Through FACK, however, the data packets outstanding in the network may be controlled more accurately. FACK is also less bursty and recovers better from periods of heavy loss.
Further details regarding FACK can be found in M. Mathis et al., "Forward Acknowledgement: Refining TCP Congestion Control", Proceedings of ACM SIGCOMM '96, Stanford, USA, August 1996.
H. Balakrishnan et al. describe a fifth mechanism under TCP, which is called Snoop, in the document "Improving TCP/IP Performance over Wireless Networks", Proceedings of ACM Mobicom '95, November 1995. Snoop is a protocol that improves TCP in wireless networks. The protocol modifies network-layer software mainly at a base station, while preserving the end-to-end TCP semantics. The general idea of the protocol is to cache data packets at the base station and perform local retransmissions across the wireless link.
A further adaptation of TCP to wireless connections is presented in S. Biaz et al., "TCP over Wireless Networks Using Multiple Acknowledgements", Department of Computer Science at Texas A&M University, Technical Report 97-001, January 1997.
Unnecessary retransmissions are here avoided in the network by feeding back a partial acknowledgement for a data packet that has reached the base station, if it experiences difficulties on the wireless link. The base station is responsible for retransmissions on the wireless link, while it delays timeout of the retransmission timer via the partial acknowledgement.
The Slow Start, Congestion Avoidance, Fast Retransmit and Fast Recovery algorithms are all applied in most modern implementations of TCP. The FACK and Snoop algorithms, however, are still not very frequently used.
The patent document EP 0 695 053 A2 discloses an asymmetric protocol for wireless data communications with mobile terminals, according to which the terminals only transmit acknowledgement messages and requests for retransmission upon inquiry or when all data packets within a data block have been received. According to the protocol, base stations store channel information for the wireless links and status information of received and transmitted data packets. A base station may also combine acknowledgements for multiple data packets into a single acknowledgement code, in order to reduce the power consumption in the mobile terminals.
Generally speaking, wireless connections cause longer round-trip delays than wired connections. As a result of the longer round-trip delays, the transmission rate increase for a wireless connection must, according to the prior art transport protocols, be much slower than for a corresponding wired connection. This is particularly true for wireless connections where the bandwidth-delay product is comparatively high. The prior art protocols always seek to increase the throughput of a connection as far as the interconnecting networks permit. In addition to this, the possible transmission rate increase for a particular connection is inversely correlated to the round-trip delay for the connection. Hence, the shorter the round-trip delay, the faster the increase rate. The combination of characteristics typical for a wireless connection, i.e. a high bandwidth-delay product and a long round-trip delay, consequently also constitutes a problem.
SUMMARY OF THE INVENTION

The present invention relates generally to data communication between host computers, and more particularly to the problems discussed above. The means of solving these problems according to the present invention are summarised below.
As indicated earlier, problems occur when the networks which connect a sending host with a receiving host suffer from both a high random loss of data packets and a high bandwidth-delay product.
Accordingly, it is an object of the present invention to solve the above-mentioned problems.
Particularly, it is an object of the invention to increase the efficiency of a network with a high bandwidth-delay product that is connected to an access link which causes a high random loss of data.
Another object of the invention is to minimise the influence of data packet losses in relatively error-prone access links on a substantially less error-prone network.
One further object of the present invention is to increase the efficiency of the less error-prone links of a data communication system, and thereby admit transmission of a larger amount of data through the total system.
The proposed method for communicating data packets over a packet switched network via at least one wireless access link includes the following assumptions and steps. The packet switched network is presumed to offer a connectionless delivery of data packets. The protocol used by the sending host and the receiving host is a reliable sliding window transport protocol, through which data packets, whenever necessary, may be retransmitted from the sending to the receiving host. The communicating hosts also take measures to protect the packet switched network from congestion. The receiving host generates status messages indicating the condition of received data packets to the sending host. In response to the status messages the sending host takes appropriate data flow control measures. A buffering network entity interfaces both the wireless access link and the packet switched network. During communication of data packets the buffering network entity performs the following steps. First, receiving data packets from the sending host. Second, either explicitly or implicitly, notifying the sending host of which data packets have been received correctly by the buffering network entity and, in case of a lost or erroneously received data packet, indicating whether the data packet was lost or erroneously received over the access link or in the packet switched network. Third, storing the correctly received data packets. Fourth, forwarding the stored data packets to the receiving host. Finally, the buffering network entity performs local retransmissions of the stored data packets to the receiving host, whenever that becomes necessary. Typically such retransmissions occur after the expiry of a retransmission timer or at reception of an explicit or implicit notification of loss from the receiving host.
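A rough sketch of these steps follows, written for the case where the sending host reaches the buffering network entity through the packet switched network and the receiving host sits behind the wireless access link. The class layout, leg labels and callbacks are assumptions; only the ordering of the steps follows the summary above.

```python
# Minimal sketch of the buffering network entity's role as summarised above.
# Leg names ("network" / "access_link") and the storage structure are assumed
# for illustration; only the step order follows the text.
from collections import OrderedDict

class BufferingNetworkEntity:
    def __init__(self, notify_sender, forward_to_receiver):
        self.notify_sender = notify_sender          # sends status messages upstream
        self.forward_to_receiver = forward_to_receiver
        self.store = OrderedDict()                  # seq -> payload, awaiting receiver ACK

    def on_packet_from_sender(self, seq, payload, received_ok):
        if not received_ok:
            # Loss or corruption happened before the entity, here assumed to be
            # in the packet switched network: report that leg to the sender.
            self.notify_sender(seq, ok=False, lost_on="network")
            return
        self.notify_sender(seq, ok=True, lost_on=None)   # explicit positive status
        self.store[seq] = payload                        # step three: store
        self.forward_to_receiver(seq, payload)           # step four: forward

    def on_status_from_receiver(self, seq, ok):
        if ok:
            self.store.pop(seq, None)                    # delivered end-to-end
        else:
            # Final step: local retransmission over the access link; the sender
            # is told the loss occurred on the access link, not in the network.
            if seq in self.store:
                self.forward_to_receiver(seq, self.store[seq])
            self.notify_sender(seq, ok=False, lost_on="access_link")

    def on_retransmission_timeout(self, seq):
        if seq in self.store:                            # no ACK yet: retransmit locally
            self.forward_to_receiver(seq, self.store[seq])
```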
A method of communicating data packets between two hosts according to the invention is hereby characterised by what is apparent from claim 1.
A proposed system includes a buffering network entity, which interfaces both a wireless access link and a packet switched network and thus directly or indirectly brings a sending host in contact with a receiving host. The sending and the receiving hosts are assumed to operate according to a reliable sliding window transport protocol through which lost or erroneously received data packets may be retransmitted from the sending host to the receiving host. The packet switched network is further assumed to offer a connectionless delivery of data packets. The buffering network entity includes a receiving means for receiving data packets. This means also generates and returns to the sending host a first status message, which (i) indicates whether a particular data packet must be retransmitted or not and (ii) indicates whether a data packet has been lost or erroneously received in the packet switched network. The buffering network entity also includes a means for storing correctly received data packets and a means for retrieving the stored data packets and transmitting them to the receiving host. Moreover, the buffering network entity includes a processing means for receiving the first status message from the receiving means and for receiving a second status message from the receiving host. The processing means generates a third status message in response to the first and second status messages and returns the third status message to the sending host. The stored data packets are retransmitted to the receiving host whenever that proves to be necessary.
Typically, such retransmission is initiated after the expiry of a retransmission timer or at reception of an explicit or implicit notification of loss.
The system according to the invention is hereby characterised by the features set forth in the characterising clause of claim 9.
The present invention thus prevents congestion control algorithms from being activated in a substantially error-immune network when data packets are lost, for reasons other than congestion, in comparatively error-prone access links connected thereto. This, of course, increases the efficiency not only in the substantially error-immune network per se, but also in systems which include both substantially error-immune links and comparatively error-prone access links.
BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1 illustrates the known method of using a sliding window protocol described above;
Figure 2 illustrates the general method according to the invention by means of a sequence diagram;
Figure 3 depicts a block diagram over a proposed system;
Figures 4a-d illustrate embodiments of the proposed method for communicating data packets and generating status information;
Figure 5 shows a flow diagram over an embodiment of the proposed method being performed by a buffering network entity;
Figure 6 depicts a block diagram over a system according to the invention.

The invention will now be described in more detail with reference to preferred exemplifying embodiments thereof and also with reference to the accompanying drawings.
DESCRIPTION OF PREFERRED EMBODIMENTS

A sequence diagram in figure 2 gives a general illustration of the method according to the invention. A sending host is here represented to the left in the diagram and a receiving host is represented to the right. The part of the connection between the sending host and a buffering network entity IWU is referred to as a first leg A and the part of the connection between the buffering network entity and the receiving host is identified as a second leg B. A time scale is symbolised vertically, with increasing time downwards.
Data packets DPn up to a number n are assumed to have reached the receiving host correctly. The receiving host therefore returns a positive acknowledgement message Ack n indicating this to the buffering network entity IWU. The buffering network entity IWU receives the positive acknowledgement message Ack n. In this example, a data packet DPm with number m arrives correctly at the buffering network entity IWU a few moments later. The buffering network entity IWU subsequently feeds back to the sending host a first status information message S(An, Bn), indicating that the receiving host has received all data packets DPn up to number n correctly, i.e. no errors or losses have occurred for any of those n data packets, neither in the first leg A nor in the second leg B. The buffering network entity IWU may also, simultaneously with this or at any other time after reception of the data packet DPm, send back a second status information message Ack m, which indicates to the sending host that all data packets DPm up to number m have reached the buffering network entity IWU successfully. The first and second data packet status information messages S(An, Bn); Ack m are preferably effectuated according to a selective acknowledgement algorithm, such as SACK. A detailed description of this algorithm can be found in M. Mathis et al., "TCP Selective Acknowledgement Options", Internet RFC 2018, Network Working Group, October 1996.
If a data packet is received erroneously or if a data packet is lost over any of the legs A or B, the buffering network entity IWU feeds back data packet status information messages indicating this fact to the sending host. The buffering network entity IWU would, in response to a lost or degraded data packet in the first leg A, return to the sending host a data packet status information message S(A, B) which indicates the transmission error on this particular leg A. In case of a loss or degeneration of a data packet over the second leg B, the buffering network entity IWU would return at least a data packet status information message S(A, B) indicating the transmission error on this leg B. Optionally, the buffering network entity IWU could already have fed back a positive acknowledgement message Ack+, through which correct reception of the data packet at the buffering network entity IWU was announced.
A block diagram over a proposed system is depicted in figure 3. A first mobile host 305 is here connected to a first base station 315 via a first access link 310. The access link 310 is typically constituted by one or more wireless radio links in a cellular system. However, it may be an arbitrary kind of connection, which is suitable for the specific application.
The access link 310 may, for instance, be a satellite link, an optical link, a sonic link or a hydrophonic link. The first base station 315 is further connected to a first buffering network entity 320, generally termed IWU (InterWorking Unit).
The first base station 315 can either be separated from the first buffering network entity 320 (as shown in figure 3) or be co-located with it, whichever is technically and/or economically the most appropriate. The first buffering network entity 320 also interfaces a packet switched network 325. The packet switched network 325 is presupposed to offer a connectionless delivery of data packets on a best-effort basis. This means, briefly, that every data packet that is technically feasible to deliver will be delivered as soon as possible. The Internet is a well-known example where many networks together provide a connectionless, best-effort datagram delivery. Further details as to the definition of the best-effort datagram can be found in the Internet RFC 1504.
In addition, at least one fixed host 330, at least one second buffering network entity 335 and a third base station 365 may be connected to the packet switched network 325. The second buffering network entity 335 interfaces a second base station 340 and a second mobile host 350 via a second access link 345 in a manner corresponding to what has been described in connection with the first buffering network entity 320 above.
The third base station 365, which communicates with a third mobile host 355 over a third access link 360, may either be directly connected to the packet switched network 325 or be connected via a unit which is not a buffering network entity.
The above-described system makes it possible to exchange data packets in any direction between any of the hosts 305, 330, 350 and 355 according to a reliable sliding window transport protocol. Consequently, a sending host may be an arbitrary host 305, 330, 350; 355 and receiving host(s) may be one or more of the other hosts 305, 330, 350; 355. Preferably, the reliable sliding window transport protocol used by the sending and the receiving hosts is of a TCP-type (specified in the Internet RFC 793) or of a type specified in the standard document ISO 8073. Yet, any alternative sliding window transport protocol is naturally workable.
The sending host 305, 330, 350; 355 is notified of a data packet status for each of its transmitted data packets via a specific status message fed back from the buffering network unit 320; 335. The status message (which may e.g. be a TCP acknowledgement) is generated at the buffering network unit closest to the sending host, i.e. the buffering network unit 320 or 335 respectively. Thus, when the first mobile host 305 sends data packets the first buffering network unit 320 generates the status information. When the fixed host 330 sends data packets to one of the mobile hosts 305 or 350, either the first 320 or the second 335 buffering network unit generates the status information, depending on which host 305 or 350 is the receiving host. If, however, the fixed host 330 should send data packets to the third mobile host 355, no such status information would be generated. When the host 350 sends data packets the buffering network unit 335 generates the status information. If the mobile host 355 sends data packets the status information in question will only be generated if the receiving host is connected to the packet switched network 325 via a buffering network unit such as 320 or 335. The data packet status information may e.g. be communicated to the sending host according to a selective acknowledgement algorithm (SACK) or according to a so-called TP4 algorithm.
Further description of the TP4 algorithm can be found in the standard specification ISO 8073.
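A selective-acknowledgement style status report of the kind referred to above could be represented roughly as follows; the field names, including the two per-leg loss flags, are illustrative assumptions loosely inspired by the SACK idea of RFC 2018, not the option format itself.

```python
# Sketch of a selective-acknowledgement style status report: a cumulative ACK
# plus ranges of packets received beyond it, with assumed per-leg loss flags.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class StatusMessage:
    cumulative_ack: int                      # every packet up to this number arrived
    sack_blocks: List[Tuple[int, int]] = field(default_factory=list)
    lost_in_network: bool = False            # True: loss in the packet switched network
    lost_on_access_link: bool = False        # True: loss on the (wireless) access link

def build_status(received: List[int]) -> StatusMessage:
    """Summarise which packet numbers arrived, e.g. [1,2,3,5,6] -> ACK 3, SACK (5,6)."""
    received = sorted(set(received))
    cum = 0
    while cum + 1 in received:
        cum += 1
    blocks, start = [], None
    for seq in received:
        if seq <= cum:
            continue
        if start is None or seq != prev + 1:
            if start is not None:
                blocks.append((start, prev))
            start = seq
        prev = seq
    if start is not None:
        blocks.append((start, prev))
    return StatusMessage(cumulative_ack=cum, sack_blocks=blocks)

print(build_status([1, 2, 3, 5, 6]))
# -> StatusMessage(cumulative_ack=3, sack_blocks=[(5, 6)], ...)
```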
In order to further illustrate the invention, four different data communication examples will now be described with reference to figures 4a through 4d.
In a first data communication example, being illustrated in figure 4a, the fixed host FH is assumed to be the sending host and the second mobile host MH2 is assumed to be the receiving host. Data packets DP thus pass from the fixed host FH through the packet switched network to the second buffering network entity IWU2. This part of the connection will be referred to as a first leg B. The second buffering network entity IWU2 then forwards the data packets DP to the second mobile host MH2 via the second base station and the second access link.
This part of the connection will be referred to as a second leg C.
The second buffering network entity IWU2 here functions as the end receiver of the data packets DP from the packet switched network's point-of-view. This means that once a data packet DP has succeeded in reaching the second buffering network entity IWU2 correctly it will not be retransmitted from the fixed host FH. The second buffering network entity IWU2 returns a status message S(B, C) indicating this fact to the fixed host FH.
If, on the other hand, a data packet DP is lost on the second access link between the second base station and the second mobile host MH2, that data packet DP will be retransmitted from the second buffering network entity IWU2 until the data packet DP has been received correctly at the second mobile host MH2. Such a loss of a data packet is also indicated to the fixed host FH by the status message S(B, C) fed back from the second buffering network entity IWU2.
Data packets DP that are lost or degenerated after having left the fixed host FH, but before reaching the second buffering network entity IWU2, will be retransmitted from the fixed host FH. If the second buffering network entity IWU2 registers a loss or a degeneration of a data packet DP, it notifies the fixed host FH via the status message S(B, C). According to the invention, a first part of the status message S(B, C), here corresponding to the first leg B, indicates if a data packet DP has been lost or degenerated in the packet switched network. A second part of the status message S(B, C), here corresponding to the second leg C, indicates if a data packet DP has been lost or degenerated over the second access link. In case the loss or degradation occurred in the packet switched network, the data packet rate from the fixed host FH will be reduced via at least one data flow control algorithm, otherwise not.
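The sender-side consequence of this leg-specific status information can be sketched in a few lines; the flag names follow the earlier StatusMessage sketch and are assumptions, not the patent's message format.

```python
# Sketch of the sender-side decision described above: congestion control is
# triggered only when the status message points at the packet switched
# network leg, not at the access link. The message fields are assumptions.
def react_to_status(status, congestion_state):
    """status: object with .lost_in_network / .lost_on_access_link flags;
    congestion_state: e.g. the CongestionState sketch shown earlier."""
    if status.lost_in_network:
        # Loss attributed to congestion in the network: back off.
        congestion_state.on_fast_retransmit()
    elif status.lost_on_access_link:
        # Loss on the error-prone access link: the buffering network entity
        # retransmits locally, so the sending rate is left untouched.
        pass
```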
In a second data communication example illustrated in figure 4b, the first mobile host MH1 is the sending host and the fixed host FH is the receiving host. Data packets DP hence leave the first mobile host MH1 over the first access link and pass via the first base station to the first buffering network entity IWU1. This part of the connection will be referred to as a first leg A. The first buffering network entity IWU1 then forwards the data packets DP to the fixed host FH through the packet switched network, which corresponds to a second leg B of the connection.
The occasionally poor quality of the access link between the first mobile host MH1 and the first buffering network entity IWU1 may lead to degeneration or loss of data packets DP. Such degenerated or lost packets DP must naturally be retransmitted from the first mobile host MH1 to the first buffering network entity IWU1. Nevertheless, once a data packet DP has reached the first buffering network entity IWU1 correctly it never has to be retransmitted from the first mobile host MH1.
However, though less likely, the first buffering network entity IWU1 may also have to retransmit data packets DP to the fixed host FH. This is the case when, for instance, congestion in the packet switched network has caused the retransmission timer to expire.
A status message S(A, B) related to each data packet DP is generated in the first buffering network entity IWU1 and returned to the first mobile host MH1. A first part of the status message S(A, B), here corresponding to the second leg B, indicates if a data packet DP has been lost or degenerated in the packet switched network, while a second part, here corresponding to the first leg A, indicates if a data packet DP has been lost or degenerated over the first access link.
In case the loss or degradation occurred in the packet switched network, the data packet rate from the first mobile host MH1 will be reduced via at least one data flow control algorithm. The loss or degradation of a data packet DP over the first access link will, however, not influence the data packet rate from the first mobile host MH1 in this way.
In a third data communication example, illustrated in figure 4c, the first mobile host MH1 is the sending host and the second mobile host MH2 is the receiving host. The first mobile host MH1 now transmits data packets DP over the first access link, via the first base station, to the first buffering network entity IWU1. This part of the connection will be referred to as a first leg A. The first buffering network entity IWU1 subsequently passes the data packets DP via the packet switched network to the second buffering network entity IWU2. This part of the connection will be referred to as a second leg B. Finally, the second buffering network entity IWU2 sends the data packets DP to the second mobile host MH2 through the second base station and the second access link.
This last part of the connection constitutes a third leg C.
In this case, data packets DP may, if necessary, be retransmitted over any of the legs A, B and C respectively. A data packet DP can either be retransmitted from the first mobile host MH1 to the first buffering network entity IWU1, from the first buffering network entity IWU1 to the second buffering network entity IWU2 or from the second buffering network entity IWU2 to the second mobile host MH2. Regardless of what caused the loss or degrading of a particular data packet DP, retransmission of the data packet will always be performed only over the leg A, B; C where the data packet DP was either lost or degenerated.
Status messages S(A, B); S(B, C) indicating the status of each communicated data packet DP are generated in both the buffering network entities IWU1; IWU2. The status messages S(A, B); S(B, C) indicate in which leg A, B; C a loss or degrading of a particular data packet DP has occurred. If the status messages S(A, B); S(B, C) announce that a data packet DP has been lost or degenerated in the packet switched network, here leg B, at least one data flow control algorithm will be triggered. The data packet rate from the first mobile host MH1 will, as a result of such an algorithm, be decreased.
A loss or degradation of a data packet DP in any of the other legs A or C will, on the other hand, not lead to a reduction of the data transmission rate from the first mobile host MH1.
A fourth data communication example will now be discussed with reference to figure 4d. The first mobile host MH1 is here the sending host and the third mobile host MH3 is the receiving host. The first mobile host MH1 this time transmits data packets DP over the first access link, via the first base station, to the first buffering network entity IWU1. This part of the connection will be referred to as a first leg A. The first buffering network entity IWU1 then passes the data packets DP via the packet switched network to the third mobile host MH3, via the third base station and the third access link. This part of the connection will be referred to as a second leg B.
A retransmission of a lost or degenerated data packet DP can here either be carried out over the first leg A or over the second leg B. A status message S(A, B) indicating the status of each communicated data packet DP is generated in the first buffering network entity IWU1 and returned to the first mobile host MH1. The status message S(A, B) thus indicates if a certain data packet DP has been lost or degenerated over the first access link, i.e. leg A, or somewhere between the first buffering network entity IWU1 and the third mobile host MH3, i.e. leg B. Only the loss or degeneration of a data packet DP in leg B will trigger data flow control algorithms and thus decrease the data packet rate from the first mobile host MH1.
Such loss or degrading is most likely to depend on poor quality of the third access link between the third base station and the third mobile host MH3, but since this fact is impossible to verify, data flow control algorithms will nevertheless be activated.
If, regardless of the communication case, a buffering network entity over a certain period of time receives more data packets via one of its interfaces than what can be delivered over its other interface, the superfluous data packets will be discarded by the buffering network entity. The buffering network entity will then feed back status messages S(X, Y) to the sending host indicating that the superfluous data packets were lost in the packet switched network. This will in its turn trigger at least one data flow control algorithm, which directs the sending host to reduce its data packet rate. The rate will thus be gradually reduced until it meets the transmission capacity of the limiting interface at the buffering network entity.
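One possible, simplified way to express this overload behaviour is sketched below; the rate figures, function name and reporting interface are assumptions, and a real entity would more likely police a queue than a batch of packets.

```python
# Sketch of the overload behaviour described above: if packets arrive faster
# than the outgoing interface can deliver them, the surplus is discarded and
# reported as lost "in the packet switched network" so that the sender backs
# off. Rates are in packets per second; all names are assumptions.
def police_rate(incoming_packets, inbound_rate_pps, outbound_rate_pps, notify_sender):
    """Return the packets that fit the outgoing capacity; report the rest as network loss."""
    if inbound_rate_pps <= outbound_rate_pps:
        return incoming_packets
    keep_fraction = outbound_rate_pps / inbound_rate_pps
    cutoff = int(len(incoming_packets) * keep_fraction)
    for seq, _payload in incoming_packets[cutoff:]:
        # Declared lost in the packet switched network: this triggers the
        # sender's flow control and gradually lowers its rate.
        notify_sender(seq, ok=False, lost_on="network")
    return incoming_packets[:cutoff]
```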
Figure 5 illustrates an embodiment of the inventive method being carried out in a buffering network entity, when data packets are transmitted from a sending host to a receiving host via the buffering network entity. The figure illustrates the possible fate of a particular data packet DP or a certain group of data packets DPs when passing through the buffering network entity. It is nevertheless important to bear in mind that, since the reliable sliding window transport protocol allows many data packets to be outstanding in the network between the sending and the receiving host, the buffering network unit will at any given moment be carrying out many of the following steps simultaneously and in parallel. The procedure will thus be at different steps with regard to different data packets DPs.
In a first step 500 the buffering network entity receives one or more data packets DP(s) from the sending host. A following step 505 checks whether the data packet(s) DP(s) is(are) correct. If so, the procedure continues to step 520.
Otherwise, a request is made in a step 510 for retransmission of the incorrectly received data packet(s) DP(s). A status message indicating the erroneous reception of the data packet(s) DP(s) is fed back to the sending host in a subsequent step 515. The procedure then returns to step 505 in order to determine whether the retransmitted data packet(s) DP(s) arrive(s) correctly.
In practice the steps 510 and 515 are most efficiently carried out as one joint step, where the status message per se is interpreted as a request for retransmission. The steps 510 and 515 may, of course, also be carried out in reverse order or in parallel. Their relative order is nevertheless irrelevant for the result.
In case the buffering network entity in the steps 500 and 505 receives data packets DP(s) in an out-of-sequence order, so that a loss of earlier data packet(s) DP(s) is likely to have occurred, a retransmission of the assumedly lost data packet(s) is requested in the steps 510 and 515.
The step 520 checks whether the connection at the buffering network entity's output interface has enough bandwidth BW, i.e. can transport data packets DPs at least as fast as data packets DPs arrive at the buffering network entity's input interface. In case of insufficient bandwidth BW one or more data packets DP(s) are discarded in a step 525. The procedure then returns to the step 510, where retransmission is requested for the discarded data packet(s). If, on the other hand, it is found in the step 520 that the output interface has sufficient bandwidth BW, the procedure continues to a step 530.
In the step 530 a status message indicating correct reception of the data packet(s) DP(s) may be fed back to the sending host. If there is a packet switched network between the sending host and the buffering network entity, a status message regarding the result of the transmission is regularly fed back from the buffering network entity to the sending host. If, however, there is no packet switched network between the sending host and the buffering network entity (but e.g. an access link) the step 530 may be empty. Step 530 namely provides information to the sending host, which is necessary to control the data flow from the sending host, and such information need only be communicated if the data packet(s) DP(s) has/have been transmitted over a packet switched network.
In an ensuing step 535 the correctly received data packet(s) DP(s) is/are stored in the buffering network entity. A thereafter following step 540 forwards the data packet(s) DP(s) to the receiving host. A step 545 checks whether a status message relating to the sent data packet(s) DP(s) has been returned from the receiving host. If such a status message reaches the buffering network entity before the expiration of a retransmission timer, a step 555 checks whether the status message indicates correct or incorrect reception of the data packet(s) DP(s). A step 550 determines if the retransmission timer has expired, and in case the timer is still running the procedure is looped back to the step 545.
If, however, no status message has been received when the retransmission timer expires, or if it is found in step 555 that one or more data packets DP(s) have been received incorrectly, the data packet(s) DP(s) in question are retransmitted in a step 560. A following step 565 returns a status message S(X, Y) to the sending host indicating the fact that retransmission of the data packet(s) DP(s) was necessary. If there is a packet switched network between the buffering network entity and the receiving host, flow control algorithms triggered by the status message S(X, Y) will cause the sending host to reduce its data flow; otherwise the data flow from the sending host will not be affected.
In alternative embodiments of the inventive method, the steps 560 and 565 are carried out in reverse order or in parallel.
Their relative order is irrelevant for the effect of the steps.
After the step 565 the procedure returns to the step 545, which checks whether a status message for the retransmitted data packet(s) DP(s) has been received. As soon as a status message indicating correct reception of the data packet(s) has reached the buffering network entity, the procedure continues for this/these data packet(s) DP(s) from the step 545, via the step 555, to a final step 570. This final step 570 feeds back a status message S(X, Y) to the sending host, which indicates that the data packet(s) has/have been received correctly by the receiving host.
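The figure-5 walk-through can be condensed into a per-packet sketch in which the step numbers from the text appear as comments; the callbacks, the single-packet framing and the timeout handling are simplifying assumptions rather than the claimed implementation.

```python
# Sketch of the figure-5 flow for a single data packet, with the step numbers
# from the text kept as comments. The callbacks are assumed placeholders; the
# real entity runs many such procedures concurrently for different packets.
def handle_packet(seq, receive, request_retransmission, notify_sender_ok,
                  notify_sender_delivered, output_has_bandwidth, store, forward,
                  wait_for_receiver_status, rto=1.0, network_before_entity=True):
    while True:
        packet = receive(seq)                                    # step 500
        if packet is None or not packet.checksum_ok:             # step 505
            request_retransmission(seq)                          # steps 510/515
            continue
        if not output_has_bandwidth():                           # step 520
            request_retransmission(seq)                          # step 525: discard, ask again
            continue
        if network_before_entity:                                # step 530 (only needed if a
            notify_sender_ok(seq)                                # network precedes the entity)
        store(seq, packet)                                       # step 535
        forward(seq, packet)                                     # step 540
        while True:
            status = wait_for_receiver_status(seq, timeout=rto)  # steps 545/550
            if status is not None and status.ok:                 # step 555
                notify_sender_delivered(seq)                     # step 570: report end-to-end
                return                                           # delivery via S(X, Y)
            forward(seq, packet)                                 # step 560: local retransmit
            # step 565: a status message S(X, Y) would report that a
            # retransmission towards the receiver was necessary.
```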
Figure 6 depicts a block diagram over an arrangement according to the invention. A sending host, which may be either fixed or mobile, is here represented by a first general interface 600.
A receiving host, which likewise may be either fixed or mobile, is correspondingly represented by a second general interface 650. A buffering network entity IWU interfaces both the first and the second interface 600; 650.
SUBSTITUTE SHEET (RULE 26) WO 00/21231 PCT/SE99/01479 23 The buffering network entity IWU in its turn includes a means 610 for receiving data packets DP from the first interface 600, a storage means 620 for storing data packets DP, a means 630 for transmitting data packets DP over the second interface 650 and a processing means 640 for generating status messages S(X, Y) and controlling the over all operation of the entity buffering network IWU in accordance with the method described in connection with figure 5 above.
Apart from receiving data packets DP, the receiving means 610 also determines whether the data packets DP are received correctly and generates, in response to the condition of the received data packets DP, a first status message. Furthermore, the receiving means 610 may have to discard data packets DP on instruction from the processing means 640. This happens if the processing means has found that the bandwidth capacity over the interface 600 exceeds the capacity over the interface 650. The discarded data packets DP are regarded as lost data packets. A first status message indicating such a loss is therefore generated and returned after the data packets have been discarded in the receiving means 610.
This first status message is returned to the sending host. The first status message is also forwarded to the processing means 640. Furthermore, the means 610 generates requests for retransmission of data packets DP whenever that becomes necessary. As mentioned earlier, the status message itself may, of course, be interpreted as a request for retransmission at the sending host. Data packets DP having been received correctly by the means 610 are passed on to the storage means 620 for temporary storage.
The means for transmitting 630 retrieves data packets DP from the storage means 620 and sends them over the interface 650 to the receiving host. In response to the sent data packets DP the receiving host returns a second status message, e.g. in the form of a positive or a negative acknowledgement message Ack±, indicating the condition of the data packets DP at the receiving host. The processing means 640 receives the second status message Ack± and generates a third and combined status message S(X, Y), which is determined from the content of the first status message and the second status message Ack±. The third status message S(X, Y) thus gives a total representation of how successfully a certain data packet or a certain group of data packets was passed over the respective communication legs before and after the buffering network entity. The third status message S(X, Y) is transmitted from the processing means back to the sending host over the interface 600.
A particular data packet DP having been temporarily stored in the storage means 620 may be deleted as soon as a second status message Ack+ has been received for the data packet DP indicating that the data packet has been received correctly by the receiving host. The data packet DP may of course also be deleted at any later, and perhaps more suitable, moment.
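How the processing means 640 might combine the two incoming status messages into the third, combined status message S(X, Y) is sketched below; the tuple encoding and flag semantics are assumptions for illustration only.

```python
# Sketch of how the processing means 640 might combine the first status
# message (generated locally at the receiving means 610) with the second
# status message Ack± from the receiving host into the combined message
# S(X, Y). The tuple encoding is an assumption.
def combine_status(first_ok_at_entity: bool, second_ok_at_receiver: bool):
    """Return (X, Y): X describes the leg before the entity, Y the leg after it."""
    x = "ok" if first_ok_at_entity else "lost_before_entity"
    y = "ok" if second_ok_at_receiver else "lost_after_entity"
    return (x, y)

# A packet that reached the entity but was lost towards the receiving host:
print(combine_status(True, False))   # -> ('ok', 'lost_after_entity')
# Whether the sender reduces its rate then depends on which of the two legs
# is the packet switched network, as described in the examples above.
```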
The invention being thus described, it will be obvious that the same may be varied in many ways. Such variations are not to be regarded as a departure from the scope of the invention, and all such modifications as would be obvious to one skilled in the art are intended to be included within the scope of the following claims.

Claims (10)

1. Method for communicating data packets (DP) between a first host (305) and second host (330) over at least one access link (310) and at least one packet switched network (325), wherein a buffering network entity (320) interfaces both the access link (310) and the packet switched network (325), the packet switched network (325) offers a connectionless delivery of data packets and the first host (305) and second host (330) communicate data packets (DP) according to a reliable sliding window transport protocol, through which a receiving host (650) feeds back data packet status information (Ack±) indicating the condition of received data packets and lost or erroneously received data packets (DP) are retransmitted from a sending host (600), the method comprising the steps of: receiving (500) data packets (DP) from the sending host (600) in the buffering network entity (IWU), notifying (520, 525) the sending host (600) of the condition of the received data packets (DP), storing (530) in the buffering network entity (IWU) each correctly received data packet (DP), forwarding (535) each stored data packet (DP) from the buffering network entity (IWU) to the receiving host (650), and performing retransmission (555) of a stored data packet (DP) from the buffering network entity (IWU) to the receiving host (650), in case the data packet (DP) is lost before reaching the receiving host (650) or if the data packet (DP) is received erroneously by the receiving host (650), c h a r a c t e r i s e d in that the method further comprises the step of: returning (560, 565) a status message to the sending host (600), which for each lost or erroneously received data packet (DP) indicates whether the data packet (DP) was lost or erroneously received over the access link (310) or in the packet switched network (325).
2. Method according to claim 1, c h a r a c t e r i s e d in further comprising the steps of: checking in the sending host (600) the status message and performing congestion control actions in the sending host (600) only if the data packet (DP) was lost or erroneously received in the packet switched network (325).
3. Method according to claim 1 or 2, c h a r a c t e r i s e d in that: the sending host (600) is a fixed host (330), the receiving host (650) is a mobile host (305), the reliable sliding window protocol is of TCP-type, and the buffering network entity (320) receives data packets (DP) from the sending host (600) via the packet switched network (325), notifies the sending host (600) of which data packets (DP) that have been received correctly at the buffering network entity (320) according to a selective acknowledgement algorithm, and forwards correctly received data packets (DP) to the receiving host (650) according to a local retransmission algorithm.
4. Method according to claim 1 or 2, c h a r a c t e r i s e d in that: the sending host (600) is a fixed host (330), the receiving host (650) is a mobile host (305), the reliable sliding window protocol is of ISO 8073-type, the buffering network entity (320): receives data packets (DP) from the sending host (600) via the packet switched network (325), notifies the sending host (600) of which data packets (DP) that have been received correctly by the buffering network entity (320) according to a TP4 algorithm, and forwards correctly received data packets (DP) to the receiving host (650) according to a local retransmission algorithm.
5. Method according to claim 1 or 2, c h a r a c t e r i s e d in that the sending host (600) is a mobile host (305), the receiving host (650) is a fixed host (330), the buffering network entity (320): receives data packets (DP) from the sending host (600), notifies the sending host (600) of which data packets (DP) that have been received correctly by the buffering network entity (320), and forwards correctly received data packets (DP) to the receiving host (540) according to a selective acknowledgement algorithm.
6. Method according to claim 1 or 2, c h a r a c t e r i s e d in that the sending host (600) is a mobile host (305), the receiving host (650) is a mobile host (350), a first buffering network entity (320): receives data packets (DP) from the sending host (600), notifies the sending host (600) of which data packets (DP) that have been received correctly, forwards correctly received data packets (DP) to the packet switched network (325), and a second buffering network entity (335): receives data packets (DP) from the packet switched network (325), and forwards the received data packets (DP) to the receiving host (650) according to a local retransmission algorithm.
7. Method according to any of the claims 1 to 6, c h a r a c t e r i s e d in that given the status message the buffering network entity (IWU) performing the steps of: estimating a first rate at which data packets (DP) are being communicated through the at least one access link (310), estimating a second rate at which data packets (DP) may be communicated through the at least one packet switched network (325), and discarding superfluous data packets (DP) if the first rate exceeds the second rate.
8. Method according to any of the claims 1 to 7, c h a r a c t e r i s e d in that given the status message the sending host (600) performing the step of: estimating a number of data packets (DP) being outstanding in the at least one packet switched network (325).
9. A system for communicating data packets (DP) between a first host and a second host according to a reliable sliding window transport protocol, through which a receiving host (650) generates status information indicating the condition of received data packets and lost or erroneously received data packets (DP) are retransmitted from a sending host (600), the system comprising: at least one access link to which the first host is connected, at least one packet switched network, offering a connectionless delivery of data packets (DP), to which the second host is connected, and at least one buffering network entity (IWU), which interfaces both the access link and the packet switched network, characterised in that the buffering network entity (IWU) comprises: a means (610) for receiving data packets, generating a first status message indicating whether the received or missing data packets (DP) must be retransmitted or not, and returning the first status message to the sending host (600), a means (620) for storing data packets (DP) being received correctly by the means (610) for receiving, a means (630) for retrieving data packets from the means (620) for storing and for transmitting the data packets (DP) to the receiving host (650), and a processing means (640) for receiving the first status message from the means (610) for receiving and a second status message (Ack±) from the receiving host (650), generating a third status message in response to the first and the second (Ack±) status messages, and transmitting the third status message to the sending host (600).
10. A system according to claim 9, characterised in that the third status message indicates whether a particular data packet (DP) has been lost or erroneously received over the access link or in the packet switched network.
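The cooperation of the receiving means (610), the store (620), the forwarding means (630) and the processing means (640) in claims 9 and 10 can be sketched as below. The message formats and class names are invented; the only point carried over from the claims is that the processing means merges what was seen on the network side (first status message) with the receiver's Ack± reports (second status message) into a third status message that also tells the sender where a packet went missing.

```python
from dataclasses import dataclass

@dataclass
class ThirdStatusMessage:
    """Claim 10 sketch: reports not only that a packet is missing, but where."""
    seq: int
    lost: bool
    lost_on_access_link: bool   # False implies the loss occurred in the packet switched network

class ProcessingMeans:
    """Rough model of the processing means (640); all names are illustrative."""
    def __init__(self, send_to_sending_host):
        self.received_from_network = set()   # from the first status message (means 610)
        self.acked_by_receiver = set()       # from the second status message (Ack+)
        self.send_to_sending_host = send_to_sending_host

    def on_first_status(self, received_ok):
        self.received_from_network.update(received_ok)

    def on_second_status(self, seq, positive):
        if positive:
            self.acked_by_receiver.add(seq)

    def report(self, seq):
        if seq in self.acked_by_receiver:
            msg = ThirdStatusMessage(seq, lost=False, lost_on_access_link=False)
        elif seq in self.received_from_network:
            # The IWU received it from the network but the receiver did not: access-link loss.
            msg = ThirdStatusMessage(seq, lost=True, lost_on_access_link=True)
        else:
            # Never seen on the network side: lost in the packet switched network.
            msg = ThirdStatusMessage(seq, lost=True, lost_on_access_link=False)
        self.send_to_sending_host(msg)
```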
AU58929/99A 1998-10-07 1999-08-27 Method and system for data communication Ceased AU751285B2 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
SE9803423 1998-10-07
SE9803423A SE513327C2 (en) 1998-10-07 1998-10-07 Systems and method of data communication
PCT/SE1999/001479 WO2000021231A2 (en) 1998-10-07 1999-08-27 Method and system for data communication

Publications (2)

Publication Number Publication Date
AU5892999A AU5892999A (en) 2000-04-26
AU751285B2 true AU751285B2 (en) 2002-08-08

Family

ID=20412869

Family Applications (1)

Application Number Title Priority Date Filing Date
AU58929/99A Ceased AU751285B2 (en) 1998-10-07 1999-08-27 Method and system for data communication

Country Status (6)

Country Link
EP (1) EP1119954A2 (en)
JP (1) JP2002527935A (en)
AU (1) AU751285B2 (en)
CA (1) CA2346715A1 (en)
SE (1) SE513327C2 (en)
WO (1) WO2000021231A2 (en)

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB9929882D0 (en) * 1999-12-18 2000-02-09 Roke Manor Research TCP/IP enhancement for long latency links
GB0018119D0 (en) * 2000-07-24 2000-09-13 Nokia Networks Oy Flow control
JP3377994B2 (en) 2000-11-14 2003-02-17 三菱電機株式会社 Data distribution management device and data distribution management method
GB2375001A (en) * 2001-04-06 2002-10-30 Motorola Inc Re-transmission protocol
WO2003040735A1 (en) 2001-11-07 2003-05-15 Cyneta Networks Inc. Resource aware session adaptation system and method for enhancing network throughput
WO2003041334A1 (en) 2001-11-07 2003-05-15 Cyneta Networks, Inc. Gb PARAMETER BASED RADIO PRIORITY
SE0103853D0 (en) * 2001-11-15 2001-11-15 Ericsson Telefon Ab L M Method and system of retransmission
EP1472831A1 (en) * 2002-01-25 2004-11-03 Cyneta Networks, Inc. Packet retransmission in wireless packet data networks
WO2003069870A2 (en) * 2002-02-15 2003-08-21 Koninklijke Philips Electronics N.V. Modifications to tcp/ip for broadcast or wireless networks
US8533307B2 (en) 2002-07-26 2013-09-10 Robert Bosch Gmbh Method and device for monitoring a data transmission
DE10234348B4 (en) * 2002-07-26 2018-01-04 Robert Bosch Gmbh Method and device for monitoring a data transmission
US7385926B2 (en) * 2002-11-25 2008-06-10 Intel Corporation Apparatus to speculatively identify packets for transmission and method therefor
US7693058B2 (en) * 2002-12-03 2010-04-06 Hewlett-Packard Development Company, L.P. Method for enhancing transmission quality of streaming media
US20040264368A1 (en) * 2003-06-30 2004-12-30 Nokia Corporation Data transfer optimization in packet data networks
KR101086397B1 (en) * 2003-12-02 2011-11-23 삼성전자주식회사 IP packet error handling apparatus and method using the same, and computer readable medium on which program executing the method is recorded
EP1681792B1 (en) * 2005-01-17 2013-03-13 Nokia Siemens Networks GmbH & Co. KG Secure data transmission in a multi-hop system
US7787463B2 (en) * 2006-01-26 2010-08-31 Broadcom Corporation Content aware apparatus and method
US8238242B2 (en) 2006-02-27 2012-08-07 Telefonaktiebolaget Lm Ericsson (Publ) Flow control mechanism using local and global acknowledgements
GB0812770D0 (en) 2008-07-11 2008-08-20 Zbd Displays Ltd A display system
CN101631065B (en) 2008-07-16 2012-04-18 华为技术有限公司 Method and device for controlling congestion of wireless multi-hop network
US10374757B2 (en) 2014-11-06 2019-08-06 Nokia Solutions And Networks Oy Improving communication efficiency

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
BALAKRISHNAN H. ET AL.: "IMPROVING RELIABLE TRANSPORT AND HANDOFF PERFORMANCE IN CELLULAR WIRELESS NETWORKS", WIRELESS NETWORKS, VOL. 1, NO. 4, 1 DECEMBER 1995 *

Also Published As

Publication number Publication date
SE9803423L (en) 2000-04-08
SE513327C2 (en) 2000-08-28
WO2000021231A3 (en) 2000-07-27
JP2002527935A (en) 2002-08-27
WO2000021231A2 (en) 2000-04-13
EP1119954A2 (en) 2001-08-01
CA2346715A1 (en) 2000-04-13
AU5892999A (en) 2000-04-26
SE9803423D0 (en) 1998-10-07

Similar Documents

Publication Publication Date Title
AU751285B2 (en) Method and system for data communication
US7600037B2 (en) Real time transmission of information content from a sender to a receiver over a network by sizing of a congestion window in a connectionless protocol
US7277390B2 (en) TCP processing apparatus of base transceiver subsystem in wired/wireless integrated network and method thereof
EP1671424B1 (en) Fec-based reliability control protocols
US7835273B2 (en) Method for transmitting data in mobile ad hoc network and network apparatus using the same
US20040052234A1 (en) Method and system for dispatching multiple TCP packets from communication systems
KR100785293B1 (en) System and Method for TCP Congestion Control Using Multiple TCP ACKs
EP1195966B1 (en) Communication method
EP2681880B1 (en) Controlling network device behavior
US20030023746A1 (en) Method for reliable and efficient support of congestion control in nack-based protocols
JP2013507826A (en) Efficient application layer automatic retransmission request retransmission method for reliable real-time data streaming in networks
WO2000055640A1 (en) Dynamic wait acknowledge for network protocol
US8018846B2 (en) Transport control method in wireless communication system
JP2006506866A (en) Data unit transmitter and control method of the transmitter
CN111193577A (en) Network system communication method and communication device using transmission timeout
KR100392169B1 (en) Method and apparatus for conveying data packets in a communication system
WO2018155406A1 (en) Communication system, communication device, method, and program
JP4531302B2 (en) Packet relay apparatus and method thereof
KR100913897B1 (en) Method for controlling congestion of TCP for reducing the number of retransmission timeout
JP2006237968A (en) System and method for communication
CN116566920A (en) Data transmission control method and related device
JP2003198612A (en) File transferring method in packet communication network
Zhizhao et al. Improving TCP performance over wireless links using link-layer retransmission and explicit loss notification
Allman M. (Editor), Dawkins S., Glover D., Griner J., Henderson T. et al.: Internet Engineering Task Force, INTERNET DRAFT, File: draft-ietf-tcpsat-res-issues-06.txt
JPH10200556A (en) Packet communication system

Legal Events

Date Code Title Description
FGA Letters patent sealed or granted (standard patent)