WO2007004885A1 - A method for buffering data - Google Patents

A method for buffering data

Info

Publication number
WO2007004885A1
WO2007004885A1 (PCT/NO2005/000367)
Authority
WO
WIPO (PCT)
Prior art keywords
packets
memory
buffer
packet
data
Prior art date
Application number
PCT/NO2005/000367
Other languages
French (fr)
Inventor
Geir Robert Svelmoe
Original Assignee
Telefonaktiebolaget Lm Ericsson (Publ)
Priority date
Filing date
Publication date
Application filed by Telefonaktiebolaget Lm Ericsson (Publ) filed Critical Telefonaktiebolaget Lm Ericsson (Publ)
Priority to EP05799865A priority Critical patent/EP1897287A1/en
Publication of WO2007004885A1 publication Critical patent/WO2007004885A1/en

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 — Traffic control in data switching networks
    • H04L 47/10 — Flow control; Congestion control
    • H04L 47/29 — Flow control; Congestion control using a combination of thresholds
    • H04L 47/30 — Flow control; Congestion control in combination with information about buffer occupancy at either end or at transit nodes
    • H04L 49/00 — Packet switching elements
    • H04L 49/90 — Buffering arrangements

Abstract

A method for controlling the content of a circular buffer memory, including: receiving a number of data packets; storing said packets in said buffer memory; checking whether N+S>2k-2, N being the number of new packets received in the buffer, S the number of packets scheduled for transmission, and k the maximum input rate in packets per polling; if so, dropping the oldest unscheduled packets until N+S≤2k-2; scheduling k-2 packets for transmission; receiving more packets, and so on; each step being executed during subsequent pollings of the memory.

Description

A METHOD FOR BUFFERING DATA
Field of the invention
The present invention relates to data communication in general, where data is transferred between two parties, the transmission rate of the one party being different from the rate at which the receiving party may receive the data, the data stream being buffered in a buffer in an interface bridging the parties.
Technical background
Most traffic nodes include several network interfaces for traffic following different communication protocols. A node may receive data packets at one interface for forwarding on another interface following another protocol. Often the transmission rate at the first interface differs from the transmission rate at the other interface. A problem may arise when the node receives traffic at a faster rate on the first interface than it can deliver on the other one. Commonly, nodes include buffers on all interfaces for storing packets awaiting forwarding. Under heavy network load, with the node receiving a steady stream of data, the buffer on the receiving interface will eventually fill up. There is then a high probability that at least one fragment will be dropped, which may make the packet worthless to the destination. Depending on the protocol carried in the packet, the whole packet may then be retransmitted, and a fragment may once more be dropped.
From Chinese patent application CN-1549108 a communication system is known that uses a single buffer for storing data in transfer. This so-called zero-copy message queue is realized as a circular buffer area shared by both the input and output interface.
Solutions are also known in which the receiving node issues pause frames to the transmitting party, causing it to gracefully stop its transmitter for a period of time specified in the pause frame.
Summary of the invention
The present invention relates to an improved method for buffering packets in a communication system, including the steps of: receiving a number of data packets; storing said packets in said buffer memory; checking whether N+S>2k-2, N being the number of new packets received in the buffer, S the number of packets scheduled for transmission, and k the maximum input rate in packets per polling; if so, dropping the oldest unscheduled packets until N+S≤2k-2; scheduling k-2 packets for transmission; receiving more packets, and so on; each step being executed during subsequent pollings of the memory, as claimed in the appended claim 1.
Brief overview of the drawings
The inventive method will now be explained in detail in reference to the appended drawings, in which:
Fig. 1 is a schematic overview over an exemplary traffic node, in which the invention may be implemented,
Fig. 2 shows the symbols used in the following figures,
Figs. 3#1 - 3#6 show the content of a buffer before and after the scheduling algorithm has been executed,
Fig. 4 illustrates how packets are wrapped in the receive buffer.
Detailed description
The invention will now be described by way of an example, implemented in an Ethernet device controlling the flow of data on an Ethernet interface and an HDLC interface in a traffic node in a cellular network system. This particular setup, illustrated in Fig. 1, allows the two interfaces to write/read data from the same memory and thus provides zero-copy forwarding of data between the two interfaces.
Structural description
As illustrated in Fig. 1, the Ethernet device ETD includes three ports: an Ethernet port eth0 for communication with the network inside the node (siteLAN), an Ethernet traffic port ep0 connected to a physical Ethernet interface FCC1, and an HDLC port eoe0 (Ethernet over E1) connected to an external HDLC interface FCC3. The object of the device is to bridge traffic between the ports. The bold lines illustrate the bi-directional data flow among the several logical modules in the Ethernet device ETD. ep0 handles all incoming data on FCC1 and passes the data on to a circular buffer, to be described later, which in turn passes the data on to eoe0 and/or eth0 depending on the type of data received. Incoming data on the HDLC interface FCC3 is forwarded only to ep0. The same goes for outgoing data on the eth0 interface, whose data is only forwarded to ep0. Normally the traffic rates on the Ethernet lines are higher than on the HDLC line (100 Mbit/s versus 31 Mbit/s). This may cause the ep0 outgoing queue to overflow, which implies that the device must be able to drop outgoing packets to ensure correct operation.
To obtain better performance the ETD will control the data flow on both the Ethernet interface and on the HDLC interface. This will allow the two interfaces to write/read data to/from the same data memory and thus provide zero copy forwarding.
The receive handling on one interface is highly coupled to the transmit handling on the other interface to obtain efficient buffer handling. The receive and transmit handling on the same interface operate somewhat more independently.
Functions and performance
To be able to forward packets between two interfaces of different speed the ETD must take care of the following:
• Try to avoid that the circular buffer becomes completely filled, in order to prevent loss of data (for statistics).
• Do not send packets that are older than one second.
• Packets should be processed in the order they are received to avoid buffer fragmentation.
To avoid copying of data the idea is to use the receive data buffer of one interface as the transmit buffer of the other.
Although the real buffer used is a circular buffer with a finite size, the buffer handling algorithm is best explained using an infinite buffer as an example.
For the purpose of the example we assume the following:
• All packets are of the same size
• Receive rate is constant at five packets per poll (1 ms interval).
• Transmit rate is constant at 1½ packets per poll (1 ms interval).
• Must schedule 2 ms worth of data in addition to the packet currently in transmission.
• Packets scheduled may not be older than eight packet durations at the time of scheduling.
Figure 2 shows the symbols used to track buffer states. In each of the following figures 3#1 - 3#6 the buffer state is shown before and after the scheduling algorithm has been executed.
Poll #1:
At this poll we currently have no data in transmission and we have received five new packets. Scheduling three packets ensures that the transmitter is occupied at least until the next poll.
Poll #2:
Another five new packets have been received, and the transmitter has completed the transmission of one packet and started on the next one. The space occupied by the transmitted packet is released.
Since we do not know how much of the currently transmitting packet remains in the buffer, we do not consider this packet in the scheduling calculations. The current buffer situation does not indicate an excessive amount of received data, so we schedule the next two packets.
Poll #3:
Another five new packets have been received, and the transmitter has completed the transmission of two packets and started on the next one. The current buffer situation indicates an excessive amount of received data, so we have to drop three packets and then schedule the next two. In the figure we can count that the sum of S and N marked packets is eight. The locations corresponding to the dropped packets are not set to empty before all the preceding packets have been set to empty.
Poll #4:
Another five new packets have been received, and the transmitter has completed the transmission of one packet and started on the next one. The current buffer situation indicates that we once again have an excessive amount of received data, so we have to drop four packets and then schedule the next one. In the figure we can count that the sum of S and N marked packets is eight.
Poll #5:
At this point we see that the pattern starts to repeat itself with the given input and output rates. Another five new packets have been received, and the transmitter has completed the transmission of two packets and started on the next one. The current buffer situation indicates an excessive amount of received data, so we have to drop three packets and then schedule the next two. In the figure we can count that the sum of S and N marked packets is eight.
Poll #6:
Another five new packets have been received, and the transmitter has completed the transmission of one packet and started on the next one. The current buffer situation indicates an excessive amount of received data, so we have to drop four packets and then schedule the next one. In the figure we can count that the sum of S and N marked packets is eight.
The algorithm used for handling the buffer may be expressed in a general way as:
Receive packets,
Check if N+S>2k-2, k being the maximum input rate in packets per polling,
If so drop the oldest unscheduled packets until N+S≤2k-2,
Schedule k-2 packets for transmission,
Receive packets, and so on.
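Under the assumptions of the worked example (k = 5 packets received per 1 ms poll, a transmit rate of 1½ packets per poll, drop threshold 2k-2 = 8), the polling loop above can be sketched as a small simulation. This is an illustrative reconstruction, not the patent's implementation: the queue model, the alternating 1/2 completion pattern, and the function names are assumptions drawn from the worked example.

```python
from collections import deque

def simulate(polls, k=5):
    """Reenact the polling algorithm of the worked example: k packets
    arrive per poll, the transmitter drains 1.5 packets/ms (completions
    alternate 1, 2), and the drop threshold is 2k-2 buffered packets."""
    tx = None            # packet in transmission: not counted in S
    sched = deque()      # scheduled packets (S), oldest first
    new = deque()        # unscheduled packets (N), oldest first
    next_id, history = 0, []
    for poll in range(1, polls + 1):
        for _ in range(k):                    # receive k new packets
            new.append(next_id)
            next_id += 1
        done = 0 if poll == 1 else (1 if poll % 2 == 0 else 2)
        for _ in range(done):                 # transmitter progress
            tx = sched.popleft() if sched else None
        dropped = 0
        while len(new) + len(sched) > 2 * k - 2:
            new.popleft()                     # drop oldest unscheduled
            dropped += 1
        scheduled = 0
        while len(sched) < k - 2 and new:     # top S back up to k-2
            sched.append(new.popleft())
            scheduled += 1
        if tx is None and sched:              # keep transmitter busy
            tx = sched.popleft()
        history.append((dropped, scheduled, len(sched) + len(new)))
    return history
```

Running six polls reproduces the pattern of the figures: no drops in the first two polls, then alternating drops of three and four packets, with the sum of S and N marked packets settling at eight.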
We emphasize that the buffering method outlined above is an example only. It has a much wider application, and may in fact be used with any buffer where the output transmission rate may be lower than the input rate.
Handling of packet wrapping in receive buffer
Since the receive buffer is not a true circular buffer, a packet may appear with its first part located at the end of the physical buffer memory and its second part at the start of the physical buffer memory. This means that one packet in every round of the receive buffer must be handled as two fragments, and that all code handling packets must check for, and handle, wrapped packets. The method is illustrated in Fig. 4.
To simplify most of the packet handling throughout the driver, the ETD will, on the receiving side, ensure that the packet is located in contiguous physical memory. This is done by extending the receive buffer with 1500 bytes after the memory pointed to by the very last buffer descriptor. When such a split packet is received, the receive functionality copies the second part of the packet from the start of the receive buffer to the extra memory adjacent to the end of the receive buffer. This places the packet data in contiguous physical memory.
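The unwrap step can be illustrated with a short sketch. This is hypothetical Python with a toy ring size instead of real descriptor rings; the function name and layout are assumptions, and only the copy-to-tail-extension idea comes from the text above.

```python
def make_contiguous(buf, start, length, ring_size):
    """buf holds the circular receive area in its first ring_size bytes,
    followed by a tail extension at least one maximum-sized packet long
    (1500 bytes for Ethernet). If the packet [start, start+length)
    wraps past ring_size, copy the wrapped tail from the start of the
    ring into the extension, so the packet becomes one contiguous run."""
    end = start + length
    if end > ring_size:                   # packet wraps around the ring
        wrapped = end - ring_size         # bytes that landed at buf[0:]
        buf[ring_size:ring_size + wrapped] = buf[0:wrapped]
    return bytes(buf[start:end])
```

After the copy, callers can read the whole packet as one slice starting at its original offset, which is the point of the extension.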
The buffer descriptor that corresponds to the start of the packet is updated with the total length and with the status from the last buffer descriptor corresponding to the packet in question. Then all descriptors except the first one are "released". The same buffer descriptor manipulation is also applied to all the other packets, which are naturally in contiguous physical memory.
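The descriptor fixup can be sketched as follows. The field and function names are illustrative assumptions; the text only specifies that the first descriptor receives the total length and the last fragment's status, and that the remaining descriptors are released.

```python
from dataclasses import dataclass

@dataclass
class BufferDescriptor:
    length: int      # bytes covered by this descriptor
    status: int      # status word written by the receive hardware
    in_use: bool = True

def merge_descriptors(descs):
    """Fold a multi-fragment packet's descriptors into the first one:
    it gets the total packet length and the last fragment's status,
    and every other descriptor is released back to the receive ring."""
    first, last = descs[0], descs[-1]
    first.length = sum(d.length for d in descs)
    first.status = last.status
    for d in descs[1:]:
        d.in_use = False     # "released" for rewriting
    return first
```

For a packet covered by a single descriptor the same call is a no-op apart from rewriting its own length and status, so one code path serves both cases.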
The extra work implied by the packet copying is at most 1500 bytes once every second. As such this is considered to be a small sacrifice compared to the expected reduction in packet handling complexity.

Claims

1. A method for controlling the content of a circular buffer memory, characterized in that said method includes the steps of:
• receiving a number of data packets, storing said packets in said buffer memory,
• check if N+S>2k-2, N being new packets received in the buffer, S being packets scheduled for transmission, k being the maximum input rate in packets per polling,
• if so drop the oldest unscheduled packets until N+S≤2k-2,
• schedule k-2 packets for transmission,
• receive more packets, and so on,
each step being executed during subsequent pollings of the memory.
2. A method as claimed in claim 1, wherein the circular memory is realized as a contiguous physical memory with a range of ordinary memory positions, where, when a first part of a packet has been written to the last ordinary memory position, the second part of the packet will be written to the first ordinary memory position, the memory including a number of extra memory positions following the last ordinary memory position, said method including the additional step of copying the second part of said packet to the extra memory position(s).
3. A method as claimed in claim 2, wherein the memory includes a buffer descriptor field containing buffer descriptors, said method including the additional steps of updating the descriptor for the first part of the packet with the total length of the buffer and a status from the last descriptor corresponding to the last part of the packet, and releasing all other buffer descriptors for rewriting except the updated descriptor.
PCT/NO2005/000367 2005-06-30 2005-09-30 A method for buffering data WO2007004885A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP05799865A EP1897287A1 (en) 2005-06-30 2005-09-30 A method for buffering data

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
NO20053190A NO20053190D0 (en) 2005-06-30 2005-06-30 Data buffering procedure.
NO20053190 2005-06-30

Publications (1)

Publication Number Publication Date
WO2007004885A1 true WO2007004885A1 (en) 2007-01-11

Family

ID=35295312

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/NO2005/000367 WO2007004885A1 (en) 2005-06-30 2005-09-30 A method for buffering data

Country Status (3)

Country Link
EP (1) EP1897287A1 (en)
NO (1) NO20053190D0 (en)
WO (1) WO2007004885A1 (en)

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ELLOUMI O ET AL: "Issues in multimedia scaling: the MPEG video streams case", EMERGING TECHNOLOGIES AND APPLICATIONS IN COMMUNICATIONS, 1996. PROCEEDINGS., FIRST ANNUAL CONFERENCE ON PORTLAND, OR, USA 7-10 MAY 1996, LOS ALAMITOS, CA, USA,IEEE COMPUT. SOC, US, 7 May 1996 (1996-05-07), pages 46 - 49, XP010164870, ISBN: 0-8186-7585-3 *
KESSELMAN, Z. LOTKER, Y. MANSOUR: "Buffer Overflow Management in QoS Switches", INTERNET ARTICLE, 31 October 2000 (2000-10-31), pages 1 - 19, XP002366784, Retrieved from the Internet <URL:http://citeseer.ist.psu.edu/rd/80986126%2C443150%2C1%2C0.25%2CDownload/http://citeseer.ist.psu.edu/cache/papers/cs/22054/http:zSzzSzwww.math.tau.ac.ilzSz%7EmansourzSzpaperszSz01-stoc.pdf/kesselman01buffer.pdf> [retrieved on 20060208] *
PAPAYIANNIS S ET AL: "Implications of proactive datagram caching on TCP performance in wireless/mobile communications", COMPUTER COMMUNICATIONS, ELSEVIER SCIENCE PUBLISHERS BV, AMSTERDAM, NL, vol. 26, no. 2, 1 February 2003 (2003-02-01), pages 79 - 89, XP004401225, ISSN: 0140-3664 *

Also Published As

Publication number Publication date
EP1897287A1 (en) 2008-03-12
NO20053190D0 (en) 2005-06-30

Similar Documents

Publication Publication Date Title
EP2050199B1 (en) Expedited communication traffic handling apparatus and methods
US8819265B2 (en) Managing flow control buffer
US8009569B2 (en) System and a method for maintaining quality of service through a congested network
JP2526013B2 (en) Packet switching communication method
AU772524B2 (en) Method and system for discarding and regenerating acknowledgment packets in ADSL communications
JP4658142B2 (en) Communication apparatus and frame control method
EP1882336B1 (en) Queue aware flow control
US7120113B1 (en) Systems and methods for limiting low priority traffic from blocking high priority traffic
US7948974B2 (en) Creating a low bandwidth channel within a high bandwidth packet stream
CN102132535B (en) Method for transferring data packets in communication network and switching device
US7106696B1 (en) Systems and methods for limiting the rates of data to/from a buffer
CN109714267B (en) Transmission control method and system for managing reverse queue
US8018851B1 (en) Flow control for multiport PHY
CN103731409B (en) The distributed measurement device of embedded type automobile acquisition equipment for accelerating with TCP
US20050138238A1 (en) Flow control interface
US20050259575A1 (en) Dynamic flow control support
US8838782B2 (en) Network protocol processing system and network protocol processing method
US7539142B1 (en) Ethernet LAN interface for T3 network
WO2007004885A1 (en) A method for buffering data
JP5094532B2 (en) Communication device
WO2007118594A1 (en) Method for aggregating a plurality of data packets into a unified transport data packet and machine for performing said method
JP2008113327A (en) Network interface device
JPH03135248A (en) Selection-type packet disusing system in packet exchange network
JPWO2002051101A1 (en) TCP / IP network system
CN115695295A (en) Scalable E2E network architecture and components supporting low latency and high throughput

Legal Events

Date Code Title Description
DPE2 Request for preliminary examination filed before expiration of 19th month from priority date (pct application filed from 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
REEP Request for entry into the european phase

Ref document number: 2005799865

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2005799865

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE