GB2321820A - A method for dynamically allocating buffers to virtual channels in an asynchronous network - Google Patents



Publication number
GB2321820A
GB2321820A (application GB9701011A)
Authority
GB
United Kingdom
Legal status: Granted
Application number
GB9701011A
Other versions
GB2321820B (en)
GB9701011D0 (en)
Inventor
Tadhg Creedon
Vincent Gavin
Anne O'Connell
Eugene O'Neill
Current Assignee: Anne O'Connell, Eugene O'Neill, 3Com Technologies Ltd
Original Assignee: Anne O'Connell, Eugene O'Neill, 3Com Technologies Ltd
Priority date
Filing date
Publication date
Application filed by Anne O'Connell, Eugene O'Neill and 3Com Technologies Ltd
Priority to GB9701011A
Publication of GB9701011D0
Publication of GB2321820A
Application granted
Publication of GB2321820B
Anticipated expiration
Status: Expired - Fee Related

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04Q SELECTING
    • H04Q 11/00 Selecting arrangements for multiplex systems
    • H04Q 11/04 Selecting arrangements for multiplex systems for time-division multiplexing
    • H04Q 11/0428 Integrated services digital network, i.e. systems for transmission of different types of digitised signals, e.g. speech, data, telecentral, television signals
    • H04Q 11/0478 Provisions for broadband connections
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 Data switching networks
    • H04L 12/54 Store-and-forward switching systems
    • H04L 12/56 Packet switching systems
    • H04L 12/5601 Transfer mode dependent, e.g. ATM
    • H04L 2012/5678 Traffic aspects, e.g. arbitration, load balancing, smoothing, buffer management
    • H04L 2012/5681 Buffer or queue management


Abstract

In an asynchronous transfer mode (ATM) network, addressable buffers are used to store data packets prior to their transmission over a virtual channel. A memory pool comprising a plurality of addressable buffers in the form of a shared dynamic random access memory (DRAM) is used. The total number of buffers in use is monitored and, provided this remains below a threshold level Tg, buffers will be allocated when required. When the threshold Tg is exceeded, the allocation of buffers to each virtual channel is restricted to the corresponding threshold level T1, T2, T3. Any new data for a channel operating above its threshold will be discarded or flow control initiated. This prevents heavily loaded, comparatively slow channels from using too many buffers and faster channels from becoming starved of memory. The thresholds can be programmed and may be selected on the basis of channel usage and priority. Each channel may operate one or more thresholds.

Description

METHOD AND APPARATUS FOR BUFFER MANAGEMENT IN VIRTUAL CIRCUIT SYSTEMS.
Cross-references to related applications This application relates to subject matter related to the subject matter of the following co-pending applications all of which have a common assignee: METHOD OF SUPPORTING UNKNOWN ADDRESSES IN AN ASYNCHRONOUS TRANSFER MODE INTERFACE - O'Connell et al. - Serial No. filed of even date herewith.
METHOD FOR DISTRIBUTING AND RECOVERING BUFFER MEMORIES IN AN ASYNCHRONOUS TRANSFER MODE EDGE DEVICE - O'Neill et al. Serial No. filed of even date herewith.
METHOD FOR ALLOCATING NON-BUS CHANNELS FOR MULTI-MEDIA TRAFFIC IN ASYNCHRONOUS TRANSFER MODE - O'Connell et al. - Serial No. filed of even date herewith.
METHOD FOR SELECTING VIRTUAL CHANNELS BASED ON ADDRESS PRIORITY IN AN ASYNCHRONOUS TRANSFER MODE DEVICE - Casey et al. Serial No. filed of even date herewith.
Background to the invention This invention relates to communication networks which can support a multiplicity of virtual circuit connections on a single physical connection or port, such as asynchronous transfer mode networks. The invention is particularly concerned with a problem that arises in a system wherein a single physical port can support a very large number of virtual channels. An example is an interface between a data bus which can receive data from a multiplicity of sources and transmit data from the sources on respective virtual channels in an asynchronous transfer mode.
In a typical multi-port communications system, the number of ports is predictable: either a fixed number or, in the case of a modular system, a predetermined number of ports that may be added. Available system memory may be centralised and divided by some algorithm amongst the existing ports, distributed physically with the ports, or a hybrid of both schemes.
In the case of a terminal for an asynchronous transfer mode network, i.e. a device which receives data packets from sources such as LANs and transmits them on virtual channels in ATM, a single physical port can support an arbitrary number of virtual channels - potentially several thousand. It is not immediately clear how best to allocate memory in this case; for example, it may be inefficient to allocate a fixed amount per channel, because in order to have adequate memory for each channel, the total memory requirement would become too great for many applications.
A typical solution to this problem is to use a memory pool which comprises a multiplicity of addressable buffers. The buffers may be constituted by a large dynamic random access memory. Data packets which require transmission are allotted, according to their virtual channel (VC) numbers, to buffers as required. The buffers are queued and, as the data packets in them are transmitted, the buffers are 'returned' to a pool. That is to say, they are made available for allocation to fresh data packets for the same or another channel. For normal traffic patterns this may at first sight appear adequate. However, a number of dynamics in larger networks involving virtual channels, such as asynchronous transfer mode networks, can lead to problems that are not immediately obvious.
One of these is the number of channels active at any one time: this can vary from a few channels to several thousand, in a dynamic fashion, needing a flexible adaptive buffer management system. Another is the aggregate data-rate on a particular channel: with many virtual-channel networks, congestion control is built in, such that the data-rate can fluctuate on a particular virtual channel. It is known in asynchronous transfer mode networks to provide an available bit rate congestion control mechanism.
One of the problems that fluctuation in data-rates can cause in simpler buffer-pool systems is the risk that slow channels may consume all available memory by continually acquiring buffers to hold data arising from faster sources. The result is that 'normal' fast channels can become 'starved' of available memory.
In a properly-functioning system, the data-rate on a particular channel should be unaffected or affected minimally by activity on other channels.
The idea is to provide a comprehensive suite of watermarks or thresholds on a per-channel basis, plus a global or system threshold. Below the system-wide threshold, buffers are made available as needed. This means the system can cater for bursts of traffic from fast sources destined for slow destinations without losing data. It also allows flexibility in the number of channels supported, by placing no restrictions on buffer availability 'for normal cases' whether there are few or many channels active.
When system memory becomes utilised above the global threshold, deemed to be a memory 'critical' situation, a second set of thresholds comes into action - one per virtual channel. When system memory is critical, as further data arrives for a particular channel, and new buffers are requested from a pool to handle this data, the per-channel threshold is examined for this channel. A second mechanism is also required - a count of the number of buffers of data outstanding (received but not transmitted) for a particular channel. If this number exceeds the respective threshold, the packet of data is either discarded (in which case upper network-layer protocols must recover it by retransmission later), or flow-control mechanisms are invoked to hold up the source of this traffic until the number of buffers outstanding is less (by a chosen amount) than the per-channel watermark or threshold.
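The two-level scheme just described - a single global threshold, per-channel watermarks that only take effect when memory is 'critical', and flow-control release with hysteresis - might be sketched as follows. This is illustrative only; all names (BufferManager, global_threshold, hysteresis, and so on) are assumptions, not taken from the patent.

```python
class BufferManager:
    def __init__(self, global_threshold, hysteresis=4):
        self.global_threshold = global_threshold
        self.hysteresis = hysteresis          # drain amount before flow control is released
        self.global_in_use = 0
        self.channel_in_use = {}              # per-channel 'buffers outstanding' count
        self.channel_threshold = {}           # per-channel watermark
        self.flow_controlled = set()

    def request_buffer(self, channel):
        """Return True if a buffer may be allocated for this channel."""
        if self.global_in_use < self.global_threshold:
            return self._allocate(channel)    # memory not critical: always grant
        # memory critical: enforce the per-channel watermark
        if self.channel_in_use.get(channel, 0) >= self.channel_threshold[channel]:
            self.flow_controlled.add(channel) # or: discard the packet instead
            return False
        return self._allocate(channel)

    def _allocate(self, channel):
        self.global_in_use += 1
        self.channel_in_use[channel] = self.channel_in_use.get(channel, 0) + 1
        return True

    def return_buffer(self, channel):
        """Called when a buffer's data has been transmitted and it rejoins the pool."""
        self.global_in_use -= 1
        self.channel_in_use[channel] -= 1
        # release flow control once the channel has drained below its
        # watermark by the chosen hysteresis amount
        if (channel in self.flow_controlled and
                self.channel_in_use[channel] <=
                self.channel_threshold[channel] - self.hysteresis):
            self.flow_controlled.discard(channel)
```

Note that below the global threshold no per-channel state is consulted at all, which is what makes the normal case unrestricted.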
This technique can be implemented for little or no cost. Typically a system must cater for thousands of virtual channels, and will have per-channel information stored in a memory area - information such as channel number, client identification number if local area network emulation is involved, channel priority, and so on. Adding a watermark mechanism and a 'buffers outstanding' up-down counter function to this list of information is therefore not costly, as a small number of additional registers in memory per channel makes little or no difference to system cost.
Other features of the invention will be apparent from the detailed description which follows by way of example of the invention.
Brief Description of the Drawings Figure 1 illustrates in schematic form an apparatus in which the present invention may be performed; Figure 2 is an illustration of the organisation of a content-addressable memory; Figure 3 is a diagram explaining the management of buffers in the system of Figure 1; and Figure 4 is a diagram showing the organisation of a queuing system for memory buffers in the apparatus shown in Figure 1.
Description of the Preferred Embodiment Figure 1 illustrates schematically the important functional elements of an apparatus 10, conventionally termed a 'down-link interface', which provides a connection and signal processing between a data bus 11 and virtual communication channels in a system 12 operated in an asynchronous transfer mode. Connected to the data bus 11 is an asic (application-specific integrated circuit) 13, of which the manner of operation is not directly relevant to the present invention but which will be broadly described in order to provide the context within which the present invention may be more easily understood.
The main components of the apparatus 10 comprise, in addition to the asic 13, a content-addressable memory 14, a static random-access memory 15, an internal data bus 16 connecting the memories 14 and 15 to the asic 13, a microprocessor 17 and an associated memory 18, both connected to the asic 13 by means of an internal data bus 19, an internal data bus 20, a segmentation and reassembly system 21 connected to the asic 13, a large-capacity dynamic random-access memory 22 and a memory data address switch 23 which provides an interface between the bus 20 and the memory 22. Typically the memory 22 contains at least two megabytes of memory space (and optionally substantially more).
The purpose of the content-addressable memory 14, the operation of which will be described in more detail later, is to provide look-ups which will translate, or map, an address contained in a data packet received by way of the bus 11 into a number identifying the virtual channel on which the cells of that packet will be transmitted in asynchronous transfer mode.
The asic 13 provides a variety of data paths for data packets and in particular will enable data packets arriving on the data bus 11 to be processed as necessary and transferred to the asynchronous transfer mode. Likewise it will provide a converse path for data arriving on virtual channels to be processed as necessary and transmitted on the bus 11. It provides the necessary bidirectional connections between the central processing end and the asynchronous transfer mode side and it will support, as explained later, sixteen work groups and, using a priority bit per work group, thirty-two 'emulated' local area networks.
The asic 13 controls access to the memory 22. There is a variety of primary and secondary control functions which the asic 13 will provide but which are not directly relevant to the invention.
Brief Functional View The present invention will be described by way of a specific example wherein an asynchronous transfer mode network emulates a multiplicity of local area networks. This emulation is desirable for the following reasons.
Much existing data transmission currently arises from the use of local-area networks, of two types, collision-detection multiple access (e.g. Ethernet) or token ring. These networks differ from asynchronous transfer mode in that the messages are connectionless, broadcast messages (i.e. messages to all members of a local area network) are easily accomplished and destination addresses, usually called medium access control (MAC) addresses, are independent of the topology of the network.
There is currently a vast base of existing software that is particular to local area networks. In order to allow the continued use of such software, to enable existing users of local area networks to continue using a mode of communication which is highly convenient, and yet to provide the advantages of asynchronous transfer mode, it is desirable to provide a service or mode of operation in which, among other aspects, end systems such as work-stations, servers, bridges etc. can be connected to an asynchronous transfer mode network while the software which the local area network uses acts as if it were used in an ordinary local area network. In other words, the asynchronous transfer mode system is transparent to the users of the local area network or networks to which it is connected.
In some circumstances it may be necessary or desirable to define a multiplicity of distinct domains within a single network. This leads to the concept of an emulated local area network, which comprises a group of asynchronous transfer mode devices but is analogous to a group of local area network stations connected to a segment of a local area network, which may in general be either CSMA/CD or token ring.
Each emulated local area network is composed of a set of 'local area network emulation clients', each of which may be part of an asynchronous transfer mode end station, and a single local area network emulation service.
A local area network emulation client is the entity which performs data forwarding, address resolution and other control functions. A local area network emulation server needs to provide a facility for resolving addresses expressed in terms of local area network addresses (and called herein media access control addresses) and route descriptors to asynchronous transfer mode channel numbers.
The interface shown in Figure 1 performs several basic operations. First is the establishment of a particular virtual channel for the transmission of data between clients of an emulated local area network. Additionally it provides a means for data packets to be 'broadcast' to all the members of an emulated local area network. Further, it will handle the temporary storage, in the host memory, of data packets which are to be transmitted, whether in unicast, multicast or broadcast mode, before and during the transmission of the data packets over the asynchronous transfer mode network. As will become apparent, it also provides a means for the transmission of multi-cast messages on channels other than channels used (as explained later) for messages which have no specific or known destination. It has a facility for restricting the usage of asynchronous transfer mode channels.
When a data packet from a particular 'client' is first received by the interface, no particular virtual channel will have been allotted to it. If the packet is to be broadcast to all the members of an emulated local area network, it will be transmitted over a virtual channel which is prescribed for broadcast transmission. Such a channel is termed herein a 'broadcast and unknown server', and more conveniently by the acronym BUS. In essence, the BUS handles data sent by a 'local area network emulation client' to a 'broadcast' media access control address. This address is used for multicast messages and initial unicast messages (i.e. messages intended for a multiplicity of destinations and a single destination respectively). In the latter case it is important to use the BUS to send the message over the emulated local area network to all possible destinations, to enable the address of the message to be resolved, i.e. allotted to a specific virtual circuit channel.
When therefore a 'client' has data (normally in the form of a packet) to send and the asynchronous transfer mode address for the destination specified in the packet (the media access control address) is unknown, the 'client' needs to request an address resolution protocol (ARP). Once an emulation client provides a reply to the request for address resolution, a point-to-point virtual channel connection can be established, so that the established virtual channel connection is used to send all subsequent data to that destination from the original 'client'.
For the transfer of data packets from a local area network coupled to the data bus, the asic 13 performs a look-up to determine the parameters, and identification numbers, of the respective emulated local area network and the appropriate parameters of the asynchronous transfer mode. As further explained hereinafter, it will provide support for address resolution protocol using the content addressable memory. It will allow the processor to build data packets in memory buffers in the host memory and allow these buffers to be added to the transmit segmentation queues for transmission on respective virtual channels.
The Content Addressable Memory The contents addressable memory is a convenient form of an address look-up data base. One example suitable for use in the present system is an MU9C1480, produced by MUSIC Semiconductors. The memory may be extended by, for example, cascading memories or adding an external state machine to perform look-up. As will be seen, the present system extends the content addressable memory by means of the static random access memory 15.
The content addressable memory 14 and its associated static random access memory 15 provide several important features of the interfacing system and it is convenient to review them before the details of data transfer are described.
These features are (i) the use of the content addressable memory for supporting unknown address; (ii) the extension of the content addressable memory by means of pointers based on priority; (iii) the handling of multi-media traffic which is unsuitable for transmission over a dedicated BUS channel.
Figure 2 illustrates the organisation of the content addressable memory 14 and the associated pointer tables which are maintained in the static random access memory 15.
Each entry in the content addressable memory comprises the following: There is an address field (media access control address) into which the media access control address of an ethernet packet (in this example a 48-bit address) is written when a packet having that address is first received from the data bus 11.
It may in some cases be convenient to pre-load the memory with some addresses.
If the packet is a token ring packet, it may be given a pseudo-address which for example may comprise three four-bit fields defining the next three local area network numbers, a four-bit field identifying the next bridge number, and thirty two bits of padding.
The next field WG is a field (in this example four bits) defining a work group.
This, together with a priority field P (in this example a one-bit field), defines the channel number of the BUS allotted to the emulated local area network to which the 'client' belongs and on which all broadcast messages for that emulated local area network will be transmitted. The thus defined BUS channel is also the channel on which all unicast messages for a currently unknown address will be transmitted until a virtual channel connection has been established by an address resolution protocol between the source client and the specific destination.
The age field (in this example six bits) serves two purposes: first, it indicates whether a request for address resolution is pending; second, it indicates the age of the current entry.
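The token ring pseudo-address described above (three four-bit local area network numbers, a four-bit bridge number and thirty-two bits of padding, making up a 48-bit field) could be packed as follows. The field ordering and the placement of the padding are assumptions for illustration; the patent does not specify the exact layout.

```python
def token_ring_pseudo_address(lan_numbers, bridge_number):
    """Pack three 4-bit LAN numbers and a 4-bit bridge number into a
    48-bit pseudo-address; the remaining 32 bits are zero padding.
    (Field order and padding position are illustrative assumptions.)"""
    assert len(lan_numbers) == 3
    addr = 0
    for lan in lan_numbers:
        assert 0 <= lan < 16          # each LAN number fits in 4 bits
        addr = (addr << 4) | lan
    assert 0 <= bridge_number < 16    # bridge number fits in 4 bits
    addr = (addr << 4) | bridge_number
    return addr << 32                 # 32 bits of padding in the low-order bits
```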
Pointer Tables The static random access memory contains pointer tables from which the virtual circuit channel number is derived. There is a pointer table for each level of priority so that for each mac address in the content addressable memory there is a pointer in each of the pointer tables 151 and 152. Each pointer defines a respective channel number. However, it is feasible to use a pointer in the table 152 to point to the same channel as the pointer in the table 151.
The pointer tables enable the asic 13 to direct packets into memory buffers ready for transmission on the appropriate virtual channel.
Thus, if there is a match between the destination address of the packet and an entry in the content addressable memory, the address of the matching location is read from the content addressable memory and, along with the data bus priority bit (or, in the case of a token ring, a priority bit contained in a pseudo-header), is used as an index into the respective pointer table in the static random access memory. In turn, the pointer table provides a pointer to an entry in the LAN emulation table, also stored in the static random access memory. This local area network emulation table provides a local area network emulation client identifier for the packet, a transmit segmentation ring number and some header information for the asynchronous transfer mode.
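The look-up chain just described - CAM match, priority-selected pointer table, LAN emulation table entry - might be sketched as below. A real CAM performs the match in hardware and returns the matching location; a dictionary stands in for it here, and all names are illustrative assumptions.

```python
def resolve_channel(cam, pointer_tables, lan_emulation_table, dest_addr, priority):
    """Sketch of the look-up chain: CAM match -> per-priority pointer table
    -> LAN emulation table entry (client id, segmentation ring, ATM header)."""
    index = cam.get(dest_addr)           # CAM returns the matching location
    if index is None:
        return None                      # unknown address: handled separately (BUS/ARP)
    pointer = pointer_tables[priority][index]   # one pointer table per priority level
    return lan_emulation_table[pointer]
```

Because there is one pointer table per priority level, the same MAC address can map to different virtual channels depending on the priority bit, or both pointers may deliberately point at the same channel.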
Unknown Addresses If no match is detected for a unicast packet and the content addressable memory is not full, the destination address, work group and data bus priority bit will be written into the next free location in the content addressable memory. The age field will be written with a particular value to indicate that an ARP is pending for that media access control address. The entry will be marked as 'permanent'. The address of the location will be read from the content addressable memory and used to access the pointer table.
The processor will have previously set up the pointer table entry. The value of that entry is written onto the table on initialisation and each time a media access control address is deleted because it is too old or for any other reason. When the asic reads the value, it knows that it should send the packet to the BUS because the media access control address is currently the subject of an ARP. The relevant status bit in a register will be set indicating that an unknown destination address has been detected. If no match is detected for a unicast packet and the content addressable memory is full, the packet must be discarded.
If, when it reads the pointer table, the asic 13 detects a value that indicates an ARP pending, and the system is in a mode that allows it, the asic 13 will decrement the special value called herein C10 that forms part of the pointer table entry (e.g. the last four bits). The work group and priority bits are used to access the work group BUS table as before and thereby find the BUS channel for transmission of the packet.
If, when it reads the pointer table, the asic 13 detects a value that indicates an ARP is pending and the C10 value has been decremented to zero, indicating that the relevant number of packets have already been sent to the BUS, the packet will be discarded.
The processor can search the content addressable memory for all packets requiring address resolution by searching the content addressable memory using the age field. When it finds a packet requiring address resolution, the processor can set an 'ARP-seen' bit. Further searches of the content addressable memory with the age field set as described will not reveal those entries. For local area network emulation the processor must regularly search the content addressable memory for all ARP pending media access control addresses and by reading the location of the media access control addresses in the content addressable memory, access the pointer table for that address and reload the C10 value if it has decremented to zero. The processor uses this scheme to guarantee that not more than a specific number of packets are sent to the BUS in a specific time period (between reloads of the C10 value) while an ARP is pending.
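The C10 mechanism above amounts to a rate limit on BUS traffic while an ARP is pending: the asic decrements a small counter per packet and discards when it reaches zero, while the processor periodically reloads exhausted counters. A minimal sketch, with illustrative names and dictionary entries standing in for the pointer table fields:

```python
def handle_arp_pending_packet(entry, bus_channel):
    """asic side: decrement C10 per packet sent to the BUS while an ARP
    is pending; discard once the quota for this period is exhausted."""
    if entry['c10'] == 0:
        return 'discard'
    entry['c10'] -= 1
    return bus_channel        # forward this packet on the BUS channel

def reload_c10(entries, reload_value):
    """Processor side: periodically reload exhausted C10 counters for
    ARP-pending addresses, bounding packets-to-BUS per reload period."""
    for e in entries:
        if e['arp_pending'] and e['c10'] == 0:
            e['c10'] = reload_value
```

The guarantee is simply that at most `reload_value` packets reach the BUS for a given address between reloads.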
Use of Non-Bus Channels for Multi-Media Traffic Since a BUS channel is used for broadcast and multicast traffic and is used to transmit all packets with new addresses which are unknown, i.e. do not yet have assigned to them specific virtual channel numbers, the throughput on a given bus channel can be high.
The network needs to be capable of handling multi-media traffic, which in general may include data from a broadband source such as television. Typically, data packets from such a source must be transmitted over a communication channel which has a constant latency, i.e. the transmission time through the channel should be constant. However, the latency of a BUS channel in asynchronous transfer mode will not be defined or constant, and the traffic will vary depending on the detection of new addresses and the incidence of multicast transmissions.
A solution is to define for a particular class of traffic a 'multicast' address in the content addressable memory, and to associate a particular virtual channel with it.
Thus, if a thus defined multicast packet is received, the interface will, as explained hereinafter, search the content addressable memory for the destination address. If the packet is an ordinary multicast packet the destination address will not be located in the content addressable memory and the packet will be directed to a BUS channel. If the address is in the content addressable memory it will be defined as a multicast address and will be associated with a pointer which directs packets to a virtual channel connection which is not a dedicated BUS channel.
As part of the look-up process, a count of all the transmit buffers currently in use by the segmentation and reassembly unit for a particular virtual channel is returned to a host memory transmit control (within the asic 13) along with a programmable threshold value as is described later. The host memory transmit control evaluates these parameters to check that the current packet can be transmitted. Also returned as part of the look-up process is a transmit segmentation queue number that is to be used to transmit the packet.
If a packet is to be transmitted, the host memory transmit control will take the data from the internal random access memory and write it into data buffers in the dynamic random access memory. The host memory transmit control is responsible for fetching free buffers from a queue using a pointer dQStP as described later and filling the corresponding data buffers with the data to be transmitted, including adding header bytes as necessary.
The host memory transmit control within the asic 13 will check the transmit free buffer count before transmitting a frame. This count is evaluated by the control by a comparison of finish and start pointers dQFnP and dQStP (Figure 4). If there are not enough free buffers, the frame will be discarded.
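The free buffer count derived from the finish and start pointers might be computed as below. Ring (modulo) arithmetic for pointer wrap-around is an assumption; the patent only states that the count is the comparison of dQFnP and dQStP.

```python
def free_buffer_count(dq_finish, dq_start, ring_size):
    """Free buffers available to the transmit control, derived from the
    finish (dQFnP) and start (dQStP) pointers of the queue.
    Modulo arithmetic assumed for wrap-around."""
    return (dq_finish - dq_start) % ring_size

def can_transmit(dq_finish, dq_start, ring_size, buffers_needed):
    # the frame is discarded if there are not enough free buffers
    return free_buffer_count(dq_finish, dq_start, ring_size) >= buffers_needed
```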
When the host memory transmit control has finished moving the data and descriptive information to the buffers in the dynamic random access memory for a particular frame, it will check the status of the transmit segmentation ring on which the frame should be transmitted. The information concerning which segmentation ring to use has been returned as part of the look-up process. If the segmentation ring is full, the frame will be discarded. If there is space available on the ring, indicated by checking the 'own' bit for zero, the host memory transmit control will add an entry to the ring. It does this by writing the pointer of the first buffer to the segmentation ring as well as setting the own bit to one.
Further, it will increment the count of transmission buffers for the respective virtual channel. The free buffer count, represented by the difference between dQFnP and dQStP (Figure 4) is automatically decremented.
The host memory control will assert various dynamic random access control signals that control writing of data into the dynamic random access memory and the reading of data from it for accesses from both the interface and the segmentation and reassembly unit. The host memory control also generates the dynamic random access memory address, handling row and column address switching, and the refreshing of the memory.
As indicated above, the host memory transmit control adds the first buffer pointer of a frame to an appropriate transmit segmentation queue. It also writes the transmit segmentation ring number to the transmit queue register in the segmentation and reassembly unit.
The central processing unit must also add entries into the transmit segmentation queues and it also has to write to the transmit queue register in the segmentation and reassembly unit. This unit, on finding these entries on the segmentation queues, will transmit the data. It then returns (i.e. surrenders control of) the buffers, and indicates their return in the transmit completion queue by writing the buffer pointer and setting the own bit to zero on the completion queue.
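The 'own' bit handshake described above - the host claims a free slot and sets the bit when handing a frame to the segmentation and reassembly unit, which clears it again on completion - might be sketched as follows. The ring representation and names are illustrative assumptions.

```python
OWN_SAR = 1    # entry owned by the segmentation-and-reassembly unit
OWN_HOST = 0   # entry owned by the host / free for reuse

def add_frame_to_ring(ring, tail, first_buffer_ptr):
    """Host side: claim a free slot and hand the frame to the SAR.
    Returns the next tail index, or None if the ring is full
    (in which case the frame is discarded)."""
    if ring[tail]['own'] != OWN_HOST:
        return None                      # slot still owned by the SAR: ring full
    ring[tail]['buffer_ptr'] = first_buffer_ptr
    ring[tail]['own'] = OWN_SAR          # hand ownership to the SAR
    return (tail + 1) % len(ring)

def complete_entry(ring, index):
    """SAR side: after transmission, surrender control of the buffers
    by clearing the own bit."""
    ring[index]['own'] = OWN_HOST
```

The single bit per entry is what lets the two sides share the ring without any other locking: each side only writes entries it currently owns.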
Host Memory Transmit Segmentation Ring Structure The segmentation and reassembly unit allows, typically, up to two thousand different transmit 'packet segmentation rings' in the host memory. Each ring may take one of four sizes, typically sixteen, thirty-two, sixty-four or two hundred and fifty-six entries. The host memory reserves a long word block and stores the list of current finish and head pointers for the transmit segmentation rings.
As part of the transmit look-up process, the host memory transmit control is given the segmentation ring number. This number is used as an offset for a base pointer to point to a particular segmentation finish pointer. The long word address for the dynamic random access memory to a particular segmentation pointer is generated from the transmit segmentation pointer and the transmit segmentation number.
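The address generation just described - the segmentation ring number offsetting a base pointer to locate that ring's finish pointer - reduces to a simple indexed computation. The long-word (four-byte) entry size is an assumption consistent with the text:

```python
def segmentation_pointer_address(base_pointer, ring_number, entry_size=4):
    """DRAM long-word address of a given ring's segmentation finish
    pointer: the ring number indexes a table starting at base_pointer.
    (entry_size of 4 bytes per long-word entry is an assumption.)"""
    return base_pointer + ring_number * entry_size
```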
The SAR 21 returns buffers to the queue 40 (Figure 4) using a recovery pointer rQFnP and clears the 'own' bit.
Transmit Free Pool Controller The asic includes a control function, described herein as the transmit free pool controller, which is required to perform two tasks. First, it has to manage the return of available buffers once they have been used by the segmentation and reassembly unit. It will monitor by means of the pointer dQFnP an integrated distribution and recovery queue 40 and if the 'own' bit is clear it will set the 'own' bit so that the buffer can be reused by the host memory transmit control. Second, the free pool controller decrements for each virtual channel, the number of transmit buffers currently waiting for their data to be transmitted by the segmentation and reassembly unit.
Channel Thresholds Preferably the static random access memory is used to maintain for each virtual channel a count of the number of buffers in the dram having data packets for transmission on that channel. Also, there is maintained in static random access memory a count of the total number of buffers which have not been returned to the free pool and are therefore not yet available for use in sending new packets.
Figure 3 represents the counter which maintains the 'global' count as the counter MC. This contains a number N. The other counters (three in this simple example) contain counts N1, N2 and N3 respectively. Each of these counts represents the buffers currently used for the transmission of data for the respective virtual channel and not yet returned to the free pool. Every time a buffer is returned to the free pool, the respective channel count and the global count are decremented accordingly.
If the total number of buffers in use (i.e. not available for storing fresh packets) is below a threshold limit, denoted Tg in Figure 3, no action is required. This limit may be programmable. However, if the global limit is exceeded, the limits for each channel become operative. Thus if the limit T1 is exceeded for channel 1, any new data packet intended for that channel may be discarded (and possibly recovered later). This prevents the situation where a comparatively slow channel uses too many buffers and other channels, particularly normally fast channels, become starved of buffers for their traffic. The feature allows a programmable limit on the number of queued buffers per channel.
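The two-level check described above can be sketched as follows. This is a minimal sketch, not the implementation: the function and argument names are assumptions, with the counts and limits corresponding to N/Tg (global) and N1, T1 etc. (per channel) in Figure 3.

```python
# Illustrative sketch of the two-level threshold check of Figure 3.
# counts[ch] is the per-channel buffer count (N1, N2, ...); limits[ch]
# is the per-channel threshold (T1, T2, ...). Names are assumptions.

def admit_packet(channel, counts, limits, global_count, global_limit):
    """Return True if a new packet for `channel` may claim a buffer."""
    if global_count < global_limit:
        return True  # below the global threshold Tg: no per-channel policing
    # Global limit exceeded: the per-channel limits become operative.
    return counts[channel] < limits[channel]
```

A slow channel that has reached its limit is thus refused fresh buffers only when the pool as a whole is under pressure.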
The individual channel limits may be selected on the basis of a variety of criteria, for example channel usage, priority, multi-media usage and so on.
The technique can in practice be employed for virtually no cost. Typically an asynchronous transfer mode system must cater for thousands of virtual channels and must in practice contain (such as in the static random access memory) information pertaining to each channel. Such information may include a channel number, an identification number of the 'local area network emulation client', channel priority and so on. Adding an up-down count function and a threshold value is not particularly costly in terms of software or hardware.
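A per-channel record of the kind described, extended with the up-down count and threshold, might look as follows. The field names are assumptions for illustration; only the kinds of fields (channel number, LAN emulation client identifier, priority, count, threshold) come from the text.

```python
from dataclasses import dataclass

# Sketch of a per-channel record as kept in SRAM, extended with the
# up-down count and threshold described in the text. Field names are
# hypothetical.
@dataclass
class ChannelRecord:
    channel_number: int
    lec_id: int            # 'LAN emulation client' identifier
    priority: int
    buffer_count: int = 0  # up-down count of buffers in use on this channel
    threshold: int = 0     # per-channel limit (T1, T2, ... in Figure 3)

    def inc(self):
        self.buffer_count += 1   # a buffer is queued for this channel

    def dec(self):
        self.buffer_count -= 1   # a buffer is returned to the free pool

    def over_limit(self):
        return self.buffer_count > self.threshold
```

The addition over an existing record is two small fields and a comparison, which illustrates why the cost is negligible.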
Integrated Queuing System Another feature of the system of control of the memory buffers is the integration of a recovery queue and a buffer distribution queue into one single queue space, with the pointers optimised to control these two integrated queues. In a system where free buffers are taken from a buffer distribution queue by various devices, for example the host memory control and the central processing unit, the buffers, when used, are typically returned to a buffer recovery queue in a random order.
The buffer recovery queue is monitored by a central resource and the buffers are moved from this queue to the free buffer distribution queue for reuse by the devices. Such a scheme typically uses two pointers (say start and finish pointers) to control each queue. The buffer recovery queue has to be at least the size of the buffer distribution queue to guarantee that the devices can return the used buffers to the buffer recovery queue when they have finished using the buffers.
By integrating both queues into one queue, the space required is only that of a buffer distribution queue. The pointers are used more efficiently, in that, by incorporating the own bit, only one pointer is needed for the buffer recovery queue.
The issue of the size of the buffer distribution queue is also removed as the buffer recovery queue is overlaid on the buffer distribution queue. This guarantees that there is always space for used buffers to be returned to the buffer recovery queue.
Figure 4 is a table representing a queue 40 of buffer locations and pointers to them.
This queue, as indicated earlier, is an integrated distribution and recovery queue which works as follows. Initially a special bit of each buffer location in the queue, herein called the 'own' bit, is set to 1. The free buffer list is entered in the queue between dQFnP and dQStP. The rQFnP pointer is initialised to the same place as dQFnP. Free buffers are taken by the devices using the dQStP pointer (provided dQFnP - dQStP > 0). Used buffers are returned by the devices at rQFnP, which also clears the own bit. A separate process detects these returned buffers by observing that the own bit is 0 at dQFnP; it then sets the own bit to 1, allowing the buffer to be reused. The size of the integrated queue can be fixed at one value, allowing the maximum number of free buffers required for all situations to be accommodated on the queue. Any number of free buffers less than this maximum can be used, by programming the initial buffer pointers up to the value of dQFnP. This simplifies any fifo pointer roll-over mechanism, because the queue is a standard size. The difference between the dQFnP and dQStP pointers at any time is the number of free buffers available.
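The pointer protocol just described can be sketched as follows. The pointer names (dQStP, dQFnP, rQFnP) and the own-bit convention follow the description; the use of monotonically increasing pointers indexed modulo the queue size is one assumed way of realising the roll-over mechanism.

```python
class IntegratedQueue:
    """Sketch of the integrated distribution/recovery queue of Figure 4.
    Monotonic pointers taken modulo the fixed queue size are an assumed
    realisation of the roll-over mechanism."""

    def __init__(self, free_buffers, size):
        self.size = size
        self.buf = [None] * size               # buffer pointers
        self.own = [1] * size                  # 'own' bits, initially all 1
        for i, b in enumerate(free_buffers):   # free list between dQFnP and dQStP
            self.buf[i] = b
        self.dQStP = 0                         # devices take free buffers here
        self.dQFnP = len(free_buffers)         # controller scans returns here
        self.rQFnP = len(free_buffers)         # devices return used buffers here

    def free_count(self):
        # dQFnP - dQStP is the number of free buffers available
        return self.dQFnP - self.dQStP

    def take(self):
        """A device takes a free buffer (provided dQFnP - dQStP > 0)."""
        if self.free_count() <= 0:
            return None
        b = self.buf[self.dQStP % self.size]
        self.dQStP += 1
        return b

    def give_back(self, b):
        """A device returns a used buffer at rQFnP, clearing the own bit."""
        i = self.rQFnP % self.size
        self.buf[i], self.own[i] = b, 0
        self.rQFnP += 1

    def recover(self):
        """Controller: detect own bit = 0 at dQFnP and set it back to 1."""
        while self.own[self.dQFnP % self.size] == 0:
            self.own[self.dQFnP % self.size] = 1
            self.dQFnP += 1
```

Because returns are overlaid on the distribution queue, a returned buffer always has a slot waiting for it, which is the space guarantee the text relies on.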
It will be understood that the foregoing is given by way of example only and that a variety of modifications may be made within the spirit and scope of the claims that follow.

Claims (3)

WE CLAIM:
1. A method for the transfer of data from a multiplicity of sources to respective virtual channels, employing a multiplicity of addressable memory buffers for the temporary storage of data packets prior to the transmission of said data packets on said channels, the method comprising: allocating buffers to specified virtual channels; maintaining a count of buffers in use for the storage of packets generally; maintaining a count of buffers in use for the storage of packets for each virtual channel; defining a channel threshold count for each virtual channel; defining a global threshold count for the total number of buffers in use; detecting when the total number of buffers in use exceeds the global threshold and in response thereto restricting the storage of packets for any channel wherein the number of buffers in use exceeds the respective channel threshold.
2. A method according to claim 1 and further comprising transmitting data on different ones of said virtual channels at substantially different data rates.
3. A method according to claim 1 and further comprising selectively disabling at least one of said thresholds.
GB9701011A 1997-01-17 1997-01-17 Method and apparatus for buffer management in virtual circuit systems Expired - Fee Related GB2321820B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB9701011A GB2321820B (en) 1997-01-17 1997-01-17 Method and apparatus for buffer management in virtual circuit systems

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB9701011A GB2321820B (en) 1997-01-17 1997-01-17 Method and apparatus for buffer management in virtual circuit systems

Publications (3)

Publication Number Publication Date
GB9701011D0 GB9701011D0 (en) 1997-03-05
GB2321820A true GB2321820A (en) 1998-08-05
GB2321820B GB2321820B (en) 1999-04-14

Family

ID=10806196

Family Applications (1)

Application Number Title Priority Date Filing Date
GB9701011A Expired - Fee Related GB2321820B (en) 1997-01-17 1997-01-17 Method and apparatus for buffer management in virtual circuit systems

Country Status (1)

Country Link
GB (1) GB2321820B (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0706298A2 (en) * 1994-10-04 1996-04-10 AT&T Corp. Dynamic queue length thresholds in a shared memory ATM switch
WO1997043869A1 (en) * 1996-05-15 1997-11-20 Cisco Technology, Inc. Method and apparatus for per traffic flow buffer management


Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2339371A (en) * 1998-05-12 2000-01-19 Ibm Rate guarantees through buffer management
US6377546B1 (en) 1998-05-12 2002-04-23 International Business Machines Corporation Rate guarantees through buffer management
GB2339371B (en) * 1998-05-12 2003-10-08 Ibm Rate guarantees through buffer management
EP1037499A2 (en) * 1999-03-16 2000-09-20 Fujitsu Network Communications, Inc. Rotating buffer refresh
EP1037499A3 (en) * 1999-03-16 2004-04-21 Fujitsu Network Communications, Inc. Rotating buffer refresh
GB2353172A (en) * 1999-08-04 2001-02-14 3Com Corp Network switch with portions of output buffer capacity allocated to packet categories and packet discard when allocation is exceeded
GB2353172B (en) * 1999-08-04 2001-09-26 3Com Corp Network switch including bandwidth allocation controller
US6680908B1 (en) 1999-08-04 2004-01-20 3Com Corporation Network switch including bandwidth allocation controller
EP1079660A1 (en) * 1999-08-20 2001-02-28 Alcatel Buffer acceptance method
WO2001067672A2 (en) * 2000-03-07 2001-09-13 Sun Microsystems, Inc. Virtual channel flow control
WO2001067672A3 (en) * 2000-03-07 2002-02-21 Sun Microsystems Inc Virtual channel flow control

Also Published As

Publication number Publication date
GB2321820B (en) 1999-04-14
GB9701011D0 (en) 1997-03-05

Similar Documents

Publication Publication Date Title
US6151323A (en) Method of supporting unknown addresses in an interface for data transmission in an asynchronous transfer mode
US9083659B2 (en) Method and apparatus for reducing pool starvation in a shared memory switch
US6466580B1 (en) Method and apparatus for processing high and low priority frame data transmitted in a data communication system
US6320859B1 (en) Early availability of forwarding control information
US6788671B2 (en) Method and apparatus for managing the flow of data within a switching device
US6990114B1 (en) System and method for deciding outgoing priority for data frames
US6504846B1 (en) Method and apparatus for reclaiming buffers using a single buffer bit
US6463032B1 (en) Network switching system having overflow bypass in internal rules checker
US6163541A (en) Method for selecting virtual channels based on address priority in an asynchronous transfer mode device
US6762995B1 (en) Network switch including hysteresis in signalling fullness of transmit queues
US9361225B2 (en) Centralized memory allocation with write pointer drift correction
US7110405B2 (en) Multicast cell buffer for network switch
US6636524B1 (en) Method and system for handling the output queuing of received packets in a switching hub in a packet-switching network
US6208662B1 (en) Method for distributing and recovering buffer memories in an asynchronous transfer mode edge device
US6501734B1 (en) Apparatus and method in a network switch for dynamically assigning memory interface slots between gigabit port and expansion port
US6895015B1 (en) Dynamic time slot allocation in internal rules checker scheduler
US6850999B1 (en) Coherency coverage of data across multiple packets varying in sizes
US6335938B1 (en) Multiport communication switch having gigaport and expansion ports sharing the same time slot in internal rules checker
US6526452B1 (en) Methods and apparatus for providing interfaces for mixed topology data switching system
US20070201360A1 (en) Network switch
US6336156B1 (en) Increased speed initialization using dynamic slot allocation
US7295562B1 (en) Systems and methods for expediting the identification of priority information for received packets
US6480490B1 (en) Interleaved access to address table in network switching system
US20060165055A1 (en) Method and apparatus for managing the flow of data within a switching device
GB2321820A (en) A method for dynamically allocating buffers to virtual channels in an asynchronous network

Legal Events

Date Code Title Description
730 Substitution of applicants allowed (sect. 30/1977)
PCNP Patent ceased through non-payment of renewal fee

Effective date: 20020117