CA2325135A1 - Asynchronous transfer mode layer device - Google Patents

Asynchronous transfer mode layer device

Info

Publication number
CA2325135A1
Authority
CA
Canada
Prior art keywords
manager
transfer mode
asynchronous transfer
queue
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
CA002325135A
Other languages
French (fr)
Inventor
David W. Cornfield
James A. Gilderson
Marc Levesque
Amal Khailtash
Aneesh Dalvi
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Spacebridge Semiconductor Corp
Original Assignee
Spacebridge Networks Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CA002288513A external-priority patent/CA2288513A1/en
Application filed by Spacebridge Networks Corp filed Critical Spacebridge Networks Corp
Priority to CA002325135A priority Critical patent/CA2325135A1/en
Publication of CA2325135A1 publication Critical patent/CA2325135A1/en
Abandoned legal-status Critical Current


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00 Packet switching elements
    • H04L 49/30 Peripheral units, e.g. input or output ports
    • H04L 49/3081 ATM peripheral units, e.g. policing, insertion or extraction
    • H04L 12/00 Data switching networks
    • H04L 12/54 Store-and-forward switching systems
    • H04L 12/56 Packet switching systems
    • H04L 12/5601 Transfer mode dependent, e.g. ATM
    • H04L 2012/5678 Traffic aspects, e.g. arbitration, load balancing, smoothing, buffer management
    • H04L 2012/5681 Buffer or queue management

Abstract

This disclosure encompasses the presentation of an ATM layer device addressing ATM layer queuing and routing functionality pre- and post-switching. A layered apparatus is presented as a method of isolating the arrival and departure processes of queuing, while separating network layer communication from ATM layer operation. The structure examined is useful in the ingress and egress segments of input and/or output-queued ATM switching applications and offers benefits associated with parallel processing and process independence.

Description

ASYNCHRONOUS TRANSFER MODE LAYER DEVICE
FIELD OF THE INVENTION
This invention relates to telecommunications networks. In particular, this invention relates to route management and congestion management in fast packet switches, such as Asynchronous Transfer Mode (ATM) switches.
BACKGROUND OF THE INVENTION
A key component of any communications network is the switching sub-system that provides interconnectivity between users. Fast packet switching requires a switching sub-system that can provide, in an efficient and fair manner, the various qualities of service (QoS) required by different applications. In the field of fast packet switching, the interconnection of multiple inputs to a single output of a fabric means that the output must handle an aggregate input rate greater than its output rate. Solutions to this problem can be generally classified into the following categories: input queuing to reduce the aggregate input rate in the event of fabric congestion; fabric speed-up to increase throughput to match or exceed the aggregate input rate; and output queuing to increase congestion tolerance at the bottleneck. Typically a sub-set of the preceding solutions is implemented for a given fast packet switch.
Queuing describes two simultaneously occurring processes: an arrival process and a departure process. The function of the arrival process is to distribute cells into queues on the basis of priority and destination. Since queues have finite depth, feedback is required to determine whether a cell may enter a queue or must be discarded. The function of the departure process is to decide which of the queues is to be served next. The decision must consider service fairness in light of a requested QoS.
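For illustration only, the two processes can be sketched in C. The queue count, queue depth, the priority-and-destination mapping and the serve-the-fullest-queue rule below are assumptions made for the sake of a runnable example, not details taken from this disclosure:

    #include <stdbool.h>
    #include <stddef.h>

    #define NUM_QUEUES  8
    #define QUEUE_DEPTH 64

    typedef struct {
        int    cells[QUEUE_DEPTH];
        size_t head, tail, occupancy;
    } cell_queue;

    static cell_queue queues[NUM_QUEUES];

    /* Arrival process: distribute a cell by priority and destination;
     * occupancy feedback decides between enqueue and discard. */
    bool arrival(int cell, unsigned priority, unsigned destination) {
        cell_queue *q = &queues[(priority + destination) % NUM_QUEUES];
        if (q->occupancy >= QUEUE_DEPTH)
            return false;                 /* finite depth forces a discard */
        q->cells[q->tail] = cell;
        q->tail = (q->tail + 1) % QUEUE_DEPTH;
        q->occupancy++;
        return true;
    }

    /* Departure process: serve the fullest queue next, one simple
     * stand-in for the fairness decision left to the scheduler. */
    int departure(void) {
        size_t best = 0;
        for (size_t i = 1; i < NUM_QUEUES; i++)
            if (queues[i].occupancy > queues[best].occupancy)
                best = i;
        if (queues[best].occupancy == 0)
            return -1;                    /* nothing queued anywhere */
        cell_queue *q = &queues[best];
        int cell = q->cells[q->head];
        q->head = (q->head + 1) % QUEUE_DEPTH;
        q->occupancy--;
        return cell;
    }

Note that both functions touch the same occupancy counters, which is precisely the shared-resource contention the remainder of this background addresses.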
In general, all processes should be independent of one another, because the dependency of one process on another imposes unnecessary restrictions on the operation of both.
Furthermore, problems in one process can have a compounding effect on other processes, ultimately resulting in larger problems with an unidentifiable source. For example, a delay in one process can impose delays in neighboring processes.
Difficulties arise because independent processes typically have to communicate with one another. The co-ordination of accurate and reliable communication is no easy task, and effective management of this communication is essential in order to successfully isolate processes. The most common means of achieving such communication is to use a shared storage resource. This is the fundamental concept of a queue. However, queue access is typically subject to contention between processes. Thus the successful negotiation of access becomes key to effective communications management.
In queuing, the arrival and departure processes contend with one another for storage resources. First, access to queue memory has to be time-shared between processes. Secondly, processes contend in a similar manner for access to occupancy information stored in memory.
This information is required in the arrival process as feedback for the decision as to whether or not a queue should be entered. It is also required by the departure process in the decision of which queue should be serviced next. The careful management of access to these two areas of shared memory is key to isolating the arrival and departure processes.
Prior solutions do not suitably address the isolation of the arrival and departure processes in respect of contention. For example, a switching device described in U.S. Patent No. 5,528,592 to Schibler et al., entitled "Method and Apparatus for Route Processing Asynchronous Transfer Mode Cells", fails to mention this aspect of contention.
One weakness of the switching device of Schibler et al. is that there is no scheduling performed across priorities and destinations, nor is there any selective cell discard on the basis of feedback. Both the arrival and departure processes must contend for access to a single queue: ICELL memory. In the arrival process of this device, the cell loader stores cells in the ICELL memory based on pointers found in a free cell First In First Out (FIFO) buffer. In the departure process, cells are scheduled for transmission by an active chain managed by an ingress controller in a call table. An active chain is a linked list of cells in ICELL memory.
The ingress controller establishes the links as a low priority function by modifying a next pointer field of each continuation-of-message cell stored in ICELL memory. The additional shared memory access required for the establishment of links in the departure process impinges on the operation of the arrival process. The result is that in conditions of heavy traffic, primarily composed of large packets, data will likely be lost in the arrival process as the departure process monopolizes the ICELL memory with link establishment.
Another weakness of the switching device of Schibler et al. is that processes responsible for complex decisions typically use a central processing unit (CPU) to lower cost. The problem here is that processes must contend for CPU processing time, which becomes more problematic as decisions become more complex. Each queuing process requires a decision. In the arrival process, a decision is required to drop or pass a cell. In the departure process, a decision is required to slate the next queue for servicing in a fair manner. These decisions are non-trivial, but very manageable when presented with the appropriate information to assess.
For example, a scheduling decision generally requires only information about the present occupancy of queues and the current state of congestion. A typical discard decision requires only information on the occupancy of the destination queue and cell eligibility. In the switching device of Schibler et al. these queuing decisions are made by the CPU. This approach counteracts the isolation of each of the processes, since the CPU becomes another point of resource contention in need of management. The overhead of this management generally causes performance degradation.
Another example of an ATM layer device is disclosed in U.S. Patent No. 5,889,778 to Huscroft et al., entitled "ATM Layer Device", in which the scope of queuing is limited and selective cell discard is not performed. Again, a centralized processor is used that controls various stages of cell processing in conjunction with other portions of the ATM layer device.
In particular, the cell processor is responsible for several functions, such as external random access memory (RAM) address look-up, microprocessor RAM arbitration, microprocessor interfacing, microprocessor cell buffering, and auxiliary cell FIFO buffering.
Thus, the cell processor must essentially manage every point of contention (across queuing processes and across layers) in addition to performing route re-mapping.
It is, therefore, desirable to provide an ATM layer device that offers greater isolation of the arrival and departure processes.
SUMMARY OF THE INVENTION
It is an object of this invention to obviate or mitigate at least one disadvantage of previous methods and systems. Accordingly, it is an object of this invention to provide an improved ATM layer device for use in an ingress or egress configuration that has higher reliability and better performance than many of its predecessor designs. The increase in quality is directly accomplished through greater segregation of queuing processes.
In a first aspect, the present invention provides an asynchronous transfer mode layer device for interfacing between a plurality of physical layer devices and an asynchronous transfer mode switch fabric. The asynchronous transfer mode layer device comprises an asynchronous transfer mode processing sub-layer for performing asynchronous transfer mode cell processing functions, and a device management sub-layer for performing network layer communication functions. The asynchronous transfer mode processing functions are implemented by independent managers. The independent managers include a route manager, a queue manager, a discard manager, a buffer manager and a scheduler. The network layer communication functions are provided by an address-mapped distributed-database structure, corresponding to the managers, in conjunction with a configuration-and-control interface.
In a further aspect, the present invention provides a method for processing cells in an asynchronous transfer mode layer of an asynchronous transfer mode switch. The method consists of performing asynchronous transfer mode cell processing functions in an asynchronous transfer mode processing sub-layer, and performing network layer communication functions in a device management sub-layer. The sub-layers have the configuration described above, such that routing, discarding, queuing, scheduling and buffering functions are segregated and independent.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments of the present invention will now be described, by way of example only, with reference to the attached Figures, wherein:
FIG. 1 is a basic block diagram of a typical ATM switch;
FIG. 2 is a block diagram illustrating the layering structure of the present ATM layer device in the context of the Open Systems Interconnection (OSI) reference model and ATM Forum recommendations;
FIG. 3 is a block diagram illustrating the interconnection of the present ATM layer device in the context of a fast packet switch, implementing both input and output queuing;
FIG. 4 is a block diagram of the ATM processing sub-layer of the present invention; and
FIG. 5 is a mapping diagram illustrating the mapping of the ATM processing layer functions to the ATM device management sub-layer of the ATM layer device of the present invention.
DETAILED DESCRIPTION
Before a description of the ATM layer device and method of the present invention is presented, a brief explanation of an ATM switch is given. This invention, however, is by no means restricted to use in an ATM switch. Furthermore, descriptions of embodiments of the invention are made with reference to ATM terminology and ATM switching equipment for switching ATM cells, but, as will be understood by those of skill in the art, the scope of the invention includes other fast packet switching systems for switching other types of data units. Therefore, the term "cell" in this description is used generally to mean any type of fixed length data unit with a header, an example of which is an ATM cell. The switching systems referred to herein are intended to refer to satellite, terrestrial and other data switching systems.
A block diagram of a typical ATM switching system 20 is shown in FIG. 1.
Packet switching in modern high-speed telecommunication networks is generally implemented in a switching system having one or more input (ingress) interface cards 22, a switch fabric 24, and one or more output (egress) interface cards 26. The ingress card is responsible for processing incoming traffic of ATM cells arriving at the input ports 28 for internal routing.
Prior to routing traffic through the forward path 30, the ingress card 22 appends additional information onto the header portion of the ATM cell, such as its internal identifier (ID), cell type, cell class, and the designated egress card(s) 26. This information is typically stored in a table that is indexed by the external ID of the cell. The ingress card 22 performs other functions as well, such as the buffering, scheduling, queuing, monitoring, and discarding of cells. The ATM switch fabric 24 is primarily responsible for cross-connecting all traffic arriving from the ingress card(s) 22 to the egress card(s) 26. The switch fabric 24 can also perform other functions such as the buffering, scheduling and queuing of cells, depending on the system architecture. Finally, the egress card 26 is responsible for processing the traffic received from the switch fabric for onward transmission through the output ports 32. This process involves the removal of the information appended to the cell header and the insertion of new information for delivery to the next destination. In addition, the egress card 26 performs management functions similar to the ingress card 22, and can also send information to the switch fabric 24 (and ultimately to the ingress card 22) regarding traffic congestion at the egress card 26 through a feedback path 34. Ingress and egress cards 22 and 26 can be separate cards, but can also be the same card operating in ingress or egress mode, or both, as appropriate.
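The header-extension step described above amounts to a table lookup indexed by the external ID of the cell. A minimal C sketch follows, in which every identifier, field and width is a hypothetical placeholder rather than a detail of the disclosure:

    #include <stdint.h>

    #define MAX_EXTERNAL_IDS 4096

    /* Hypothetical per-connection entry, indexed by the external ID of
     * the cell, carrying the information prepended by the ingress card. */
    typedef struct {
        uint16_t internal_id;       /* internal identifier                 */
        uint8_t  cell_type;
        uint8_t  cell_class;
        uint32_t egress_card_mask;  /* one bit per designated egress card  */
    } ingress_route_entry;

    static ingress_route_entry route_table[MAX_EXTERNAL_IDS];

    /* Look up the routing information appended to the header before the
     * cell enters the forward path 30 toward the switch fabric. */
    ingress_route_entry lookup_route(uint16_t external_id) {
        return route_table[external_id % MAX_EXTERNAL_IDS];
    }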
In any switching network several packets intended for a single destination may arrive at the switch 20 simultaneously over a plurality of input ports 28. For example, in FIG. 1, a plurality of cells arriving at the ingress card 22 via separate input ports 28 may be destined for a single output port 32 whose transmission capacity may only handle one cell at a time.
The other entering cells must therefore be stored temporarily in a queue(s), or buffer(s), 36.
The ATM layer device 40 of the present invention resides in the ingress and egress cards 22 and 26, and is primarily concerned with the cell buffering, scheduling, routing, queuing, monitoring, and discarding functions that are performed in the ingress and egress cards 22 and 26.
FIG. 2 shows the location and basic configuration of the ATM layer device 40 of the present invention in relation to the Open Systems Interconnection (OSI) model, and the model recommended by the ATM Forum. In the OSI model, data streams arrive at and depart from a physical layer 50. The conversion of data streams into packets is a progression from the physical layer 50 to a data link layer 52. Packets are associated with a destination (or set of destinations, in the case of multicast and broadcast connections) and are switched in the data link layer 52. For switch nodes, traffic management, namely, the acceptance and routing, or rejection of connections based on loading, occurs in a network layer 54.
Higher layers (not shown) are primarily for reformatting information in edge devices to standardize communication between source and destination. As used herein, an edge device is a device that connects the source or destination of information to the network (i.e., a telephone set is an edge device in a telecommunications network). In ATM networks, fixed-length packets, called cells, are used. As such, the data link layer 52 is sub-divided into two sub-layers: an ATM Adaptation Layer (AAL) 56 and an ATM layer 58. The AAL 56 is primarily used in edge devices for the segmentation and re-assembly (SAR) of packets into ATM cells.
The ATM layer 58 processes cells. Primarily, this involves the implementation of route management, congestion management and queuing functionality. In ATM switches, the ATM layer 58 also presents loading information to the network layer 54 for the purpose of traffic management. The ATM layer device 40, in this context, whether in ingress or egress mode, is responsible for presenting a management information base (MIB) to the network layer 54 for traffic management.
Since cell processing is a repetitive local task, and traffic management involves complex global decisions, the ATM layer device 40 according to the present invention is configured as a layered device within the ATM layer 58. An ATM processing sub-layer 60 is responsible for implementing route management, congestion management, and queuing functionality, while a device management sub-layer 62 is responsible for managing the communications between the network layer 54 and the ATM processing sub-layer 60.
Referring to FIG. 3, the interconnection of the present ATM layer device 40 in the context of a generalized fast packet switch 66, implementing both input and output queuing, is shown. Data arrives at and departs from the switch 66 over transmission pipes 70.
Transmission pipes 70 are the physical links between network nodes. Incoming data streams arriving over a transmission pipe 70 are received and converted into ATM cells by a physical layer device 72.
Cells are passed to an ingress ATM layer device 40a (i.e. ATM layer device 40 operating in ingress mode) over a Universal Test and Operations PHY Interface for ATM (UTOPIA) cell bus 74. The ingress ATM layer device 40a is responsible for queuing in the event of congestion, and for inserting routing information and passing cells to the ATM switch fabric 24 over cell bus 74. The ATM switch fabric 24 is responsible for routing cells from the ingress ATM layer device 40a to an egress ATM layer device 40b (i.e. an ATM layer device 40 operating in egress mode) in accordance with the information contained in the cell header.
Cells are passed to the egress ATM layer device 40b via cell bus 74. The egress ATM layer device 40b is responsible for queuing to reduce the likelihood of congestion and for routing cells to the required physical layer device 72. Cells are passed from the egress ATM layer device 40b to the physical layer device 72, again over UTOPIA cell bus 74. Outgoing cell streams are framed and transmitted over a transmission pipe 70 by the physical layer device 72. The ATM layer device 40a, 40b, in this context, whether in ingress or egress mode, is responsible for implementing route management, congestion management and queuing functionality.
Referring to FIG. 4, there is shown a block diagram of the ATM processing sub-layer 60 of the present invention. Generally, the ATM processing sub-layer 60 divides route management, congestion management and queuing functionality amongst a cluster of five managers, with the intention of isolating queuing processes. Route management functionality occurs strictly in a route manager 80. Congestion management is a selective discard, based on feedback. The decision to discard is made by a discard manager 82, while feedback is provided in the form of queue occupancy from a queue manager 84. Queuing functionality occurs across three managers. A scheduler 86 decides which queue to service next in the departure process. The decision is based on all queue occupancies presented by the queue manager 84. A buffer manager 88 is responsible for managing the arrival and departure processes that contend for access to buffer FIFOs 90.
Note also that the queue manager 84 is responsible for managing access contention for arrival and departure process queue occupancy.
As shown in the mapping diagram of FIG. 5, the communication between the ATM processing sub-layer 60 and the ATM device management sub-layer 62 is bi-directional and dual-faceted. For general device management (configuration), a distributed database structure (DDS) 100 is used to communicate error flags upward and tuning parameters downward. For traffic management (control), loading information is communicated upward via MIB statistics integrated into the DDS 100, while downward communication is done via routing table entries. The DDS 100 is a bank of registers 80r, 82r, 84r, 86r, and 88r that are distributed according to the different managers 80, 82, 84, 86, 88 in the ATM processing sub-layer 60, with each register corresponding to a manager, occupying a portion of the address space and communicating only information pertinent to its local operations.
The content of the information exchanged will be presented later in conjunction with the ATM processing sub-layer 60 functionality. A configuration and control (CC) interface 102 is used to present the address space to the network layer 54 via a serial bit-stream 104. The message, in this embodiment, is transmitted with the most significant bit (MSB) first, and formatted to include a start-bit, a read/write bit, a 24-bit address, a 3-bit message length field and a data field generally varying between 0 and 120 bits.
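The serial frame format lends itself to a compact encoder. The C sketch below packs the fields in the stated order, MSB first; since the mapping of the 3-bit length code onto the 0 to 120-bit data field is not specified above, the sketch takes the code and the bit count as separate parameters, and all names are illustrative assumptions:

    #include <stdint.h>
    #include <stddef.h>

    /* Append 'nbits' of 'value' to a zero-initialized frame buffer, most
     * significant bit first, matching the stated transmission order. */
    static size_t put_bits(uint8_t *frame, size_t pos, uint32_t value, unsigned nbits) {
        for (unsigned i = nbits; i-- > 0; pos++)
            if ((value >> i) & 1u)
                frame[pos / 8] |= (uint8_t)(0x80u >> (pos % 8));
        return pos;
    }

    /* Pack one CC-interface message: start bit, read/write bit, 24-bit
     * address, 3-bit length code, then 0..120 data bits. Returns the
     * total number of bits written (at most 149, i.e. 19 bytes). */
    size_t cc_encode(uint8_t *frame, int write_op, uint32_t addr24,
                     unsigned len_code, const uint8_t *data, unsigned data_bits) {
        size_t pos = 0;
        pos = put_bits(frame, pos, 1u, 1);                  /* start bit      */
        pos = put_bits(frame, pos, write_op ? 1u : 0u, 1);  /* read/write bit */
        pos = put_bits(frame, pos, addr24, 24);             /* 24-bit address */
        pos = put_bits(frame, pos, len_code & 7u, 3);       /* 3-bit length   */
        for (unsigned i = 0; i < data_bits; i++) {          /* data, MSB first */
            unsigned bit = (data[i / 8] >> (7 - (i % 8))) & 1u;
            pos = put_bits(frame, pos, bit, 1);
        }
        return pos;
    }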
The operation of the ATM layer device 40 of the present invention will now be described with reference to FIGS. 4 and 5. In the arrival process, with the ATM layer device 40 in ingress mode, a cell is passed from an input interface 120 to the route manager 80 via UTOPIA cell bus 74. The route manager 80 performs a table look-up based on the Virtual Path Identifier/Virtual Connection Identifier (VPI/VCI) values and maps in a new cell route R' 124, consisting of a connection state vector S, a destination queue(s) for the cell, and the service class and, optionally, other quality of service (QoS) conditions for the cell, from an external connection memory 126. Simultaneously, a request is made by the route manager 80 to the discard manager 82 to determine if the cell should be dropped on the basis of the volume of presently queued traffic for the given service class and output.
This request is made via an eligibility request signal E 128 that, in a presently preferred embodiment, consists of the connection state vector S, Cell Loss Priority (CLP), Early Packet Discard (EPD), and Partial Packet Discard (PPD) eligibility vectors for the current connection, and an end of packet (EOP) marker. The discard manager 82 returns a send/discard flag and a modified state vector S' 130 to the route manager 80. The route manager 80 drops the cell if the discard manager 82 asserts a discard flag in response to the eligibility request signal 128.
Otherwise, the route manager 80 passes the cell, via cell bus 74, to the buffer manager 88 for queuing in the external buffer FIFOs 90. Since discarding is connection-based, the modified state vector S' 130 is returned to the external connection memory 126 via the route manager 80.
The route manager 80 also sends a mutual exclusion signal 158 to the CC interface 102. The route manager 80 flags erroneous routes to its respective address space 80r in DDS 100 by storing the erred address in least recently used (LRU) fashion.
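The eligibility handshake between the route manager 80 and the discard manager 82 can be pictured as a request/response exchange. In the C sketch below, the field widths, the single occupancy threshold and the reduction of the eligibility vectors to flags are all assumptions made to keep the example runnable; the disclosure specifies only the signals E 128, S' 130 and the send/discard flag:

    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_QUEUES 64

    /* Eligibility request E 128: state vector S plus CLP, EPD and PPD
     * eligibility for the connection and an end-of-packet marker. */
    typedef struct {
        uint32_t state_vector;    /* S                                    */
        bool clp, epd, ppd;       /* eligibility vectors, reduced to flags */
        bool eop;                 /* EOP marker                           */
    } eligibility_request;

    /* Reply 130: send/discard flag and modified state vector S'. */
    typedef struct {
        bool discard;
        uint32_t modified_state;  /* S'                                   */
    } eligibility_response;

    static unsigned qocc[NUM_QUEUES];        /* occupancy feedback (Qocc) */

    /* Stand-in discard manager: discard CLP-eligible cells when the
     * destination queue is beyond one illustrative threshold. */
    static eligibility_response discard_check(eligibility_request e, unsigned q) {
        eligibility_response r = { e.clp && qocc[q] > 48u, e.state_vector };
        return r;
    }

    static uint32_t connection_memory[1024]; /* stand-in for memory 126   */

    /* Route manager's side of the handshake: S' is written back to
     * connection memory whether or not the cell is dropped. */
    bool route_arrival(unsigned connection, unsigned dest_queue, eligibility_request e) {
        eligibility_response r = discard_check(e, dest_queue);
        connection_memory[connection % 1024] = r.modified_state;
        return !r.discard;  /* true: pass cell on to the buffer manager   */
    }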
In order to make a connection-based discard decision, the discard manager 82 requires eligibility information and queue occupancy feedback. The eligibility information is dispatched directly from the route manager 80, as described above for eligibility request signal E 128. The queue occupancy feedback is dispatched as a vector from the queue manager 84: Qocc signal 134. The feedback information selected is based on the destination queue(s), provided by the route manager 80 as a queue signal Q 136 when the request is initiated.
Queue signal Q 136 consists of an encoded queue identification and a request from the route manager 80. If the discard manager 82 determines that a cell for a given connection is eligible and the occupancy of the destination queue exceeds some threshold that would otherwise compromise QoS, then the route manager 80 is advised to drop the cell. The discard manager 82 does so by asserting a discard. In the event that all queues are full, as indicated by an overflow flag asserted by the queue manager 84 in conjunction with the Qocc signal 134, the discard manager 82 automatically asserts a discard. The CLP, EPD and PPD "on" and "off" thresholds for each queue are stored in the DDS 100 for tuning purposes.
Furthermore, the discard manager 82 stores counts of transmitted cells (total and per port) and discarded cells (total and per queue on the basis of CLP, EPD and PPD eligibility) in its respective address space 82r in DDS 100 for traffic management purposes.
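The paired "on" and "off" thresholds imply hysteresis: discarding for a given eligibility class begins when occupancy crosses the on-threshold and persists until occupancy falls below the off-threshold. A minimal C sketch of that behaviour, with structure and values assumed rather than taken from the disclosure:

    #include <stdbool.h>

    /* Per-queue hysteresis state for one eligibility class (CLP, EPD or
     * PPD); the on/off values live in the DDS 100 for tuning. */
    typedef struct {
        unsigned on_threshold;   /* start discarding at this occupancy    */
        unsigned off_threshold;  /* stop discarding below this occupancy  */
        bool     discarding;     /* current state of the hysteresis loop  */
    } discard_threshold;

    bool update_discard_state(discard_threshold *t, unsigned occupancy) {
        if (!t->discarding && occupancy >= t->on_threshold)
            t->discarding = true;
        else if (t->discarding && occupancy < t->off_threshold)
            t->discarding = false;
        return t->discarding;
    }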
In the case where a cell is not dropped, it is passed from the route manager 80 to the buffer manager 88 for queuing via the cell bus 74. The buffer manager 88 extracts the destination queue and accordingly stores the cell in the buffer FIFOs 90 by writing over the FIFO bus 138. Cell addresses are manipulated in a FIFO manner, and the cell is written to a common external memory space. Optionally, an error correction field can be appended to the cell to protect against memory errors. The change in queue occupancy is dispatched to the queue manager 84 via the ΔQ signal 140, where ΔQ includes the encoded queue identification and an increment/decrement signal. FIFO boundaries are stored by the buffer manager 88 in its respective address space 88r in the DDS 100 to allow for queue depth tailoring at start-up, while buffer FIFO 90 memory errors are flagged to the address space 88r in DDS 100 for tracking purposes as a count in conjunction with the erred address in LRU fashion.
Concurrently, in the departure process, the buffer manager 88 requests a queue from the scheduler 86 via a request signal 142. The queue slated for transmission is then supplied by the scheduler 86 via a dispatch Q signal 144, where Q represents the queue slated for transmission. The buffer manager 88 subsequently fetches a cell from the buffer FIFOs 90 via the FIFO bus 138 and then passes the cell to an output interface 150 via cell bus 74. The change in queue occupancy is then dispatched to the queue manager 84 via the ΔQ signal 140 by asserting a decrement in conjunction with Q. Optionally, the decrement assertion can be routed to the scheduler 86 as an acknowledgement to prompt recalculation.
Furthermore, an error check is performed in the case where an error correction field was appended to the cell in the arrival process. As previously mentioned, FIFO boundaries are stored in the DDS 100 to allow for queue depth tailoring at start-up, while buffer FIFO 90 memory errors are flagged to the DDS 100 for tracking purposes as a count in conjunction with the erred address in LRU fashion.
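Since both processes report occupancy changes only through the ΔQ signal 140, the queue manager 84 can serialize all updates at a single point. A hedged C sketch of that message and its handling, with widths and names assumed:

    #include <stdint.h>

    #define NUM_QUEUES 64

    /* Hypothetical ΔQ message: encoded queue identification plus an
     * increment/decrement indication. */
    typedef struct {
        uint8_t queue_id;
        int8_t  delta;  /* +1 on enqueue (arrival), -1 on dequeue (departure) */
    } delta_q;

    /* Occupancy is owned solely by the queue manager; both processes see
     * it only through feedback signals, never by direct memory access. */
    static unsigned occupancy[NUM_QUEUES];

    void queue_manager_apply(delta_q m) {
        if (m.queue_id >= NUM_QUEUES)
            return;                        /* ignore malformed updates */
        if (m.delta > 0)
            occupancy[m.queue_id]++;
        else if (occupancy[m.queue_id] > 0)
            occupancy[m.queue_id]--;
    }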
The scheduler 86 then decides which queue to service next. To assess fairness in light of requested QoS, the scheduling decision requires prioritization information in conjunction with the current queue occupancy information. The queue manager 84 permanently presents this per-queue occupancy information to the scheduler 86 via a Qstatus signal 154. Tunable service class and output prioritization parameters are presented as weights stored in the DDS 100. Furthermore, the decision considers downstream congestion feedback, as indicated by a one-hot encoded 8-bit congestion signal 156. The scheduler 86 supplies the next queue slated for transmission on the dispatch Q signal 144 upon assertion of the request signal 142 by the buffer manager 88.
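One plausible realization of such a weighted, congestion-aware decision is sketched below in C; the queue-to-port mapping, the weight-times-occupancy score and all identifiers are assumptions, as the disclosure leaves the scheduling algorithm to the implementer:

    #include <stdint.h>

    #define NUM_QUEUES 64

    /* Per-queue weights are tunable through the DDS; occupancy comes from
     * the queue manager (Qstatus). Both arrays are placeholders here. */
    static unsigned weight[NUM_QUEUES];   /* assumed >= 1 for serviceable queues */
    static unsigned occupancy[NUM_QUEUES];

    /* Return the queue slated for transmission, or -1 if none can be
     * served; 'congestion' is the one-hot 8-bit downstream signal 156. */
    int schedule_next(uint8_t congestion) {
        int best = -1;
        unsigned long best_score = 0;
        for (int q = 0; q < NUM_QUEUES; q++) {
            int port = q % 8;             /* assumed queue-to-port mapping */
            if (occupancy[q] == 0 || ((congestion >> port) & 1u))
                continue;                 /* skip empty or congested queues */
            unsigned long score = (unsigned long)weight[q] * occupancy[q];
            if (score > best_score) {
                best_score = score;
                best = q;
            }
        }
        return best;
    }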
As will be apparent to those of skill in the art, the FIFO bus 138 can be a point of resource contention between processes. Also, the downward communication of routing table entries contends with the route manager 80 for access to connection memory 126. Therefore, the route manager 80 is given greater priority by default as a result of the layering, and is the master of connection memory access. The management of this inter-layer contention is achieved in one of two manners. First, contention management can be implemented as a part of the route manager 80, with the route table entries integrated into the DDS 100. Secondly, as illustrated in FIG. 4, contention management can be implemented as a memory access device on the back-end of one eighth of the address space of the CC interface 102, with the route manager 80 retaining a mutual exclusion signal indicating continued connection memory access. Optionally, a buffer FIFO memory access device can be similarly implemented for added test support.
The queue manager 84 can also be integrated into the scheduler 86 in the case where congestion management functionality does not require feedback. The use of the discard manager 82 is optional in the case where congestion management functionality is not required at all. Furthermore, in such a case, the route manager 80 can be located after the buffer manager 88 in the departure process, making it an input-queued device.
Optionally, queuing memory usage can be increased with sharing. Each queue is only guaranteed a small allocation of memory space, but may exploit an entire shared portion. The shared portion is the remaining memory after queue allocations, or any set of divisions thereof. In the case where sharing is implemented, the queue manager 84 is responsible for ensuring that the shared space does not overflow. This sharing function is not apparent in the reported occupancy and is transparent to the operation of the other managers. To achieve this, a minimum queue size in conjunction with a maximum share size is held on a per-queue basis in the queue manager address space 84r of the DDS 100.
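The minimum-plus-share accounting can be illustrated in C as follows; the pool size, array bounds and names are placeholders, not details of the disclosure:

    #include <stdbool.h>

    #define NUM_QUEUES 8

    /* Per-queue guaranteed minimum and cap on shared-pool use, held in
     * the DDS per the text; the values and sizes here are placeholders. */
    static unsigned min_alloc[NUM_QUEUES];
    static unsigned max_share[NUM_QUEUES];
    static unsigned occupancy[NUM_QUEUES];
    static unsigned shared_used = 0, shared_total = 256;

    /* Admit a cell into its guaranteed space first, then into the shared
     * portion, provided neither the queue's share nor the pool overflows. */
    bool admit(unsigned q) {
        if (occupancy[q] < min_alloc[q]) {
            occupancy[q]++;
            return true;
        }
        if (occupancy[q] - min_alloc[q] < max_share[q] && shared_used < shared_total) {
            occupancy[q]++;
            shared_used++;
            return true;
        }
        return false;  /* shared space would overflow: reject the cell */
    }

    /* On departure, return a cell's slot to the shared pool if it was
     * counted against it. */
    void release(unsigned q) {
        if (occupancy[q] > min_alloc[q])
            shared_used--;
        if (occupancy[q] > 0)
            occupancy[q]--;
    }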
Referring to FIG. 3 and FIG. 4, the input interface 120 and output interface 150 couple the ATM layer device 40 to the ATM switch fabric 24 and the plurality of physical layer devices 72. In ingress mode, the input interface 120 is couplable to a plurality of physical layer devices 72 via a UTOPIA cell bus, while the output interface 150 is high speed cell bus 74 compatible with the ATM switch fabric 24. In egress mode, the input interface 120 is high speed cell bus 74 couplable to the switch fabric 24, while the output interface 150 is couplable to a plurality of physical layer devices 72. In a typical ATM switch, the interfaces reformat cells to 60 bytes for internal use compatible with the ATM switch fabric 24, in which case the cell bus 74 is 30 data bits wide and can be transmitted in conjunction with a 50 MHz clock pulse and two parity bits, providing a total maximal throughput of 1.325 Gbps. In addition, an eighth of the address space of the CC interface 102 is dedicated to the interfaces for flagging interfacing errors in the DDS 100 and configuring device mode.
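The 1.325 Gbps figure is reproducible under one reading (an assumption, not stated above): the 60-byte internal cell carries a standard 53-byte ATM cell, and throughput is counted over that 53-byte payload. The short C program below performs the arithmetic:

    #include <stdio.h>

    int main(void) {
        const double clock_hz  = 50e6;    /* 50 MHz cell bus clock       */
        const double bus_bits  = 30.0;    /* data bits per bus transfer  */
        const double cell_bits = 60 * 8;  /* 60-byte internal cell       */
        const double atm_bits  = 53 * 8;  /* assumed 53-byte ATM payload */

        double transfers_per_cell = cell_bits / bus_bits;          /* 16       */
        double cells_per_second   = clock_hz / transfers_per_cell; /* 3.125 M  */
        double throughput_bps     = cells_per_second * atm_bits;

        printf("%.3f Gbps\n", throughput_bps / 1e9);  /* prints 1.325 Gbps */
        return 0;
    }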
The main advantages of the device and method of the present invention include the manner in which the isolation of processes is accomplished. The ATM layer device 40 of the present invention addresses the issue of storage resource contention by assigning a manager with the sole purpose of managing contention. The issue of contention in processing resources is addressed in a similar manner: a manager is assigned to each point of decision-making with the sole purpose of making such decisions. Coupled with a layered structure, this managerial assignment provides greater algorithmic independence, in that the manner in which any one manager implements its functionality is completely independent of the way neighboring managers implement their respective functionality. The ensuing advantages of algorithmic independence are: independent and parallel development; ease of fault detection, isolation and handling pre- and post-deployment; algorithm substitution; and independent tuning pre- and post-deployment.
Furthermore, by assigning distinct managers to each decision point, the present invention eliminates processing resource contention. This feature offers the advantage of parallel processing since decisions are made concurrently, thereby increasing throughput.
Moreover, with the assignment of distinct managers to each point, the concurrent decisions can be made in an asynchronous manner. This asynchronous feature eases system development and integration efforts in that front-end and back-end designs remain uncoupled with respect to queuing processes.
The above-described embodiments are intended to be examples of the present invention. Alterations, modifications and variations may be effected to the particular embodiments by those of skill in the art without departing from the scope of the invention, which is defined solely by the claims.

Claims (22)

1. An asynchronous transfer mode layer device for interfacing between a plurality of physical layer devices and an asynchronous transfer mode switch fabric, the asynchronous transfer mode layer device comprising:
(a) an asynchronous transfer mode processing sub-layer for performing asynchronous transfer mode cell processing functions; and (b) a device management sub-layer, in communication with the asynchronous transfer mode processing sub-layer, for performing network layer communication functions.
2. The device according to claim 1, wherein the asynchronous transfer mode cell processing functions include a priority based queuing function.
3. The device according to claim 2, wherein the priority based queuing function includes a set of concurrent queuing processes, and wherein the asynchronous transfer mode processing sub-layer includes independent managers for segregating said concurrent queuing processes.
4. The device according to claim 1, wherein the network layer communication functions include a management information base statistics collection function for traffic management.
5. The device according to claim 1, wherein the asynchronous transfer mode cell processing functions include a route management function.
6. The device according to claim 1, wherein the asynchronous transfer mode cell processing functions include a congestion management function.
7. The device according to claim 6, wherein the congestion management includes a feedback-based selective discard in an arrival process.
8. The device according to claim 1, wherein the network layer communication functions include a manipulation function for manipulating performance tuning parameters.
9. The device according to claim 1, wherein the asynchronous transfer mode cell processing functions include a route management function and a congestion management function in a form of feedback-based selective discard in an arrival process, and wherein the network layer communication functions include a manipulation function for manipulating performance tuning parameters.
10. The device according to claim 9, wherein the asynchronous transfer mode processing sub-layer comprises independent managers including:
(a) a route manager for performing route management; (b) a buffer manager, in communication with the route manager, for resolving buffer first-in-first-out contentions;
(c) a queue manager, in communication with the route manager and the buffer manager, for resolving queue occupancy contentions;
(d) a discard manager, in communication with the route manager and the queue manager, for determining discarding decisions in the arrival process; and (e) a scheduler, in communication with the queue manager and the buffer manager, for determining scheduling decisions in a departure process.
11. The device according to claim 10, wherein the discard manager returns a cell discard flag to the route manager in response to an eligibility request signal from the route manager and a queue occupancy signal from the queue manager.
12. The device according to claim 10, wherein the scheduler determines a queue for transmission in response to a request from the buffer manager in view of a queue status provided by the queue manager.
13. The device according to claim 10, wherein the route manager provides a queue identification for the cell to the queue manager.
14. The device according to claim 9, wherein network layer communication functions are provided by an address-mapped distributed-database structure in conjunction with a configuration-and-control interface.
15. A method for processing cells in an asynchronous transfer mode layer of an asynchronous transfer mode switch, comprising:
(a) performing asynchronous transfer mode cell processing functions in an asynchronous transfer mode processing sub-layer; and (b) performing network layer communication functions in a device management sub-layer.
16. The method according to claim 15, wherein the asynchronous transfer mode cell processing functions include a priority based queuing function.
17. The method according to claim 16, wherein the priority based queuing function includes segregating a set of concurrent queuing processes for processing by independent managers.
18. The method according to claim 15, wherein performing the network layer communication functions includes performing a management information base statistics collection function for traffic management.
19. The method according to claim 15, wherein performing the asynchronous transfer mode cell processing functions includes route management.
20. The method according to claim 15, wherein performing the asynchronous transfer mode cell processing functions includes congestion management.
21. The method according to claim 20, wherein the congestion management includes a feedback-based selective discard in an arrival process.
22. The method according to claim 15, wherein performing the network layer communication functions includes manipulating performance tuning parameters.
CA002325135A 1999-11-05 2000-11-06 Asynchronous transfer mode layer device Abandoned CA2325135A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CA002325135A CA2325135A1 (en) 1999-11-05 2000-11-06 Asynchronous transfer mode layer device

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CA002288513A CA2288513A1 (en) 1999-11-05 1999-11-05 Apparatus and method for an atm layer device
CA2,288,513 1999-11-05
CA002325135A CA2325135A1 (en) 1999-11-05 2000-11-06 Asynchronous transfer mode layer device

Publications (1)

Publication Number Publication Date
CA2325135A1 true CA2325135A1 (en) 2001-05-05

Family

ID=25681307

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002325135A Abandoned CA2325135A1 (en) 1999-11-05 2000-11-06 Asynchronous transfer mode layer device

Country Status (1)

Country Link
CA (1) CA2325135A1 (en)

Similar Documents

Publication Publication Date Title
JP4070610B2 (en) Manipulating data streams in a data stream processor
US5724358A (en) High speed packet-switched digital switch and method
CA2314625C (en) Telecommunications switches and methods for their operation
US7050440B2 (en) Method and structure for variable-length frame support in a shared memory switch
US7848341B2 (en) Switching arrangement and method with separated output buffers
US5917828A (en) ATM reassembly controller and method
EP1056307B1 (en) A fast round robin priority port scheduler for high capacity ATM switches
US5790545A (en) Efficient output-request packet switch and method
JP3347926B2 (en) Packet communication system and method with improved memory allocation
US5278828A (en) Method and system for managing queued cells
US6351466B1 (en) Switching systems and methods of operation of switching systems
US6430191B1 (en) Multi-stage queuing discipline
US20040151197A1 (en) Priority queue architecture for supporting per flow queuing and multiple ports
US6944170B2 (en) Switching arrangement and method
WO1998029993A1 (en) Output queueing in a broadband multi-media satellite and terrestrial communications network
US6876659B2 (en) Enqueuing apparatus for asynchronous transfer mode (ATM) virtual circuit merging
US6510160B1 (en) Accurate computation of percent utilization of a shared resource and fine resolution scaling of the threshold based on the utilization
CA2235135A1 (en) Improvements in or relating to an atm switch
US20050047338A1 (en) Scalable approach to large scale queuing through dynamic resource allocation
US20050190779A1 (en) Scalable approach to large scale queuing through dynamic resource allocation
US6137795A (en) Cell switching method and cell exchange system
US20020150047A1 (en) System and method for scheduling transmission of asynchronous transfer mode cells
Mhamdi et al. Practical scheduling algorithms for high-performance packet switches
CA2325135A1 (en) Asynchronous transfer mode layer device
CA2288513A1 (en) Apparatus and method for an atm layer device

Legal Events

Date Code Title Description
FZDE Discontinued